GCP external HTTP cloud load balancer with nginx-ingress on GKE



My goal is to use an external HTTP cloud load balancer with NGINX INGRESS in our GCP GKE cluster.

I am trying the solution proposed by Rami H and confirmed by Google developer Garry Singh here: Global load balancer (HTTPS Loadbalancer) in front of GKE Nginx ingress controller

You can create Nginx as a service of type LoadBalancer and give it a NEG annotation per the Google documentation: https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing. Then you can use this NEG as a backend service (target) for HTTP(S) load balancing, using the gcloud commands from this article: https://hodo.dev/posts/post-27-gcp-using-neg/

I followed the hodo.dev tutorial mentioned above and successfully deployed an HTTP LB with a NEG as its backend service. Then I found this script that attaches NGINX-INGRESS to the NEG, but it is probably outdated and fails on deployment: https://gist.github.com/halvards/dc854f16d76bcc86ec59d846aa2011a0

Could someone please help me adapt the hodo.dev configuration to deploy nginx-ingress there? Here is the repo with my configuration scripts: https://github.com/robinpecha/hododev_gke-negs-httplb

# First, let's define some variables:
PROJECT_ID=$(gcloud config list project --format='value(core.project)') ; echo $PROJECT_ID
ZONE=europe-west2-b ; echo $ZONE
CLUSTER_NAME=negs-lb ; echo $CLUSTER_NAME
# and we need a cluster
gcloud container clusters create $CLUSTER_NAME --zone $ZONE --machine-type "e2-medium" --enable-ip-alias --num-nodes=2
# the --enable-ip-alias flag enables VPC-native traffic routing for your cluster. This option creates and attaches additional subnets to the VPC; the pods get IP addresses allocated from those VPC subnets, so they can be addressed directly by the load balancer, aka container-native load balancing.
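An optional sanity check (my addition, not part of the original tutorial): you can confirm the cluster really is VPC-native before continuing, since container-native load balancing depends on it.

```shell
# Should print "True" for a VPC-native cluster created with --enable-ip-alias.
gcloud container clusters describe $CLUSTER_NAME --zone $ZONE \
  --format="value(ipAllocationPolicy.useIpAliases)"
```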
# Next we need a simple deployment, we will use nginx
cat << EOF > app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF
kubectl apply -f app-deployment.yaml
# and the service
cat << EOF > app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "app-service-80-neg"}}}'
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
EOF
kubectl apply -f app-service.yaml
# the cloud.google.com/neg annotation tells GKE to create a NEG for this service and to add and remove endpoints (pods) to and from this group.
# Notice here that the type is ClusterIP. Yes, it is possible to expose the service to the internet even though the type is ClusterIP. This is part of the magic of NEGs.
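Besides listing the NEGs, you can also inspect which pod endpoints GKE has registered in the group. A small addition of mine, assuming the NEG name app-service-80-neg from the annotation above:

```shell
# Lists the pod IP:port pairs that GKE registered as endpoints of the NEG.
gcloud compute network-endpoint-groups list-network-endpoints app-service-80-neg \
  --zone=$ZONE
```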
# You can check if the NEG was created by using next command
gcloud compute network-endpoint-groups list
# Next let’s create the load balancer and all the required components.
# We need a firewall rule that will allow the traffic from the load balancer
# find the network tags used by our cluster
NETWORK_TAGS=$(gcloud compute instances describe \
  $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') \
  --zone=$ZONE --format="value(tags.items[0])")
echo $NETWORK_TAGS
# create the firewall rule
gcloud compute firewall-rules create $CLUSTER_NAME-lb-fw \
  --allow tcp:80 \
  --source-ranges 130.211.0.0/22,35.191.0.0/16 \
  --target-tags $NETWORK_TAGS
# and a health check configuration
gcloud compute health-checks create http app-service-80-health-check \
  --request-path / \
  --port 80 \
  --check-interval 60 \
  --unhealthy-threshold 3 \
  --healthy-threshold 1 \
  --timeout 5
# and a backend service
gcloud compute backend-services create $CLUSTER_NAME-lb-backend \
  --health-checks app-service-80-health-check \
  --port-name http \
  --global \
  --enable-cdn \
  --connection-draining-timeout 300
# next we need to add our NEG to the backend service
gcloud compute backend-services add-backend $CLUSTER_NAME-lb-backend \
  --network-endpoint-group=app-service-80-neg \
  --network-endpoint-group-zone=$ZONE \
  --balancing-mode=RATE \
  --capacity-scaler=1.0 \
  --max-rate-per-endpoint=1.0 \
  --global
# That was the backend configuration; let's also set up the frontend.
# First the url map
gcloud compute url-maps create $CLUSTER_NAME-url-map --default-service $CLUSTER_NAME-lb-backend
# and then the http proxy
gcloud compute target-http-proxies create $CLUSTER_NAME-http-proxy --url-map $CLUSTER_NAME-url-map
# and finally the global forwarding rule
gcloud compute forwarding-rules create $CLUSTER_NAME-forwarding-rule \
  --global \
  --ports 80 \
  --target-http-proxy $CLUSTER_NAME-http-proxy
# Done! Give the load balancer some time to set up all the components, and then you can test whether your setup works as expected.
# get the public ip address
IP_ADDRESS=$(gcloud compute forwarding-rules describe $CLUSTER_NAME-forwarding-rule --global --format="value(IPAddress)")
# print the public ip address
echo $IP_ADDRESS
# make a request to the service
curl -s -I http://$IP_ADDRESS/
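Provisioning can take several minutes, during which the curl above may return errors or 404s. As a convenience (my addition, not from the tutorial), a small polling helper that waits until the load balancer answers with HTTP 200:

```shell
# Polls a URL every 10 seconds until it returns HTTP 200, up to `tries` attempts.
wait_for_http_200() {
  local url=$1 tries=${2:-60}
  local i code
  for i in $(seq 1 "$tries"); do
    # -w "%{http_code}" prints only the status code; body is discarded.
    code=$(curl -s -o /dev/null -w "%{http_code}" "$url" || true)
    if [ "$code" = "200" ]; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    sleep 10
  done
  echo "gave up waiting for $url" >&2
  return 1
}
# Usage: wait_for_http_200 "http://$IP_ADDRESS/"
```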

The trick is to deploy the ingress-nginx service as ClusterIP instead of LoadBalancer, and then expose the ingress-nginx-controller service using a NEG and the GCP external load balancer features.

First you need to update the helm repo

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

The default installation of ingress-nginx is configured to use the LoadBalancer option, which automatically creates a load balancer for you, but that is not the desired behavior in this case. If I understand correctly, you want to create/configure your own GCP load balancer, manage it manually outside of GKE, and route its traffic to your custom ingress-nginx. To do that, you need to change the service type to ClusterIP and add the NEG annotation.

Create a file values.yaml

cat << EOF > values.yaml
controller:
  service:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "ingress-nginx-80-neg"}}}'
EOF

and install ingress-nginx

helm install -f values.yaml ingress-nginx ingress-nginx/ingress-nginx

After that, you need to configure the load balancer to point to your ingress-nginx controller using the NEG.
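These steps mirror the hodo.dev commands from the first part, now targeting the ingress-nginx NEG. A sketch under stated assumptions: the NEG name ingress-nginx-80-neg comes from the values.yaml above, the backend name is my own choice, and the health check uses /healthz, which ingress-nginx's default server answers with 200 (adjust the path if your version behaves differently).

```shell
# Health check against ingress-nginx (assumption: /healthz returns 200 on port 80).
gcloud compute health-checks create http ingress-nginx-80-health-check \
  --request-path /healthz \
  --port 80
# A new backend service for the ingress-nginx NEG (name is illustrative).
gcloud compute backend-services create $CLUSTER_NAME-ingress-backend \
  --health-checks ingress-nginx-80-health-check \
  --port-name http \
  --global
gcloud compute backend-services add-backend $CLUSTER_NAME-ingress-backend \
  --network-endpoint-group=ingress-nginx-80-neg \
  --network-endpoint-group-zone=$ZONE \
  --balancing-mode=RATE \
  --capacity-scaler=1.0 \
  --max-rate-per-endpoint=100 \
  --global
# Re-point the existing URL map's default service at the ingress backend.
gcloud compute url-maps set-default-service $CLUSTER_NAME-url-map \
  --default-service $CLUSTER_NAME-ingress-backend
```

From here, routing to your apps is defined by Kubernetes Ingress resources handled by the controller, not by the GCP URL map.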

I added the complete steps in this gist: https://gist.github.com/gabihodoroaga/1289122db3c5d4b6c59a43b8fd659496
