Load balancer service does not redirect to the desired container

I'm playing around with Kubernetes and have set up my environment with 4 deployments:

  • hello: a basic "hello world" service
  • auth: provides authentication and encryption
  • frontend: an nginx reverse proxy that serves as the single entry point from the outside and routes internally to the right pod
  • nodehello: a basic "hello world" service written in Node.js (this one is my own contribution)

For the hello, auth and nodehello deployments I have set up an internal service each.

For the frontend deployment I have set up a load-balancer service that is exposed to the outside world. It redirects to the appropriate pods using the config map nginx-frontend-conf, which has the following contents:

upstream hello {
    server hello.default.svc.cluster.local;
}
upstream auth {
    server auth.default.svc.cluster.local;
}
upstream nodehello {
    server nodehello.default.svc.cluster.local;
}
server {
    listen 443;
    ssl    on;
    ssl_certificate     /etc/tls/cert.pem;
    ssl_certificate_key /etc/tls/key.pem;
    location / {
        proxy_pass http://hello;
    }
    location /login {
        proxy_pass http://auth;
    }
    location /nodehello {
        proxy_pass http://nodehello;
    }
}
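
For reference, a config map like this can be created straight from that file; a minimal sketch, assuming the contents above are saved as frontend.conf (matching the item key used in the frontend deployment below):

kubectl create configmap nginx-frontend-conf --from-file=frontend.conf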

When calling the frontend endpoint with curl -k https://<frontend-external-ip>, I get routed to an available hello pod, which is the expected behavior. When calling https://<frontend-external-ip>/nodehello, however, I am not routed to a nodehello pod but to a hello pod again.

I suspect the upstream nodehello configuration to be the failing part, but I am not sure how service discovery works here, i.e. how the DNS name nodehello.default.svc.cluster.local would be exposed. I'd appreciate an explanation of how it actually works and what I am doing wrong.
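
One thing that can be checked right away is whether that DNS name resolves inside the cluster at all, e.g. with nslookup from a throwaway pod; a minimal sketch (the pod name dnstest and the busybox image are arbitrary choices):

kubectl run -it --rm dnstest --image=busybox --restart=Never \
  -- nslookup nodehello.default.svc.cluster.local

If cluster DNS is healthy, this prints the cluster IP of the nodehello service.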

YAML files used

deployments/hello.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        track: stable
    spec:
      containers:
        - name: hello
          image: "udacity/example-hello:1.0.0"
          ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
          resources:
            limits:
              cpu: 0.2
              memory: "10Mi"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 15
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /readiness
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 1

deployments/auth.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
        track: stable
    spec:
      containers:
        - name: auth
          image: "udacity/example-auth:1.0.0"
          ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
          resources:
            limits:
              cpu: 0.2
              memory: "10Mi"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 15
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /readiness
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 1

deployments/frontend.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        track: stable
    spec:
      containers:
        - name: nginx
          image: "nginx:1.9.14"
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx","-s","quit"]
          volumeMounts:
            - name: "nginx-frontend-conf"
              mountPath: "/etc/nginx/conf.d"
            - name: "tls-certs"
              mountPath: "/etc/tls"
      volumes:
        - name: "tls-certs"
          secret:
            secretName: "tls-certs"
        - name: "nginx-frontend-conf"
          configMap:
            name: "nginx-frontend-conf"
            items:
              - key: "frontend.conf"
                path: "frontend.conf"

deployments/nodehello.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodehello
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nodehello
        track: stable
    spec:
      containers:
        - name: nodehello
          image: "thezebra/nodehello:0.0.2"
          ports:
            - name: http
              containerPort: 80
          resources:
            limits:
              cpu: 0.2
              memory: "10Mi"

services/hello.yaml

kind: Service
apiVersion: v1
metadata:
  name: "hello"
spec:
  selector:
    app: "hello"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80

services/auth.yaml

kind: Service
apiVersion: v1
metadata:
  name: "auth"
spec:
  selector:
    app: "auth"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80

services/frontend.yaml

kind: Service
apiVersion: v1
metadata:
  name: "frontend"
spec:
  selector:
    app: "frontend"
  ports:
    - protocol: "TCP"
      port: 443
      targetPort: 443
  type: LoadBalancer

services/nodehello.yaml

kind: Service
apiVersion: v1
metadata:
  name: "nodehello"
spec:
  selector:
    app: "nodehello"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80

This works perfectly :-)

$ curl -s http://frontend/
{"message":"Hello"}
$ curl -s http://frontend/login
authorization failed
$ curl -s http://frontend/nodehello
Hello World!

I suspect you might have updated nginx-frontend-conf when you added /nodehello but have not restarted nginx. Pods won't pick up changed ConfigMaps automatically. Try:

kubectl delete pod -l app=frontend

Until versioned ConfigMaps happen, there isn't a nicer solution.
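
If the ConfigMap contents themselves went stale, they can be refreshed from the local file first; a minimal sketch, assuming the nginx config is still kept in frontend.conf:

kubectl create configmap nginx-frontend-conf --from-file=frontend.conf \
  --dry-run -o yaml | kubectl replace -f -

The create --dry-run | replace pipe overwrites the existing ConfigMap object in place; deleting the frontend pod as above then brings up a fresh nginx that mounts the updated contents.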
