NGINX ingress controller returns 502, no logs in the backend application pod



I have deployed ECK on my Kubernetes cluster (all Vagrant VMs). The cluster has the following configuration:

NAME       STATUS   ROLES                  AGE   VERSION
kmaster1   Ready    control-plane,master   27d   v1.21.1
kworker1   Ready    <none>                 27d   v1.21.1
kworker2   Ready    <none>                 27d   v1.21.1

I have also set up a load balancer with HAProxy. The load balancer is configured as follows (I created my own private certificate):

frontend http_front
    bind *:80
    stats uri /haproxy?stats
    default_backend http_back

frontend https_front
    bind *:443 ssl crt /etc/ssl/private/mydomain.pem
    stats uri /haproxy?stats
    default_backend https_back

backend http_back
    balance roundrobin
    server kworker1 172.16.16.201:31953
    server kworker2 172.16.16.202:31953

backend https_back
    balance roundrobin
    server kworker1 172.16.16.201:31503 check-ssl ssl verify none
    server kworker2 172.16.16.202:31503 check-ssl ssl verify none

I have also deployed an nginx ingress controller; 31953 is the controller's HTTP NodePort and 31503 is its HTTPS NodePort:

nginx-ingress    nginx-ingress-controller-service   NodePort    10.103.189.197   <none>        80:31953/TCP,443:31503/TCP   8d    app=nginx-ingress

I am trying to expose the Kibana dashboard outside the cluster over HTTPS. It works fine and I can access it from inside the cluster, but I cannot access it through the load balancer.

Kibana pod:

default          quickstart-kb-f74c666b9-nnn27              1/1     Running   4          27d   192.168.41.145   kworker1   <none>           <none>

I have mapped the load balancer IP to the hostname:

172.16.16.100   elastic.kubekluster.com

Any request to https://elastic.kubekluster.com results in the following error (from the nginx ingress controller pod's logs):

10.0.2.15 - - [20/Jun/2021:17:38:14 +0000] "GET / HTTP/1.1" 502 157 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0" "-"
2021/06/20 17:38:14 [error] 178#178: *566 upstream prematurely closed connection while reading response header from upstream, client: 10.0.2.15, server: elastic.kubekluster.com, request: "GET / HTTP/1.1", upstream: "http://192.168.41.145:5601/", host: "elastic.kubekluster.com"

The HAProxy logs are as follows:

Jun 20 18:11:45 loadbalancer haproxy[18285]: 172.16.16.1:48662 [20/Jun/2021:18:11:45.782] https_front~ https_back/kworker2 0/0/0/4/4 502 294 - - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1"

The Ingress is as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubekluster-elastic-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/default-backend: quickstart-kb-http
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600s"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600s"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600s"
    nginx.ingress.kubernetes.io/proxy-body-size: 20m
spec:
  tls:
  - hosts:
    - elastic.kubekluster.com
  rules:
  - host: elastic.kubekluster.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: quickstart-kb-http
            port:
              number: 5601

I think the requests are not reaching the Kibana pod, because I don't see any logs in the pod. I also don't understand why HAProxy is sending the request as HTTP instead of HTTPS. Could you point out any problems with my configuration?

I hope this helps... here is how I set up a "load balancer" with nginx and forward traffic to an HTTPS service:

kubectl get nodes -o wide 
NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP      OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
asd-master-1   Ready    master   72d   v1.19.8   192.168.1.163   213.95.154.199   Ubuntu 20.04.2 LTS   5.8.0-45-generic   docker://20.10.6
asd-node-1     Ready    <none>   72d   v1.19.8   192.168.1.101   <none>           Ubuntu 20.04.1 LTS   5.8.0-45-generic   docker://19.3.15
asd-node-2     Ready    <none>   72d   v1.19.8   192.168.0.5     <none>           Ubuntu 20.04.1 LTS   5.8.0-45-generic   docker://19.3.15
asd-node-3     Ready    <none>   15d   v1.19.8   192.168.2.190   <none>           Ubuntu 20.04.1 LTS   5.8.0-45-generic   docker://19.3.15

This is the service for nginx:

# kubectl get service -n ingress-nginx
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.101.161.113   <none>        80:30337/TCP,443:31996/TCP   72d

This is the load balancer configuration:

# cat /etc/nginx/nginx.conf
... trimmed ...
stream {
    upstream nginx_http {
        least_conn;
        server asd-master-1:30337 max_fails=3 fail_timeout=5s;
        server asd-node-1:30337 max_fails=3 fail_timeout=5s;
        server asd-node-2:30337 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass nginx_http;
        proxy_protocol on;
    }

    upstream nginx_https {
        least_conn;
        server 192.168.1.163:31996 max_fails=3 fail_timeout=5s;
        server 192.168.1.101:31996 max_fails=3 fail_timeout=5s;
        server 192.168.0.5:31996 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass nginx_https;
        proxy_protocol on;
    }
}

The relevant part is that I am sending the proxy protocol. You need to configure the nginx ingress (in its ConfigMap) to accept it, and probably add the corresponding option to your haproxy configuration.

That could look something like this:

backend https_back
balance roundrobin
server kworker1 172.16.16.201:31503 check-ssl ssl verify none send-proxy-v2
server kworker2 172.16.16.202:31503 check-ssl ssl verify none send-proxy-v2
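For intuition about what `send-proxy-v2` does: the proxy protocol simply prepends a header carrying the original client address to the TCP stream, before any payload bytes. Version 2 is binary, but version 1 (`send-proxy`) is a single human-readable line, which a minimal sketch can illustrate (addresses below are taken from the HAProxy log in the question, purely as an example):

```python
def proxy_v1_header(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """Build a PROXY protocol v1 header line (TCP over IPv4).

    Format: PROXY TCP4 <src ip> <dst ip> <src port> <dst port>\r\n
    """
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n"

# Client and backend addresses from the HAProxy log above:
header = proxy_v1_header("172.16.16.1", "172.16.16.201", 48662, 31503)
print(header)  # PROXY TCP4 172.16.16.1 172.16.16.201 48662 31503
```

This is why both sides must agree: if the load balancer sends this header but the ingress controller is not configured with `use-proxy-protocol`, nginx sees the header as garbage at the start of the TLS/HTTP stream (and vice versa).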

The nginx ingress configuration should then be:

# kubectl get configmap -n ingress-nginx  nginx-configuration -o yaml
apiVersion: v1
data:
  use-proxy-protocol: "true"
kind: ConfigMap
metadata:
...
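If that ConfigMap does not exist yet, an equivalent complete manifest would look like the sketch below (the name `nginx-configuration` and namespace `ingress-nginx` are assumed from the stock ingress-nginx manifests; adjust them to match your deployment):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # Tell the ingress controller that the upstream LB prepends the PROXY protocol header.
  use-proxy-protocol: "true"
```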

I hope this gets you on the right track.

Inspired by @oz123's answer, I analyzed this further and was finally able to get it working with the following configuration.

Loadbalancer config (HAProxy)

Exposed the LB on a bridged network by configuring it in the Vagrantfile, and enabled TLS passthrough in HAProxy.

frontend kubernetes-frontend
    bind 192.168.1.23:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend

backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server kmaster1 172.16.16.101:6443 check fall 3 rise 2

frontend http_front
    bind *:80
    stats uri /haproxy?stats
    default_backend http_back

frontend https_front
    mode tcp
    bind *:443
    #ssl crt /etc/ssl/private/mydomain.pem
    stats uri /haproxy?stats
    default_backend https_back

backend http_back
    balance roundrobin
    server kworker1 172.16.16.201:32502
    server kworker2 172.16.16.202:32502

backend https_back
    mode tcp
    balance roundrobin
    server kworker1 172.16.16.201:31012
    server kworker2 172.16.16.202:31012

Ingress controller

Created a NodePort service for the ingress controller and exposed all internal services (e.g. Kibana) through this controller. All services other than the ingress controller are ClusterIP.

apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    helm.sh/chart: ingress-nginx-4.0.15
  name: ingress-nginx-controller
  namespace: ingress-nginx
  resourceVersion: "8198"
  uid: 245a554f-56a8-4bc4-a3dd-19ffc9116a08
spec:
  clusterIP: 10.105.43.200
  clusterIPs:
  - 10.105.43.200
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    nodePort: 32502
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    nodePort: 31012
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

Ingress resource for Kibana

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  generation: 1
  name: ingress-kibana
  namespace: default
spec:
  rules:
  - host: kibana.kubekluster.com
    http:
      paths:
      - backend:
          service:
            name: quickstart-kb-http
            port:
              number: 5601
        path: /
        pathType: Prefix
  tls:
  - secretName: quickstart-kb-http-certs-public
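Note that the `ssl-passthrough` annotation only takes effect if the ingress controller itself was started with the `--enable-ssl-passthrough` flag. A sketch of the relevant excerpt of the controller Deployment (container name and args layout assumed from the stock ingress-nginx manifests; not a complete manifest):

```yaml
spec:
  template:
    spec:
      containers:
      - name: controller
        args:
        - /nginx-ingress-controller
        # Required for nginx.ingress.kubernetes.io/ssl-passthrough to work:
        - --enable-ssl-passthrough
```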

Finally, create an entry in /etc/hosts mapping the LB IP to the subdomain, and access the Kibana console, e.g.

https://kibana.kubekluster.com
