I am running a K8s cluster locally (nothing in the cloud) with one master node and two worker nodes:
- k8s-master: 192.168.100.100
- worker-node-1: 192.168.100.101
- worker-node-2: 192.168.100.102
I am using kubernetes/ingress-nginx to route traffic to my simple application. These are the pods running on my two worker nodes:
[root@k8s-master ingress]# kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default hello-685445b9db-b7nql 1/1 Running 0 44m 10.5.2.7 worker-node-2 <none> <none>
default hello-685445b9db-ckndn 1/1 Running 0 44m 10.5.2.6 worker-node-2 <none> <none>
default hello-685445b9db-vd6h2 1/1 Running 0 44m 10.5.1.18 worker-node-1 <none> <none>
default ingress-nginx-controller-56c75d774d-p7whv 1/1 Running 1 30h 10.5.1.14 worker-node-1 <none> <none>
kube-system coredns-74ff55c5b-s8zss 1/1 Running 12 16d 10.5.0.27 k8s-master <none> <none>
kube-system coredns-74ff55c5b-w6rsh 1/1 Running 12 16d 10.5.0.26 k8s-master <none> <none>
kube-system etcd-k8s-master 1/1 Running 12 16d 192.168.100.100 k8s-master <none> <none>
kube-system kube-apiserver-k8s-master 1/1 Running 12 16d 192.168.100.100 k8s-master <none> <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 14 16d 192.168.100.100 k8s-master <none> <none>
kube-system kube-flannel-ds-76mt8 1/1 Running 1 30h 192.168.100.102 worker-node-2 <none> <none>
kube-system kube-flannel-ds-bfnjw 1/1 Running 10 16d 192.168.100.101 worker-node-1 <none> <none>
kube-system kube-flannel-ds-krgzg 1/1 Running 13 16d 192.168.100.100 k8s-master <none> <none>
kube-system kube-proxy-6bq6n 1/1 Running 1 30h 192.168.100.102 worker-node-2 <none> <none>
kube-system kube-proxy-df8fn 1/1 Running 13 16d 192.168.100.100 k8s-master <none> <none>
kube-system kube-proxy-z8q2z 1/1 Running 10 16d 192.168.100.101 worker-node-1 <none> <none>
kube-system kube-scheduler-k8s-master 1/1 Running 12 16d 192.168.100.100 k8s-master <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-799cd98cf6-zh8xs 1/1 Running 9 16d 192.168.100.101 worker-node-1 <none> <none>
kubernetes-dashboard kubernetes-dashboard-74d688b6bc-hvxgm 1/1 Running 10 16d 10.5.1.17 worker-node-1 <none> <none>
These are the services running on my cluster:
[root@k8s-master ingress]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello NodePort 10.105.236.241 <none> 80:31999/TCP 30h
ingress-nginx-controller NodePort 10.110.141.41 <none> 80:30428/TCP,443:32682/TCP 30h
ingress-nginx-controller-admission ClusterIP 10.109.15.31 <none> 443/TCP 30h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d
This is the ingress description:
[root@k8s-master ingress]# kubectl describe ingress ingress-hello
Name: ingress-hello
Namespace: default
Address: 10.110.141.41
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/hello hello:80 (10.5.1.18:80,10.5.2.6:80,10.5.2.7:80)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
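For reference, the `kubectl describe` output above roughly corresponds to a manifest like the following. This is a reconstruction, not the original file: the `apiVersion` and `pathType` are assumptions (any cluster recent enough to have `networking.k8s.io/v1` would look like this), while the name, annotations, path, and backend are taken directly from the describe output:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix   # assumption; Exact would also match the describe output
        backend:
          service:
            name: hello
            port:
              number: 80
```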
The problem: accessing worker-node-1 by its IP address on the ingress controller port (30428), i.e. http://192.168.100.101:30428, works perfectly fine. Accessing worker-node-2 on the same ingress port, http://192.168.100.102:30428, gives no response at all, whether from outside the node or from inside it. I also tried telnet (from inside worker-node-2), with no luck either:
[root@worker-node-2 ~]# telnet 192.168.100.102 30428
Trying 192.168.100.102...
The most interesting part is that the port does show up in netstat. Running the command from inside worker-node-2 shows the ingress port 30428 in LISTEN state:
[root@worker-node-2 ~]# netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1284/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 2578/kube-proxy
tcp 0 0 0.0.0.0:32682 0.0.0.0:* LISTEN 2578/kube-proxy
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN 1856/dnsmasq
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1020/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1016/cupsd
tcp 0 0 127.0.0.1:41561 0.0.0.0:* LISTEN 1284/kubelet
tcp 0 0 0.0.0.0:30428 0.0.0.0:* LISTEN 2578/kube-proxy
tcp 0 0 0.0.0.0:31999 0.0.0.0:* LISTEN 2578/kube-proxy
tcp6 0 0 :::10250 :::* LISTEN 1284/kubelet
tcp6 0 0 :::111 :::* LISTEN 1/systemd
tcp6 0 0 :::10256 :::* LISTEN 2578/kube-proxy
tcp6 0 0 :::22 :::* LISTEN 1020/sshd
tcp6 0 0 ::1:631 :::* LISTEN 1016/cupsd
udp 0 0 0.0.0.0:5353 0.0.0.0:* 929/avahi-daemon: r
udp 0 0 0.0.0.0:44997 0.0.0.0:* 929/avahi-daemon: r
udp 0 0 192.168.122.1:53 0.0.0.0:* 1856/dnsmasq
udp 0 0 0.0.0.0:67 0.0.0.0:* 1856/dnsmasq
udp 0 0 0.0.0.0:111 0.0.0.0:* 1/systemd
From my understanding, every worker node must expose the NodePort (30428) for the ingress controller, right??
EDIT: I found that the pod ingress-nginx-controller-56c75d774d-p7whv is deployed only on worker-node-1. Do I need to make sure the ingress-nginx controller runs on all nodes? If that is the case, how can I achieve it?
Kubernetes networking (more specifically, kube-proxy) uses iptables to control the network connections between pods and nodes. Since CentOS 8 uses nftables instead of iptables, this causes networking problems.
Calico added support for hosts that run iptables in NFT mode in v3.8.1+. The workaround is to set the FELIX_IPTABLESBACKEND=NFT option, which tells Calico to use the supported nftables backend.
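A quick way to check which backend a given host is actually using (a sketch; assumes iptables 1.8+, where the version string reports the active backend):

```shell
# Print the iptables version. On iptables >= 1.8 the string includes the
# active backend, e.g. "(nf_tables)" on CentOS 8 or "(legacy)" elsewhere.
if command -v iptables >/dev/null 2>&1; then
  iptables --version
else
  echo "iptables not installed"
fi
```

Run this on each node; a mismatch between nodes (one legacy, one nf_tables) would be consistent with the NodePort working on only one of them.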
This parameter controls which variant of the iptables binary Felix uses. Set it to Auto to autodetect the backend. If a specific backend is required, use NFT for hosts using the netfilter backend, or Legacy for the others. [Default: Legacy]
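On a cluster that runs Calico, this setting goes on the calico-node DaemonSet. A minimal sketch of the relevant fragment (the container name is the standard one from the Calico manifests and the image tag is illustrative; verify both against your installation):

```yaml
# Fragment of the calico-node DaemonSet (kube-system namespace).
# Only the FELIX_IPTABLESBACKEND entry is the actual change;
# the surrounding fields are context.
containers:
- name: calico-node
  image: calico/node:v3.8.1   # illustrative; use your installed version
  env:
  - name: FELIX_IPTABLESBACKEND
    value: "NFT"   # or "Auto" to autodetect; the default is "Legacy"
```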
See the Calico documentation for how to configure Felix, and the related GitHub issue for more background.