How does Linux forward packets to a Kubernetes Service?



I have installed Kubernetes on a VM. From this VM I can now reach some Kubernetes Services, for example:

[root@vm ~]# netcat -vz 10.96.0.10 9153
kube-dns.kube-system.svc.cluster.local [10.96.0.10] 9153 open

10.96.0.10 is the ClusterIP of the kube-dns Service.

My question is: how does Linux forward requests addressed to 10.96.0.10 to the correct destination?

On the VM I don't see any interface with the IP 10.96.0.10, nor any routing rule for 10.96.0.10:

[root@vm ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:00:00:10 brd ff:ff:ff:ff:ff:ff
inet 192.168.4.104/24 brd 192.168.4.255 scope global dynamic noprefixroute ens3
valid_lft 33899sec preferred_lft 28499sec
inet6 fe80::ca7d:cdfe:42a3:75f/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:cd:1d:8a:77 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 10.0.219.64/32 brd 10.0.219.64 scope global tunl0
valid_lft forever preferred_lft forever
7: calib9d0c90540c@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
8: cali81206f5bf92@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
[root@vm ~]# ip route
default via 192.168.4.1 dev ens3 proto dhcp src 192.168.4.104 metric 202
10.0.189.64/26 via 192.168.4.107 dev tunl0 proto bird onlink
blackhole 10.0.219.64/26 proto bird
10.0.219.107 dev calib9d0c90540c scope link
10.0.219.108 dev cali81206f5bf92 scope link
10.0.235.128/26 via 192.168.4.105 dev tunl0 proto bird onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.4.0/24 dev ens3 proto dhcp scope link src 192.168.4.104 metric 202

kube-proxy (not kubelet) maintains iptables NAT rules that route traffic to the Service's actual endpoints. The Service IP is purely virtual: no interface ever holds it. Packets destined for it are rewritten (DNAT) in the nat table to one of the Service's endpoint pod IPs, selected with roughly equal probability across all endpoints, which gives round-robin-like load balancing.
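You can trace this on the node yourself by dumping the nat table. This is a sketch assuming kube-proxy runs in its iptables mode; the chain suffix `KUBE-SVC-EXAMPLE` and the endpoint IP below are illustrative stand-ins for the hashed chain names and pod IPs your cluster generates:

```shell
# On the VM (requires the live cluster):
#   iptables-save -t nat | grep 10.96.0.10      # find the Service's KUBE-SVC-* chain
#   iptables -t nat -L <KUBE-SVC-chain> -n      # list its per-endpoint KUBE-SEP-* jumps
#
# The real output resembles the illustrative excerpt below; we filter it the
# same way so the pipeline can be tried without a cluster:
printf '%s\n' \
  '-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp --dport 9153 -j KUBE-SVC-EXAMPLE' \
  '-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.5 -j KUBE-SEP-ONE' \
  '-A KUBE-SVC-EXAMPLE -j KUBE-SEP-TWO' \
  '-A KUBE-SEP-ONE -p tcp -j DNAT --to-destination 10.0.219.107:9153' \
  | grep 'KUBE-SVC'
```

The `KUBE-SERVICES` chain matches the ClusterIP and port, jumps to the Service's `KUBE-SVC-*` chain, which in turn picks a `KUBE-SEP-*` (Service endpoint) chain that performs the actual DNAT.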

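The "round-robin" behavior comes from the iptables `statistic` match: for n endpoints, kube-proxy emits rules with probabilities 1/n, 1/(n-1), and so on, ending in an unconditional jump, so each endpoint ends up equally likely. A quick awk simulation of that cascade for three endpoints (a sketch of the selection logic, not kube-proxy itself):

```shell
# Three-endpoint cascade: p=1/3 for the first rule, p=1/2 for the second
# (applied only when the first misses), unconditional fall-through for the last.
awk 'BEGIN {
  srand(42)                                # seed for repeatability
  for (i = 0; i < 9000; i++) {
    if      (rand() < 1/3) hits["A"]++     # rule 1: --probability 0.3333
    else if (rand() < 1/2) hits["B"]++     # rule 2: --probability 0.5
    else                   hits["C"]++     # rule 3: plain -j KUBE-SEP-*
  }
  for (ep in hits) printf "%s %d\n", ep, hits[ep]
}'
```

Each endpoint receives roughly 3000 of the 9000 trials, confirming the cascade distributes connections uniformly. Note that once a connection is DNATed, conntrack pins all its subsequent packets to the same endpoint.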