Kubernetes kube-proxy is running, but the specified config file cannot be found



I want to set the permissions of the k8s kube-proxy config file as part of hardening.

I am trying to understand how the kube-proxy process can be running with its --config flag set to a path that cannot be found (/var/lib/kube-proxy/config.conf)...

Indeed, checking the kube-proxy process gives:

[centos@cpu-node0 ~]$ ps -ef | grep kube-proxy
root     20890 20872  0 Oct20 ?        00:19:23 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=cpu-node0
centos   55623 51112  0 14:44 pts/0    00:00:00 grep --color=auto kube-proxy

But the file /var/lib/kube-proxy/config.conf does not exist:

[centos@cpu-node0 ~]$ ll /var/lib/kube-proxy/config.conf
ls: cannot access /var/lib/kube-proxy/config.conf: No such file or directory

Why?

My setup (1.19) is hitting the same issue:

[root@project1kubemaster ~]# kubectl version --short
Client Version: v1.19.3
Server Version: v1.19.3
[root@project1kubemaster ~]# ps -ef | grep kube-proxy
root      2103  2046  0 11:30 ?        00:00:00 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=project1kubemaster
[root@project1kubemaster ~]# ll /var/lib/kube-proxy/config.conf
ls: cannot access /var/lib/kube-proxy/config.conf: No such file or directory

An even more interesting thing is that the kube-proxy binary itself cannot be found either:

[root@project1kubemaster ~]# ls /usr/local/bin/kube-proxy
ls: cannot access /usr/local/bin/kube-proxy: No such file or directory

The above made me realize that the kube-proxy binary is actually running inside the kube-proxy container on that node:

[root@project1kubemaster ~]# kubectl get pods -n kube-system -o wide | grep proxy
kube-proxy-ffbqr                             1/1     Running   0          27m   <IP>   project1kubeworker2   <none>           <none>
kube-proxy-r9pz9                             1/1     Running   0          29m   <IP>  project1kubemaster    <none>           <none>
kube-proxy-zcrtw                             1/1     Running   0          27m   <IP>   project1kubeworker1   <none>           <none>
[root@project1kubemaster ~]# kubectl exec -it kube-proxy-r9pz9 -n kube-system -- /bin/sh
#
#
# find / -name config.conf
/var/lib/kube-proxy/..2020_11_02_16_30_32.787002112/config.conf
/var/lib/kube-proxy/config.conf
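
The two paths above, the real file under a ..<timestamp> directory plus config.conf itself, are what a ConfigMap volume mount looks like. As a quick sketch of where the file comes from (on a kubeadm-provisioned cluster both the DaemonSet and the ConfigMap it mounts are named kube-proxy; adjust the names if your cluster differs):

# Show the ConfigMap volume in the DaemonSet spec and the data it projects.
kubectl -n kube-system get daemonset kube-proxy -o yaml | grep -B2 -A4 'configMap:'
kubectl -n kube-system get configmap kube-proxy -o yaml | head -n 20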

In short, it looks like the kube-proxy binary and its config file live inside the kube-proxy pod on that node, and that is where the process runs. The reason it still shows up in the host's ps -ef output is that container processes run in PID namespaces nested under the host's, so they are always visible from the host. We can also see that the parent PID of the kube-proxy process is nothing but the containerd-shim of the corresponding container:

[root@project1kubemaster ~]# ps -ef | grep 2046
root      2046 16904  0 11:30 ?        00:00:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/c3e9bf6ecdcdd0f56d0c76711cea4cadd023cd6ef82bf8312311248a7b0501a4 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root      2103  2046  0 11:30 ?        00:00:00 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=project1kubemaster
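
As a side check, sketched against the Docker-based runtime shown above: the container ID in the shim's -workdir path can be mapped back to its pod through the labels the kubelet attaches to every container it starts.

# Short ID prefix taken from the containerd-shim -workdir path above.
sudo docker inspect c3e9bf6ecdcd --format \
  '{{ index .Config.Labels "io.kubernetes.pod.namespace" }}/{{ index .Config.Labels "io.kubernetes.pod.name" }}'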

Absolutely, as the previous answer points out: the kube-proxy process and its config file are inside the kube-proxy pod.

[centos@hp-gpu-node2 ~]$ ps -ef | grep proxy
root       807     1  0 Oct20 ?        00:00:00 /usr/sbin/gssproxy -D
root     12256 12239  0 Oct20 ?        00:18:42 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=hp-gpu-node2
centos   15338 10073  0 22:01 pts/0    00:00:00 grep --color=auto proxy
[centos@hp-gpu-node2 ~]$ ps -ef | grep 12239
root     12239  4681  0 Oct20 ?        00:00:37 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/415bc97a940caf1493db295d4b794e7313a431c6189d775d6e66a1337e13802f -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root     12256 12239  0 Oct20 ?        00:18:42 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=hp-gpu-node2
centos   15452 10073  0 22:01 pts/0    00:00:00 grep --color=auto 12239
[centos@hp-gpu-node2 ~]$

Now list the containers and get the short ID of the kube-proxy container:

[centos@hp-gpu-node2 ~]$ sudo docker container ls
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
b4ef8516ffba        nvidia/k8s-device-plugin                   "nvidia-device-plugin"   13 days ago         Up 13 days                              k8s_nvidia-device-plugin-ctr_nvidia-device-plugin-daemonset-1.12-k2jsn_kube-system_5eb15b43-12ce-11eb-a793-566f15970179_0
5361f340e00c        gcr.io/google_containers/pause-amd64:3.1   "/pause"                 13 days ago         Up 13 days                              k8s_POD_nvidia-device-plugin-daemonset-1.12-k2jsn_kube-system_5eb15b43-12ce-11eb-a793-566f15970179_0
415bc97a940c        gcr.io/google-containers/kube-proxy        "/usr/local/bin/kube…"   13 days ago         Up 13 days                              k8s_kube-proxy_kube-proxy-4d4hl_kube-system_3ebe3bf6-12cd-11eb-a793-566f15970179_0
762943484b1e        gcr.io/google_containers/pause-amd64:3.1   "/pause"                 13 days ago         Up 13 days                              k8s_POD_kube-proxy-4d4hl_kube-system_3ebe3bf6-12cd-11eb-a793-566f15970179_0
4bfdabe6597c        k8s.gcr.io/k8s-dns-node-cache              "/node-cache -locali…"   13 days ago         Up 13 days                              k8s_node-cache_nodelocaldns-hpvpb_kube-system_0c398cf7-12cd-11eb-a793-566f15970179_0
52c0f95f2d4c        gcr.io/google_containers/pause-amd64:3.1   "/pause"                 13 days ago         Up 13 days                              k8s_POD_nodelocaldns-hpvpb_kube-system_0c398cf7-12cd-11eb-a793-566f15970179_0
a34ec37154a8        calico/node                                "start_runit"            13 days ago         Up 13 days                              k8s_calico-node_calico-node-6vrn4_kube-system_fb95886a-12cc-11eb-a793-566f15970179_0
09895989f5b7        gcr.io/google_containers/pause-amd64:3.1   "/pause"                 13 days ago         Up 13 days                              k8s_POD_calico-node-6vrn4_kube-system_fb95886a-12cc-11eb-a793-566f15970179_0
eee5cc5a8e7a        53f3fd8007f7                               "nginx -g 'daemon of…"   13 days ago         Up 13 days                              k8s_nginx-proxy_nginx-proxy-hp-gpu-node2_kube-system_b853c9cd2cc0a3a71070731d4f6cfbca_0
d59e91a314a2        gcr.io/google_containers/pause-amd64:3.1   "/pause"                 13 days ago         Up 13 days                              k8s_POD_nginx-proxy-hp-gpu-node2_kube-system_b853c9cd2cc0a3a71070731d4f6cfbca_0
[centos@hp-gpu-node2 ~]$ 

Check the kube-proxy config file permissions:

[centos@hp-gpu-node2 ~]$ sudo docker exec -t -i 415bc97a940 stat -c %a  /var/lib/kube-proxy/config.conf
777
[centos@hp-gpu-node2 ~]$
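
The 777 here is almost certainly the mode of the symlink rather than of the real file: in a ConfigMap volume, config.conf is a symlink into the ..<timestamp> data directory (as the find output in the earlier answer showed), and plain stat reports the link's own lrwxrwxrwx. A sketch of the same check with the symlink followed, reusing the container ID from the listing above:

# -L follows the symlink to the underlying file; ConfigMap volumes create it
# with defaultMode 0644 unless the DaemonSet overrides that.
sudo docker exec -t -i 415bc97a940c stat -Lc %a /var/lib/kube-proxy/config.conf

If hardening requires pinning an explicit or stricter mode, the knob is the defaultMode field of the ConfigMap volume in the kube-proxy DaemonSet (kubectl -n kube-system edit daemonset kube-proxy on a kubeadm cluster), not a chmod on the host.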
