Master node stuck in KubeletNotReady because the CNI config is missing



I initialized the control plane with kubeadm, using Calico's pod network CIDR:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --image-repository=someserver

Then I downloaded calico.yaml (v3.11) and applied it:

sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" apply -f calico.yaml

After that I checked the node status:

sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" get nodes
NAME       STATUS     ROLES    AGE     VERSION
master-1   NotReady   master   7m21s   v1.17.2

In the describe output the CNI config is reported as uninitialized, but I thought Calico was supposed to set that up?
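
For reference, the conditions below came from describing the node, with a command along these lines (node name taken from the output above):

sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" describe node master-1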

MemoryPressure   False   Fri, 21 Feb 2020 10:14:24 +0100   Fri, 21 Feb 2020 10:09:00 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False   Fri, 21 Feb 2020 10:14:24 +0100   Fri, 21 Feb 2020 10:09:00 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False   Fri, 21 Feb 2020 10:14:24 +0100   Fri, 21 Feb 2020 10:09:00 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            False   Fri, 21 Feb 2020 10:14:24 +0100   Fri, 21 Feb 2020 10:09:00 +0100   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Indeed, there is nothing under /etc/cni/net.d/, so it looks like something was never written there?

ll /etc/cni/net.d/
total 0
sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" -n kube-system get pods
NAME                                       READY   STATUS                  RESTARTS   AGE
calico-kube-controllers-5644fb7cf6-f7lqq   0/1     Pending                 0          3h
calico-node-f4xzh                          0/1     Init:ImagePullBackOff   0          3h
coredns-7fb8cdf968-bbqbz                   0/1     Pending                 0          3h24m
coredns-7fb8cdf968-vdnzx                   0/1     Pending                 0          3h24m
etcd-master-1                              1/1     Running                 0          3h24m
kube-apiserver-master-1                    1/1     Running                 0          3h24m
kube-controller-manager-master-1           1/1     Running                 0          3h24m
kube-proxy-9m879                           1/1     Running                 0          3h24m
kube-scheduler-master-1                    1/1     Running                 0          3h24m
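
A quick way to see why calico-node is stuck in Init:ImagePullBackOff is to describe the pod and read its Events section (pod name taken from the listing above):

sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" -n kube-system describe pod calico-node-f4xzh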

As explained, I'm running a local registry, and journalctl says:

kubelet[21935]: E0225 14:30:54.830683   21935 pod_workers.go:191] Error syncing pod cec2f72b-844a-4d6b-8606-3aff06d4a36d ("calico-node-f4xzh_kube-system(cec2f72b-844a-4d6b-8606-3aff06d4a36d)"), skipping: failed to "StartContainer" for "upgrade-ipam" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://repo:10000/v2/calico/cni/manifests/v3.11.2: no basic auth credentials"
kubelet[21935]: E0225 14:30:56.008989   21935 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
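
A hedged aside on the log above: the real failure is "no basic auth credentials", i.e. kubelet has no login for repo:10000. With the Docker runtime, kubelet searches a few fixed paths for Docker credentials, so one way to supply them (the copy and the restart are assumptions about this setup) is:

sudo docker login repo:10000                                    # prompts for the registry user/password
sudo cp /root/.docker/config.json /var/lib/kubelet/config.json  # one of the paths kubelet searches
sudo systemctl restart kubelet                                  # may not be strictly necessary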

It feels like more than just a CNI problem.

The CoreDNS pods will stay Pending and the master will stay NotReady until the calico pods are running successfully and have set up the CNI.
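
Concretely, the calico-node pod runs an install-cni init container that writes the CNI config onto the host, which is why /etc/cni/net.d/ stays empty while calico-node cannot start. Once it runs, you should see something like this (file names as in Calico v3.11):

ls /etc/cni/net.d/
10-calico.conflist  calico-kubeconfig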

Failing to pull the calico Docker images from docker.io looks like a network issue. You can pull the calico images from docker.io, push them to your internal container registry, modify the image references in calico.yaml to point at that registry, and finally apply the modified calico.yaml to the Kubernetes cluster.
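
A minimal sketch of that workflow, assuming private-repo is the internal registry used in the commands further down:

sudo docker pull docker.io/calico/cni:v3.11.2
sudo docker tag docker.io/calico/cni:v3.11.2 private-repo/calico/cni:v3.11.2
sudo docker push private-repo/calico/cni:v3.11.2
# repeat for calico/node, calico/pod2daemon-flexvol and calico/kube-controllers,
# then edit the image: fields in calico.yaml to point at private-repo and re-apply:
sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" apply -f calico.yaml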

So the Init:ImagePullBackOff problem was that the node could not pull the images from my private repo on its own. I had to docker pull all the calico images manually. Then I deleted the calico pod and it recreated itself with the newly pushed images:

sudo docker pull private-repo/calico/pod2daemon-flexvol:v3.11.2
sudo docker pull private-repo/calico/node:v3.11.2
sudo docker pull private-repo/calico/cni:v3.11.2
sudo docker pull private-repo/calico/kube-controllers:v3.11.2
sudo kubectl -n kube-system delete po/calico-node-y7g5

After that, the node ran through all its initialization phases again, and:

sudo kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5644fb7cf6-qkf47   1/1     Running   0          11s
calico-node-mkcsr                          1/1     Running   0          21m
coredns-7fb8cdf968-bgqvj                   1/1     Running   0          37m
coredns-7fb8cdf968-v85jx                   1/1     Running   0          37m
etcd-lin-1k8w1dv-vmh                       1/1     Running   0          38m
kube-apiserver-lin-1k8w1dv-vmh             1/1     Running   0          38m
kube-controller-manager-lin-1k8w1dv-vmh    1/1     Running   0          38m
kube-proxy-9hkns                           1/1     Running   0          37m
kube-scheduler-lin-1k8w1dv-vmh             1/1     Running   0          38m
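
As a final check (commands assumed, names as above), the node should now report Ready and the CNI config should exist:

sudo kubectl get nodes      # master should now be Ready
ls /etc/cni/net.d/          # should now contain the calico CNI config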
