Kubernetes cluster master node shows NotReady; coredns is Pending and weave-net is in CrashLoopBackOff



I have installed a Kubernetes cluster on CentOS-8, but the node status shows NotReady, the coredns pods in the kube-system namespace are stuck in Pending, and the weave-net pod is in CrashLoopBackOff. I have reinstalled, but the result is the same, and the taint command does not work. How can I fix this?

# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
K8s-Master   NotReady   master   42m   v1.18.8
# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                  READY   STATUS             RESTARTS   AGE   IP                NODE          NOMINATED NODE   READINESS GATES
kube-system   coredns-66bff467f8-5vtjf              0/1      Pending            0          42m   <none>            <none>        <none>           <none>
kube-system   coredns-66bff467f8-pr6pt              0/1      Pending            0          42m   <none>            <none>        <none>           <none>
kube-system   etcd-K8s-Master                       1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-apiserver-K8s-Master             1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-controller-manager-K8s-Master    1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-proxy-pw2bk                      1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-scheduler-K8s-Master             1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   weave-net-k4mdf                       1/2      CrashLoopBackOff   12         41m   90.91.92.93   K8s-Master        <none>           <none>
# kubectl describe pod coredns-66bff467f8-pr6pt --namespace=kube-system
Events:
Type     Reason            Age                 From               Message
----     ------            ----                ----               -------
Warning  FailedScheduling  70s (x33 over 43m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
# kubectl describe node | grep -i taint
Taints:             node.kubernetes.io/not-ready:NoExecute
# kubectl taint nodes --all node.kubernetes.io/not-ready:NoExecute
error: node K8s-Master already has node.kubernetes.io/not-ready taint(s) with same effect(s) and --overwrite is false
# kubectl describe pod weave-net-k4mdf --namespace=kube-system
Events:
Type     Reason     Age                   From                  Message
----     ------     ----                  ----                  -------
Normal   Scheduled  43m                   default-scheduler    Successfully assigned kube-system/weave-net-k4mdf to K8s-Master
Normal   Pulling    43m                   kubelet, K8s-Master  Pulling image "docker.io/weaveworks/weave-kube:2.7.0"
Normal   Pulled     43m                   kubelet, K8s-Master  Successfully pulled image "docker.io/weaveworks/weave-kube:2.7.0"
Normal   Pulling    43m                   kubelet, K8s-Master  Pulling image "docker.io/weaveworks/weave-npc:2.7.0"
Normal   Pulled     42m                   kubelet, K8s-Master  Successfully pulled image "docker.io/weaveworks/weave-npc:2.7.0"
Normal   Started    42m                   kubelet, K8s-Master  Started container weave-npc
Normal   Created    42m                   kubelet, K8s-Master  Created container weave-npc
Normal   Started    42m (x4 over 43m)     kubelet, K8s-Master  Started container weave
Normal   Created    42m (x4 over 43m)     kubelet, K8s-Master  Created container weave
Normal   Pulled     42m (x3 over 42m)     kubelet, K8s-Master  Container image "docker.io/weaveworks/weave-kube:2.7.0" already present on machine
Warning  BackOff    3m1s (x191 over 42m)  kubelet, K8s-Master  Back-off restarting failed container
Normal   Pulled     33s (x4 over 118s)    kubelet, K8s-Master  Container image "docker.io/weaveworks/weave-kube:2.7.0" already present on machine
Normal   Created    33s (x4 over 118s)    kubelet, K8s-Master  Created container weave
Normal   Started    33s (x4 over 118s)    kubelet, K8s-Master  Started container weave
Warning  BackOff    5s (x10 over 117s)    kubelet, K8s-Master  Back-off restarting failed container
# kubectl logs weave-net-k4mdf -c weave --namespace=kube-system
ipset v7.2: Set cannot be destroyed: it is in use by a kernel component
ipset v7.2: Set cannot be destroyed: it is in use by a kernel component

The error above (`Set cannot be destroyed: it is in use by a kernel component`) is caused by a race condition.

Referring to this issue, you can edit the weave-net DaemonSet YAML to add the following command as a workaround:

command:
- /bin/sh
- -c
- sed '/ipset destroy weave-kube-test$/ i sleep 1' /home/weave/launch.sh | /bin/sh
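The `sed` expression inserts a `sleep 1` just before the line in `/home/weave/launch.sh` that destroys the `weave-kube-test` set, giving the kernel a moment to release it. A minimal sketch of what the substitution does, using a hypothetical two-line stand-in for the real launch script:

```shell
# Stand-in for the relevant lines of /home/weave/launch.sh (hypothetical content)
cat > /tmp/launch-demo.sh <<'EOF'
ipset create weave-kube-test hash:ip
ipset destroy weave-kube-test
EOF

# Same sed as the workaround: insert "sleep 1" before the destroy line.
# The real DaemonSet pipes this output into /bin/sh instead of printing it.
sed '/ipset destroy weave-kube-test$/ i sleep 1' /tmp/launch-demo.sh
```

The patched script now pauses before `ipset destroy`, so the set is no longer in use by the kernel when the destroy runs.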

The weave-net DaemonSet then looks like:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: weave-net
  annotations:
    cloud.weave.works/launcher-info: |-
      {
        "original-request": {
          "url": "/k8s/v1.13/net.yaml",
          "date": "Fri Aug 14 2020 07:36:34 GMT+0000 (UTC)"
        },
        "email-address": "support@weave.works"
      }
  labels:
    name: weave-net
  namespace: kube-system
spec:
  minReadySeconds: 5
  selector:
    matchLabels:
      name: weave-net
  template:
    metadata:
      labels:
        name: weave-net
    spec:
      containers:
        - name: weave
          command:
            - /bin/sh
            - -c
            - sed '/ipset destroy weave-kube-test$/ i sleep 1' /home/weave/launch.sh | /bin/sh
...
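If you prefer not to edit the manifest by hand, the same change can be applied as a JSON patch. This is only a sketch, and it assumes `weave` is the first container (`containers/0`) in the DaemonSet pod spec, as in the manifest above:

```shell
# Build the JSON patch for the workaround and print it for inspection.
# (Assumes the weave container sits at index 0 of the pod spec.)
PATCH='[{"op":"add","path":"/spec/template/spec/containers/0/command","value":["/bin/sh","-c","sed '\''/ipset destroy weave-kube-test$/ i sleep 1'\'' /home/weave/launch.sh | /bin/sh"]}]'
echo "$PATCH"

# Against a live cluster you would apply it with:
# kubectl -n kube-system patch daemonset weave-net --type=json -p "$PATCH"
```

Once the weave pod restarts cleanly, the kubelet should clear the `node.kubernetes.io/not-ready` taint on its own, and the Pending coredns pods will be scheduled.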

Latest update