Creating multiple kind clusters fails



I'm trying to create multiple clusters with kind, but only the first one is created; creating the second one fails with the error failed to create cluster: failed to join node with kubeadm: command "docker exec --privileged k8ssandra-multinode-worker01-worker kubeadm join --config /kind/kubeadm.conf --skip-phases=preflight --v=6" failed with error: exit status 1

cluster-one.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: k8ssandra-multinode-control
nodes:
- role: control-plane
- role: worker
  extraMounts:
  - hostPath: /root/data
    containerPath: /files

cluster-two.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: k8ssandra-multinode-worker01
nodes:
- role: control-plane
- role: worker
- role: worker
  extraMounts:
  - hostPath: /root/data
    containerPath: /files
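
The exact create commands are not shown above; presumably the two clusters were created one after the other with something along these lines (config file names as listed here):

kind create cluster --config cluster-one.yaml
kind create cluster --config cluster-two.yaml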

Error

Creating cluster "k8ssandra-multinode-worker01" ...
✓ Ensuring node image (kindest/node:v1.25.3) 🖼
✓ Preparing nodes 📦 📦 📦  
✓ Writing configuration 📜 
✓ Starting control-plane 🕹️ 
✓ Installing CNI 🔌 
✓ Installing StorageClass 💾 
✗ Joining worker nodes 🚜 
ERROR: failed to create cluster: failed to join node with kubeadm: command "docker exec --privileged k8ssandra-multinode-worker01-worker kubeadm join --config /kind/kubeadm.conf --skip-phases=preflight --v=6" failed with error: exit status 1
Command Output: I1117 09:17:08.518417     135 join.go:416] [preflight] found NodeName empty; using OS hostname as NodeName
I1117 09:17:08.519112     135 joinconfiguration.go:76] loading configuration from "/kind/kubeadm.conf"
I1117 09:17:08.520143     135 controlplaneprepare.go:220] [download-certs] Skipping certs download
I1117 09:17:08.520165     135 join.go:533] [preflight] Discovering cluster-info
I1117 09:17:08.520196     135 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "k8ssandra-multinode-worker01-control-plane:6443"
I1117 09:17:08.528982     135 round_trippers.go:553] GET https://k8ssandra-multinode-worker01-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 7 milliseconds
I1117 09:17:08.529890     135 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "abcdef", will try again
I1117 09:17:14.440113     135 round_trippers.go:553] GET https://k8ssandra-multinode-worker01-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 2 milliseconds
I1117 09:17:14.440525     135 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "abcdef", will try again
I1117 09:17:20.857223     135 round_trippers.go:553] GET https://k8ssandra-multinode-worker01-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 2 milliseconds
I1117 09:17:20.858092     135 token.go:105] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "k8ssandra-multinode-worker01-control-plane:6443"
I1117 09:17:20.858105     135 discovery.go:52] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I1117 09:17:20.858121     135 join.go:547] [preflight] Fetching init configuration
I1117 09:17:20.858126     135 join.go:593] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I1117 09:17:20.865945     135 round_trippers.go:553] GET https://k8ssandra-multinode-worker01-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 7 milliseconds
I1117 09:17:20.868770     135 round_trippers.go:553] GET https://k8ssandra-multinode-worker01-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy?timeout=10s 200 OK in 1 milliseconds
I1117 09:17:20.869969     135 kubelet.go:74] attempting to download the KubeletConfiguration from ConfigMap "kubelet-config"
I1117 09:17:20.871559     135 round_trippers.go:553] GET https://k8ssandra-multinode-worker01-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config?timeout=10s 200 OK in 1 milliseconds
I1117 09:17:20.873364     135 interface.go:432] Looking for default routes with IPv4 addresses
I1117 09:17:20.873375     135 interface.go:437] Default route transits interface "eth0"
I1117 09:17:20.873507     135 interface.go:209] Interface eth0 is up
I1117 09:17:20.873584     135 interface.go:257] Interface "eth0" has 3 addresses :[172.18.0.6/16 fc00:f853:ccd:e793::6/64 fe80::42:acff:fe12:6/64].
I1117 09:17:20.873610     135 interface.go:224] Checking addr  172.18.0.6/16.
I1117 09:17:20.873622     135 interface.go:231] IP found 172.18.0.6
I1117 09:17:20.873650     135 interface.go:263] Found valid IPv4 address 172.18.0.6 for interface "eth0".
I1117 09:17:20.873660     135 interface.go:443] Found active IP 172.18.0.6 
I1117 09:17:20.881864     135 kubelet.go:120] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I1117 09:17:20.882821     135 kubelet.go:135] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I1117 09:17:20.883136     135 loader.go:374] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf
I1117 09:17:20.883425     135 kubelet.go:156] [kubelet-start] Checking for an existing Node in the cluster with name "k8ssandra-multinode-worker01-worker" and status "Ready"
I1117 09:17:20.885860     135 round_trippers.go:553] GET https://k8ssandra-multinode-worker01-control-plane:6443/api/v1/nodes/k8ssandra-multinode-worker01-worker?timeout=10s 404 Not Found in 2 milliseconds
I1117 09:17:20.886355     135 kubelet.go:171] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I1117 09:17:26.056747     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:17:31.059092     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:17:36.058998     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:17:41.060172     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:17:46.060067     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:17:51.055910     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:17:56.059946     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
[kubelet-check] Initial timeout of 40s passed.
I1117 09:18:01.059081     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1117 09:18:06.059225     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1117 09:18:11.058780     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:18:16.059417     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1117 09:18:21.055401     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:18:26.058925     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:18:31.059803     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:18:36.056263     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1117 09:18:41.059902     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:18:46.058631     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:18:51.055696     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:18:56.056784     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:19:01.056467     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:19:06.059134     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:19:11.060130     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 09:19:16.056531     135 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
timed out waiting for the condition
error execution phase kubelet-start
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
cmd/kubeadm/app/cmd/join.go:181
github.com/spf13/cobra.(*Command).execute
vendor/github.com/spf13/cobra/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
vendor/github.com/spf13/cobra/command.go:974
github.com/spf13/cobra.(*Command).Execute
vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
cmd/kubeadm/app/kubeadm.go:50
main.main
cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:250
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1594
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

journalctl -u kubelet -f

Nov 17 09:36:36 k8ssandra systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 17 09:36:36 k8ssandra systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 17 09:36:46 k8ssandra systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 172.
Nov 17 09:36:46 k8ssandra systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Nov 17 09:36:46 k8ssandra systemd[1]: Started kubelet: The Kubernetes Node Agent.
Nov 17 09:36:46 k8ssandra kubelet[115167]: E1117 09:36:46.972947  115167 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Nov 17 09:36:46 k8ssandra systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 17 09:36:46 k8ssandra systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 17 09:36:57 k8ssandra systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 173.
Nov 17 09:36:57 k8ssandra systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Nov 17 09:36:57 k8ssandra systemd[1]: Started kubelet: The Kubernetes Node Agent.
Nov 17 09:36:57 k8ssandra kubelet[115206]: E1117 09:36:57.215363  115206 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Nov 17 09:36:57 k8ssandra systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 17 09:36:57 k8ssandra systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 17 09:37:07 k8ssandra systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 174.
Nov 17 09:37:07 k8ssandra systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Nov 17 09:37:07 k8ssandra systemd[1]: Started kubelet: The Kubernetes Node Agent.
Nov 17 09:37:07 k8ssandra kubelet[115249]: E1117 09:37:07.463490  115249 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Nov 17 09:37:07 k8ssandra systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 17 09:37:07 k8ssandra systemd[1]: kubelet.service: Failed with result 'exit-code'.
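
The journalctl output above appears to come from the host (hostname "k8ssandra") rather than from inside the kind node container, which is where the kubelet that kubeadm join waits for actually runs. Since kind nodes are plain Docker containers, that kubelet can also be inspected directly, for example (node and cluster names taken from the error above):

docker exec k8ssandra-multinode-worker01-worker systemctl status kubelet
docker exec k8ssandra-multinode-worker01-worker journalctl -u kubelet --no-pager | tail -n 50

# collect logs from all nodes of the cluster into a local directory
kind export logs --name k8ssandra-multinode-worker01 ./kind-logs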

I had the same problem. After doing these two things, my control plane and workers were finally able to join:

  1. I added a few extra settings to my daemon.json file. See this article (in Korean) for the additional settings to add. If those don't fix it, the author suggests trying a reboot. (A sketch of one commonly cited set of host-side settings follows after this answer.)
  2. There also seemed to be a multi-node networking issue that was resetting the IPs. Given that, a reboot seemed worth a try, so I rebooted, and after the machine came back up the build/join succeeded on the first attempt.

I know rebooting every time isn't ideal, but it's one way to work around this issue for now.
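
For reference, the linked Korean article isn't reproduced here, so its exact daemon.json additions are unknown. One host-side limit that commonly prevents a second kind cluster's kubelets from starting is the inotify limit; kind's known-issues documentation suggests raising it along these lines (a sketch, not necessarily what the linked article recommends):

# raise inotify limits for the current boot
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512

# make the change persistent across reboots
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
echo fs.inotify.max_user_instances=512 | sudo tee -a /etc/sysctl.conf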
