coredns not deployed in new EKS cluster?



I am deploying an AWS EKS cluster on Fargate (no EC2 nodes) into an existing VPC with both public and private subnets, and I can create the cluster successfully with eksctl. However, in the EKS console I see the coredns deployment stuck at 0/2 Pods ready. I read that I needed to allow port 53 in my security group rules, which I did. Here is my config file.

$ eksctl create cluster -f eks-sandbox-cluster.yaml
eks-sandbox-cluster.yaml
------------------------
kind: ClusterConfig
apiVersion: eksctl.io/v1alpha5
metadata:
  name: sandbox
  region: us-east-1
  version: "1.18"
# The VPC and subnets are for the data plane, where the pods will
# ultimately be deployed.
vpc:
  id: "vpc-12345678"
  clusterEndpoints:
    privateAccess: true
    publicAccess: false
  subnets:
    # us-east-1a is full
    private:
      us-east-1b:
        id: "subnet-xxxxxxxx"
      us-east-1c:
        id: "subnet-yyyyyyy"
    public:
      us-east-1b:
        id: "subnet-aaaaaaaa"
      us-east-1c:
        id: "subnet-bbbbbbbb"
fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: default
  - name: fp-kube
    selectors:
      - namespace: kube-system
  - name: fp-myapps
    selectors:
      - namespace: myapp
        labels:
          app: myapp
cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator", "controllerManager", "scheduler"]

Why doesn't the coredns deployment come up?
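
In case it's relevant, the Fargate profiles themselves can be listed with eksctl (this goes through the AWS API rather than the cluster endpoint, so it should work even while kubectl doesn't, see below), using the cluster name and region from the config above:

$ eksctl get fargateprofile --cluster sandbox --region us-east-1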

I did see this in the kube-scheduler CloudWatch logs.

I0216 16:46:43.841076       1 factory.go:459] Unable to schedule kube-system/coredns-c79dcb98c-9pfrz: no nodes are registered to the cluster; waiting

I assume this is also why I can't talk to my cluster with kubectl?

$ kubectl get pods
Unable to connect to the server: dial tcp 10.23.x.x:443: i/o timeout
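
I'm wondering if this part is simply because I set publicAccess: false and I'm running kubectl from outside the VPC. If so, I believe the endpoint settings can be checked, and switched to allow public access if I decide to, with something like this (cluster name and region taken from my config; flags may differ by eksctl version):

$ aws eks describe-cluster --name sandbox --region us-east-1 --query 'cluster.resourcesVpcConfig'
$ eksctl utils update-cluster-endpoints --cluster=sandbox --region=us-east-1 \
    --private-access=true --public-access=true --approve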

When I deployed the EKS cluster with this config file, into our existing VPC with only the private endpoint enabled, the coredns deployment was set up to launch on EC2 nodes. Of course, with Fargate there are no EC2 nodes. I had to edit the coredns deployment to use Fargate and then restart the deployment.
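
For reference, this is roughly what I ran (assuming the default coredns deployment in kube-system): it removes the eks.amazonaws.com/compute-type: ec2 annotation from the pod template so the Fargate scheduler will pick the pods up, then restarts the rollout.

$ kubectl patch deployment coredns -n kube-system --type json \
    -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
$ kubectl rollout restart -n kube-system deployment coredns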

LATEST UPDATE