In short, these are the steps I completed:

- Launched 2 new t3.small instances in AWS, pre-tagged with the key kubernetes.io/cluster/<cluster-name> and the value member.
- Tagged the new security group with the same tag and opened all the required ports mentioned here: https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports
- Changed the hostname to the output of curl http://169.254.169.254/latest/meta-data/local-hostname and verified it with hostnamectl.
- Rebooted.
- Configured the AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
- Created an IAM role with full ("*") permissions and assigned it to the EC2 instances.
- Installed kubelet, kubeadm, and kubectl using apt-get.
- Created /etc/default/kubelet with the content KUBELET_EXTRA_ARGS=--cloud-provider=aws.
- Ran kubeadm init --pod-network-cidr=10.244.0.0/16 on one instance and used its output to kubeadm join ... the other node.
- Installed Helm.
- Installed the ingress controller with the default backend.
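The per-node preparation steps above can be sketched as a short script (the metadata URL, package names, and flannel pod CIDR are the ones described; everything else is standard kubeadm usage):

```shell
#!/bin/sh
set -e

# Set the hostname to the EC2 private DNS name, as the AWS cloud provider
# expects node names to match the instance's local-hostname metadata.
sudo hostnamectl set-hostname "$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)"

# Install the Kubernetes packages.
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl

# Tell the kubelet to use the AWS cloud provider.
echo 'KUBELET_EXTRA_ARGS=--cloud-provider=aws' | sudo tee /etc/default/kubelet

# On the first node only: initialize the control plane with the flannel pod CIDR.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Then run the printed `kubeadm join ...` command on the other node.
```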
I had tried the above steps once before as well, except that time I installed the ingress from the instructions on kubernetes.github.io. Both attempts ended up in the same state, with the EXTERNAL-IP stuck at <pending>.
The current state is:
kubectl get pods --all-namespaces -o wide
NAMESPACE NAME IP NODE
ingress ingress-nginx-ingress-controller-77d989fb4d-qz4f5 10.244.1.13 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
ingress ingress-nginx-ingress-default-backend-7f7bf55777-dhj75 10.244.1.12 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system coredns-86c58d9df4-bklt8 10.244.1.14 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system coredns-86c58d9df4-ftn8q 10.244.1.16 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system etcd-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-apiserver-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-controller-manager-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-flannel-ds-amd64-87k8p 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-flannel-ds-amd64-f4wft 172.31.3.106 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system kube-proxy-79cp2 172.31.3.106 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system kube-proxy-sv7md 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-scheduler-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system tiller-deploy-5b7c66d59c-fgwcp 10.244.1.15 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 73m <none>
ingress ingress-nginx-ingress-controller LoadBalancer 10.97.167.197 <pending> 80:32722/TCP,443:30374/TCP 59m app=nginx-ingress,component=controller,release=ingress
ingress ingress-nginx-ingress-default-backend ClusterIP 10.109.198.179 <none> 80/TCP 59m app=nginx-ingress,component=default-backend,release=ingress
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 73m k8s-app=kube-dns
kube-system tiller-deploy ClusterIP 10.96.216.119 <none> 44134/TCP 67m app=helm,name=tiller
kubectl describe service -n ingress ingress-nginx-ingress-controller
Name: ingress-nginx-ingress-controller
Namespace: ingress
Labels: app=nginx-ingress
chart=nginx-ingress-1.4.0
component=controller
heritage=Tiller
release=ingress
Annotations: service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: *
Selector: app=nginx-ingress,component=controller,release=ingress
Type: LoadBalancer
IP: 10.104.55.18
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32318/TCP
Endpoints: 10.244.1.20:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 32560/TCP
Endpoints: 10.244.1.20:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Inline policy attached to the IAM role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}
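For reference, creating such a role and attaching it to an instance can be done with the AWS CLI roughly as follows (the role name, profile name, and instance ID below are placeholders, not values from the original setup):

```shell
# Create a role that EC2 instances can assume.
aws iam create-role --role-name k8s-node-role \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# Attach the full-access inline policy shown above.
aws iam put-role-policy --role-name k8s-node-role --policy-name full-access \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"*","Resource":"*"}]}'

# Wrap the role in an instance profile and associate it with an instance.
aws iam create-instance-profile --instance-profile-name k8s-node-profile
aws iam add-role-to-instance-profile --instance-profile-name k8s-node-profile \
  --role-name k8s-node-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=k8s-node-profile
```

(A full "*" policy works for testing, but a production setup should scope the actions down to what the cloud provider actually needs.)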
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-172-31-12-119.ap-south-1.compute.internal Ready master 6d19h v1.13.4 172.31.12.119 XX.XXX.XXX.XX Ubuntu 16.04.5 LTS 4.4.0-1077-aws docker://18.6.3
ip-172-31-3-106.ap-south-1.compute.internal Ready <none> 6d19h v1.13.4 172.31.3.106 XX.XXX.XX.XXX Ubuntu 16.04.5 LTS 4.4.0-1077-aws docker://18.6.3
Can someone point out what I am missing here, given that everything I find on the internet says a Classic ELB should be deployed automatically?
For an AWS ELB (Classic type), you must:

- Explicitly specify --cloud-provider=aws in the control-plane manifests located in /etc/kubernetes/manifests on the master node: kube-controller-manager.yaml and kube-apiserver.yaml
- Restart the services:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Add the flag alongside the other command arguments, at the bottom or top as needed. The result should look like this:
In kube-controller-manager.yaml:
spec:
containers:
- command:
- kube-controller-manager
- --cloud-provider=aws
In kube-apiserver.yaml:
spec:
containers:
- command:
- kube-apiserver
- --cloud-provider=aws
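A quick way to confirm the flag landed in both manifests, and then watch the service pick up an ELB address (assuming the default kubeadm manifest paths):

```shell
# Both static pod manifests should now mention the cloud provider flag.
grep -- '--cloud-provider=aws' \
  /etc/kubernetes/manifests/kube-apiserver.yaml \
  /etc/kubernetes/manifests/kube-controller-manager.yaml

# The kubelet restarts the static pods automatically when the manifests change;
# watch the service until EXTERNAL-IP changes from <pending> to an ELB hostname.
kubectl get svc -n ingress ingress-nginx-ingress-controller -w
```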