How to configure an ALB for EKS with Terraform

I'm having trouble getting EKS to expose an IP address to the public internet. Do I need to set up an ALB myself, or do you get one for free as part of the EKS cluster? If I have to do it myself, should I define it in my Terraform template files or in my Kubernetes object YAML?

Here is the EKS cluster I've defined in Terraform, along with the permissions I believe it needs.

// eks.tf
resource "aws_iam_role" "eks_cluster_role" {
  name = "${local.env_name}-eks-cluster-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "eks.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster_role.name
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSVPCResourceController" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
  role       = aws_iam_role.eks_cluster_role.name
}

resource "aws_kms_key" "eks_key" {
  description             = "EKS KMS Key"
  deletion_window_in_days = 7
  enable_key_rotation     = true
  tags = {
    Environment = local.env_name
    Service     = "EKS"
  }
}

resource "aws_kms_alias" "eks_key_alias" {
  target_key_id = aws_kms_key.eks_key.id
  name          = "alias/eks-kms-key-${local.env_name}"
}

resource "aws_eks_cluster" "eks_cluster" {
  name                      = "${local.env_name}-eks-cluster"
  role_arn                  = aws_iam_role.eks_cluster_role.arn
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  vpc_config {
    subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
  }
  encryption_config {
    resources = ["secrets"]
    provider {
      key_arn = aws_kms_key.eks_key.arn
    }
  }
  tags = {
    Environment = local.env_name
  }
}

resource "aws_iam_role" "eks_node_group_role" {
  name = "${local.env_name}-eks-node-group"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "ec2.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_eks_node_group" "eks_node_group" {
  instance_types  = var.node_group_instance_types
  node_group_name = "${local.env_name}-eks-node-group"
  node_role_arn   = aws_iam_role.eks_node_group_role.arn
  cluster_name    = aws_eks_cluster.eks_cluster.name
  subnet_ids      = [aws_subnet.private_a.id, aws_subnet.private_b.id]
  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }
  // Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  // Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.eks-node-group-AmazonEC2ContainerRegistryReadOnly,
    aws_iam_role_policy_attachment.eks-node-group-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.eks-node-group-AmazonEKSWorkerNodePolicy,
  ]
}

And here is my Kubernetes object YAML:

# hello-kubernetes.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.9
        ports:
        - containerPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  backend:
    serviceName: hello-kubernetes
    servicePort: 80

I've run terraform apply and the cluster is up and running. I've installed eksctl and kubectl and run kubectl apply -f hello-kubernetes.yaml. The pods, service, and ingress all appear to be running fine.

$ kubectl get pods
NAME                                READY   STATUS             RESTARTS   AGE
hello-kubernetes-6cb7cd595b-25bd9   1/1     Running            0          6h13m
hello-kubernetes-6cb7cd595b-lccdj   1/1     Running            0          6h13m
hello-kubernetes-6cb7cd595b-snwvr   1/1     Running            0          6h13m
$ kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-kubernetes   LoadBalancer   172.20.102.37   <pending>     80:32086/TCP   6h15m
$ kubectl get ingresses
NAME            CLASS    HOSTS   ADDRESS   PORTS   AGE
hello-ingress   <none>   *                 80      3h45m

What am I missing, and which file does it belong in?

You need to install the AWS Load Balancer Controller by following its installation instructions. First you create the IAM role and permissions, which can be done with Terraform; then you apply the Kubernetes YAML that installs the controller into the cluster, which can be done with Helm or kubectl.
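As a sketch of the Helm route (assuming the service account and its IAM policy were already created via IRSA, and substituting your own cluster name):

```shell
# Add the official eks-charts Helm repository and install the controller.
helm repo add eks https://aws.github.io/eks-charts
helm repo update

# clusterName must match the EKS cluster; the service account is assumed
# to exist already with the controller's IAM policy attached.
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-eks-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```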

You also need to be aware of the subnet tags that are required for the controller to create public (internet-facing) or private (internal) load balancers.
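For illustration (a sketch; the public subnet resource and its CIDR/AZ values are hypothetical, but the tag keys are the ones the controller discovers subnets by), note that an internet-facing ALB needs public subnets, which the question's cluster does not yet have:

```hcl
// Public subnets are required for an internet-facing ALB.
// The controller finds them via the kubernetes.io/role/elb tag;
// private subnets use kubernetes.io/role/internal-elb instead.
resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.main.id // assumes a VPC named "main"
  cidr_block              = "10.0.10.0/24"  // hypothetical CIDR
  availability_zone       = "us-east-1a"    // hypothetical AZ
  map_public_ip_on_launch = true
  tags = {
    "kubernetes.io/cluster/${local.env_name}-eks-cluster" = "shared"
    "kubernetes.io/role/elb"                              = "1"
  }
}
```

The existing private subnets (private_a, private_b) would carry `"kubernetes.io/role/internal-elb" = "1"` instead, plus the same cluster ownership tag.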

The usual approach is to place an ALB in front of the EKS cluster and redirect traffic to it, managed by the ALB Ingress Controller (now the AWS Load Balancer Controller). The controller handles the communication between the cluster and the ALB, and the setup is quite straightforward; see the AWS documentation:

EKS w/ALB
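Once the controller is running, the Ingress itself requests the ALB through annotations. A minimal sketch against the question's manifest (the annotation keys are the ones the controller documents; everything else matches the original Ingress):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    # Tell the controller to reconcile this Ingress into an ALB.
    kubernetes.io/ingress.class: alb
    # internet-facing requires tagged public subnets; use "internal" otherwise.
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Route directly to pod IPs (works with pods on private subnets).
    alb.ingress.kubernetes.io/target-type: ip
spec:
  backend:
    serviceName: hello-kubernetes
    servicePort: 80
```

With target-type ip, the Service can also be a plain ClusterIP; it no longer needs type LoadBalancer.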

If an ALB doesn't fit your application's needs, another solution may be to use the NGINX Ingress Controller with an NLB, as described in the following article:

NGINX w/NLB

I ran into this situation as well: after all the setup, I couldn't see the ingress address. The best way to debug this is to check the ingress controller's logs. You can do that as follows:

Get the ingress controller's pod name with: kubectl get po -n kube-system. Then check that pod's logs with: kubectl logs <pod_name> -n kube-system. This will point you to the exact reason why you aren't seeing an address.

If you don't find any pod running under the ingress controller's name, then you have to create the ingress controller first.
