Goal
I want to deploy AWS EKS using Fargate. I have successfully deployed using a node_group. However, when I switch to Fargate, the pods all appear to be stuck in the Pending state.
What my current code looks like
I am provisioning with Terraform (not necessarily looking for a Terraform-specific answer). This is how I create the EKS cluster:
module "eks_cluster" {
source = "terraform-aws-modules/eks/aws"
version = "13.2.1"
cluster_name = "${var.project_name}-${var.env_name}"
cluster_version = var.cluster_version
vpc_id = var.vpc_id
cluster_enabled_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
enable_irsa = true
subnets = concat(var.private_subnet_ids, var.public_subnet_ids)
create_fargate_pod_execution_role = true
write_kubeconfig = false
fargate_pod_execution_role_name = "${var.project_name}-role"
# Assigning worker groups
node_groups = {
my_nodes = {
desired_capacity = 1
max_capacity = 1
min_capacity = 1
instance_type = var.nodes_instance_type
subnets = var.private_subnet_ids
}
}
}
This is how I provision the Fargate profile:
# Create EKS Fargate profile
resource "aws_eks_fargate_profile" "fargate_profile" {
  cluster_name           = module.eks_cluster.cluster_id
  fargate_profile_name   = "${var.project_name}-fargate-profile-${var.env_name}"
  pod_execution_role_arn = aws_iam_role.fargate_iam_role.arn
  subnet_ids             = var.private_subnet_ids

  selector {
    namespace = var.project_name
  }
}
This is how I create and attach the required policies:
# Create IAM Role for Fargate Profile
resource "aws_iam_role" "fargate_iam_role" {
  name                  = "${var.project_name}-fargate-role-${var.env_name}"
  force_detach_policies = true

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "eks-fargate-pods.amazonaws.com"
      }
    }]
  })
}

# Attach IAM Policy for Fargate
resource "aws_iam_role_policy_attachment" "fargate_pod_execution" {
  role       = aws_iam_role.fargate_iam_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
}
What I have tried that doesn't seem to work
Running kubectl describe pod against one of the pending pods, I get:
Events:
  Type     Reason            Age  From               Message
  ----     ------            ---- ----               -------
  Warning  FailedScheduling  14s  fargate-scheduler  Misconfigured Fargate Profile: fargate profile fargate-airflow-fargate-profile-dev blocked for new launches due to: Pod execution role is not found in auth config or does not have all required permissions for launching fargate pods.
Other things I have tried without success
I tried mapping the role through the module's own functionality, like this:
module "eks_cluster" {
source = "terraform-aws-modules/eks/aws"
version = "13.2.1"
cluster_name = "${var.project_name}-${var.env_name}"
cluster_version = var.cluster_version
vpc_id = var.vpc_id
cluster_enabled_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
enable_irsa = true
subnets = concat(var.private_subnet_ids, var.public_subnet_ids)
create_fargate_pod_execution_role = true
write_kubeconfig = false
fargate_pod_execution_role_name = "${var.project_name}-role"
# Assigning worker groups
node_groups = {
my_nodes = {
desired_capacity = 1
max_capacity = 1
min_capacity = 1
instance_type = var.nodes_instance_type
subnets = var.private_subnet_ids
}
}
# Trying to map role
map_roles = [
{
rolearn = aws_eks_fargate_profile.airflow.arn
username = aws_eks_fargate_profile.airflow.fargate_profile_name
groups = ["system:*"]
}
]
}
But my attempts were unsuccessful. How can I debug this issue? What is the reason behind it?
OK, I see your problem, and I just fixed mine as well, although I took a different approach.
In the eks_cluster module you have already told the module to create the role and given it a name, so there is no need to create a separate role resource afterwards. The module should handle it for you, including populating the aws-auth ConfigMap in Kubernetes.
In the aws_eks_fargate_profile resource, you should use the role the module provides, i.e. pod_execution_role_arn = module.eks_cluster.fargate_profile_arns[0].
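Putting that together, a minimal sketch of the corrected profile resource could look like the following. The output name is taken from the suggestion above; output names vary between versions of the terraform-aws-modules/eks module, so verify it against the module's outputs before relying on it:

# Sketch: reference the module-created pod execution role instead of a
# hand-rolled one. The output name follows the suggestion above; verify
# it for your module version.
resource "aws_eks_fargate_profile" "fargate_profile" {
  cluster_name           = module.eks_cluster.cluster_id
  fargate_profile_name   = "${var.project_name}-fargate-profile-${var.env_name}"
  pod_execution_role_arn = module.eks_cluster.fargate_profile_arns[0]
  subnet_ids             = var.private_subnet_ids

  selector {
    namespace = var.project_name
  }
}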
I think fixing those two things should resolve the issue with your first configuration attempt.
As for your second attempt: the map_roles input is meant for IAM roles, but you are passing it information about a Fargate profile. You want to do one of two things:
- Disable the role creation in the module (remove create_fargate_pod_execution_role and fargate_pod_execution_role_name), create your own IAM role instead, similar to what you did in your first configuration, and feed that information into map_roles (see the sketch after this list)
- Remove map_roles and reference the module-generated IAM role in your Fargate profile, similar to the solution for the first configuration
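For the first option, a rough sketch might look like this, reusing the aws_iam_role.fargate_iam_role you already defined. The username and groups shown are assumptions based on the aws-auth mapping EKS typically generates for Fargate pod execution roles; double-check them against the AWS documentation:

module "eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "13.2.1"

  # ... same cluster settings as before, minus
  # create_fargate_pod_execution_role and fargate_pod_execution_role_name ...

  # Map your self-managed IAM role (not the Fargate profile) into aws-auth.
  # username/groups are assumptions based on the usual Fargate mapping.
  map_roles = [
    {
      rolearn  = aws_iam_role.fargate_iam_role.arn
      username = "system:node:{{SessionName}}"
      groups   = ["system:bootstrappers", "system:nodes", "system:node-proxier"]
    }
  ]
}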
Let me know if any of that is confusing. It looks like you're really close!