Error: The configmap "aws-auth" does not exist



I have seen other examples of this error on here, but none of them solved my problem, so I'm hoping someone can explain where I'm going wrong.

I am using the AWS EKS module: https://github.com/terraform-aws-modules

I started from the module's default Terraform example. I understand I'll need to trim some of the code, since I don't want all of it, but it is still valid code.

Here is my module code:
module "eks" {
source                         = "terraform-aws-modules/eks/aws"
cluster_name                   = local.name
cluster_version                = "1.24"
cluster_endpoint_public_access = true
cluster_addons = {
coredns = {
preserve    = true
most_recent = true
timeouts = {
create = "25m"
delete = "10m"
}
}
kube-proxy = {
most_recent = true
}
vpc-cni = {
most_recent = true
}
}
# External encryption key
create_kms_key = false
cluster_encryption_config = {
resources        = ["secrets"]
provider_key_arn = module.kms.key_arn
}
iam_role_additional_policies = {
additional = aws_iam_policy.additional.arn
}
vpc_id                   = data.aws_ssm_parameter.vpc_id.value
subnet_ids               = split(", ", data.aws_ssm_parameter.private_subnets.value)
control_plane_subnet_ids = split(", ", data.aws_ssm_parameter.intra_subnets.value)
# Extend cluster security group rules
cluster_security_group_additional_rules = {
ingress_nodes_ephemeral_ports_tcp = {
description                = "Nodes on ephemeral ports"
protocol                   = "tcp"
from_port                  = 1025
to_port                    = 65535
type                       = "ingress"
source_node_security_group = true
}
# Test: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/2319
ingress_source_security_group_id = {
description              = "Ingress from another computed security group"
protocol                 = "tcp"
from_port                = 22
to_port                  = 22
type                     = "ingress"
source_security_group_id = aws_security_group.additional.id
}
}
# Extend node-to-node security group rules
node_security_group_additional_rules = {
ingress_self_all = {
description = "Node to node all ports/protocols"
protocol    = "-1"
from_port   = 0
to_port     = 0
type        = "ingress"
self        = true
}
# Test: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/2319
ingress_source_security_group_id = {
description              = "Ingress from another computed security group"
protocol                 = "tcp"
from_port                = 22
to_port                  = 22
type                     = "ingress"
source_security_group_id = aws_security_group.additional.id
}
}
# EKS Managed Node Group(s)
eks_managed_node_group_defaults = {
ami_type                              = "AL2_x86_64"
instance_types                        = [var.micro_ec2_instance, var.small_ec2_instance, var.medium_ec2_instance, var.large_ec2_instance]
attach_cluster_primary_security_group = true
vpc_security_group_ids                = [aws_security_group.additional.id]
iam_role_additional_policies = {
additional = aws_iam_policy.additional.arn
}
}
eks_managed_node_groups = {
blue = {}
green = {
min_size       = 1
max_size       = 10
desired_size   = 1
instance_types = ["t3.large"]
capacity_type  = "SPOT"
labels = {
Environment = "test"
GithubRepo  = "terraform-aws-eks"
GithubOrg   = "terraform-aws-modules"
}
taints = {
dedicated = {
key    = "dedicated"
value  = "gpuGroup"
effect = "NO_SCHEDULE"
}
}
update_config = {
max_unavailable_percentage = 33 # or set `max_unavailable`
}
tags = {
ExtraTag = "example"
}
}
}
# aws-auth configmap
manage_aws_auth_configmap = true
aws_auth_node_iam_role_arns_non_windows = [
module.eks_managed_node_group.iam_role_arn,
#  module.self_managed_node_group.iam_role_arn,
]
aws_auth_roles = var.map_roles
tags = local.tags
}
module "eks_managed_node_group" {
source                            = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
name                              = "separate-eks-mng"
cluster_name                      = module.eks.cluster_name
cluster_version                   = module.eks.cluster_version
subnet_ids                        = split(", ", data.aws_ssm_parameter.private_subnets.value)
cluster_primary_security_group_id = module.eks.cluster_primary_security_group_id
vpc_security_group_ids = [
module.eks.cluster_security_group_id,
]
ami_type = "BOTTLEROCKET_x86_64"
platform = "bottlerocket"
# this will get added to what AWS provides
bootstrap_extra_args = <<-EOT
# extra args added
[settings.kernel]
lockdown = "integrity"
[settings.kubernetes.node-labels]
"label1" = "foo"
"label2" = "bar"
EOT
tags                 = merge(local.tags, { Separate = "eks-managed-node-group" })
}
module "disabled_eks" {
source = "terraform-aws-modules/eks/aws"
create = false
}
module "disabled_fargate_profile" {
source = "terraform-aws-modules/eks/aws//modules/fargate-profile"
create = false
}
module "disabled_eks_managed_node_group" {
source = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
create = false
}
module "disabled_self_managed_node_group" {
source = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
create = false
}
module "kms" {
source                = "terraform-aws-modules/kms/aws"
version               = "1.1.0"
aliases               = ["eks/${var.project}-${var.application}-${local.name}"]
description           = "${local.name} cluster encryption key"
enable_default_policy = true
key_owners            = [data.aws_caller_identity.current.arn]
tags                  = local.tags
}

I understand the problem is here:

aws_auth_roles = var.map_roles

I pass this in from my variables:

variable "map_roles" {
description = "List of role maps to add to the aws_auth configmap"
type = list(object({
rolearn  = string
username = string
groups   = list(string)
}))
default = []
}

The value of this variable is as follows:

map_roles = [
  {
    rolearn  = "arn:aws:iam::*account_id*:role/QADevelopersRole"
    username = "QADevelopersRole"
    groups   = ["system:masters"]
  },
  {
    rolearn  = "arn:aws:iam::*account_id*:role/QAAdminRole"
    username = "QAAdminRole"
    groups   = ["system:masters"]
  },
  {
    rolearn  = "arn:aws:iam::*account_id*:role/QASreRole"
    username = "QASreRole"
    groups   = ["system:masters"]
  },
  {
    rolearn  = "arn:aws:iam::*account_id*:role/sandbox-eu-west-2-ContinuousDeliverySlavesRole"
    username = "sandbox-eu-west-2-ContinuousDeliverySlavesRole"
    groups   = ["system:masters"]
  }
]

Note: I have deliberately redacted my account ID.

I can run terraform init just fine. However, when I run terraform plan, I get the following error:

Plan: 1 to add, 1 to change, 0 to destroy.
module.kms.aws_kms_key.this[0]: Modifying... [id=51a16f6d-fd34-4f79-b750-1e5aee6cb1a8]
module.eks.kubernetes_config_map_v1_data.aws_auth[0]: Creating...
module.kms.aws_kms_key.this[0]: Modifications complete after 5s [id=51a16f6d-fd34-4f79-b750-1e5aee6cb1a8]
Error: The configmap "aws-auth" does not exist
with module.eks.kubernetes_config_map_v1_data.aws_auth[0],
on .terraform/modules/eks/main.tf line 527, in resource "kubernetes_config_map_v1_data" "aws_auth":
527: resource "kubernetes_config_map_v1_data" "aws_auth" {
make: *** [deploy] Error 1

My understanding was that the module should automatically create all of the required Kubernetes configuration, including the configmap.

Running the apply generates the following output:

Validating terraform code.
Success! The configuration is valid.
Deploying the infrastructure on ** 20_eks_cluster **
module.eks.module.kms.data.aws_partition.current: Reading...
data.aws_ssm_parameter.public_subnet_2: Reading...
data.aws_ssm_parameter.public_subnet_3: Reading...
aws_iam_policy.additional: Refreshing state... [id=arn:aws:iam::*accountid*:policy/qa-k8s-poc-Cluster-additional]
module.disabled_eks.module.kms.data.aws_caller_identity.current: Reading...
data.aws_ssm_parameter.public_subnets: Reading...
module.eks.data.aws_caller_identity.current: Reading...
module.eks.data.aws_partition.current: Reading...
data.aws_ssm_parameter.intra_subnet_2: Reading...
module.eks.module.kms.data.aws_partition.current: Read complete after 0s [id=aws]
data.aws_ssm_parameter.vpc_id: Reading...
module.eks.data.aws_partition.current: Read complete after 0s [id=aws]
data.aws_ssm_parameter.private_subnets: Reading...
module.kms.data.aws_partition.current: Reading...
module.kms.data.aws_partition.current: Read complete after 0s [id=aws]
module.eks.module.kms.data.aws_caller_identity.current: Reading...
data.aws_ssm_parameter.public_subnets: Read complete after 0s [id=/config/qa-k8s-poc/sandbox/vpc-infrastructure/public-subnets]
data.aws_ssm_parameter.public_subnet_3: Read complete after 0s [id=/config/qa-k8s-poc/sandbox/vpc-infrastructure/public-subnet-3]
module.disabled_eks_managed_node_group.data.aws_caller_identity.current: Reading...
data.aws_ssm_parameter.private_subnet_1: Reading...
data.aws_ssm_parameter.intra_subnet_2: Read complete after 0s [id=/config/qa-k8s-poc/sandbox/vpc-infrastructure/intra-subnet-2]
module.disabled_self_managed_node_group.data.aws_caller_identity.current: Reading...
data.aws_ssm_parameter.private_subnets: Read complete after 0s [id=/config/qa-k8s-poc/sandbox/vpc-infrastructure/public-subnets]
data.aws_ssm_parameter.vpc_id: Read complete after 0s [id=/config/qa-k8s-poc/sandbox/vpc-infrastructure/vpc-id]
data.aws_caller_identity.current: Reading...
data.aws_ssm_parameter.public_subnet_2: Read complete after 0s [id=/config/qa-k8s-poc/sandbox/vpc-infrastructure/public-subnet-2]
module.disabled_self_managed_node_group.data.aws_partition.current: Reading...
module.eks.module.eks_managed_node_group["blue"].data.aws_caller_identity.current: Reading...
module.disabled_self_managed_node_group.data.aws_partition.current: Read complete after 0s [id=aws]
data.aws_ssm_parameter.public_subnet_1: Reading...
data.aws_ssm_parameter.private_subnet_1: Read complete after 0s [id=/config/qa-k8s-poc/sandbox/vpc-infrastructure/private-subnet-1]
data.aws_availability_zones.available: Reading...
data.aws_ssm_parameter.public_subnet_1: Read complete after 0s [id=/config/qa-k8s-poc/sandbox/vpc-infrastructure/public-subnet-1]
module.disabled_fargate_profile.data.aws_partition.current: Reading...
module.disabled_fargate_profile.data.aws_partition.current: Read complete after 0s [id=aws]
module.eks.module.eks_managed_node_group["green"].data.aws_caller_identity.current: Reading...
data.aws_availability_zones.available: Read complete after 0s [id=eu-west-2]
module.disabled_fargate_profile.data.aws_caller_identity.current: Reading...
module.eks.module.kms.data.aws_caller_identity.current: Read complete after 0s [id=*accountid*]
data.aws_ssm_parameter.intra_subnets: Reading...
module.eks.data.aws_caller_identity.current: Read complete after 0s [id=*accountid*]
module.disabled_eks_managed_node_group.data.aws_partition.current: Reading...
module.disabled_eks_managed_node_group.data.aws_partition.current: Read complete after 0s [id=aws]
module.disabled_eks.data.aws_caller_identity.current: Reading...
data.aws_ssm_parameter.intra_subnets: Read complete after 0s [id=/config/qa-k8s-poc/sandbox/vpc-infrastructure/intra-subnets]
module.kms.data.aws_caller_identity.current: Reading...
module.disabled_eks.module.kms.data.aws_caller_identity.current: Read complete after 0s [id=*accountid*]
module.eks.module.eks_managed_node_group["green"].data.aws_partition.current: Reading...
module.eks.module.eks_managed_node_group["green"].data.aws_partition.current: Read complete after 0s [id=aws]
module.eks.module.eks_managed_node_group["blue"].data.aws_partition.current: Reading...
module.eks.module.eks_managed_node_group["blue"].data.aws_partition.current: Read complete after 0s [id=aws]
module.disabled_eks.data.aws_partition.current: Reading...
module.disabled_eks.data.aws_partition.current: Read complete after 0s [id=aws]
data.aws_ssm_parameter.private_subnet_2: Reading...
module.disabled_eks_managed_node_group.data.aws_caller_identity.current: Read complete after 0s [id=*accountid*]
module.disabled_eks.module.kms.data.aws_partition.current: Reading...
module.disabled_eks.module.kms.data.aws_partition.current: Read complete after 0s [id=aws]
data.aws_ssm_parameter.intra_subnet_1: Reading...
data.aws_ssm_parameter.private_subnet_2: Read complete after 0s [id=/config/qa-k8s-poc/sandbox/vpc-infrastructure/private-subnet-2]
data.aws_ssm_parameter.private_subnet_3: Reading...
data.aws_caller_identity.current: Read complete after 1s [id=*accountid*]
module.eks.aws_cloudwatch_log_group.this[0]: Refreshing state... [id=/aws/eks/qa-k8s-poc-Cluster/cluster]
module.disabled_self_managed_node_group.data.aws_caller_identity.current: Read complete after 1s [id=*accountid*]
aws_security_group.additional: Refreshing state... [id=sg-0ddadd651c07cad73]
data.aws_ssm_parameter.intra_subnet_1: Read complete after 1s [id=/config/qa-k8s-poc/sandbox/vpc-infrastructure/intra-subnet-1]
module.eks.data.aws_iam_session_context.current: Reading...
data.aws_ssm_parameter.private_subnet_3: Read complete after 1s [id=/config/qa-k8s-poc/sandbox/vpc-infrastructure/private-subnet-3]
module.eks.data.aws_iam_policy_document.assume_role_policy[0]: Reading...
module.eks.data.aws_iam_policy_document.assume_role_policy[0]: Read complete after 0s [id=2764486067]
module.eks.aws_security_group.node[0]: Refreshing state... [id=sg-04759c13ed6856cc0]
module.disabled_fargate_profile.data.aws_caller_identity.current: Read complete after 1s [id=*accountid*]
module.eks.aws_security_group.cluster[0]: Refreshing state... [id=sg-0933687d3eb7462ef]
module.eks.module.eks_managed_node_group["blue"].data.aws_caller_identity.current: Read complete after 1s [id=*accountid*]
module.eks.aws_iam_role.this[0]: Refreshing state... [id=qa-k8s-poc-Cluster-cluster-20230110184259863000000002]
module.eks.module.eks_managed_node_group["green"].data.aws_caller_identity.current: Read complete after 1s [id=*accountid*]
module.disabled_eks.data.aws_caller_identity.current: Read complete after 1s [id=*accountid*]
module.disabled_eks.data.aws_iam_session_context.current: Reading...
module.kms.data.aws_caller_identity.current: Read complete after 1s [id=*accountid*]
module.eks.module.eks_managed_node_group["blue"].data.aws_iam_policy_document.assume_role_policy[0]: Reading...
module.eks.module.eks_managed_node_group["green"].data.aws_iam_policy_document.assume_role_policy[0]: Reading...
module.kms.data.aws_iam_policy_document.this[0]: Reading...
module.eks.module.eks_managed_node_group["green"].data.aws_iam_policy_document.assume_role_policy[0]: Read complete after 0s [id=2560088296]
module.eks.module.eks_managed_node_group["blue"].data.aws_iam_policy_document.assume_role_policy[0]: Read complete after 0s [id=2560088296]
module.kms.data.aws_iam_policy_document.this[0]: Read complete after 0s [id=1077172153]
module.kms.aws_kms_key.this[0]: Refreshing state... [id=51a16f6d-fd34-4f79-b750-1e5aee6cb1a8]
module.eks.module.eks_managed_node_group["green"].aws_iam_role.this[0]: Refreshing state... [id=green-eks-node-group-2023011018430547080000000c]
module.eks.module.eks_managed_node_group["blue"].aws_iam_role.this[0]: Refreshing state... [id=blue-eks-node-group-2023011018430548410000000d]
module.eks.aws_security_group_rule.cluster["ingress_nodes_ephemeral_ports_tcp"]: Refreshing state... [id=sgrule-200464832]
module.eks.aws_security_group_rule.cluster["ingress_nodes_443"]: Refreshing state... [id=sgrule-902431085]
module.eks.aws_security_group_rule.cluster["ingress_source_security_group_id"]: Refreshing state... [id=sgrule-3215013355]
module.eks.aws_security_group_rule.node["ingress_nodes_ephemeral"]: Refreshing state... [id=sgrule-737063987]
module.eks.aws_security_group_rule.node["ingress_cluster_4443_webhook"]: Refreshing state... [id=sgrule-1308494228]
module.eks.data.aws_iam_session_context.current: Read complete after 0s [id=arn:aws:sts::*accountid*:assumed-role/QASreRole/aws-go-sdk-1673428067877551000]
module.eks.aws_security_group_rule.node["egress_all"]: Refreshing state... [id=sgrule-1662029283]
module.eks.aws_security_group_rule.node["ingress_source_security_group_id"]: Refreshing state... [id=sgrule-1683737215]
module.eks.aws_security_group_rule.node["ingress_cluster_kubelet"]: Refreshing state... [id=sgrule-3554252842]
module.eks.aws_security_group_rule.node["ingress_self_coredns_tcp"]: Refreshing state... [id=sgrule-1081401244]
module.eks.aws_security_group_rule.node["ingress_cluster_443"]: Refreshing state... [id=sgrule-1835219689]
module.eks.aws_security_group_rule.node["ingress_cluster_9443_webhook"]: Refreshing state... [id=sgrule-4081148392]
module.eks.aws_security_group_rule.node["ingress_self_coredns_udp"]: Refreshing state... [id=sgrule-3828055155]
module.eks.aws_security_group_rule.node["ingress_self_all"]: Refreshing state... [id=sgrule-424609732]
module.eks.aws_security_group_rule.node["ingress_cluster_8443_webhook"]: Refreshing state... [id=sgrule-2548060707]
module.disabled_eks.data.aws_iam_session_context.current: Read complete after 0s [id=arn:aws:sts::*accountid*:assumed-role/QASreRole/aws-go-sdk-1673428067877551000]
module.kms.aws_kms_alias.this["eks/qa-k8s-poc-aws-alb-controller-qa-k8s-poc-Cluster"]: Refreshing state... [id=alias/eks/qa-k8s-poc-aws-alb-controller-qa-k8s-poc-Cluster]
module.eks.aws_iam_role_policy_attachment.additional["additional"]: Refreshing state... [id=qa-k8s-poc-Cluster-cluster-20230110184259863000000002-2023011018430070560000000a]
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSVPCResourceController"]: Refreshing state... [id=qa-k8s-poc-Cluster-cluster-20230110184259863000000002-20230110184300694600000009]
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSClusterPolicy"]: Refreshing state... [id=qa-k8s-poc-Cluster-cluster-20230110184259863000000002-2023011018430072490000000b]
module.eks.aws_iam_policy.cluster_encryption[0]: Refreshing state... [id=arn:aws:iam::*accountid*:policy/qa-k8s-poc-Cluster-cluster-ClusterEncryption20230110184313822500000016]
module.eks.module.eks_managed_node_group["green"].aws_iam_role_policy_attachment.additional["additional"]: Refreshing state... [id=green-eks-node-group-2023011018430547080000000c-20230110184306195800000011]
module.eks.module.eks_managed_node_group["blue"].aws_iam_role_policy_attachment.additional["additional"]: Refreshing state... [id=blue-eks-node-group-2023011018430548410000000d-20230110184306298500000012]
module.eks.module.eks_managed_node_group["green"].aws_iam_role_policy_attachment.this["arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"]: Refreshing state... [id=green-eks-node-group-2023011018430547080000000c-2023011018430617070000000f]
module.eks.module.eks_managed_node_group["green"].aws_iam_role_policy_attachment.this["arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"]: Refreshing state... [id=green-eks-node-group-2023011018430547080000000c-2023011018430616410000000e]
module.eks.module.eks_managed_node_group["blue"].aws_iam_role_policy_attachment.this["arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"]: Refreshing state... [id=blue-eks-node-group-2023011018430548410000000d-20230110184306299300000013]
module.eks.module.eks_managed_node_group["blue"].aws_iam_role_policy_attachment.this["arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"]: Refreshing state... [id=blue-eks-node-group-2023011018430548410000000d-20230110184306309800000014]
module.eks.module.eks_managed_node_group["blue"].aws_iam_role_policy_attachment.this["arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"]: Refreshing state... [id=blue-eks-node-group-2023011018430548410000000d-20230110184306324700000015]
module.eks.module.eks_managed_node_group["green"].aws_iam_role_policy_attachment.this["arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"]: Refreshing state... [id=green-eks-node-group-2023011018430547080000000c-20230110184306175400000010]
module.eks.aws_eks_cluster.this[0]: Refreshing state... [id=qa-k8s-poc-Cluster]
module.eks.aws_iam_role_policy_attachment.cluster_encryption[0]: Refreshing state... [id=qa-k8s-poc-Cluster-cluster-20230110184259863000000002-20230110184314107500000017]
module.eks.aws_ec2_tag.cluster_primary_security_group["Environment"]: Refreshing state... [id=sg-091358fdcf58991cf,Environment]
module.eks.aws_ec2_tag.cluster_primary_security_group["Creator"]: Refreshing state... [id=sg-091358fdcf58991cf,Creator]
module.eks.aws_ec2_tag.cluster_primary_security_group["Project"]: Refreshing state... [id=sg-091358fdcf58991cf,Project]
module.eks.data.aws_eks_addon_version.this["coredns"]: Reading...
module.eks.data.aws_eks_addon_version.this["kube-proxy"]: Reading...
module.eks.data.aws_eks_addon_version.this["vpc-cni"]: Reading...
module.eks.aws_ec2_tag.cluster_primary_security_group["Application"]: Refreshing state... [id=sg-091358fdcf58991cf,Application]
module.eks.data.tls_certificate.this[0]: Reading...
module.eks.module.eks_managed_node_group["blue"].aws_launch_template.this[0]: Refreshing state... [id=lt-029f98658fd4d579c]
module.eks.module.eks_managed_node_group["green"].aws_launch_template.this[0]: Refreshing state... [id=lt-042af07b1378b110d]
module.eks.data.tls_certificate.this[0]: Read complete after 0s [id=0fec62e3a5b6c39cf9c786f5984abfdb03f2915d]
module.eks.aws_iam_openid_connect_provider.oidc_provider[0]: Refreshing state... [id=arn:aws:iam::*accountid*:oidc-provider/oidc.eks.eu-west-2.amazonaws.com/id/6AC4517F47D7AABAF1E426B0CB58641C]
module.eks.module.eks_managed_node_group["blue"].aws_eks_node_group.this[0]: Refreshing state... [id=qa-k8s-poc-Cluster:blue-20230110185401924400000020]
module.eks.module.eks_managed_node_group["green"].aws_eks_node_group.this[0]: Refreshing state... [id=qa-k8s-poc-Cluster:green-20230110185401925100000022]
module.eks.data.aws_eks_addon_version.this["vpc-cni"]: Read complete after 0s [id=vpc-cni]
module.eks.data.aws_eks_addon_version.this["kube-proxy"]: Read complete after 0s [id=kube-proxy]
module.eks.data.aws_eks_addon_version.this["coredns"]: Read complete after 0s [id=coredns]
module.eks.aws_eks_addon.this["vpc-cni"]: Refreshing state... [id=qa-k8s-poc-Cluster:vpc-cni]
module.eks.aws_eks_addon.this["kube-proxy"]: Refreshing state... [id=qa-k8s-poc-Cluster:kube-proxy]
module.eks.aws_eks_addon.this["coredns"]: Refreshing state... [id=qa-k8s-poc-Cluster:coredns]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
~ update in-place
Terraform will perform the following actions:
# module.eks.kubernetes_config_map_v1_data.aws_auth[0] will be created
+ resource "kubernetes_config_map_v1_data" "aws_auth" {
+ data          = {
+ "mapAccounts" = jsonencode([])
+ "mapRoles"    = <<-EOT
- "groups":
- "system:bootstrappers"
- "system:nodes"
"rolearn": "arn:aws:iam::*accountid*:role/blue-eks-node-group-2023011018430548410000000d"
"username": "system:node:{{EC2PrivateDNSName}}"
- "groups":
- "system:bootstrappers"
- "system:nodes"
"rolearn": "arn:aws:iam::*accountid*:role/green-eks-node-group-2023011018430547080000000c"
"username": "system:node:{{EC2PrivateDNSName}}"
- "groups":
- "system:masters"
"rolearn": "arn:aws:iam::*accountid*:role/QADevelopersRole"
"username": "QADevelopersRole"
- "groups":
- "system:masters"
"rolearn": "arn:aws:iam::*accountid*:role/QAAdminRole"
"username": "QAAdminRole"
- "groups":
- "system:masters"
"rolearn": "arn:aws:iam::*accountid*:role/QASreRole"
"username": "QASreRole"
- "groups":
- "system:masters"
"rolearn": "arn:aws:iam::*accountid*:role/sandbox-eu-west-2-ContinuousDeliverySlavesRole"
"username": "sandbox-eu-west-2-ContinuousDeliverySlavesRole"
EOT
+ "mapUsers"    = jsonencode([])
}
+ field_manager = "Terraform"
+ force         = true
+ id            = (known after apply)
+ metadata {
+ name      = "aws-auth"
+ namespace = "kube-system"
}
}
# module.kms.aws_kms_key.this[0] will be updated in-place
~ resource "aws_kms_key" "this" {
id                                 = "51a16f6d-fd34-4f79-b750-1e5aee6cb1a8"
~ policy                             = jsonencode(
~ {
~ Statement = [
{
Action    = "kms:*"
Effect    = "Allow"
Principal = {
AWS = "arn:aws:iam::*accountid*:root"
}
Resource  = "*"
Sid       = "Default"
},
~ {
~ Principal = {
~ AWS = "arn:aws:sts::*accountid*:assumed-role/QASreRole/aws-go-sdk-1673381797922395000" -> "arn:aws:sts::*accountid*:assumed-role/QASreRole/aws-go-sdk-1673428067877551000"
}
# (4 unchanged elements hidden)
},
]
# (1 unchanged element hidden)
}
)
tags                               = {
"Application" = "aws-alb-controller"
"Creator"     = "Terraform via sandbox-qa-k8s-poc-aws-alb-controller"
"Environment" = "sandbox"
"Project"     = "qa-k8s-poc"
}
# (10 unchanged attributes hidden)
}
Plan: 1 to add, 1 to change, 0 to destroy.
module.kms.aws_kms_key.this[0]: Modifying... [id=51a16f6d-fd34-4f79-b750-1e5aee6cb1a8]
module.eks.kubernetes_config_map_v1_data.aws_auth[0]: Creating...
module.kms.aws_kms_key.this[0]: Modifications complete after 4s [id=51a16f6d-fd34-4f79-b750-1e5aee6cb1a8]

Error: The configmap "aws-auth" does not exist
with module.eks.kubernetes_config_map_v1_data.aws_auth[0],
on .terraform/modules/eks/main.tf line 527, in resource "kubernetes_config_map_v1_data" "aws_auth":
527: resource "kubernetes_config_map_v1_data" "aws_auth" {

What am I missing here? Many thanks :)

You need to check the permissions of your Kubernetes provider. The AWS provider in Terraform only has the privileges needed to create the nodes and control-plane components; anything that runs inside Kubernetes itself goes through the Kubernetes provider. If you are executing your Terraform scripts with an assumed role, the following may help:

provider "kubernetes" {
host                   = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command     = "aws"
args = [
"eks",
"get-token",
"--role-arn",
local.assume_role_arn,
"--cluster-name",
module.eks.cluster_name
]
}
}
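
If you are not assuming a separate role, a plainer configuration should also work. This is a minimal sketch following the pattern documented in the terraform-aws-eks examples; it relies on whatever credentials the AWS CLI resolves from its default chain:

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # Without --role-arn, the token is issued for whichever identity
    # the AWS CLI resolves from its default credential chain.
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

Either way, the identity the token resolves to must already have access to the cluster (for example, the identity that created it); otherwise the provider cannot read or patch the aws-auth configmap.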

Another scenario in which this error occurs is when the underlying aws-cli is an older version, or is not installed at all. I ran into this error while deploying Kubernetes workloads with Terraform via GitHub Actions.
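
Since the exec block shells out to the AWS CLI, you can sanity-check the runner with the same command the provider executes. A quick check (the cluster name below is taken from the logs above; note that aws eks get-token requires AWS CLI v1.16.156 or newer, or any v2 release):

aws --version
aws eks get-token --cluster-name qa-k8s-poc-Cluster

If the second command fails, or aws is not found at all, the Kubernetes provider will fail in the same way.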
