Terraform v1.2.8
I have a YAML config file that I use with eksctl
to create an AWS EKS cluster in an existing VPC, like this:
kind: ClusterConfig
apiVersion: eksctl.io/v1alpha5
metadata:
  name: sandbox
  region: us-east-1
  version: "1.23"
# The VPC and subnets are for the data plane, where the pods will
# ultimately be deployed.
vpc:
  id: "vpc-123456"
  clusterEndpoints:
    privateAccess: true
    publicAccess: false
  subnets:
    private:
      ...
Then I create the cluster like so:
$ eksctl create cluster -f eks-sandbox.yaml
Now I want to use Terraform instead, so I looked at the aws_eks_cluster resource and am doing this:
resource "aws_eks_cluster" "eks_cluster" {
  name     = var.cluster_name
  role_arn = var.iam_role_arn

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access  = false
    security_group_ids      = var.sg_ids
    subnet_ids              = var.subnet_ids
  }
}
But the resource doesn't let me specify an existing VPC? So when I run
$ terraform plan -out out.o
I see
  # module.k8s_cluster.aws_eks_cluster.eks_cluster will be created
  + resource "aws_eks_cluster" "eks_cluster" {
      + arn                   = (known after apply)
      + certificate_authority = (known after apply)
      + created_at            = (known after apply)
      + endpoint              = (known after apply)
      + id                    = (known after apply)
      + identity              = (known after apply)
      + name                  = "sandbox"
      + platform_version      = (known after apply)
      + role_arn              = "arn:aws:iam::1234567890:role/EKSClusterAdminRole"
      + status                = (known after apply)
      + tags_all              = (known after apply)
      + version               = (known after apply)

      + kubernetes_network_config {
          + ip_family         = (known after apply)
          + service_ipv4_cidr = (known after apply)
          + service_ipv6_cidr = (known after apply)
        }

      + vpc_config {
          + cluster_security_group_id = (known after apply)
          + endpoint_private_access   = true
          + endpoint_public_access    = false
          + public_access_cidrs       = (known after apply)
          + security_group_ids        = (known after apply)
          + subnet_ids                = [
              + "subnet-1234567890",
              + "subnet-2345678901",
              + "subnet-3456789012",
              + "subnet-4567890123",
              + "subnet-5678901234",
            ]
          + vpc_id                    = (known after apply)
        }
    }
See the vpc_id
in the output? But I don't want it to create a VPC for me; I want it to use an existing VPC, as in my YAML config file. Can I create an AWS EKS cluster in Terraform using an existing VPC? TIA
Unfortunately, the aws_eks_cluster
resource has no vpc_id
argument, only an attribute of that name. When you specify subnet_ids
, the cluster is created in the VPC that those subnets belong to.
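In other words, you point Terraform at the existing VPC indirectly, through its subnets. A minimal sketch of that approach, using the aws_subnets data source to look up the subnet IDs of an existing VPC (the VPC ID and the Tier tag filter here are assumptions for illustration):

    # Look up the private subnets of the existing VPC.
    data "aws_subnets" "private" {
      filter {
        name   = "vpc-id"
        values = ["vpc-123456"]   # your existing VPC's ID
      }

      # Hypothetical tag; use whatever marks your private subnets.
      tags = {
        Tier = "private"
      }
    }

    resource "aws_eks_cluster" "eks_cluster" {
      name     = var.cluster_name
      role_arn = var.iam_role_arn

      vpc_config {
        endpoint_private_access = true
        endpoint_public_access  = false
        security_group_ids      = var.sg_ids
        subnet_ids              = data.aws_subnets.private.ids
      }
    }

After apply, the computed vpc_id attribute on the cluster should come back as the existing VPC's ID, since the control plane is attached to the subnets you supplied rather than to a newly created VPC.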