Switching Terraform from 0.12.6 to 0.13.0 gives me provider["registry.terraform.io/-/null"] is required, but it has been removed



I manage state remotely in Terraform Cloud.

I have downloaded and installed the latest Terraform 0.13 CLI.

Then I removed .terraform.

Then I ran terraform init and got no errors.

Then I did:

➜ terraform apply -var-file env.auto.tfvars
Error: Provider configuration not present
To work with
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0],
after which you can remove the provider configuration again.
Releasing state lock. This may take a few moments...
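Before changing anything, it can help to see which provider addresses the configuration and the upgraded state actually reference (a sketch; the output depends on your workspace):

```shell
# List the providers required by the configuration and those recorded in
# state; after a 0.12 -> 0.13 upgrade, legacy "-/null"-style addresses
# typically show up under the state tree.
terraform providers
```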

Here is the content of modules/kubernetes/main.tf:

###################################################################################
# EKS CLUSTER                                                                     #
#                                                                                 #
# This module contains configuration for EKS cluster running various applications #
###################################################################################
module "eks_label" {
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace   = var.project
  environment = var.environment
  attributes  = [var.component]
  name        = "eks"
}

#
# Local computed variables
#
locals {
  names = {
    secretmanage_policy = "secretmanager-${var.environment}-policy"
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "eks-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = module.eks_label.id
  cluster_version = var.cluster_version
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  worker_groups = [
    {
      instance_type = var.cluster_node_type
      asg_max_size  = var.cluster_node_count
    }
  ]

  tags = var.tags
}

# Grant secretmanager access to all pods inside kubernetes cluster
# TODO:
# Adjust implementation so that the policy is template based and we only allow
# kubernetes access to a single key based on the environment.
# we should export key from modules/secrets and then grant only specific ARN access
# so that only production cluster is able to read production secrets but not dev or staging
# https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html#permissions_grant-get-secret-value-to-one-secret
resource "aws_iam_policy" "secretmanager-policy" {
  name        = local.names.secretmanage_policy
  description = "allow to read secretmanager secrets ${var.environment}"
  policy      = file("modules/kubernetes/policies/secretmanager.json")
}

#
# Attach the policy to the k8s worker role
#
resource "aws_iam_role_policy_attachment" "attach" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = aws_iam_policy.secretmanager-policy.arn
}

#
# Attach the S3 policy to workers
# So we can use aws commands inside pods easily if/when needed
#
resource "aws_iam_role_policy_attachment" "attach-s3" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}

All credit for this fix goes to the folks who pointed it out on the cloudposse Slack channel:

terraform state replace-provider -auto-approve -- -/null registry.terraform.io/hashicorp/null

This fixed this error for me and moved me on to the next one. All of this just to upgrade Terraform by one version.

In our case, we updated all the provider URLs we were using in the code, like this:

terraform state replace-provider 'registry.terraform.io/-/null' 'registry.terraform.io/hashicorp/null'
terraform state replace-provider 'registry.terraform.io/-/archive' 'registry.terraform.io/hashicorp/archive'
terraform state replace-provider 'registry.terraform.io/-/aws' 'registry.terraform.io/hashicorp/aws'

I wanted to be very specific with the replacement, so I used the broken URL when replacing it with the new one.
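Since the three replace-provider calls above all follow the same pattern, they can also be scripted. This is a sketch that assumes the only legacy addresses in your state are null, archive, and aws; run `terraform providers` first to confirm the list for your own state:

```shell
# Rewrite each legacy "-/<name>" provider address in state to the
# namespaced "hashicorp/<name>" form introduced by Terraform 0.13.
for p in null archive aws; do
  terraform state replace-provider -auto-approve \
    "registry.terraform.io/-/${p}" "registry.terraform.io/hashicorp/${p}"
done
```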

To be clear, this only applies to Terraform 0.13.

https://www.terraform.io/docs/providers/index.html#providers-in-the-terraform-registry

This error arises when an object in the latest Terraform state is no longer in the configuration, but Terraform can't destroy it (as would normally be expected) because the provider configuration needed to destroy it is also absent.

Solution:

This should only arise if you've recently removed the object data.null_data_source along with the provider "null" block. To proceed, you'll need to temporarily restore that provider "null" block, run terraform apply so Terraform can destroy the object data.null_data_source, and then you can remove the provider "null" block again, as it is no longer needed.
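Concretely, that temporary restore can look like this (a sketch: the file name is arbitrary, and the empty provider block assumes the original one carried no special configuration):

```shell
# 1. Temporarily restore the removed provider "null" block.
cat > restore-null-provider.tf <<'EOF'
provider "null" {}
EOF

# 2. Let Terraform destroy the orphaned data.null_data_source object.
terraform apply

# 3. The provider block is no longer needed; remove it again.
rm restore-null-provider.tf
```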

Latest update