Files are not archived by Terraform before being uploaded to GCP



Despite the depends_on directive, the zip archive does not appear to be created before Terraform tries to put it into the bucket. Judging from the pipeline output, the file-archiving step is somehow skipped before the upload to the bucket starts. Both files (index.js and package.json) exist.

resource "google_storage_bucket" "cloud-functions" {
  project       = var.project-1-id
  name          = "${var.project-1-id}-cloud-functions"
  location      = var.project-1-region
}

resource "google_storage_bucket_object" "start_instance" {
  name       = "start_instance.zip"
  bucket     = google_storage_bucket.cloud-functions.name
  source     = "${path.module}/start_instance.zip"
  depends_on = [
    data.archive_file.start_instance,
  ]
}

data "archive_file" "start_instance" {
  type        = "zip"
  output_path = "${path.module}/start_instance.zip"

  source {
    content  = file("${path.module}/scripts/start_instance/index.js")
    filename = "index.js"
  }

  source {
    content  = file("${path.module}/scripts/start_instance/package.json")
    filename = "package.json"
  }
}
Terraform has been successfully initialized!
$ terraform apply -input=false "planfile"
google_storage_bucket_object.stop_instance: Creating...
google_storage_bucket_object.start_instance: Creating...
Error: open ./start_instance.zip: no such file or directory
on cloud_functions.tf line 41, in resource "google_storage_bucket_object" "start_instance":
41: resource "google_storage_bucket_object" "start_instance" {

Logs:

2020-11-18T13:02:56.796Z [DEBUG] plugin.terraform-provider-google_v3.40.0_x5: 2020/11/18 13:02:56 [WARN] Failed to read source file "./start_instance.zip". Cannot compute md5 hash for it.
2020/11/18 13:02:56 [WARN] Provider "registry.terraform.io/hashicorp/google" produced an invalid plan for google_storage_bucket_object.stop_instance, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .detect_md5hash: planned value cty.StringVal("different hash") does not match config value cty.NullVal(cty.String)
2020/11/18 13:02:56 [WARN] Provider "registry.terraform.io/hashicorp/google" produced an invalid plan for google_storage_bucket_object.start_instance, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .detect_md5hash: planned value cty.StringVal("different hash") does not match config value cty.NullVal(cty.String)

I had the same problem with a GitLab CI/CD pipeline. After some digging I found, based on the discussion there, that with this setup the plan and apply stages run in separate containers, and the archiving step is executed during the plan stage. The zip therefore only exists on the plan container's filesystem; by the time apply runs in a fresh container the file is gone, which is why the provider cannot open it or compute its md5 hash.

One workaround is to create a dummy trigger with a null_resource and force the archive_file to depend on it, so that the archiving is executed during the apply stage.

resource "null_resource" "dummy_trigger" {
  triggers = {
    # timestamp() produces a new value on every run, so this resource
    # is replaced on each apply.
    timestamp = timestamp()
  }
}

resource "google_storage_bucket" "cloud-functions" {
  project       = var.project-1-id
  name          = "${var.project-1-id}-cloud-functions"
  location      = var.project-1-region
}

resource "google_storage_bucket_object" "start_instance" {
  name       = "start_instance.zip"
  bucket     = google_storage_bucket.cloud-functions.name
  source     = "${path.module}/start_instance.zip"
  depends_on = [
    data.archive_file.start_instance,
  ]
}

data "archive_file" "start_instance" {
  type        = "zip"
  output_path = "${path.module}/start_instance.zip"

  source {
    content  = file("${path.module}/scripts/start_instance/index.js")
    filename = "index.js"
  }

  source {
    content  = file("${path.module}/scripts/start_instance/package.json")
    filename = "package.json"
  }

  # Forcing a dependency on the ever-changing null_resource defers
  # reading this data source (and writing the zip) to the apply stage.
  depends_on = [
    null_resource.dummy_trigger,
  ]
}
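
A note on why this works: because the null_resource's trigger changes on every run, Terraform cannot read the archive_file data source at plan time and defers it to apply, so the zip is written in the same container that performs the upload. As a small further refinement (a minimal sketch assuming the same resources as above; this alone does not defer the archiving), the explicit depends_on on the bucket object can be dropped by referencing the archive's output_path attribute directly, which carries the same dependency implicitly:

resource "google_storage_bucket_object" "start_instance" {
  name   = "start_instance.zip"
  bucket = google_storage_bucket.cloud-functions.name
  # Referencing the data source's output_path attribute creates an
  # implicit dependency, so no explicit depends_on is needed here.
  source = data.archive_file.start_instance.output_path
}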

Latest update