How to run a script that calls a binary on aws_instance resources in Terraform



I am using Terraform's count loop to create multiple aws_instance resources. I want each of these instances to run a script that needs a custom binary. The script calls the binary several times with some fixed arguments, but I don't think what it actually does is relevant to the question.

I have been able to create a bucket:

# Create an S3 bucket to hold foo binary and bar script
resource "aws_s3_bucket" "foobar-bucket" {
  bucket = "foobar-bucket"
  acl    = "private"

  tags = {
    Name = "Foobar Bucket"
  }
}

upload the script and the binary to that bucket:

# Upload foo binary to S3 bucket
resource "aws_s3_bucket_object" "foo-object" {
  bucket = "foobar-bucket"
  key    = "foo"
  source = "./misc/foo" # local file location

  depends_on = [
    aws_s3_bucket.foobar-bucket,
  ]
}

# Upload bar script to S3 bucket
resource "aws_s3_bucket_object" "bar-script" {
  bucket = "foobar-bucket"
  key    = "bar.sh"
  source = "./misc/bar.sh"

  depends_on = [
    aws_s3_bucket.foobar-bucket,
  ]
}

and then use remote-exec to download the script and the binary and invoke the script:

resource "aws_instance" "default" {
  count = 10
  ...

  provisioner "remote-exec" {
    inline = [
      "aws s3 cp s3://foobar-bucket/foo ./",
      "aws s3 cp s3://foobar-bucket/bar.sh ./",
      "chmod +x foo bar.sh",
      "sudo ./bar.sh 100",
    ]
  }
  ...
}
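As an aside, remote-exec only works if the elided `...` above also includes a connection block. A typical SSH sketch (the user and key path here are illustrative assumptions, not part of the original configuration):

```hcl
# Illustrative connection settings; the login user and key depend on the AMI
connection {
  type        = "ssh"
  host        = self.public_ip
  user        = "ec2-user"            # assumed default user for the AMI
  private_key = file("~/.ssh/id_rsa") # assumed local key path
}
```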

This works as expected, with all the S3 access and so on set up correctly, but it does not feel like the right solution, especially since Terraform's documentation suggests that provisioners and remote-exec should be a last resort.

What is the proper way to provision files on an EC2 instance and run a script with Terraform? S3 seems like a good fit, since the files are uploaded only once and can be accessed by as many EC2 instances as needed, but maybe there is a better solution?
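For context, the "S3 access" mentioned above is typically granted through an IAM instance profile attached to the instances rather than credentials on the box. A minimal sketch, assuming the bucket resource from earlier (role and policy names are illustrative):

```hcl
# Role the EC2 instances assume
resource "aws_iam_role" "foobar-read" {
  name = "foobar-read"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Allow read-only access to objects in the foobar bucket
resource "aws_iam_role_policy" "foobar-read" {
  name = "foobar-read"
  role = aws_iam_role.foobar-read.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action   = ["s3:GetObject"]
      Effect   = "Allow"
      Resource = "${aws_s3_bucket.foobar-bucket.arn}/*"
    }]
  })
}

resource "aws_iam_instance_profile" "foobar-read" {
  name = "foobar-read"
  role = aws_iam_role.foobar-read.name
}
```

The profile is then attached with `iam_instance_profile = aws_iam_instance_profile.foobar-read.name` on the aws_instance resource.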

What about using the user_data option of the aws_instance resource?

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  user_data = <<-EOF
    #!/bin/bash
    aws s3 cp s3://foobar-bucket/foo ./
    aws s3 cp s3://foobar-bucket/bar.sh ./
    chmod +x foo bar.sh
    sudo ./bar.sh 100
  EOF

  tags = {
    Name = "terraform-example"
  }
}

If the script is somewhat long and you would rather define it in a separate file:

run_commands.sh

#!/usr/bin/env bash
aws s3 cp s3://foobar-bucket/foo ./
aws s3 cp s3://foobar-bucket/bar.sh ./
chmod +x foo bar.sh
sudo ./bar.sh 100

and reference it from the resource:

resource "aws_instance" "my-instance" {
  ami           = "ami-04169656fea786776"
  instance_type = "t2.nano"
  key_name      = aws_key_pair.terraform-demo.key_name
  user_data     = file("run_commands.sh")

  tags = {
    Name = "Terraform"
  }
}
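If the script needs per-deployment or per-instance values, Terraform's `templatefile()` function (0.12+) can substitute variables into it before boot. A sketch, assuming a hypothetical `run_commands.sh.tpl` containing an `${iterations}` placeholder in place of the hard-coded `100`:

```hcl
resource "aws_instance" "my-instance" {
  count         = 10
  ami           = "ami-04169656fea786776"
  instance_type = "t2.nano"

  # Render the script with concrete values before it reaches the instance
  user_data = templatefile("${path.module}/run_commands.sh.tpl", {
    iterations = 100
  })

  tags = {
    Name = "Terraform-${count.index}"
  }
}
```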
