I am evaluating AWS database services to pick the most efficient one, with the goal of loading data from JSON files in an S3 bucket into Redshift every 5 minutes.
I am currently trying to automate the ETL using AWS Data Pipeline. I have been following this AWS tutorial, "Copy Data to Amazon Redshift Using AWS Data Pipeline", and everything was plain and simple.
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-copydata-redshift-create.html
I set up a cluster on Redshift and a bucket on S3, and created all the roles and policies required, with all the permissions needed.
Now, after creating the pipeline, pressing "Activate" brings up a warning saying
Errors/Warnings:
Object:Ec2Instance
WARNING: Could not validate S3 Access for role. Please ensure role ('DataPipelineDefaultRole') has s3:Get*, s3:List*, s3:Put* and sts:AssumeRole permissions for DataPipeline.
Now I am sure my role and resource role have s3:Get*, s3:List*, s3:Put* and sts:AssumeRole; in fact, they basically have FullAccess to all the services I need.
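To double-check that outside the console, IAM's policy simulator can evaluate the role directly. A minimal sketch with the AWS CLI (123456789012 is a placeholder account id, substitute your own):

# Simulate the actions the warning complains about against the role
aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::123456789012:role/DataPipelineDefaultRole \
    --action-names s3:GetObject s3:ListBucket s3:PutObject sts:AssumeRole

Each action should come back with "EvalDecision": "allowed"; an "implicitDeny" would point at the missing statement.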
DataPipelineDefaultRole policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "iam:AWSServiceName": [
                        "elasticmapreduce.amazonaws.com",
                        "spot.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "ec2:AuthorizeSecurityGroupIngress",
                "sdb:Select*",
                "sqs:ReceiveMessage",
                "s3:Get*",
                "sqs:GetQueue*",
                "s3:CreateBucket",
                "sns:Unsubscribe",
                "s3:List*",
                "datapipeline:EvaluateExpression",
                "ec2:StartInstances",
                "dynamodb:DescribeTable",
                "sqs:Delete*",
                "iam:ListAttachedRolePolicies",
                "ec2:RevokeSecurityGroupEgress",
                "dynamodb:GetItem",
                "sns:Subscribe",
                "iam:ListRolePolicies",
                "s3:DeleteObject",
                "sdb:BatchPutAttributes",
                "iam:GetRole",
                "dynamodb:BatchGetItem",
                "redshift:DescribeClusterSecurityGroups",
                "ec2:CreateTags",
                "ec2:DeleteNetworkInterface",
                "ec2:RunInstances",
                "dynamodb:Scan",
                "rds:DescribeDBSecurityGroups",
                "ec2:StopInstances",
                "ec2:CreateNetworkInterface",
                "ec2:CancelSpotInstanceRequests",
                "cloudwatch:*",
                "sqs:PurgeQueue",
                "iam:GetRolePolicy",
                "dynamodb:UpdateTable",
                "ec2:RequestSpotInstances",
                "ec2:DeleteTags",
                "sns:ListTopics",
                "ec2:ModifyImageAttribute",
                "iam:PassRole",
                "sns:Publish",
                "ec2:DescribeNetworkInterfaces",
                "ec2:CreateSecurityGroup",
                "rds:DescribeDBInstances",
                "ec2:ModifyInstanceAttribute",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:DetachNetworkInterface",
                "ec2:TerminateInstances",
                "iam:GetInstanceProfile",
                "sns:GetTopicAttributes",
                "datapipeline:DescribeObjects",
                "dynamodb:Query",
                "iam:ListInstanceProfiles",
                "ec2:Describe*",
                "ec2:DeleteSecurityGroup",
                "redshift:DescribeClusters",
                "sqs:CreateQueue",
                "elasticmapreduce:*",
                "s3:Put*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                "iam:PassRole",
                "s3:Get*",
                "s3:List*",
                "s3:Put*",
                "sts:AssumeRole"
            ],
            "Resource": [
                "arn:aws:iam::*:role/DataPipelineDefaultResourceRole",
                "arn:aws:iam::*:role/DataPipelineDefaultRole",
                "arn:aws:s3:::*/*"
            ]
        },
        {
            "Sid": "VisualEditor3",
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3:Put*"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Sid": "VisualEditor4",
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3:Put*"
            ],
            "Resource": "*"
        }
    ]
}
DataPipelineDefaultResourceRole policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:*",
                "datapipeline:*",
                "dynamodb:*",
                "ec2:Describe*",
                "elasticmapreduce:AddJobFlowSteps",
                "elasticmapreduce:Describe*",
                "elasticmapreduce:ListInstance*",
                "rds:Describe*",
                "redshift:DescribeClusters",
                "redshift:DescribeClusterSecurityGroups",
                "s3:*",
                "sdb:*",
                "sns:*",
                "sqs:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
I have been digging into this problem for over a week and have tried every existing solution: updating the trust relationships, recreating the roles, keeping the default roles, letting Data Pipeline create new roles, and checking the security groups, but the same problem persists.
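For reference, the trust relationships I ended up with are the stock ones per the AWS docs: DataPipelineDefaultRole is assumable by the Data Pipeline and EMR services, and DataPipelineDefaultResourceRole by EC2. A sketch of re-applying them from the CLI:

# Stock trust policy for DataPipelineDefaultRole (per the AWS defaults)
aws iam update-assume-role-policy --role-name DataPipelineDefaultRole \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": ["datapipeline.amazonaws.com", "elasticmapreduce.amazonaws.com"]},
            "Action": "sts:AssumeRole"
        }]
    }'

# Stock trust policy for DataPipelineDefaultResourceRole (assumed by EC2)
aws iam update-assume-role-policy --role-name DataPipelineDefaultResourceRole \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole"
        }]
    }'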
After activating the pipeline and checking the log Uri, I did find 2 folders, Ec2Instance and RedshiftLoadActivity. The Redshift log file has only 2 lines; the other one has more [INFO] entries describing the task runner downloading the jars and the S3 files. Among the [INFO] entries there are these [WARN]s:
Ec2Instance:
private.com.amazonaws.services.s3.internal.S3V4AuthErrorRetryStrategy: Attempting to re-send the request to mylogbucket.s3.eu-central-1.amazonaws.com with AWS V4 authentication. To avoid this warning in the future, please use region-specific endpoint to access buckets located in regions that require V4 signing.
RedshiftLoadActivity:
private.com.amazonaws.services.s3.internal.S3V4AuthErrorRetryStrategy: Attempting to re-send the request to mylogbucket.s3.eu-central-1.amazonaws.com with AWS V4 authentication. To avoid this warning in the future, please use region-specific endpoint to access buckets located in regions that require V4 signing.
The problem should be in the roles and policies, but I made sure Redshift and the S3 bucket were not the issue, since I tried the COPY command in the query editor and it loaded the data as expected.
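For completeness, the working COPY was along these lines, here wrapped in psql to show it is the same statement over a plain connection (staging_table, mybucket, the account id and the role name are placeholders for my real ones):

# psql will prompt for the cluster password
psql "host=redshift-cluster-1.coykb9.eu-central-1.redshift.amazonaws.com port=5439 dbname=db user=awsuser" -c "
COPY staging_table
FROM 's3://mybucket/data/'
IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftRole'
FORMAT AS JSON 'auto'
REGION 'eu-central-1';"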
I am currently still stuck on that error and would appreciate some suggestions on how to solve this.
548 [ERROR] (TaskRunnerService-resource:df-0539055_@Ec2Instance_2019-05-30T13:38:35-0) amazonaws.datapipeline.database.ConnectionFactory: Unable to establish connection to jdbc:postgresql://redshift-cluster-1.coykb9.eu-central-1.redshift.amazonaws.com:5439/db Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Data Pipeline uses EMR to spawn the EC2 instances and complete the submitted tasks.
Check the EC2 instance profile and the EMR role of the EMR cluster spawned by Data Pipeline, and attach an S3 access policy to EMR's EC2 instance profile role.
By default, the EC2 instance profile is DataPipelineDefaultResourceRole.
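For a quick test that can be the broad managed policy (an assumption on my part, scope it down once the pipeline runs):

# Attach the AWS-managed S3 policy to the instance profile role while debugging
aws iam attach-role-policy \
    --role-name DataPipelineDefaultResourceRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess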
To fix the error

Unable to establish connection to jdbc:postgresql://redshift-cluster-1.coykb9.eu-central-1.redshift.amazonaws.com:5439/db Connection refused
update your Redshift security group inbound rules to allow connections from 0.0.0.0/0. This means any machine on the internet is allowed to connect using the credentials.
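With the CLI that looks like this (sg-0123456789abcdef0 is a placeholder for your cluster's security group id):

# Open the Redshift port to the world -- fine for testing, too broad for production
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5439 \
    --cidr 0.0.0.0/0

A tighter alternative is to replace 0.0.0.0/0 with the CIDR the pipeline's EC2 instances actually launch from.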