Good morning,
I'm running into an "Access Denied" error from Python 2.7 when uploading a file to an S3 bucket.
We run a single pipeline in CodePipeline that builds and deploys a project. The last stage of the pipeline invokes a Lambda function, which downloads the artifact produced by the previous stage (CodeBuild). After the download, it extracts the archive and uploads one of the files to a bucket different from the one the artifact was downloaded from.
The Lambda function downloads the file correctly, but the upload fails with an "Access Denied" error. Below you will find the code that performs the steps described above, the error log from CloudWatch, and the IAM policy attached to the Lambda function.
What I have tried so far:
- Granting all S3 permissions on all buckets (did not work)
- Granting all S3 permissions on the specific bucket (did not work)
- Granting public write access on the bucket (this worked; see the identity-check sketch after this list)
- Using both upload_file and upload_fileobj in Python (did not work)
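Since public write access worked while the role policy did not, a useful check is to confirm which principal the failing call is actually signed with. This is a minimal diagnostic sketch, not part of the original function; sts:GetCallerIdentity requires no special permissions:

import boto3

# Print the ARN of the principal behind the default client; inside Lambda this
# is the function's execution role. Running the same call on a client built from
# the CodePipeline artifactCredentials shows the identity used for that session.
print(boto3.client('sts').get_caller_identity()["Arn"])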
The code we use to perform this is the following:
from __future__ import print_function
from boto3.session import Session

import json
import urllib
import boto3
import zipfile
import tempfile
import botocore
import traceback

print('Initializing function.')
boto3.set_stream_logger(level=1)

# Module-level clients: these sign requests with the Lambda execution role.
s3 = boto3.client('s3')
codepipeline = boto3.client('codepipeline')
documentationFileName = "swagger.json"

def setup_s3_client(job_data):
    # Build an S3 client from the temporary credentials that CodePipeline
    # passes in the job event.
    print("Initializing s3")
    key_id = job_data["artifactCredentials"]["accessKeyId"]
    key_secret = job_data["artifactCredentials"]["secretAccessKey"]
    session_token = job_data["artifactCredentials"]["sessionToken"]
    session = Session(aws_access_key_id=key_id,
                      aws_secret_access_key=key_secret,
                      aws_session_token=session_token)
    print("Created s3 session")
    return session.client("s3", config=botocore.client.Config(signature_version='s3v4'))

def put_job_success(job, message):
    print('Putting job success')
    print(message)
    codepipeline.put_job_success_result(jobId=job)

def put_job_failure(job, message):
    print('Putting job failure')
    print(message)
    codepipeline.put_job_failure_result(jobId=job, failureDetails={'message': message, 'type': 'JobFailed'})

def get_documentation(s3, artifacts):
    # Download the input artifact zip and return the documentation file from it.
    print("Getting documentation")
    doc = artifacts[0]
    objectKey = doc["location"]["s3Location"]["objectKey"]
    bucketName = doc["location"]["s3Location"]["bucketName"]
    with tempfile.NamedTemporaryFile() as tmp_file:
        print("Downloading file form s3")
        s3.download_file(bucketName, objectKey, tmp_file.name)
        with zipfile.ZipFile(tmp_file.name, 'r') as zip:
            print("Printing content on zip")
            zip.printdir()
            print(zip.namelist())
            return zip.read(documentationFileName)

def update_documentation(s3, doc):
    # Upload the extracted documentation to the destination bucket.
    print("Updating documentation")
    bucketName = "project-api-documentation"
    objectKey = "projectEngineApi/api.json"
    with tempfile.NamedTemporaryFile() as tmp_file:
        tmp_file.write(doc)
        tmp_file.flush()  # make sure buffered bytes are on disk before upload_file reads the path
        s3.upload_file(tmp_file.name, bucketName, objectKey)

def lambda_handler(event, context):
    try:
        print(event)
        job_id = event["CodePipeline.job"]["id"]
        job_data = event["CodePipeline.job"]["data"]
        artifacts = event["CodePipeline.job"]["data"]["inputArtifacts"]
        s3 = setup_s3_client(job_data)
        docs = get_documentation(s3, artifacts)
        if docs:
            update_documentation(s3, docs)
            put_job_success(job_id, "Doc updated successfully")
        else:
            print("Failure")
            put_job_failure(job_id, "Doc does not exist.")
    except Exception as e:
        print('Function failed')
        print(e)
        traceback.print_exc()
        put_job_failure(job_id, 'Function exception: ' + str(e))
    return 'Completed!'
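For context, the job event that CodePipeline passes to lambda_handler has roughly the shape below. All values are placeholders, and only the fields the code above actually reads are shown:

# Abbreviated sketch of the CodePipeline job event consumed by lambda_handler.
# All values are illustrative placeholders.
event = {
    "CodePipeline.job": {
        "id": "11111111-abcd-1111-abcd-111111abcdef",
        "data": {
            "artifactCredentials": {
                # Temporary credentials for the pipeline's artifact store
                "accessKeyId": "AKIA...",
                "secretAccessKey": "...",
                "sessionToken": "..."
            },
            "inputArtifacts": [{
                "location": {
                    "s3Location": {
                        "bucketName": "codepipeline-us-east-1-123456789101",
                        "objectKey": "pipeline-name/BuildArtif/abc123"
                    }
                }
            }]
        }
    }
}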
The error log in CloudWatch:
15:04:55 START RequestId: 55850db1-cecd-11e7-b4ac-014088afae30 Version: $LATEST
15:04:55 Initializing s3
15:04:55 Created s3 session
15:04:56 Getting documentation
15:04:56 Downloading file form s3
15:04:58 Printing content on zip
15:04:58 File Name Modified Size
15:04:58 swagger.json 2017-11-20 19:34:12 14331
15:04:58 project.jar 2017-11-20 19:36:08 29912075
15:04:58 ['swagger.json', 'project.jar']
15:04:58 Updating documentation
15:04:58 Function failed
15:04:58 Failed to upload /tmp/tmprFknUH to project-api-documentation/projectEngineApi/api.json: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
15:04:58 Putting job failure
15:04:58 Function exception: Failed to upload /tmp/tmprFknUH to project-api-documentation/projectEngineApi/api.json: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
15:04:58 Traceback (most recent call last):
15:04:58 File "/var/task/lambda_function.py", line 84, in lambda_handler
15:04:58 update_documentation(s3, docs)
15:04:58 File "/var/task/lambda_function.py", line 71, in update_documentation
15:04:58 s3.upload_file(tmp_file.name, bucketName, objectKey)
15:04:58 File "/var/runtime/boto3/s3/inject.py", line 110, in upload_file
15:04:58 extra_args=ExtraArgs, callback=Callback)
15:04:58 File "/var/runtime/boto3/s3/transfer.py", line 283, in upload_file
15:04:58 filename, '/'.join([bucket, key]), e))
15:04:58 S3UploadFailedError: Failed to upload /tmp/tmprFknUH to project-api-documentation/projectEngineApi/api.json: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
15:04:59 END RequestId: 55850db1-cecd-11e7-b4ac-014088afae30
15:04:59 REPORT RequestId: 55850db1-cecd-11e7-b4ac-014088afae30 Duration: 3674.95 ms Billed Duration: 3700 ms Memory Size: 128 MB Max Memory Used: 94 MB
The IAM policy attached to the Lambda function is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "codepipeline:PutApprovalResult",
                "cloudwatch:*",
                "codepipeline:PutJobFailureResult",
                "codepipeline:PutJobSuccessResult",
                "codepipeline:GetJobDetails",
                "logs:CreateLogGroup",
                "logs:PutDestination"
            ],
            "Resource": "*"
        },
        {
            "Sid": "CreatedManually",
            "Effect": "Allow",
            "Action": [
                "lambda:ListVersionsByFunction",
                "lambda:GetFunction",
                "lambda:ListAliases",
                "lambda:InvokeAsync",
                "lambda:GetFunctionConfiguration",
                "lambda:Invoke",
                "logs:PutLogEvents",
                "lambda:UpdateAlias",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:GetObjectAcl",
                "s3:GetObject",
                "lambda:ListTags",
                "lambda:PublishVersion",
                "lambda:GetAlias",
                "s3:DeleteObject",
                "lambda:GetPolicy",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:lambda:*:*:function:*",
                "arn:aws:logs:us-east-1:123456789101:log-group:/aws/lambda/*:*:*",
                "arn:aws:s3:::project-api-documentation",
                "arn:aws:s3:::codepipeline-us-east-1-123456789101",
                "arn:aws:s3:::project-api-documentation/*",
                "arn:aws:s3:::codepipeline-us-east-1-123456789101/*"
            ]
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:us-east-1:123456789101:log-group:/aws/lambda/*"
        },
        {
            "Sid": "VisualEditor3",
            "Effect": "Allow",
            "Action": "s3:ListObjects",
            "Resource": [
                "arn:aws:s3:::project-api-documentation",
                "arn:aws:s3:::codepipeline-us-east-1-123456789101",
                "arn:aws:s3:::project-api-documentation/*",
                "arn:aws:s3:::codepipeline-us-east-1-123456789101/*"
            ]
        }
    ]
}
Thanks!
I managed to solve the problem. The issues were:
- I was taking the credentials from the event object for the upload, but those credentials are only valid for downloading the artifacts. You cannot use them to upload files to a different bucket.
- To use the credentials attached to the Lambda function (its execution role), you should use boto3 without creating a new session; the session should be created with the event credentials only to fetch the artifacts.
In this case, one solution is to change the update_documentation function to:
def update_documentation(doc):
    print("Updating documentation")
    bucketName = "project-api-documentation"
    objectKey = "projectEngineApi/api.json"
    with tempfile.NamedTemporaryFile() as tmp_file:
        tmp_file.write(doc)
        tmp_file.flush()  # make sure buffered bytes are on disk before upload_file reads the path
        # Module-level client created with boto3.client('s3'): signs with the
        # Lambda execution role instead of the job's artifact credentials.
        s3.upload_file(tmp_file.name, bucketName, objectKey)
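Since the fixed function no longer receives the session-scoped client, the call in lambda_handler drops that argument as well. A sketch of just the changed lines (the local variable is renamed here only for clarity):

# Keep the session client (event credentials) for the artifact download only;
# update_documentation now uses the module-level client backed by the execution role.
s3_session = setup_s3_client(job_data)
docs = get_documentation(s3_session, artifacts)
if docs:
    update_documentation(docs)
    put_job_success(job_id, "Doc updated successfully")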
Thanks also to @jarmod for helping to find this fix.