I'm uploading to S3 using a ThreadPool. However, I suspect that sometimes a thread may fail, so I'd like to retry the failed uploads. I'd appreciate some advice on how to do this:
import os
from multiprocessing.pool import ThreadPool

from boto.s3.connection import S3Connection

def uploadS3(myfile, bucket_name, key_root, path_root, use_rel, define_region):
    if define_region:
        conn = S3Connection(aws_access_key_id=S3_ACCESS_KEY, aws_secret_access_key=S3_SECRET_KEY,
                            host='s3-us-west-2.amazonaws.com')
    else:
        conn = S3Connection(aws_access_key_id=S3_ACCESS_KEY, aws_secret_access_key=S3_SECRET_KEY)
    bucket = conn.get_bucket(bucket_name)
    print key_root + myfile
    print path_root
    print os.path.join(path_root, myfile)
    if use_rel:
        bucket.new_key(key_root + myfile).set_contents_from_file(open(os.path.join(path_root, myfile[1:])))
    else:
        bucket.new_key(key_root + myfile).set_contents_from_file(open(path_root))

def uploadS3_helper(args):
    return uploadS3(*args)

pool = ThreadPool(processes=10)
pool.map(uploadS3_helper, args)
To expand on @Martin James's comment: consider a "retry decorator". Add the linked code to your project, decorate the uploadS3 function with it, and it will attempt the upload multiple times, waiting longer after each failure.
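A minimal sketch of such a decorator, assuming exponential backoff; the parameter names (tries, delay, backoff) are illustrative, not those of any particular recipe:

```python
import time
from functools import wraps

def retry(tries=4, delay=1, backoff=2, exceptions=(Exception,)):
    """Retry the wrapped function up to `tries` times, multiplying the
    wait between attempts by `backoff` (exponential backoff)."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(1, tries + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == tries:
                        raise  # out of attempts: re-raise the last error
                    time.sleep(wait)
                    wait *= backoff
        return wrapper
    return decorator
```

With this in place you would write `@retry(tries=4, delay=3)` above `def uploadS3(...)`, and any exception raised during an upload triggers a wait-and-retry instead of silently losing that file; narrowing `exceptions` to the S3 error types you actually expect avoids retrying on programming errors.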