403 error when accessing s3a using Spark



Question:

I am able to download the file successfully with the AWS CLI as well as with boto3. However, when reading it through Hadoop/Spark's S3A connector, I get the following error:

py4j.protocol.Py4JJavaError: An error occurred while calling o24.parquet.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: BCFFD14CB2939D68, AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID: MfT8J6ZPlJccgHBXX+tX1fpX47V7dWCP3Dq+W9+IBUfUhsD4Nx+DcyqsbgbKsPn8NZzjc2U
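
For comparison, a minimal boto3 download that works with the same temporary credentials might look like the sketch below (the bucket name, object key, region, and environment variable names are assumptions for illustration):

import os
import boto3

# Build an S3 client from the same temporary credentials
# (environment variable names and region are assumptions).
s3 = boto3.client(
    "s3",
    region_name="us-west-2",
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    aws_session_token=os.environ["AWS_SESSION_TOKEN"],
)
# Hypothetical bucket/key: this download succeeds while the S3A read returns 403.
s3.download_file("my-bucket", "path/to/file.parquet", "file.parquet")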

Setup: I am running this on my local machine with:

  1. Spark version 2.4.4

  2. Hadoop version 2.7

Jars added:

  1. hadoop-aws-2.7.3.jar

  2. aws-java-sdk-1.7.4.jar
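
For reference, a minimal sketch of making those two jars visible to the driver and executors via spark.jars (the local paths are assumptions; passing them with the --jars flag of spark-submit/pyspark is equivalent):

from pyspark import SparkConf, SparkContext

# Register the S3A jars before the SparkContext is created
# (the file paths below are placeholders).
conf = SparkConf().set(
    "spark.jars",
    "/path/to/hadoop-aws-2.7.3.jar,/path/to/aws-java-sdk-1.7.4.jar",
)
sc = SparkContext.getOrCreate(conf=conf)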

Hadoop configuration:

hadoop_conf.set("fs.s3a.access.key", access_key)
hadoop_conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
hadoop_conf.set("fs.s3a.secret.key", secret_key)
hadoop_conf.set("fs.s3a.aws.credentials.provider","org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider")
hadoop_conf.set("fs.s3a.session.token", session_key)
hadoop_conf.set("fs.s3a.endpoint", "s3-us-west-2.amazonaws.com") # yes, I am using central eu server.
hadoop_conf.set("com.amazonaws.services.s3.enableV4", "true")

Code that reads the file:

from pyspark import SparkConf, SparkContext, SQLContext

sc = SparkContext.getOrCreate()
hadoop_conf = sc._jsc.hadoopConfiguration()  # the object the fs.s3a.* settings above are applied to
sqlContext = SQLContext(sc)
df = sqlContext.read.parquet(path)  # path is the s3a:// URI of the parquet file
print(df.head())

Setting the AWS credentials provider to the profile credentials provider:

hadoopConf.set("fs.s3a.aws.credentials.provider", "com.amazonaws.auth.profile.ProfileCredentialsProvider")

I was running into the same issue as above. It turned out that I needed to add the session token to my configuration. See the documentation.

configuration.set("fs.s3a.session.token", System.getenv("AWS_SESSION_TOKEN"))
