amazon s3 - Spark submit from S3 in cluster mode



I have a standalone Spark setup on EC2 instances. I am trying to submit a Spark application in cluster mode. The jar lives in S3, and access to it is granted through IAM roles. I can run aws s3 cp s3://bucket/dir/foo.jar . to fetch the jar file, and that works fine. However, when I run the following command:

spark-submit --master spark://master-ip:7077 --class Foo \
  --deploy-mode cluster --verbose s3://bucket/dir/foo.jar

I get the error listed below. Given that the boxes have IAM roles configured to allow access, what is the correct way to submit the job? The job itself does not use S3 at all... the problem seems to be fetching the jar from S3.

Any help is much appreciated.

16/07/04 11:44:09 ERROR ClientEndpoint: Exception from cluster was: java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).
java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).
        at org.apache.hadoop.fs.s3.S3Credentials.initialize(S3Credentials.java:66)
        at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.initialize(Jets3tFileSystemStore.java:82)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
        at com.sun.proxy.$Proxy5.initialize(Unknown Source)
        at org.apache.hadoop.fs.s3.S3FileSystem.initialize(S3FileSystem.java:77)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
        at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1686)
        at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:598)
        at org.apache.spark.util.Utils$.fetchFile(Utils.scala:395)
        at org.apache.spark.deploy.worker.DriverRunner.org$apache$spark$deploy$worker$DriverRunner$$downloadUserJar(DriverRunner.scala:150)
        at org.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:79)
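The exception message itself points at one escape hatch: the legacy s3:// filesystem (Jets3tFileSystemStore in the trace above) only reads static keys from the fs.s3.awsAccessKeyId / fs.s3.awsSecretAccessKey properties and knows nothing about IAM instance-profile credentials, which is why the role alone does not help. A minimal sketch of supplying those keys through spark-submit, assuming your Spark version copies spark.hadoop.* settings into the Hadoop configuration that the worker's DriverRunner uses when downloading the jar (worth verifying for your version; the key values are placeholders):

# Sketch, not verified on this cluster: pass static keys to the legacy
# s3:// filesystem via spark.hadoop.* properties. The property names come
# straight from the error message; YOUR_ACCESS_KEY / YOUR_SECRET_KEY are
# placeholders for real credentials.
spark-submit --master spark://master-ip:7077 --class Foo \
  --deploy-mode cluster --verbose \
  --conf spark.hadoop.fs.s3.awsAccessKeyId=YOUR_ACCESS_KEY \
  --conf spark.hadoop.fs.s3.awsSecretAccessKey=YOUR_SECRET_KEY \
  s3://bucket/dir/foo.jar

The same two properties can instead be set in each worker's Hadoop core-site.xml, which the S3 filesystem reads regardless of Spark settings. Either way this bypasses the IAM role with static keys, which is exactly what the role was supposed to avoid, so it is a workaround rather than a fix.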

I found a workaround. I put the jar on a static HTTP server and used http://server/foo.jar in spark-submit. That worked.
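For anyone wanting to reproduce the workaround, a minimal sketch; the directory path, port, and choice of Python's built-in server are illustrative, not from the original setup:

# On a machine reachable from the Spark master and workers, serve the
# directory containing the jar over plain HTTP (any static server works;
# Python's built-in one is enough for a quick test):
cd /path/to/jars        # hypothetical directory holding foo.jar
python3 -m http.server 8080

# Then point spark-submit at the HTTP URL instead of the s3:// one:
spark-submit --master spark://master-ip:7077 --class Foo \
  --deploy-mode cluster --verbose \
  http://server:8080/foo.jar

This sidesteps the S3 credential lookup entirely, since Spark's file fetcher downloads http:// URLs without going through a Hadoop filesystem.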
