Application failed 2 times due to AM Container, exited with exitCode -104



I am running a Spark application with two input files and a JAR file, all pulled from an Amazon S3 bucket. I am creating a cluster with the AWS CLI using instance type m5.12xlarge and instance-count 11, with these Spark properties:

--deploy-mode cluster
--num-executors 10
--executor-cores 45
--executor-memory 155g
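Put together, the flags above would be passed to a single submit command along the lines of the sketch below. The main class, JAR path, and S3 input paths are placeholders I have assumed for illustration; they are not given in the question.

```shell
# Sketch of the full submit command implied by the flags above.
# --class, the JAR location, and the input paths are assumed placeholders.
spark-submit \
  --deploy-mode cluster \
  --num-executors 10 \
  --executor-cores 45 \
  --executor-memory 155g \
  --class com.example.MyApp \
  s3://my-bucket/my-app.jar \
  s3://my-bucket/input1 s3://my-bucket/input2
```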

My Spark job runs for a while, then fails and restarts automatically, runs for a while again, and then shows this diagnostic (pulled from the logs):

diagnostics: Application application_1557259242251_0001 failed 2 times due to AM Container for appattempt_1557259242251_0001_000002 exited with  exitCode: -104
Failing this attempt.Diagnostics: Container [pid=11779,containerID=container_1557259242251_0001_02_000001] is running beyond physical memory limits. Current usage: 1.4 GB of 1.4 GB physical memory used; 3.5 GB of 6.9 GB virtual memory used. Killing container.
Dump of the process-tree for container_1557259242251_0001_02_000001 :
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Exception in thread "main" org.apache.spark.SparkException: Application application_1557259242251_0001 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1165)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1520)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/05/07 20:03:35 INFO ShutdownHookManager: Shutdown hook called
19/05/07 20:03:35 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-3deea823-45e5-4a11-a5ff-833b01e6ae79
19/05/07 20:03:35 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-d6c3f8b2-34c6-422b-b946-ad03b1ee77d6
Command exiting with ret '1'

I cannot figure out what the problem is.

I have tried changing the instance type and lowering the executor memory and executor cores, but the same problem keeps occurring. Sometimes the exact same configuration finishes successfully, terminates the cluster, and produces results, but many times it produces these errors.

Can anyone help?

If you are providing more than one input file to your Spark job, make a zip file and then submit it.

Step 1: Create the zip file

zip abc.zip file1.py file2.py

Step 2: Submit the job with the zip file

spark2-submit --master yarn --deploy-mode cluster --py-files /home/abc.zip /home/main_program_file.py
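To illustrate what the two steps above produce: `zip abc.zip file1.py file2.py` just bundles the module files, and `--py-files` then puts that archive on the Python path of the driver and executors so the main program can import them. A minimal sketch of the bundling step using Python's standard `zipfile` module (the module contents here are made up for the example):

```python
import zipfile

# Hypothetical contents for the two modules from Step 1.
modules = ["file1.py", "file2.py"]
for name in modules:
    with open(name, "w") as f:
        f.write("def greet():\n    return 'hello from {}'\n".format(name))

# Equivalent of `zip abc.zip file1.py file2.py`.
with zipfile.ZipFile("abc.zip", "w") as zf:
    for name in modules:
        zf.write(name)

# Verify the archive contains both modules.
with zipfile.ZipFile("abc.zip") as zf:
    print(sorted(zf.namelist()))  # ['file1.py', 'file2.py']
```

With the archive passed via `--py-files`, `main_program_file.py` can simply do `import file1` as if the modules sat next to it.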
