AWS Glue job fails with the error 'ERROR Client: Application diagnostics message: User application exited with status 1'



I've recently been using an AWS Glue job to test-run some Spark Python code. Yesterday I started a run and it succeeded. This morning, without changing anything, I started it three times and all three runs failed. The logs are strange and I don't understand them…

This is copied from the error log:

kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
awk: /tmp/parse_yarn_logs.awk:6: warning: escape sequence `[' treated as plain `['
awk: /tmp/parse_yarn_logs.awk:6: warning: escape sequence `]' treated as plain `]'
awk: /tmp/parse_yarn_logs.awk:8: warning: escape sequence `(' treated as plain `('
awk: /tmp/parse_yarn_logs.awk:8: warning: escape sequence `)' treated as plain `)'
21/03/04 09:56:42 INFO client.RMProxy: Connecting to ResourceManager at ip-xxxxxx.ec2.internal/xxx.xx.xx.x:xxxx
awk: /tmp/parse_yarn_logs.awk:19: (FILENAME=- FNR=1) fatal: Unmatched ( or (: /.*Unregistering ApplicationMaster with FAILED (diag message: Shutdown hook called before final status was reported.*$/

(The awk warnings above appear to come from Glue's own log-parsing script, /tmp/parse_yarn_logs.awk, rather than from my job.) Looking through the full version of the log, this is the part that seems to actually cause the failure:

21/03/04 10:12:08 ERROR Client: Application diagnostics message: User application exited with status 1
Exception in thread "main" org.apache.spark.SparkException: Application application_xxxxxxxx_0001 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1149)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1526)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
21/03/04 10:12:08 INFO ShutdownHookManager: Shutdown hook called
21/03/04 10:12:08 INFO ShutdownHookManager: Deleting directory /tmp/spark-xxxxxxxxxx
21/03/04 10:12:08 INFO ShutdownHookManager: Deleting directory /tmp/spark-xxxxxxxxxx

One of the runs took 10 minutes just to start up?! It usually takes only a few seconds… Glue doesn't seem very stable… whether a job fails or not comes down to luck…

Does anyone know what causes this, and is there anything I can do to make it more reliable? Thanks.

The same thing is now happening to me with an AWS Glue job. In my case, though, it happens when I add this new line to the code:
device = DeviceDetector('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.77.34.5 Safari/537.36 QJY/2.0 Philco_PTV24G50SN-VB_DRM HDR DID/C0132bb2240f').parse() 

When I comment out this line, the job runs fine. Since this is a new Python package in our code (I only just added it), I have no earlier behavior to compare against. I hope someone can shed some light on this.
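
If the DeviceDetector here is the one from the device-detector package on PyPI (an assumption on my part), one plausible cause is that the package never makes it onto the Glue cluster, so the driver dies at import time and Glue only surfaces the generic "User application exited with status 1". A minimal sketch for isolating that, assuming the package is meant to be supplied through Glue's --additional-python-modules job parameter:

import sys

# Assumption: DeviceDetector comes from the PyPI package "device-detector",
# which has to be shipped to the Glue job explicitly (for example with the
# --additional-python-modules job parameter on Glue 2.0+).
try:
    from device_detector import DeviceDetector
except ImportError as exc:
    # Log the real cause to the driver's error stream; otherwise Glue only
    # reports the generic "exited with status 1" diagnostic.
    print("device_detector is not importable on this cluster: %s" % exc,
          file=sys.stderr)
    sys.exit(1)

ua = ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) '
      'Chrome/56.77.34.5 Safari/537.36 QJY/2.0 Philco_PTV24G50SN-VB_DRM HDR '
      'DID/C0132bb2240f')
device = DeviceDetector(ua).parse()
print(device.os_name(), device.device_model())

If the import succeeds but the job still fails only when the parse line runs, the parsing step itself would be the next thing to test in isolation; as far as I know, device-detector loads a fairly large set of regex fixtures, which could also account for slow startup.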
