Yarn MapReduce approximate-pi example fails with exit code 1 when run as a non-hadoop user



I'm running a small private cluster of Linux machines with Hadoop 2.6.2 and YARN. I launch YARN jobs from a Linux edge node. The canned YARN example that approximates the value of pi works perfectly when run by the hadoop user (the superuser and cluster owner), but fails when run from my personal account on the edge node. In both cases (hadoop, me) I run the job exactly like this:

clott@edge: /home/hadoop/hadoop-2.6.2/bin/yarn jar /home/hadoop/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.2.jar pi 2 5

It fails; the full output is below. I think the FileNotFoundException is completely bogus: I believe something caused the container launch to fail, so the output was never written. What would cause container launches to fail, and how can I debug this?

Because this exact command works perfectly when run by the hadoop user, but not when run by a different account on the same edge node, I suspect a permission or other YARN configuration problem; I don't suspect a missing-jar problem. My personal account uses the same environment variables as the hadoop account, for what that's worth.

These questions are similar, but I didn't find a solution in them:

https://issues.cloudera.org/browse/DISTRO-577

Running a map reduce job as a different user

Yarn MapReduce Job Issue - AM Container launch error in Hadoop 2.3.0

I have tried these remedies, without success:

  1. In core-site.xml, set the value of hadoop.tmp.dir to /tmp/temp-${user.name} (see the sketch just after this list)

  2. Add my personal user account to every node in the cluster
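
For reference, here is a minimal sketch of what remedy 1 looks like in core-site.xml; the property name and the ${user.name} expansion are standard Hadoop configuration, and the path is just the value I tried:

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/tmp/temp-${user.name}</value>
    </property>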

I imagine that many installations are run by a single user, but I'm trying to let two people work together on the cluster without clobbering each other's intermediate results. Am I completely crazy?

Full output:

Number of Maps  = 2
Samples per Map = 5
Wrote input for Map #0
Wrote input for Map #1
Starting Job
15/12/22 15:29:18 INFO client.RMProxy: Connecting to ResourceManager at ac1.mycompany.com/1.2.3.4:8032
15/12/22 15:29:18 INFO input.FileInputFormat: Total input paths to process : 2
15/12/22 15:29:19 INFO mapreduce.JobSubmitter: number of splits:2
15/12/22 15:29:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1450815437271_0002
15/12/22 15:29:19 INFO impl.YarnClientImpl: Submitted application application_1450815437271_0002
15/12/22 15:29:19 INFO mapreduce.Job: The url to track the job: http://ac1.mycompany.com:8088/proxy/application_1450815437271_0002/
15/12/22 15:29:19 INFO mapreduce.Job: Running job: job_1450815437271_0002
15/12/22 15:29:31 INFO mapreduce.Job: Job job_1450815437271_0002 running in uber mode : false
15/12/22 15:29:31 INFO mapreduce.Job:  map 0% reduce 0%
15/12/22 15:29:31 INFO mapreduce.Job: Job job_1450815437271_0002 failed with state FAILED due to: Application application_1450815437271_0002 failed 2 times due to AM Container for appattempt_1450815437271_0002_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://ac1.mycompany.com:8088/proxy/application_1450815437271_0002/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1450815437271_0002_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
15/12/22 15:29:31 INFO mapreduce.Job: Counters: 0
Job Finished in 13.489 seconds
java.io.FileNotFoundException: File does not exist: hdfs://ac1.mycompany.com/user/clott/QuasiMonteCarlo_1450816156703_163431099/out/reduce-out
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1817)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1841)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Yes Manjunath Ballur, you were right, it was a permissions problem! I finally learned how to preserve the YARN application logs, which clearly revealed the problem. Here are the steps:

  1. Edit yarn-site.xml and add a property to delay YARN log deletion:

    <property>
        <name>yarn.nodemanager.delete.debug-delay-sec</name>
        <value>600</value>
    </property>
    
  2. Push yarn-site.xml out to all the nodes (ARGH, I forgot this for the longest time), then restart the cluster.
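
For completeness, a rough sketch of this step, assuming passwordless ssh as the hadoop user and the standard etc/hadoop/slaves worker list (adjust to however your cluster tracks its hosts):

    # copy the updated config to every node, then bounce YARN
    for host in $(cat /home/hadoop/hadoop-2.6.2/etc/hadoop/slaves); do
        scp /home/hadoop/hadoop-2.6.2/etc/hadoop/yarn-site.xml \
            "hadoop@$host:/home/hadoop/hadoop-2.6.2/etc/hadoop/"
    done
    /home/hadoop/hadoop-2.6.2/sbin/stop-yarn.sh
    /home/hadoop/hadoop-2.6.2/sbin/start-yarn.sh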

  3. Run the yarn example to estimate pi as shown above; it fails. Look at http://namenode:8088/cluster/apps/FAILED to see the failed applications, click the link for the most recent failure, and look at the bottom of that page to see which nodes in the cluster were used.
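
If you prefer a terminal to the web UI, the list of failed applications is also available from the standard yarn CLI:

    yarn application -list -appStates FAILED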

  4. Open a window on one of the cluster nodes where the application failed. Find the working directory, which in my case was

    ~hadoop/hadoop-2.6.2/logs/userlogs/application_1450815437271_0004/container_1450815437271_0004_01_000001/
    
  5. And there I saw the files stdout (nothing but log4j complaints), stderr (nearly empty), and syslog (winner winner chicken dinner). In the syslog file I found this gem:

    2015-12-23 08:31:42,376 INFO [main] org.apache.hadoop.service.AbstractService: Service JobHistoryEventHandler failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=clott, access=EXECUTE, inode="/tmp/hadoop-yarn/staging/history":hadoop:supergroup:drwxrwx---
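
A quick way to dig through that container directory (the path follows the layout from step 4; the grep pattern is just a convenience):

    cd ~hadoop/hadoop-2.6.2/logs/userlogs/application_1450815437271_0004/container_1450815437271_0004_01_000001
    ls                                  # stdout, stderr, syslog
    grep -iE 'exception|denied' syslog  # turns up the AccessControlException shown above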
    

So the problem was the permissions on hdfs:///tmp/hadoop-yarn/staging/history. A simple chmod 777 set me straight, and I'm not fighting the group settings anymore. Now a non-hadoop, non-superuser account can run a yarn job.
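
For reference, the chmod amounts to a one-liner with the standard hdfs client, run as the hadoop superuser; something narrower than 777 (for example 775 plus putting both accounts in the owning group) would of course be tighter:

    hdfs dfs -chmod 777 /tmp/hadoop-yarn/staging/history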
