Hadoop Streaming fails on Hadoop 1.0.0 for a simple map and reduce job (using NLTK code)



The command I run and its output:

    [hduser@Janardhan hadoop]$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.0.jar -file /home/hduser/mapper.py -mapper mapper.py -file /home/hduser/reducer.py -reducer reducer.py -input /user/hduser/input.txt -output /home/hduser/outpututttt

    Warning: $HADOOP_HOME is deprecated.
    packageJobJar: [/home/hduser/mapper.py, /home/hduser/reducer.py, /app/hadoop/tmp/hadoop-unjar2185859252991058106/] [] /tmp/streamjob2973484922110272968.jar tmpDir=null
    12/05/03 20:36:02 INFO mapred.FileInputFormat: Total input paths to process : 1
    12/05/03 20:36:03 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
    12/05/03 20:36:03 INFO streaming.StreamJob: Running job: job_201205032014_0003
    12/05/03 20:36:03 INFO streaming.StreamJob: To kill this job, run:
    12/05/03 20:36:03 INFO streaming.StreamJob: /usr/local/hadoop/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201205032014_0003
    12/05/03 20:36:03 INFO streaming.StreamJob: Tracking URL: http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201205032014_0003
    12/05/03 20:36:04 INFO streaming.StreamJob:  map 0%  reduce 0%
    12/05/03 20:36:21 INFO streaming.StreamJob:  map 100%  reduce 0%
    12/05/03 20:36:24 INFO streaming.StreamJob:  map 0%  reduce 0%
    12/05/03 20:37:00 INFO streaming.StreamJob:  map 100%  reduce 100%
    12/05/03 20:37:00 INFO streaming.StreamJob: To kill this job, run:
    12/05/03 20:37:00 INFO streaming.StreamJob: /usr/local/hadoop/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201205032014_0003
    12/05/03 20:37:00 INFO streaming.StreamJob: Tracking URL: http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201205032014_0003
    12/05/03 20:37:00 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201205032014_0003_m_000000
    12/05/03 20:37:00 INFO streaming.StreamJob: killJob...
    Streaming Job Failed! 

This is the error I get from the jobtracker:

java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:311)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:132)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

It works locally with the following command:

    [hduser@Janardhan ~]$ cat input.txt | ./mapper.py | sort | ./reducer.py
    ('be', 'VB')    1
    ('ceremony', 'NN')  1
    ('first', 'JJ')     2
    ('for', 'IN')   2
    ('hi', 'NN')    1
    ('place', 'NN')     1
    ('the', 'DT')   2
    ('welcome', 'VBD')  1
You need to debug this by checking the stderr logs on the data nodes where the map and reduce tasks failed. When a job that runs fine locally fails on the cluster, these logs are usually very informative.
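One way to make those stderr logs useful is to have the mapper report its own failures. Below is a minimal sketch of such a mapper (shown in modern Python; the fixed `'NN'` tag and function names are illustrative assumptions standing in for the poster's NLTK code, which is not shown in the question). Any exception is written to stderr, so the failed task's log shows a real traceback instead of only "subprocess failed with code 1":

```python
#!/usr/bin/env python
# Sketch of a defensive Hadoop Streaming mapper. The fixed 'NN' tag stands
# in for NLTK part-of-speech tagging so the sketch has no external
# dependencies; names here are illustrative, not the poster's actual code.
import sys
import traceback

def run_mapper(stream, out=sys.stdout):
    """Emit tab-separated (token, tag) / count pairs, Streaming style."""
    try:
        for line in stream:
            for word in line.split():
                # Streaming expects key<TAB>value on stdout.
                out.write("%s\t%d\n" % ((word, 'NN'), 1))
    except Exception:
        # Anything written to stderr shows up in the failed task's log.
        traceback.print_exc(file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    run_mapper(sys.stdin)
```

Run locally first with `cat input.txt | ./mapper.py` to confirm the script itself starts and produces output before submitting to the cluster.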

You should be able to reach the logs through your Hadoop cluster's jobtracker web interface, usually at http://master.node.ip.address:50030/jobtracker.jsp. Your job should appear under "Failed Jobs". Click the job ID, then click a map or reduce task in the "Failed" column, and you should see the logs.

Note that if mapper.py and reducer.py are not executable (shebang `#!/usr/bin/python` on the first line, and the executable bit set), you may need to change the arguments to `-mapper 'python mapper.py'` and `-reducer 'python reducer.py'`.
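A quick pre-flight check for those two conditions might look like the following (dummy files stand in for the real mapper.py / reducer.py so the snippet runs anywhere; the paths are assumptions):

```shell
# Sketch: verify the two things Streaming needs from a script file,
# an interpreter line and the executable bit.
cd "$(mktemp -d)"
printf '#!/usr/bin/python\n' > mapper.py   # dummy stand-in for the real script
cp mapper.py reducer.py
chmod +x mapper.py reducer.py

head -n 1 mapper.py              # should print: #!/usr/bin/python
test -x mapper.py && echo "mapper.py is executable"

# If the executable bit cannot be relied on, name the interpreter instead:
#   -mapper 'python mapper.py' -reducer 'python reducer.py'
```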
