I wrote a MapReduce job to extract some information from a dataset. The dataset consists of users' ratings of movies; there are about 250K users and about 300K movies. The map output is <user, <movie, rating>*> and <movie, <user, rating>*>, and the reducer processes these pairs.
When I run the job, the mappers finish as expected, but the reducers always fail with
Task attempt_* failed to report status for 600 seconds.
I know this is caused by the task failing to update its status, so I added a call to context.progress()
in my code, like this:
int count = 0;
while (values.hasNext()) {
    // report progress every 100 values so the framework knows the task is alive
    if (count++ % 100 == 0) {
        context.progress();
    }
    /* other code here */
}
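For context, here is a trimmed-down, self-contained sketch of what the reducer looks like (the Text key/value types and the counting logic are simplified placeholders, not the real job code); the point is where the progress call sits inside the loop:

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Simplified reducer sketch (new mapreduce API): count values per key and
// periodically report progress so the task is not killed for inactivity.
public class RatingReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        int count = 0;
        for (Text value : values) {
            if (count++ % 100 == 0) {
                context.progress();                      // tell the framework the task is alive
                context.setStatus("processed " + count); // optionally refresh the status string too
            }
            // real per-(movie, rating) processing goes here
        }
        context.write(key, new Text(Integer.toString(count)));
    }
}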
Unfortunately, this did not help; many reduce tasks still fail.
The logs look like this:
Task attempt_201104251139_0295_r_000014_1 failed to report status for 600 seconds. Killing!
11/05/03 10:09:09 INFO mapred.JobClient: Task Id : attempt_201104251139_0295_r_000012_1, Status : FAILED
Task attempt_201104251139_0295_r_000012_1 failed to report status for 600 seconds. Killing!
11/05/03 10:09:09 INFO mapred.JobClient: Task Id : attempt_201104251139_0295_r_000006_1, Status : FAILED
Task attempt_201104251139_0295_r_000006_1 failed to report status for 600 seconds. Killing!
By the way, the error occurs during the reduce copy phase, where the log shows:
reduce > copy (28 of 31 at 26.69 MB/s) > :Lost task tracker: tracker_hadoop-56:localhost/127.0.0.1:34385
Thanks for your help.
The simplest way is to set this configuration parameter in mapred-site.xml:

<property>
  <name>mapred.task.timeout</name>
  <value>1800000</value> <!-- 30 minutes -->
</property>
Another simple way is to set it in the job configuration inside your program:

Configuration conf = new Configuration();
long milliSeconds = 1000 * 60 * 60; // 1 hour; the default is 600000, and you can give any value
conf.setLong("mapred.task.timeout", milliSeconds);
Before setting it, check the job file (job.xml) in the JobTracker GUI to see whether the correct property name is mapred.task.timeout or mapreduce.task.timeout... and while the job is running, check the job file again to confirm that the property has been changed to the value you set.
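Put together in the job driver, it could look roughly like this (just a sketch; the class and job name are placeholders, and setting both property names is simply a defensive way to cover old and new Hadoop versions):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Sketch: set the timeout before creating the Job, then read it back; the same
// value should show up in job.xml in the JobTracker GUI once the job is submitted.
public class TimeoutSetup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        long milliSeconds = 30 * 60 * 1000L;                  // 30 minutes (default is 600000)
        conf.setLong("mapred.task.timeout", milliSeconds);    // property name on older versions
        conf.setLong("mapreduce.task.timeout", milliSeconds); // property name on newer versions

        Job job = new Job(conf, "movie-ratings");             // placeholder job name
        System.out.println(job.getConfiguration().get("mapred.task.timeout"));
        System.out.println(job.getConfiguration().get("mapreduce.task.timeout"));
        // ... set mapper, reducer, input/output paths, then submit as usual ...
    }
}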
In newer versions the parameter has been renamed to mapreduce.task.timeout, as listed in this link (search for task.timeout). You can also disable this timeout, as the same link describes:

The number of milliseconds before a task will be terminated if it neither reads an input, writes an output, nor updates its status string. A value of 0 disables the timeout.

Below is the mapred-site.xml entry:
<property>
  <name>mapreduce.task.timeout</name>
  <value>0</value> <!-- A value of 0 disables the timeout -->
</property>
If you have a Hive query that is hitting this timeout, you can set the above configuration like this:

set mapred.tasktracker.expiry.interval=1800000;
set mapred.task.timeout=1800000;
From https://issues.apache.org/jira/browse/HADOOP-1763
the cause may be the following:
1. Tasktrackers run the maps successfully.
2. Map outputs are served by jetty servers on the TTs.
3. All the reduce tasks connect to all the TTs where maps were run.
4. Since there are lots of reduces wanting to connect to the map output server, the jetty servers run out of threads (default 40).
5. Tasktrackers continue to make periodic heartbeats to the JT, so they are not marked as dead, but their jetty servers are (temporarily) down.
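If that is what is happening here, one possible mitigation (my addition, not part of the JIRA text) is to lower the number of parallel copier threads per reducer so fewer fetches hit each jetty server at once; the server-side pool itself is controlled by tasktracker.http.threads (default 40) in the TaskTrackers' mapred-site.xml. A job-side sketch:

import org.apache.hadoop.conf.Configuration;

// Sketch: fewer parallel map-output fetches per reducer means fewer
// simultaneous connections to each TaskTracker's jetty server.
public class ShuffleTuning {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setInt("mapred.reduce.parallel.copies", 2);           // old property name (default 5)
        conf.setInt("mapreduce.reduce.shuffle.parallelcopies", 2); // new property name
        // ... build and submit the Job with this conf as usual ...
    }
}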