I have a 4-node cluster with 96 GB of memory in total.
I split my input into 100 files and configured the job to use 100 mappers. Yet, judging from the logs, the mappers appear to run sequentially:
[2014/10/08 15:22:36] INFO: Total input paths to process : 100
[2014/10/08 15:22:36] INFO: number of splits:100
[2014/10/08 15:22:36] INFO: Starting task: attempt_local1244628585_0001_m_000000_0
[2014/10/08 15:22:36] INFO: Submitting tokens for job: job_local1244628585_0001
[2014/10/08 15:22:36] INFO: Processing split: hdfs://.../input/in10:0+2
[2014/10/08 15:22:38] INFO: Task:attempt_local1244628585_0001_m_000000_0 is done. And is in the process of committing
[2014/10/08 15:22:38] INFO: Task attempt_local1244628585_0001_m_000000_0 is allowed to commit now
[2014/10/08 15:22:38] INFO: Saved output of task 'attempt_local1244628585_0001_m_000000_0' to hdfs://.../output/_temporary/0/task_local1244628585_0001_m_000000
[2014/10/08 15:22:38] INFO: hdfs://.../input/in10:0+2
[2014/10/08 15:22:38] INFO: Task 'attempt_local1244628585_0001_m_000000_0' done.
[2014/10/08 15:22:38] INFO: Finishing task: attempt_local1244628585_0001_m_000000_0
[2014/10/08 15:22:38] INFO: Starting task: attempt_local1244628585_0001_m_000001_0
And so on: it essentially finishes one task before starting the next.
You are running in local mode:
[2014/10/08 15:22:36] INFO: Starting task: attempt_**local**1244628585_0001_m_000000_0
Depending on your Hadoop version, you need to point the job client at the cluster: on Hadoop 1.x, configure the JobTracker address (`mapred.job.tracker` in mapred-site.xml); on Hadoop 2.x (YARN), set `mapreduce.framework.name` to `yarn` and configure the ResourceManager address. When neither is set, the client falls back to the local job runner, which executes all mappers sequentially inside a single JVM.
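As a minimal sketch, assuming Hadoop 2.x with YARN (the host name below is a placeholder for your own ResourceManager node), the relevant client-side configuration would look like this:

```xml
<!-- mapred-site.xml: submit jobs to YARN instead of the local runner.
     mapreduce.framework.name defaults to "local" when unset, which is
     why every mapper ran one after another inside one JVM. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

<!-- yarn-site.xml: tells the job client where to find the ResourceManager.
     "resourcemanager.example.com" is a placeholder host name. -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>resourcemanager.example.com</value>
  </property>
</configuration>
```

On Hadoop 1.x the equivalent fix is setting `mapred.job.tracker` in mapred-site.xml to the JobTracker's host:port; its default value is `local`, which triggers the same sequential behavior. Once the client picks up these settings, the task IDs in the log should no longer contain `local`.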