I am new to Cloudera and Hadoop, and the output of the Cloudera WordCount 1.0 example (part-00000) is empty. The steps and files I am using are shown here. I am happy to provide any job log information or version details that would help - I just need some guidance on where to find them. Below are the job output and the source. Of the other part files that were written (part-00001 through part-00011), the non-empty ones are part-00001 (Bye 1), part-00002 (Hadoop 2), part-00004 (Goodbye 1), part-00005 (World 2), and part-00009 (Hello 2). Any help would be great.
Here are the commands and their output:
[me@server ~]$ hadoop fs -cat /user/me/wordcount/input/file0
Hello World Bye World
[me@server ~]$ hadoop fs -cat /user/me/wordcount/input/file1
Hello Hadoop Goodbye Hadoop
[me@server ~]$ hadoop jar wordcount.jar org.myorg.WordCount /user/me/wordcount/input /user/me/wordcount/output
13/11/12 10:39:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/11/12 10:39:41 INFO mapred.FileInputFormat: Total input paths to process : 2
13/11/12 10:39:42 INFO mapred.JobClient: Running job: job_201311051201_0014
13/11/12 10:39:43 INFO mapred.JobClient: map 0% reduce 0%
13/11/12 10:39:49 INFO mapred.JobClient: map 33% reduce 0%
13/11/12 10:39:52 INFO mapred.JobClient: map 67% reduce 0%
13/11/12 10:39:53 INFO mapred.JobClient: map 100% reduce 0%
13/11/12 10:39:58 INFO mapred.JobClient: map 100% reduce 25%
13/11/12 10:40:01 INFO mapred.JobClient: map 100% reduce 100%
13/11/12 10:40:04 INFO mapred.JobClient: Job complete: job_201311051201_0014
13/11/12 10:40:04 INFO mapred.JobClient: Counters: 33
13/11/12 10:40:04 INFO mapred.JobClient: File System Counters
13/11/12 10:40:04 INFO mapred.JobClient: FILE: Number of bytes read=313
13/11/12 10:40:04 INFO mapred.JobClient: FILE: Number of bytes written=2695420
13/11/12 10:40:04 INFO mapred.JobClient: FILE: Number of read operations=0
13/11/12 10:40:04 INFO mapred.JobClient: FILE: Number of large read operations=0
13/11/12 10:40:04 INFO mapred.JobClient: FILE: Number of write operations=0
13/11/12 10:40:04 INFO mapred.JobClient: HDFS: Number of bytes read=410
13/11/12 10:40:04 INFO mapred.JobClient: HDFS: Number of bytes written=41
13/11/12 10:40:04 INFO mapred.JobClient: HDFS: Number of read operations=18
13/11/12 10:40:04 INFO mapred.JobClient: HDFS: Number of large read operations=0
13/11/12 10:40:04 INFO mapred.JobClient: HDFS: Number of write operations=24
13/11/12 10:40:04 INFO mapred.JobClient: Job Counters
13/11/12 10:40:04 INFO mapred.JobClient: Launched map tasks=3
13/11/12 10:40:04 INFO mapred.JobClient: Launched reduce tasks=12
13/11/12 10:40:04 INFO mapred.JobClient: Data-local map tasks=3
13/11/12 10:40:04 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=16392
13/11/12 10:40:04 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=61486
13/11/12 10:40:04 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/11/12 10:40:04 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/11/12 10:40:04 INFO mapred.JobClient: Map-Reduce Framework
13/11/12 10:40:04 INFO mapred.JobClient: Map input records=2
13/11/12 10:40:04 INFO mapred.JobClient: Map output records=8
13/11/12 10:40:04 INFO mapred.JobClient: Map output bytes=82
13/11/12 10:40:04 INFO mapred.JobClient: Input split bytes=357
13/11/12 10:40:04 INFO mapred.JobClient: Combine input records=8
13/11/12 10:40:04 INFO mapred.JobClient: Combine output records=6
13/11/12 10:40:04 INFO mapred.JobClient: Reduce input groups=5
13/11/12 10:40:04 INFO mapred.JobClient: Reduce shuffle bytes=649
13/11/12 10:40:04 INFO mapred.JobClient: Reduce input records=6
13/11/12 10:40:04 INFO mapred.JobClient: Reduce output records=5
13/11/12 10:40:04 INFO mapred.JobClient: Spilled Records=12
13/11/12 10:40:04 INFO mapred.JobClient: CPU time spent (ms)=15650
13/11/12 10:40:04 INFO mapred.JobClient: Physical memory (bytes) snapshot=3594293248
13/11/12 10:40:04 INFO mapred.JobClient: Virtual memory (bytes) snapshot=18375352320
13/11/12 10:40:04 INFO mapred.JobClient: Total committed heap usage (bytes)=6497697792
13/11/12 10:40:04 INFO mapred.JobClient: org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
13/11/12 10:40:04 INFO mapred.JobClient: BYTES_READ=50
[me@server ~]$ hadoop fs -cat /user/me/wordcount/output/part-00000
[me@server ~]$ hdfs dfs -ls -R /user/me/wordcount/output
-rw-r--r-- 3 me me 0 2013-11-12 10:40 /user/me/wordcount/output/_SUCCESS
drwxr-xr-x - me me 0 2013-11-12 10:39 /user/me/wordcount/output/_logs
drwxr-xr-x - me me 0 2013-11-12 10:39 /user/me/wordcount/output/_logs/history
-rw-r--r-- 3 me me 67134 2013-11-12 10:40 /user/me/wordcount/output/_logs/history/job_201311051201_0014_1384270782432_me_wordcount
-rw-r--r-- 3 me me 81866 2013-11-12 10:39 /user/me/wordcount/output/_logs/history/job_201311051201_0014_conf.xml
-rw-r--r-- 3 me me 0 2013-11-12 10:39 /user/me/wordcount/output/part-00000
-rw-r--r-- 3 me me 6 2013-11-12 10:39 /user/me/wordcount/output/part-00001
-rw-r--r-- 3 me me 9 2013-11-12 10:39 /user/me/wordcount/output/part-00002
-rw-r--r-- 3 me me 0 2013-11-12 10:39 /user/me/wordcount/output/part-00003
-rw-r--r-- 3 me me 10 2013-11-12 10:39 /user/me/wordcount/output/part-00004
-rw-r--r-- 3 me me 8 2013-11-12 10:39 /user/me/wordcount/output/part-00005
-rw-r--r-- 3 me me 0 2013-11-12 10:39 /user/me/wordcount/output/part-00006
-rw-r--r-- 3 me me 0 2013-11-12 10:39 /user/me/wordcount/output/part-00007
-rw-r--r-- 3 me me 0 2013-11-12 10:39 /user/me/wordcount/output/part-00008
-rw-r--r-- 3 me me 8 2013-11-12 10:39 /user/me/wordcount/output/part-00009
-rw-r--r-- 3 me me 0 2013-11-12 10:39 /user/me/wordcount/output/part-00010
-rw-r--r-- 3 me me 0 2013-11-12 10:39 /user/me/wordcount/output/part-00011
[me@server ~]$
Here is the source:
package org.myorg;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
You are launching 12 reduce tasks (Launched reduce tasks=12) even though the map side only produces five distinct outputs; per the tutorial, those five outputs are exactly what is expected. In CDH3 the number of reducers was set to the number of mapper outputs; most likely that behavior changed in CDH4 - check your configuration files for a setting like mapred.reduce.tasks or similar.
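If the cluster default is what is pulling in 12 reducers, one hedged fix (my sketch, not something from the original answer) is to pin the count directly in the driver shown above, since the old mapred API's JobConf exposes setNumReduceTasks:

// Inside WordCount.main(), anywhere before JobClient.runJob(conf):
// force a single reduce task so every word is written to part-00000.
// This overrides any mapred.reduce.tasks value coming from the cluster config.
conf.setNumReduceTasks(1);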
This is because the number of reducers used in your job is greater than the number of keys (i.e. words) you actually have. As a result, some of the reducers' output files are empty. Look at how the default partitioner splits data across reducers based on the number of reducers and the key being sent to them, i.e. the HashPartitioner (link).
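For reference, the default partitioning boils down to hashing the key modulo the reducer count; a minimal sketch of that idea (my illustration - the class name PartitionSketch is made up, and the exact indices printed depend on Text.hashCode):

import org.apache.hadoop.io.Text;

// Minimal sketch of the default hash-partitioning logic: each key lands in
// the reducer whose index is hash(key) mod numReduceTasks. With 12 reducers
// but only 5 distinct words, at most 5 of the 12 partitions can ever receive
// records, so the remaining part files are written out empty.
public class PartitionSketch {
    static int partitionFor(Text key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        for (String w : new String[] {"Hello", "World", "Bye", "Hadoop", "Goodbye"}) {
            System.out.println(w + " -> partition " + partitionFor(new Text(w), 12));
        }
    }
}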
OK, many thanks to Binary01 and davek3 for the direction. I had to do some reading to understand what was going on, but for posterity I will share the details of the answer here: I got it working by compiling the v2.0 code, so that it would take "-D mapred.reduce.tasks=1", which produced the correct output. Just for kicks, I also ran it against Hamlet without the -D, and that worked as well.
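The warning at the top of the job output ("Applications should implement Tool") points at the same thing: generic options such as -D mapred.reduce.tasks=1 are only honored when the driver goes through ToolRunner. A hedged sketch of such a driver for the old mapred API (my adaptation - the class name WordCountTool is made up, and this is not the v2.0 code referred to above) could look like:

package org.myorg;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Driver that implements Tool so ToolRunner/GenericOptionsParser can apply
// -D key=value options (e.g. -D mapred.reduce.tasks=1) before the job runs.
public class WordCountTool extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        JobConf conf = new JobConf(getConf(), WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(WordCount.Map.class);
        conf.setCombinerClass(WordCount.Reduce.class);
        conf.setReducerClass(WordCount.Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new WordCountTool(), args));
    }
}

It would then be launched with something like: hadoop jar wordcount.jar org.myorg.WordCountTool -D mapred.reduce.tasks=1 /user/me/wordcount/input /user/me/wordcount/output.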
You can run a simple command to combine the output of all the part files:
cat part-* > output.txt
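Since the part files in this question live in HDFS rather than on the local disk, the equivalent (assuming the output path used above) would stream them through hadoop fs:

hadoop fs -cat /user/me/wordcount/output/part-* > output.txt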