I'm trying to create a simple MapReduce job by modifying the wordcount example supplied with Hadoop.
I'm trying to output a list instead of a count of the words. The wordcount example gives the following output:
hello 2
world 2
I'm trying to get it to output a list instead, which will form the basis of future work:
hello 1 1
world 1 1
I believe I'm on the right track, but I'm having trouble writing the list. Instead of the above, I'm getting:
Hello foo.MyArrayWritable@61250ff2
World foo.MyArrayWritable@483a0ab1
Here is my MyArrayWritable. I put a sysout in write(DataOutput arg0), but it never outputs anything, so I think that method may not be being called, and I don't know why.
class MyArrayWritable extends ArrayWritable {

    public MyArrayWritable(Class<? extends Writable> valueClass, Writable[] values) {
        super(valueClass, values);
    }

    public MyArrayWritable(Class<? extends Writable> valueClass) {
        super(valueClass);
    }

    @Override
    public IntWritable[] get() {
        return (IntWritable[]) super.get();
    }

    @Override
    public void write(DataOutput arg0) throws IOException {
        for (IntWritable i : get()) {
            i.write(arg0);
        }
    }
}
EDIT - adding more source code
public class WordCount {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, MyArrayWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            ArrayList<IntWritable> list = new ArrayList<IntWritable>();
            for (IntWritable val : values) {
                list.add(val);
            }
            context.write(key, new MyArrayWritable(IntWritable.class, list.toArray(new IntWritable[list.size()])));
        }
    }

    public static void main(String[] args) throws Exception {
        if (args == null || args.length == 0)
            args = new String[]{"./wordcount/input", "./wordcount/output"};

        // delete the output path if it already exists
        Path p = new Path(args[1]);
        FileSystem fs = FileSystem.get(new Configuration());
        if (fs.exists(p)) {
            fs.delete(p, true);
        }

        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setJarByClass(WordCount.class);
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
You have a 'bug' in your reducer - the values iterator reuses the same IntWritable instance throughout the loop, so you should copy the value being added to the list, like this:
public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
    ArrayList<IntWritable> list = new ArrayList<IntWritable>();
    for (IntWritable val : values) {
        // IntWritable has no copy constructor, so copy the primitive value
        list.add(new IntWritable(val.get()));
    }
    context.write(key, new MyArrayWritable(IntWritable.class, list.toArray(new IntWritable[list.size()])));
}
This isn't actually a problem in your case, because you're using an ArrayList and your mapper only ever outputs a single value (one), but it could trip you up if you extend this code.
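The reuse pitfall above can be reproduced without Hadoop using any mutable holder object. The sketch below (plain Java, no Hadoop dependencies; the `Holder` class is a stand-in for `IntWritable`, which works the same way: one instance, mutated via `set()`) contrasts adding the same object repeatedly with copying its current value:

```java
import java.util.ArrayList;
import java.util.List;

public class ReuseDemo {
    // Mutable holder standing in for Hadoop's IntWritable (assumption:
    // the framework reuses one instance and overwrites it via set()).
    static class Holder {
        private int value;
        Holder(int value) { this.value = value; }
        void set(int value) { this.value = value; }
        int get() { return value; }
    }

    public static void main(String[] args) {
        Holder reused = new Holder(0); // the "iterator" reuses this one object
        List<Holder> wrong = new ArrayList<>();
        List<Holder> right = new ArrayList<>();
        for (int v : new int[]{1, 2, 3}) {
            reused.set(v);
            wrong.add(reused);                   // every element is the same object
            right.add(new Holder(reused.get())); // copy the current value
        }
        // All entries in 'wrong' report the last value written
        System.out.println(wrong.get(0).get() + " " + wrong.get(1).get()); // 3 3
        System.out.println(right.get(0).get() + " " + right.get(1).get()); // 1 2
    }
}
```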
You also need to tell the job that your map and reduce output types differ:
// map output types
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);

// reducer output types
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(MyArrayWritable.class);
You might also want to explicitly set the number of reducers (which may be why you never see your sysout being written to the task logs, especially if your cluster admins have set the default number to 0):
job.setNumReduceTasks(1);
You're using the default TextOutputFormat, which calls toString() on the output key and value pairs - MyArrayWritable doesn't override toString, so you get the default Object representation (class name plus hash code). You should add one to your MyArrayWritable:
@Override
public String toString() {
    return Arrays.toString(get());
}
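With an override like that, the value side of each record renders as a normal Java array. A minimal sketch of the rendering behavior (plain Java; `Integer` stands in for `IntWritable`, whose toString also just prints its int value, so a word with two counts of one would come out as shown):

```java
import java.util.Arrays;

public class ToStringDemo {
    public static void main(String[] args) {
        // Arrays.toString calls toString() on each element,
        // so an array of two ones renders as "[1, 1]"
        Integer[] counts = {1, 1};
        System.out.println("hello\t" + Arrays.toString(counts)); // prints: hello	[1, 1]
    }
}
```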
Finally, remove the overridden write method from MyArrayWritable - it is not a valid implementation compatible with the inherited readFields method. You don't need to override write at all, but if you do (say, to add a sysout to verify it's being called), delegate to the parent instead:
@Override
public void write(DataOutput arg0) throws IOException {
    System.out.println("write method called");
    super.write(arg0);
}