I want to do a total sort in a MapReduce Word Count job.
public int run(String[] args) throws Exception {
    Job job = Job.getInstance(getConf(), "wordcount");
    job.setJarByClass(this.getClass());
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.setMapperClass(Map.class);
    //Total Sort
    job.setPartitionerClass(TotalOrderPartitioner.class);
    InputSampler.Sampler<Text, IntWritable> sampler = new InputSampler.RandomSampler<Text, IntWritable>(0.1, 10000, 10);
    InputSampler.writePartitionFile(job, sampler);
    Path inputDir = new Path(args[2] + "/_tmp");
    Path partitionFile = new Path(inputDir, "_partitioning");
    TotalOrderPartitioner.setPartitionFile(job.getConfiguration(), partitionFile);
    InputSampler.writePartitionFile(job, sampler);
    job.setReducerClass(Reduce.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    return job.waitForCompletion(true) ? 0 : 1;
}
But I get an error like

java.io.IOException: wrong key class: org.apache.hadoop.io.Text is not class org.apache.hadoop.io.LongWritable

I don't understand how InputSampler.RandomSampler works here.
In the code above, no InputFormat is set for the job, so the default, TextInputFormat<LongWritable, Text>, is used. The InputSampler.RandomSampler<Text, IntWritable>, however, is parameterized with Text and IntWritable, which does not match TextInputFormat: the sampler draws its samples through the job's InputFormat, so its type parameters have to be the InputFormat's key/value types. Because of this type mismatch between the InputFormat and the InputSampler, the error is thrown.
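
A minimal sketch of one way to make the types line up is a second, sort-only job whose InputFormat actually produces Text keys, for example by having the word-count job write its <Text, IntWritable> results as a SequenceFile first. Everything below (the method name, the argument indices, the partition-file path, the reducer count) is illustrative rather than taken from the code above, and it assumes, like the question's run() method, that the class extends Configured:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

// Second job: totally orders the <word, count> pairs that a first
// word-count job has already written out as a SequenceFile.
public int runTotalSort(String[] args) throws Exception {
    Job job = Job.getInstance(getConf(), "wordcount-totalsort");
    job.setJarByClass(this.getClass());

    // The input is the SequenceFile output of the word-count job, so the
    // InputFormat's key/value types are Text/IntWritable and match the sampler.
    job.setInputFormatClass(SequenceFileInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[1]));   // output dir of job 1 (illustrative)
    FileOutputFormat.setOutputPath(job, new Path(args[2])); // final sorted output (illustrative)

    // The default identity Mapper/Reducer pass <Text, IntWritable> through unchanged.
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setNumReduceTasks(3); // more than one reducer, otherwise total ordering is trivial

    // Point the partitioner at its partition file *before* sampling.
    Path partitionFile = new Path(args[2] + "_partitions");
    TotalOrderPartitioner.setPartitionFile(job.getConfiguration(), partitionFile);
    job.setPartitionerClass(TotalOrderPartitioner.class);

    // Now the sampler's type parameters agree with the InputFormat's types,
    // so writePartitionFile can sample Text keys without a key-class mismatch.
    InputSampler.Sampler<Text, IntWritable> sampler =
            new InputSampler.RandomSampler<Text, IntWritable>(0.1, 10000, 10);
    InputSampler.writePartitionFile(job, sampler);

    return job.waitForCompletion(true) ? 0 : 1;
}

For this to work, the first word-count job would write its results with SequenceFileOutputFormat and Text/IntWritable output types, so that this job can read them back with matching key/value classes.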