Getting an exception in Hadoop's WordCount program



I am facing this exception when I try to run my first program on Hadoop. (I am using the new Hadoop API on version 0.20.2.) I searched online, and it looks like most people ran into this problem when they did not set the MapperClass and ReducerClass in the configuration logic. But I have checked, and the code looks fine. I would really appreciate it if someone could help me.

java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:871)

package com.test.wc;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class WordCountMapper extends Mapper<LongWritable,Text,Text,IntWritable> {
public void Map(LongWritable key, Text value, Context ctx) throws IOException, InterruptedException {
    String line = value.toString();
    for (String word : line.split("\\W+")) {
        if (word.length() > 0) {
            ctx.write(new Text(word), new IntWritable(1));
        }
    }
}
}

package com.test.wc;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class WordCountReducer extends Reducer<Text,IntWritable,Text,IntWritable> {
public void reduce(Text key, Iterable<IntWritable> values, Context ctx) throws IOException, InterruptedException {
    int wordCount = 0;
    for (IntWritable value : values) {
        wordCount += value.get();
    }
    ctx.write(key, new IntWritable(wordCount));
}
}

package com.test.wc;
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class WordCountJob {
public static void main(String args[]) throws IOException, InterruptedException, ClassNotFoundException{
    if (args.length != 2) {
        System.err.println("Usage: WordCountJob <input path> <output path>");
        System.exit(-1);
    }
    Job job = new Job();
    job.setJarByClass(WordCountJob.class);
    job.setJobName("WordCountJob");

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.setMapperClass(WordCountMapper.class);
    job.setReducerClass(WordCountReducer.class);
    //job.setCombinerClass(WordCountReducer.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    System.exit(job.waitForCompletion(true) ? 0:1);
}
}

Your Map() method never overrides Mapper's map() method, because you used a capital M instead of a lowercase m.

As a result, the default identity map method is used, which emits the same key/value pair it receives as input. Since your mapper declares extends Mapper<LongWritable,Text,Text,IntWritable>, it ends up trying to output LongWritable, Text instead of Text, IntWritable, which causes the exception.
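
For reference, the default map() in org.apache.hadoop.mapreduce.Mapper is essentially an identity pass-through, roughly like this (paraphrased from the Hadoop source, so treat it as a sketch):

protected void map(KEYIN key, VALUEIN value, Context context)
        throws IOException, InterruptedException {
    // Identity behaviour: whatever comes in (here LongWritable, Text)
    // goes straight out, so the framework sees a LongWritable key where
    // it expects the declared map output key type, Text.
    context.write((KEYOUT) key, (VALUEOUT) value);
}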

Changing Map() to map() and adding the @Override annotation should do the trick. If you use an IDE, I strongly recommend its built-in method-override feature to avoid mistakes like this.
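
Applied to the mapper above, the corrected method would look like this (with @Override, the compiler refuses to build if the signature does not actually override anything):

@Override
public void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
    String line = value.toString();
    // Split on runs of non-word characters and emit (word, 1) per token.
    for (String word : line.split("\\W+")) {
        if (word.length() > 0) {
            ctx.write(new Text(word), new IntWritable(1));
        }
    }
}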

Change your mapper function from

public void Map(LongWritable key, Text value, Context ctx)

to

public void map(LongWritable key, Text value, Context ctx)

That worked for me.

Hadoop version: Hadoop 1.0.3
