Hadoop 2 (YARN): getting java.io.IOException: wrong key class



I am trying to run a Hadoop 2 MapReduce job whose output format class is SequenceFileOutputFormat and whose input format class is SequenceFileInputFormat.

I chose to have the Mapper emit both its key and its value as BytesWritable. The Reducer emits its key as IntWritable and its value as BytesWritable.

Every time, I get the following error:

Error: java.io.IOException: wrong key class: org.apache.hadoop.io.BytesWritable is not class org.apache.hadoop.io.IntWritable
        at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1306)
        at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:83)
        at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558)
        at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
        at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
        at org.apache.hadoop.mapreduce.Reducer.reduce(Reducer.java:150)
        at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

I found that the problem goes away when the OutputFormat is defined as something other than SequenceFileOutputFormat, but I need it to be SequenceFileOutputFormat.

The main method is as follows:

    Configuration conf = new Configuration(true);
    conf.set("refpath", "/out/Sample1/Local/EU/CloudBurst/BinaryFiles/ref.br");
    conf.set("qrypath",   "/out/Sample1/Local/EU/CloudBurst/BinaryFiles/qry.br");
    conf.set("MIN_READ_LEN",      Integer.toString(MIN_READ_LEN));
    conf.set("MAX_READ_LEN",      Integer.toString(MAX_READ_LEN));
    conf.set("K",                 Integer.toString(K));
    conf.set("SEED_LEN",          Integer.toString(SEED_LEN));
    conf.set("FLANK_LEN",         Integer.toString(FLANK_LEN));
    conf.set("ALLOW_DIFFERENCES", Integer.toString(ALLOW_DIFFERENCES));
    conf.set("BLOCK_SIZE",        Integer.toString(BLOCK_SIZE));
    conf.set("REDUNDANCY",        Integer.toString(REDUNDANCY));
    conf.set("FILTER_ALIGNMENTS", (FILTER_ALIGNMENTS ? "1" : "0"));
    Job job = Job.getInstance(conf, "CloudBurst"); // new Job(conf, ...) is deprecated in Hadoop 2
    job.setNumReduceTasks(NUM_REDUCE_TASKS); // MV2
    //conf.setNumMapTasks(NUM_MAP_TASKS); TODO find solution for mv2
    FileInputFormat.addInputPath(job, new Path("/out/Sample1/Local/EU/CloudBurst/BinaryFiles/ref.br"));//TODO change it fit to the params
    FileInputFormat.addInputPath(job, new Path("/out/Sample1/Local/EU/CloudBurst/BinaryFiles/qry.br"));//TODO change it fit to the params
    job.setJarByClass(MerReduce.class);//mv2
    job.setInputFormatClass(SequenceFileInputFormat.class);
    job.setOutputFormatClass(SequenceFileOutputFormat.class);
    // The order of seeds is not important, but make sure the reference seeds are seen before the qry seeds
    job.setPartitionerClass(MerReduce.PartitionMers.class); // mv2
    job.setGroupingComparatorClass(MerReduce.GroupMersWC.class); //mv2 TODO
    job.setMapperClass(MerReduce.MapClass.class);
    job.setReducerClass(MerReduce.ReduceClass.class);
    job.setMapOutputKeyClass(BytesWritable.class);//mv2
    job.setMapOutputValueClass(BytesWritable.class);//mv2
    job.setOutputKeyClass(IntWritable.class);
    job.setOutputValueClass(BytesWritable.class);
    Path oPath = new Path("/out/Sample1/Local/EU/Vectors");//TODO change it fit to the params
    //conf.setOutputPath(oPath);
    FileOutputFormat.setOutputPath(job, oPath);
    System.err.println("  Removing old results");
    FileSystem.get(conf).delete(oPath, true); // recursive delete of any previous output
    int code = job.waitForCompletion(true) ? 0 : 1;
    System.err.println("Finished");
}

The Mapper class header:

public static class MapClass extends Mapper<IntWritable, BytesWritable, BytesWritable, BytesWritable>
public void map(IntWritable id, BytesWritable rawRecord, Context context) throws IOException, InterruptedException

The Reducer class header:

public static class ReduceClass extends Reducer&lt;BytesWritable, BytesWritable, IntWritable, BytesWritable&gt;
public synchronized void reduce(BytesWritable mer, Iterator<BytesWritable> values,Context context)
            throws IOException, InterruptedException {
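One detail worth checking in the reducer above: its `reduce` method declares the values parameter as `Iterator<BytesWritable>`, whereas the new (`org.apache.hadoop.mapreduce`) API declares `Reducer.reduce` with an `Iterable`. In Java, a method with a different parameter type does not override the base method, so the framework would call the default identity `reduce`, which passes the BytesWritable map-output key straight through to the SequenceFile writer that expects IntWritable, exactly matching the stack trace (note the frame `org.apache.hadoop.mapreduce.Reducer.reduce`). The following Hadoop-free sketch illustrates the general Java pitfall; all class names here are illustrative stand-ins, not the original code:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class OverridePitfall {
    // Stand-in for a framework base class that provides a default
    // "identity" reduce over an Iterable of values.
    static class BaseReducer {
        String reduce(String key, Iterable<Integer> values) {
            return "identity:" + key; // default behavior: pass the key through
        }
    }

    // Declaring the parameter as Iterator instead of Iterable means this
    // method does NOT override BaseReducer.reduce -- it only overloads it,
    // so the framework still dispatches to the identity implementation.
    static class BrokenReducer extends BaseReducer {
        String reduce(String key, Iterator<Integer> values) {
            return "custom:" + key;
        }
    }

    // With Iterable, the override takes effect (and @Override verifies it).
    static class FixedReducer extends BaseReducer {
        @Override
        String reduce(String key, Iterable<Integer> values) {
            return "custom:" + key;
        }
    }

    public static void main(String[] args) {
        List<Integer> vals = Arrays.asList(1, 2, 3);
        BaseReducer broken = new BrokenReducer();
        BaseReducer fixed = new FixedReducer();
        // The "framework" always calls reduce(key, Iterable) via the base type:
        System.out.println(broken.reduce("k", vals)); // identity:k
        System.out.println(fixed.reduce("k", vals));  // custom:k
    }
}
```

Adding `@Override` to the real reducer's `reduce` method is a cheap way to make the compiler catch this class of mistake.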

Does anyone have an idea?

job.setInputFormatClass(SequenceFileInputFormat.class);

should be

job.setInputFormatClass(IntWritable.class);

Your mapper's input is an int and bytes, but in the job you are supplying both of them as sequence files.
