Wrong value class: org.apache.mahout.math.VarLongWritable is not org.apache.mahout.math.VectorWritable



I ran into a problem while doing some recommendation work with Mahout and Hadoop.

The error message is:

Error: java.io.IOException: wrong value class: org.apache.mahout.math.VarLongWritable is not class org.apache.mahout.math.VectorWritable
    at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1378)
    at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:83)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
    at org.apache.hadoop.mapreduce.Reducer.reduce(Reducer.java:150)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

And the main function is:

    job.setInputFormatClass(TextInputFormat.class);

    job.setMapperClass(FilesToItemPrefsMapper.class);
    job.setMapOutputKeyClass(VarLongWritable.class);
    job.setMapOutputValueClass(VarLongWritable.class);
    job.setReducerClass(FileToUserVectorReducer.class);
    job.setOutputKeyClass(VarLongWritable.class);
    job.setOutputValueClass(VectorWritable.class);
    job.setOutputFormatClass(SequenceFileOutputFormat.class);
    SequenceFileOutputFormat.setOutputCompressionType(job,CompressionType.NONE);
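For context, this configuration would normally sit inside a standard Hadoop driver. A minimal sketch follows; the driver class name, job name, and input/output paths are placeholders I am assuming, not part of the original post:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile.CompressionType;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
    import org.apache.mahout.math.VarLongWritable;
    import org.apache.mahout.math.VectorWritable;

    public class FileToUserVectorDriver {  // hypothetical driver class
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "file-to-user-vector");
            job.setJarByClass(FileToUserVectorDriver.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));    // assumed input path
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // assumed output path

            job.setInputFormatClass(TextInputFormat.class);
            job.setMapperClass(FilesToItemPrefsMapper.class);
            job.setMapOutputKeyClass(VarLongWritable.class);
            job.setMapOutputValueClass(VarLongWritable.class);
            job.setReducerClass(FileToUserVectorReducer.class);
            job.setOutputKeyClass(VarLongWritable.class);
            job.setOutputValueClass(VectorWritable.class);
            job.setOutputFormatClass(SequenceFileOutputFormat.class);
            SequenceFileOutputFormat.setOutputCompressionType(job, CompressionType.NONE);

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }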

The mapper is:

    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        Matcher m = NUMBERS.matcher(line);
        // the first number on the line is the user ID
        m.find();
        VarLongWritable userID = new VarLongWritable(Long.parseLong(m.group()));
        VarLongWritable itemID = new VarLongWritable();
        // each subsequent number is an item ID for that user
        while (m.find()) {
            itemID.set(Long.parseLong(m.group()));
            context.write(userID, itemID);
        }
    }
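The mapper references a NUMBERS pattern that the post does not show; presumably it is a digit-matching regular expression declared on the mapper class, along the lines of this sketch (the pattern and the class skeleton are assumptions, not code from the original post):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.mahout.math.VarLongWritable;

    public class FilesToItemPrefsMapper
            extends Mapper<LongWritable, Text, VarLongWritable, VarLongWritable> {
        // Assumed: matches each run of digits in an input line.
        private static final Pattern NUMBERS = Pattern.compile("(\\d+)");

        // map(...) as shown above
    }

With such a pattern, a line like "98955 590 22 9059" would emit the pairs (98955, 590), (98955, 22), and (98955, 9059).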

The reducer is:

public class FileToUserVectorReducer 
        extends Reducer<VarLongWritable, VarLongWritable, VarLongWritable, VectorWritable> {
    public void reducer(VarLongWritable userID, Iterable<VarLongWritable> itemPrefs, Context context)
        throws IOException, InterruptedException{
        Vector userVector = new RandomAccessSparseVector(Integer.MAX_VALUE, 100);
        for(VarLongWritable itemPref : itemPrefs){
            userVector.set((int)itemPref.get(), 1.0f);
        }
        context.write(userID, new VectorWritable(userVector));
    }
}

I thought the reducer's output value class, VectorWritable, was set by job.setOutputValueClass(VectorWritable.class). If that is the case, why does it throw this error?

The problem is in the reducer method: reducer(...) should be reduce(...). Because the method is misnamed, it does not override Reducer.reduce(), so Hadoop falls back to the base class's default implementation, which passes the VarLongWritable map outputs straight through to a job output declared as VectorWritable. The corrected reducer is:

public class FileToUserVectorReducer 
        extends Reducer<VarLongWritable, VarLongWritable, VarLongWritable, VectorWritable> {
    @Override
    public void reduce(VarLongWritable userID, Iterable<VarLongWritable> itemPrefs, Context context)
        throws IOException, InterruptedException{
        Vector userVector = new RandomAccessSparseVector(Integer.MAX_VALUE, 100);
        for(VarLongWritable itemPref : itemPrefs){
            userVector.set((int)itemPref.get(), 1.0f);
        }
        context.write(userID, new VectorWritable(userVector));
    }
}
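For reference, the default implementation that ran instead (the Reducer.reduce(Reducer.java:150) frame in the stack trace above) is essentially this identity pass-through:

    // Paraphrase of org.apache.hadoop.mapreduce.Reducer's default reduce():
    // each map output value is written out unchanged, so this job emitted
    // VarLongWritable values where VectorWritable was declared.
    protected void reduce(KEYIN key, Iterable<VALUEIN> values, Context context)
            throws IOException, InterruptedException {
        for (VALUEIN value : values) {
            context.write((KEYOUT) key, (VALUEOUT) value);
        }
    }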

@Override is very useful here. If I had used @Override, the compiler would have flagged the mistake at compile time. At first I thought the annotation was unnecessary, but this experience proved its value.
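To illustrate, annotating the misspelled method turns the silent fallback into a compile-time error:

    @Override  // javac: method does not override or implement a method from a supertype
    public void reducer(VarLongWritable userID, Iterable<VarLongWritable> itemPrefs, Context context)
            throws IOException, InterruptedException {
        // ...
    }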
