MapReduce program fails with: Exception in thread "main" java.io.IOException: Job failed!



I am trying to run my MapReduce program. When I run it, the output is as follows (I am only showing the last part of the output):

File System Counters
    FILE: Number of bytes read=3052
    FILE: Number of bytes written=224295
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=0
    HDFS: Number of bytes written=0
    HDFS: Number of read operations=5
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=1
Map-Reduce Framework
    Map input records=4
    Map output records=4
    Map output bytes=120
    Map output materialized bytes=0
    Input split bytes=97
    Combine input records=0
    Combine output records=0
    Spilled Records=0
    Failed Shuffles=0
    Merged Map outputs=0
    GC time elapsed (ms)=40
    CPU time spent (ms)=0
    Physical memory (bytes) snapshot=0
    Virtual memory (bytes) snapshot=0
    Total committed heap usage (bytes)=117927936
File Input Format Counters 
    Bytes Read=272
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
at mapreduceprogram.main(mapreduceprog.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

I don't know where the error occurs. Any help with this?

The contents of my main method:

public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(mapreduceprog.class);
    conf.setJobName("mapreduceprog");
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setMapOutputKeyClass(Text.class);
    conf.setMapOutputValueClass(Text.class);
    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);
    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    JobClient.runJob(conf);
}

My line 68 is:

JobClient.runJob(conf);

You are using the older `mapred` API; I suggest switching to the newer `mapreduce` API. Also note a likely cause of the failure in your code: you set `Reduce` as the combiner, but your map output value class is `Text` while your reducer output value class is `IntWritable`. A combiner's output types must match the map output types, because its output is fed back into the shuffle as if it were map output, so a mismatch there fails the job at runtime. With the newer API the driver looks like this:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyDriver {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        if (args.length != 2) {
            System.err.println("Usage: MyDriver <input path> <output path>");
            System.exit(-1);
        }
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(MyDriver.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);   /* reducer output key and value classes */
        job.setOutputValueClass(NullWritable.class);
        job.setInputFormatClass(CustomInputFormat.class); // substitute your own input format, e.g. TextInputFormat
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // was args[1] checked but args[2] used: out of bounds
        boolean success = job.waitForCompletion(true);
        System.exit(success ? 0 : -1);
    }
}
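To see why the combiner's types matter, here is a minimal plain-Java sketch (no Hadoop dependencies; the word-count data is hypothetical) of the combiner contract: a combiner consumes grouped map output and must emit the same (key, value) shape it consumed, so the reduce phase cannot tell whether a combiner ran.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CombinerContract {
    // A combiner maps (K, List<V>) -> (K, V) with the SAME value type V,
    // because its output re-enters the shuffle as if it were map output.
    static Map<String, Integer> combine(Map<String, List<Integer>> grouped) {
        return grouped.entrySet().stream()
                .collect(Collectors.toMap(Map.Entry::getKey,
                        e -> e.getValue().stream().mapToInt(Integer::intValue).sum()));
    }

    public static void main(String[] args) {
        // "Map phase": each word emits (word, 1), as in a classic word count.
        List<String> words = List.of("a", "b", "a", "a");
        Map<String, List<Integer>> grouped = words.stream()
                .collect(Collectors.groupingBy(w -> w,
                        Collectors.mapping(w -> 1, Collectors.toList())));

        // Local combine: output (String, Integer) matches the map output shape,
        // so a reducer could consume it unchanged.
        Map<String, Integer> partial = combine(grouped);
        System.out.println(partial.get("a")); // 3
        System.out.println(partial.get("b")); // 1
    }
}
```

If the combiner emitted a different value type than the mapper, the reducer's declared input types would no longer match what arrives over the shuffle, which in Hadoop surfaces as a failed job rather than a compile error.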
