Error while practicing Hadoop



I am trying to run a Hadoop MapReduce job, but I get the following error. I don't know why…

hadoop jar BWC11.jar WordCountDriver "/home/training/training_material/data/shakespeare/comedies" "/home/training/training_material/data/shakespeare/awl"
Warning: $HADOOP_HOME is deprecated.

Exception in thread "main" java.lang.NoClassDefFoundError: WordCountDriver (wrong name: com/felix/hadoop/training/WordCountDriver)
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:791)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:410)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
[training@localhost BasicWordCount]$

Can someone help me with this?

Driver code:

package com.felix.hadoop.training;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool{
    public static void main(String[] args) throws Exception
    {
        ToolRunner.run(new WordCountDriver(),args);
    }
    @Override
    public int run(String[] args) throws Exception {
        Job job = new Job(getConf(),"Basic Word Count Job");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setNumReduceTasks(1);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);

        return 0;
    }

}

Mapper code:

package com.felix.hadoop.training;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
/**
 * 
 * @author training
 * Class : WordCountMapper
 *
 */
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>{
    /**
     * Optimization: instead of creating the Writable objects inside map()
     * for every record, they could be created once and reused (see the
     * sketch after this class).
     */
    @Override
    public void map(LongWritable inputKey,Text inputVal,Context context) throws IOException,InterruptedException
    {
        String line = inputVal.toString();
        String[] splits = line.trim().split("\\W+"); // split on runs of non-word characters ("\W+" is not a valid Java escape and would not compile)
        for(String outputKey:splits)
        {
            context.write(new Text(outputKey), new IntWritable(1));
        }

    }
}
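
For reference, the truncated comment above presumably refers to the common object-reuse optimization for mappers. Below is a minimal sketch of what that would look like; WordCountMapperReuse is a name introduced here purely for illustration and is not part of the original code.

package com.felix.hadoop.training;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical variant of WordCountMapper: the Writable objects are created
// once per mapper instance instead of once per input record, which avoids
// allocating two objects for every token of every line.
public class WordCountMapperReuse extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1); // value never changes
    private final Text word = new Text();                      // reused for every output key

    @Override
    public void map(LongWritable inputKey, Text inputVal, Context context)
            throws IOException, InterruptedException {
        for (String token : inputVal.toString().trim().split("\\W+")) {
            if (token.isEmpty()) {
                continue;              // split() can yield an empty leading token
            }
            word.set(token);           // overwrite the buffer instead of allocating a new Text
            context.write(word, ONE);  // the framework serializes the pair immediately, so reuse is safe
        }
    }
}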

Reducer code:

package com.felix.hadoop.training;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text,IntWritable,Text, IntWritable>{
    @Override
    public void reduce(Text key,Iterable<IntWritable> listOfValues,Context context) throws IOException,InterruptedException
    {
        int sum=0;
        for(IntWritable val:listOfValues)
        {
            sum = sum + val.get();
        }
        context.write(key,new IntWritable(sum));

    }
}

I don't know why I am getting this error… I tried adding the class path, copying the class files to the location of the .jar file, and so on…

"WordCountDriver"之前添加包名" com.felix.hadoop.training "
