Ubuntu 12.04 - Eclipse 3.8 - hadoop-1.2.1 - Input path does not exist



I did set up Hadoop on my Ubuntu system and followed all the necessary steps:

1. created the HDFS file system
2. moved the text files into the input directory
3. have access rights to all the directories

But when I run the simple word-count example, I get:
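One way to sanity-check steps 1-3 is to ask the `FileSystem` API which filesystem the job would actually use and whether the input path exists there. This is only a diagnostic sketch (it assumes the Hadoop 1.x jars are on the classpath; the path `/user/gabriele/input` is taken from the code below); if it prints `file:///` instead of an `hdfs://` URI, the cluster configuration is not being picked up:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckInput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // With no core-site.xml on the classpath, this resolves to the
        // local filesystem (file:///), not HDFS.
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Filesystem: " + fs.getUri());
        System.out.println("Input exists: " + fs.exists(new Path("/user/gabriele/input")));
    }
}
```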

import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
public class wordcount {
 public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }
 } 
 public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context) 
      throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
 }
 public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.addResource(new Path("/HADOOP_HOME/conf/core-site.xml"));
    conf.addResource(new Path("/HADOOP_HOME/conf/hdfs-site.xml"));
    Job job = new Job(conf, "wordcount");
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setJarByClass(wordcount.class);
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);


 // FileInputFormat.addInputPath(job, new Path(args[0]));
 //  FileOutputFormat.setOutputPath(job, new Path(args[1]));
    FileInputFormat.setInputPaths(job, new Path("/user/gabriele/input"));
    FileOutputFormat.setOutputPath(job, new Path("/user/gabriele/output"));

    job.waitForCompletion(true);
 }
}

However, the input path is valid (I checked it from the command line too), and I can even browse the files in that path from Eclipse itself, so please help me figure out where I am wrong.

There was a solution that said to add the following 2 lines:

    config.addResource(new Path("/HADOOP_HOME/conf/core-site.xml"));
    config.addResource(new Path("/HADOOP_HOME/conf/hdfs-site.xml"));

But it still doesn't work.

Here is the error (Run As -> Run on Hadoop):

    13/11/08 08:39:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    13/11/08 08:39:12 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
    13/11/08 08:39:12 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
    13/11/08 08:39:12 INFO mapred.JobClient: Cleaning up the staging area file:/tmp/hadoop-gabriele/mapred/staging/gabriele481581440/.staging/job_local481581440_0001
    13/11/08 08:39:12 ERROR security.UserGroupInformation: PriviledgedActionException as:gabriele cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/gabriele/input
    Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/gabriele/input
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:235)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:252)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
        at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
        at wordcount.main(wordcount.java:74)

Thanks

Unless your Hadoop installation is really rooted at /HADOOP_HOME, I'd suggest you change the following lines so that HADOOP_HOME is replaced with wherever Hadoop is actually installed (/usr/lib/hadoop, /opt/hadoop, or wherever you installed it):

conf.addResource(new Path("/usr/lib/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/usr/lib/hadoop/conf/hdfs-site.xml"));

Or, in Eclipse, add the /usr/lib/hadoop/conf folder (or wherever you have installed Hadoop) to the Build classpath.
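As a further option, you can bypass the config files and point the job at HDFS explicitly, so the paths stop resolving against `file:`. This is only a sketch: `hdfs://localhost:9000` is an assumption (it is the usual `fs.default.name` for a pseudo-distributed Hadoop 1.x setup; substitute the value from your own core-site.xml):

```java
Configuration conf = new Configuration();
// Assumption: replace localhost:9000 with the fs.default.name
// value from your core-site.xml.
conf.set("fs.default.name", "hdfs://localhost:9000");
Job job = new Job(conf, "wordcount");

// Or leave conf untouched and make the paths themselves fully qualified:
FileInputFormat.setInputPaths(job, new Path("hdfs://localhost:9000/user/gabriele/input"));
FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:9000/user/gabriele/output"));
```

Note that the log also warns "No job jar file set", which is expected when launching from Eclipse without building a jar; it is unrelated to the input-path error but can cause ClassNotFound problems once the job runs on a real cluster.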
