Hadoop 2.6 java.lang.NullPointerException



I am trying to run the wordcount example:

// headers needed to make the example compile; the Pipes headers ship with
// Hadoop (they may live under a "hadoop/" prefix depending on the install)
#include <string>
#include <vector>

#include "Pipes.hh"
#include "TemplateFactory.hh"
#include "StringUtils.hh"

using namespace std;

class WordCountMapper : public HadoopPipes::Mapper {
public:
  // constructor: does nothing
  WordCountMapper( HadoopPipes::TaskContext& context ) {
  }
  // map function: receives a line, outputs (word,"1")
  // to reducer.
  void map( HadoopPipes::MapContext& context ) {
    //--- get line of text ---
    string line = context.getInputValue();
    //--- split it into words ---
    vector< string > words =
      HadoopUtils::splitString( line, " " );
    //--- emit each word tuple (word, "1" ) ---
    for ( unsigned int i=0; i < words.size(); i++ ) {
      context.emit( words[i], HadoopUtils::toString( 1 ) );
    }
  }
};
class WordCountReducer : public HadoopPipes::Reducer {
public:
  // constructor: does nothing
  WordCountReducer(HadoopPipes::TaskContext& context) {
  }
  // reduce function
  void reduce( HadoopPipes::ReduceContext& context ) {
    int count = 0;
    //--- get all tuples with the same key, and count their numbers ---
    while ( context.nextValue() ) {
      count += HadoopUtils::toInt( context.getInputValue() );
    }
    //--- emit (word, count) ---
    context.emit(context.getInputKey(), HadoopUtils::toString( count ));
  }
};
int main(int argc, char *argv[]) {
  return HadoopPipes::runTask(HadoopPipes::TemplateFactory< 
                              WordCountMapper, 
                              WordCountReducer >() );
}
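
For reference, a Pipes binary like this is normally compiled against the pipes/utils libraries shipped with Hadoop and submitted with the hadoop pipes command. The sketch below is only illustrative; the file name wordcount.cpp, the HDFS paths, and the locations under $HADOOP_HOME are assumptions, not the exact commands used here.

# build against the Pipes/Utils libraries bundled with Hadoop 2.x (paths assumed)
g++ -o wordcount wordcount.cpp \
    -I$HADOOP_HOME/include \
    -L$HADOOP_HOME/lib/native \
    -lhadooppipes -lhadooputils -lpthread -lcrypto

# put the binary on HDFS and submit the job; the Java record reader/writer
# options are needed because the C++ program does not define its own
hdfs dfs -put -f wordcount /user/$USER/bin/wordcount
hadoop pipes \
    -D hadoop.pipes.java.recordreader=true \
    -D hadoop.pipes.java.recordwriter=true \
    -input /user/$USER/input \
    -output /user/$USER/output \
    -program /user/$USER/bin/wordcount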

This is the error I get:

java.lang.Exception: java.lang.NullPointerException
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.mapred.pipes.Application.<init>(Application.java:104)
    at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:69)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
15/10/30 11:42:57 INFO mapreduce.Job: Job job_local1470785393_0001 running in uber mode : false
15/10/30 11:42:57 INFO mapreduce.Job:  map 0% reduce 0%
15/10/30 11:42:57 INFO mapreduce.Job: Job job_local1470785393_0001 failed with state FAILED due to: NA
15/10/30 11:42:57 INFO mapreduce.Job: Counters: 0
Exception in thread "main" java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
    at org.apache.hadoop.mapred.pipes.Submitter.runJob(Submitter.java:264)
    at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:503)
    at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:518)

I tried stripping the code down (keeping only the function names), but the problem persists. I have read many solutions where people edit "Application.java", but I do not have any of the Hadoop Java source files.

Add this property to mapred-site.xml:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

And add the following property to yarn-site.xml:

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
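
If the error persists after editing the two files, also make sure YARN was restarted so the NodeManager actually loads the new auxiliary service; for a standard $HADOOP_HOME layout (path assumed) that is:

$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh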

I was getting the same error, and adding the two properties above solved it: with mapreduce.framework.name set to yarn, the job no longer runs through the LocalJobRunner shown in the stack trace, which is where the Pipes Application constructor throws the NullPointerException.
