I am new to Hadoop. I had been running my Hadoop application in standalone mode and it worked fine. I have now decided to move it to pseudo-distributed mode, and I made the configuration changes as described. Snippets from my xml files are shown below.
My core-site.xml is as follows:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost/</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-onur</value>
<description>A base for other temporary directories.</description>
</property>
My hdfs-site.xml is:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
My mapred-site.xml is:
<property>
<name>mapred.job.tracker</name>
<value>localhost:8021</value>
</property>
I ran the start-dfs.sh and start-mapred.sh scripts and everything started fine:
root@vissu-desktop:/home/vissu/Raveesh/Hadoop# start-dfs.sh
starting namenode, logging to /home/vissu/Raveesh/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-root-namenode-vissu-desktop.out
localhost: starting datanode, logging to /home/vissu/Raveesh/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-root-datanode-vissu-desktop.out
localhost: starting secondarynamenode, logging to /home/vissu/Raveesh/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-root-secondarynamenode-vissu-desktop.out
root@vissu-desktop:/home/vissu/Raveesh/Hadoop# start-mapred.sh
starting jobtracker, logging to /home/vissu/Raveesh/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-root-jobtracker-vissu-desktop.out
localhost: starting tasktracker, logging to /home/vissu/Raveesh/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-root-tasktracker-vissu-desktop.out
root@vissu-desktop:/home/vissu/Raveesh/Hadoop#
Now I tried to run my application, but I got the following error:
root@vissu-desktop:/home/vissu/Raveesh/Hadoop/hadoop-0.20.2# hadoop jar ResultAgg_plainjar.jar ProcessInputFile /home/vissu/Raveesh/VotingConfiguration/sample.txt
ARG 0 obtained = ProcessInputFile
12/07/17 17:43:33 INFO preprocessing.ProcessInputFile: Modified File Name is /home/vissu/Raveesh/Hadoop/hadoop-0.20.2/sample.txt_modf
Going to process map reduce jobs
12/07/17 17:43:33 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/07/17 17:43:34 ERROR preprocessing.ProcessInputFile: Input path does not exist: hdfs://localhost/home/vissu/Raveesh/Hadoop/hadoop-0.20.2/sample.txt_modf
root@vissu-desktop:/home/vissu/Raveesh/Hadoop/hadoop-0.20.2#
The application first takes a file from a given path, modifies it, and creates sample.txt_modf, which is then consumed by the MapReduce framework. When running in standalone mode I gave an absolute local path, so it worked fine. But I am not sure what path I should pass to Hadoop's Path API in pseudo-distributed mode: whatever file I give, hdfs://localhost/ gets prepended to it. Should I simply make sure the modified file is created at that location in HDFS?
My question is how to specify the path.
The code snippet that builds the paths is:
KeyValueTextInputFormat.addInputPath(conf,
        new Path(System.getProperty("user.dir") + File.separator + inputFileofhits.getName()));
FileOutputFormat.setOutputPath(conf,
        new Path(ProcessInputFile.resultAggProps.getProperty("OUTPUT_DIRECTORY")));
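For reference, a small check like the following (just a sketch, reusing the conf and inputFileofhits variables from the snippet above) prints the fully qualified path the framework will actually look for. With fs.default.name set to hdfs://localhost/ it comes out as hdfs://localhost/home/..., which matches the error above:

// needs org.apache.hadoop.fs.FileSystem and org.apache.hadoop.fs.Path
FileSystem fs = FileSystem.get(conf);
Path input = new Path(System.getProperty("user.dir") + File.separator + inputFileofhits.getName());
// standalone mode resolved this against the local filesystem (file:///home/...),
// in pseudo-distributed mode it is resolved against hdfs://localhost/ instead
System.out.println(fs.makeQualified(input));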
Thanks
Does this file exist in HDFS? It looks like you have provided a local path to the file (user directories in HDFS are usually rooted under /user rather than /home).
You can check whether the file exists in HDFS by typing:
#> hadoop fs -ls hdfs://localhost/home/vissu/Raveesh/Hadoop/hadoop-0.20.2/sample.txt_modf
If this returns nothing, i.e. the file is not in HDFS, then you can copy it into HDFS again with the hadoop fs command:
#> hadoop fs -put /home/vissu/Raveesh/Hadoop/hadoop-0.20.2/sample.txt_modf hdfs://localhost/user/vissu/Raveesh/Hadoop/hadoop-0.20.2/sample.txt_modf
Note that paths in HDFS are rooted at /user, not /home.
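If you would rather have the application do this step itself instead of running hadoop fs -put by hand, a sketch along these lines should work (the /user/vissu/input target directory is only an example, and localModf stands for wherever your app writes sample.txt_modf locally):

// copy the locally generated file into HDFS, then use the HDFS path as the job input
FileSystem fs = FileSystem.get(conf);
Path localModf = new Path("/home/vissu/Raveesh/Hadoop/hadoop-0.20.2/sample.txt_modf");
Path hdfsInput = new Path("/user/vissu/input/sample.txt_modf");   // example HDFS location
fs.copyFromLocalFile(localModf, hdfsInput);                       // equivalent of hadoop fs -put
KeyValueTextInputFormat.addInputPath(conf, hdfsInput);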