Hadoop: multiple reducers in a pseudo-distributed environment



I am new to Hadoop. I have successfully set it up in pseudo-distributed mode, and I want to run multiple reducers (using Hadoop streaming) via the option -D mapred.reduce.tasks=2. However, there is still only one reducer.

From what I found on Google, I am fairly sure it is mapred.LocalJobRunner that limits the number of reducers to 1. But is there any workaround to get more reducers?
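
For context: in Hadoop 1.x, mapred.job.tracker defaults to the special value local, and a job submitted with that value runs through LocalJobRunner, which caps the job at a single reducer regardless of mapred.reduce.tasks. The snippet below shows that default (paraphrased from mapred-default.xml; the comment is mine):

<!-- Hadoop 1.x default: "local" means run in-process via LocalJobRunner,
     which ignores mapred.reduce.tasks and uses at most one reducer. -->
<property>
    <name>mapred.job.tracker</name>
    <value>local</value>
</property>

So if a job lands in LocalJobRunner even though mapred-site.xml points at localhost:9001, the client is somehow not picking that setting up.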

My Hadoop configuration files:

[admin@localhost string-count-hadoop]$ cat ~/hadoop-1.1.2/conf/core-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/admin/hadoop-data/tmp</value>
    </property>
</configuration>

[admin@localhost string-count-hadoop]$ cat ~/hadoop-1.1.2/conf/mapred-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>

[admin@localhost string-count-hadoop]$ cat ~/hadoop-1.1.2/conf/hdfs-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/home/admin/hadoop-data/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/home/admin/hadoop-data/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property> 
</configuration>
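
As a sanity check, all the daemons should actually be running; jps on a healthy Hadoop 1.x pseudo-distributed node looks roughly like this (process IDs are illustrative):

[admin@localhost string-count-hadoop]$ jps
4368 NameNode
4497 DataNode
4627 SecondaryNameNode
4712 JobTracker
4841 TaskTracker
5002 Jps

If JobTracker or TaskTracker is missing from this list, no job can reach localhost:9001 in the first place.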

How I start the job:

[admin@localhost string-count-hadoop]$ cat hadoop-startjob.sh 
#!/bin/sh
~/hadoop-1.1.2/bin/hadoop jar ~/hadoop-1.1.2/contrib/streaming/hadoop-streaming-1.1.2.jar \
        -D mapred.job.name=string-count \
        -D mapred.reduce.tasks=2 \
        -mapper  mapper  \
        -file    mapper  \
        -reducer reducer \
        -file    reducer \
        -input   $1      \
        -output  $2
[admin@localhost string-count-hadoop]$ ./hadoop-startjob.sh /z/programming/testdata/items_sequence /z/output
packageJobJar: [mapper, reducer] [] /tmp/streamjob837249979139287589.jar tmpDir=null
13/07/17 20:21:10 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/07/17 20:21:10 WARN snappy.LoadSnappy: Snappy native library not loaded
13/07/17 20:21:10 INFO mapred.FileInputFormat: Total input paths to process : 1
13/07/17 20:21:11 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir.
...
...
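
That WARN line from mapred.LocalJobRunner is already the telltale sign that the job ran locally rather than on the JobTracker. Another quick check, assuming the client output was saved to a file (job.log here is a hypothetical name), is the job ID format: local jobs are numbered job_local_0001 and so on, while jobs accepted by the JobTracker get IDs like job_201307172021_0001:

[admin@localhost string-count-hadoop]$ grep -o "job_[a-z0-9_]*" job.log | sort -u
job_local_0001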

Try modifying this property in core-site.xml from:

<property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
 </property>

to:

<property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000/</value>
 </property>

That is, append an extra / after 9000, then restart all the daemons. Why the trailing slash matters is not obvious; presumably without it the job client fails to match the filesystem URI and silently falls back to LocalJobRunner, which is exactly what the WARN line in the output above shows.
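
Once the job really goes through the JobTracker, the fix is easy to verify: with -D mapred.reduce.tasks=2 the output directory should contain one part file per reducer. A listing would look roughly like this (sizes and timestamps are illustrative):

[admin@localhost string-count-hadoop]$ ~/hadoop-1.1.2/bin/hadoop fs -ls /z/output
Found 3 items
-rw-r--r--   1 admin supergroup          0 2013-07-17 20:25 /z/output/_SUCCESS
-rw-r--r--   1 admin supergroup       1024 2013-07-17 20:25 /z/output/part-00000
-rw-r--r--   1 admin supergroup       1024 2013-07-17 20:25 /z/output/part-00001

The JobTracker web UI (http://localhost:50030 by default in Hadoop 1.x) should likewise report 2 reduce tasks for the job.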
