Hadoop 0.23.9: how to start the datanode



I can't seem to get hadoop to start properly. I'm using hadoop 0.23.9:

[msknapp@localhost sbin]$ hadoop namenode -format
...
[msknapp@localhost sbin]$ ./start-dfs.sh 
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/cloud/hadoop-0.23.9/logs/hadoop-msknapp-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /usr/local/cloud/hadoop-0.23.9/logs/hadoop-msknapp-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/cloud/hadoop-0.23.9/logs/hadoop-msknapp-secondarynamenode-localhost.localdomain.out
[msknapp@localhost sbin]$ ./start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /usr/local/cloud/hadoop-0.23.9/logs/yarn-msknapp-resourcemanager-localhost.localdomain.out
localhost: starting nodemanager, logging to /usr/local/cloud/hadoop-0.23.9/logs/yarn-msknapp-nodemanager-localhost.localdomain.out
[msknapp@localhost sbin]$ cd /var/local/stock/data
[msknapp@localhost data]$ hadoop fs -ls /
[msknapp@localhost data]$ hadoop fs -mkdir /stock
[msknapp@localhost data]$ ls
companies.csv  raw  slf_series.txt
[msknapp@localhost data]$ hadoop fs -put companies.csv /stock/companies.csv 
13/12/08 11:10:40 WARN hdfs.DFSClient: DataStreamer Exception
java.io.IOException: File /stock/companies.csv._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1180)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1536)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:414)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:394)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1571)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1567)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1262)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1565)
    at org.apache.hadoop.ipc.Client.call(Client.java:1094)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:195)
    at com.sun.proxy.$Proxy6.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:102)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:67)
    at com.sun.proxy.$Proxy6.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1130)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1006)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:458)
put: File /stock/companies.csv._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
13/12/08 11:10:40 ERROR hdfs.DFSClient: Failed to close file /stock/companies.csv._COPYING_
java.io.IOException: File /stock/companies.csv._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1180)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1536)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:414)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:394)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1571)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1567)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1262)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1565)
    at org.apache.hadoop.ipc.Client.call(Client.java:1094)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:195)
    at com.sun.proxy.$Proxy6.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:102)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:67)
    at com.sun.proxy.$Proxy6.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1130)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1006)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:458)

Here is my core-site.xml:

<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost/</value>
</property>

and my hdfs-site.xml:

<property>
        <name>dfs.replication</name>
        <value>1</value>
</property>

and mapred-site.xml:

    <property>
            <name>mapred.job.tracker</name>
            <value>localhost:8021</value>
    </property>

I have gone through all the documentation I can find and still can't figure out how to start hadoop correctly. There seems to be no documentation online for hadoop-0.23.9 specifically: my Hadoop book was written for 0.22, and the online documentation is for 2.1.1, which, incidentally, I could not get working either.

Can somebody please tell me how to start hadoop correctly?

Specify a port in fs.default.name.

Something like:

<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
</property>
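
Once the daemons are restarted with that value, you can confirm the namenode is actually listening on the port (a plain-Linux sanity check, nothing hadoop-specific; 54310 here matches the value above):

netstat -tln | grep 54310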

After that, create a tmp directory for HDFS:

sudo mkdir -p /app/hadoop/tmp
sudo chown msknapp /app/hadoop/tmp   # owner must be whichever user runs the hadoop daemons (msknapp in the question)

and add it to core-site.xml:

<property>
   <name>hadoop.tmp.dir</name>
   <value>/app/hadoop/tmp</value>
   <description>A base for other temporary directories.</description>
</property>

Make sure to restart the cluster afterwards.

$HADOOP_HOME/sbin/stop-all.sh
$HADOOP_HOME/sbin/start-all.sh
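
In 0.23.x, start-all.sh and stop-all.sh are deprecated wrappers that just delegate to the per-service scripts, so you can equivalently restart with the same scripts used in the question (assuming $HADOOP_HOME points at the hadoop-0.23.9 install):

$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh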

If the datanode still fails to start, wipe the HDFS data while hadoop is stopped. Reformat the namenode:

$HADOOP_HOME/bin/hadoop namenode -format

and manually delete the contents of the datanode's data directory:

/app/hadoop/tmp/dfs/data/

(A datanode refuses to register with a freshly formatted namenode when the namespaceID recorded in its data directory no longer matches, which is why that directory has to be cleared after every reformat.)

Then start hadoop again:

$HADOOP_HOME/sbin/start-all.sh
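
Before retrying the upload, it is worth verifying that a datanode is actually up; jps should list a DataNode process, and a dfsadmin report should show at least one live node:

jps                    # expect DataNode alongside NameNode and SecondaryNameNode
hdfs dfsadmin -report  # the live datanode count should be at least 1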

The key problem with your configuration shows up in this line:

java.io.IOException: File /stock/companies.csv._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.

Make sure your HDFS-specific configuration contains at least the following.

hdfs-site.xml: with the xml below, the /tmp/hdfs23/namenode and /tmp/hdfs23/datanode folders must already exist. You can configure any other folder as the hdfs root instead and place the namenode and datanode folders under it.

<configuration>
        <property>
           <name>dfs.replication</name>
           <value>1</value>
        </property>
        <property>
           <name>dfs.namenode.name.dir</name>
           <value>file:///tmp/hdfs23/namenode</value>
        </property>
        <property>
            <name>fs.checkpoint.dir</name>
            <value>file:///tmp/hdfs23/secnamenode</value>
        </property>
        <property>
             <name>fs.checkpoint.edits.dir</name>
             <value>file:///tmp/hdfs23/secnamenode</value>
        </property>
        <property>
           <name>dfs.datanode.data.dir</name>
           <value>file:///tmp/hdfs23/datanode</value>
        </property>
</configuration>
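
The directories can be pre-created in one go, assuming you keep the /tmp/hdfs23 root used in the xml above:

mkdir -p /tmp/hdfs23/namenode /tmp/hdfs23/datanode /tmp/hdfs23/secnamenode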

core-site.xml:

<configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>
     <property>
         <name>fs.default.name</name>
         <value>hdfs://localhost:9000</value>
     </property>
     <property>
        <name>hadoop.http.staticuser.user</name>
        <value>hdfs</value>
     </property>
</configuration>

Then you need to format your namenode, as you have already done:

$ hadoop namenode -format

After that, you can start HDFS as follows:

[Hadoop023_ROOT]/sbin/start-dfs.sh
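
If the datanode registers this time, the upload from the question should now succeed:

hadoop fs -mkdir /stock
hadoop fs -put companies.csv /stock/companies.csv
hadoop fs -ls /stock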
