Hadoop start-dfs is not creating a datanode on the slave



I am trying to set up a Hadoop cluster across two nodes. Running start-dfs.sh on my master node opens a window that closes shortly afterwards. When I execute start-dfs, it logs that the namenode started correctly, but the datanode did not, logging the following:

Problem binding to [slave-VM1:9005] java.net.BindException: Cannot assign requested address: bind; For more details see:  http://wiki.apache.org/hadoop/BindException

I have already set up

ssh-keygen -t rsa -P ''
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

(and set up the administrators_authorized_keys file with the correct public key; ssh user@remotemachine works and gives access to the slave)
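
For admin accounts, OpenSSH on Windows reads keys from a machine-wide file instead of ~/.ssh/authorized_keys. As a point of reference, a minimal sketch of how that file can be populated on the slave (the C:\ProgramData\ssh path and the icacls ACL fix are the stock Win32-OpenSSH conventions; adjust if your install differs):

:: on the slave: append the master's public key to the machine-wide admin key file
type id_rsa.pub >> C:\ProgramData\ssh\administrators_authorized_keys
:: restrict the ACLs; sshd refuses to use the file if its permissions are too open
icacls C:\ProgramData\ssh\administrators_authorized_keys /inheritance:r /grant "Administrators:F" /grant "SYSTEM:F"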

Here is my complete set of Hadoop configurations, identical on the master and slave machines (both Windows):

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/C:/Hadoop/hadoop-3.2.2/data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.https.address</name>
    <value>slaveVM1:50475</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/C:/Hadoop/hadoop-3.2.2/data/datanode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
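
(As an aside, dfs.name.dir and dfs.data.dir are the deprecated Hadoop 1.x names; they still work, but the canonical 3.x keys are dfs.namenode.name.dir and dfs.datanode.data.dir, e.g.:)

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/C:/Hadoop/hadoop-3.2.2/data/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/C:/Hadoop/hadoop-3.2.2/data/datanode</value>
</property>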
core-site.xml:

<configuration>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>slaveVM1:9005</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://masterVM2:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/C:/Hadoop/hadoop-3.2.2/hadoopTmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://masterVM2:8020</value>
  </property>
</configuration>
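
(Two things worth flagging here: dfs.datanode.http.address is an HDFS setting that conventionally lives in hdfs-site.xml, and fs.default.name is the deprecated alias of fs.defaultFS, yet the two entries above point at different ports, 9000 vs 8020. Presumably only one NameNode address is intended, along the lines of:)

<!-- keep a single entry, using whichever port the NameNode actually listens on -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://masterVM2:9000</value>
</property>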

mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>masterVM2:9001</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>%HADOOP_HOME%/share/hadoop/mapreduce/*,%HADOOP_HOME%/share/hadoop/mapreduce/lib/*,%HADOOP_HOME%/share/hadoop/common/*,%HADOOP_HOME%/share/hadoop/common/lib/*,%HADOOP_HOME%/share/hadoop/yarn/*,%HADOOP_HOME%/share/hadoop/yarn/lib/*,%HADOOP_HOME%/share/hadoop/hdfs/*,%HADOOP_HOME%/share/hadoop/hdfs/lib/*</value>
  </property>
</configuration>

yarn-site.xml:

<configuration>
  <property>
    <name>yarn.acl.enable</name>
    <value>0</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

PS: I am an administrator on both machines. I set HADOOP_CONF_DIR to C:\Hadoop\hadoop-3.2.2\etc\hadoop, and I also put the slave's IP in the HADOOP_CONF_DIR\slaves file.
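
For reference, a minimal sketch of that setup (the paths mirror the ones above; slaveVM1 stands in for the slave's actual hostname or IP):

set HADOOP_CONF_DIR=C:\Hadoop\hadoop-3.2.2\etc\hadoop
:: %HADOOP_CONF_DIR%\slaves -- one worker hostname or IP per line
slaveVM1

Note that the Hadoop 3.x scripts renamed this file from slaves to workers, so it is worth checking which name your 3.2.2 scripts actually read.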

PS: If I remove this block:

<property>
  <name>dfs.datanode.https.address</name>
  <value>slave:50475</value>
</property>

from hdfs-site.xml,

then both the datanode and the namenode start on the master node.
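
That would be consistent with the bind error above: the same hdfs-site.xml is deployed on both machines, so the datanode on masterVM2 is also told to bind slaveVM1:50475 (and, via core-site.xml, slaveVM1:9005), addresses that are not local to the master. One common workaround, assuming both hosts should use the same ports, is to bind to the wildcard address instead of a specific hostname:

<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:9005</value>
</property>
<property>
  <name>dfs.datanode.https.address</name>
  <value>0.0.0.0:50475</value>
</property>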

Hosts file:

*.*.*.* slaveVM1
*.*.*.* masterVM2

are the IPs of the respective machines; all other entries are commented out.

In general,

BindException: Cannot assign requested address: bind;

indicates that the port is in use. That means the application may already be running, or an earlier instance was not shut down cleanly, or another application is occupying the port. Try rebooting (a heavy-handed but fairly effective way of freeing up ports).
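
Before resorting to a reboot, you can check whether (and by what) the port is actually held. On Windows, a quick sketch for the 9005 port from the config above:

:: list sockets bound to port 9005 together with the owning PID
netstat -ano | findstr :9005
:: map that PID to a process name (replace 1234 with the PID printed above)
tasklist /FI "PID eq 1234"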
