Hadoop 3: Error when worker nodes connect to the ResourceManager



I have a 3-node Hadoop cluster (DigitalOcean droplets):

  • hadoop-master is configured as the namenode and as a datanode
  • hadoop-worker1 and hadoop-worker2 are configured as datanodes

Whenever I run a MapReduce streaming job and one of the worker nodes is picked to run the ApplicationMaster, the job hangs while trying to connect to the ResourceManager. The datanode logs show that it is trying to connect to 0.0.0.0:

INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s);
INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s);

That is the default value of the yarn.resourcemanager.hostname property. However, I have already set this property in yarn-site.xml on both of my worker nodes:

<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop-master</value>
</property>
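
Port 8030 in the log above is the ResourceManager's scheduler address, which YARN builds from yarn.resourcemanager.hostname (in yarn-default.xml, yarn.resourcemanager.scheduler.address defaults to ${yarn.resourcemanager.hostname}:8030, and the hostname itself defaults to 0.0.0.0, which matches what the log shows). As an extra safeguard (not something in my original configuration), the scheduler address could also be pinned explicitly:

<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop-master:8030</value>
</property>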

On all of my nodes the /etc/hosts file looks like this, so hadoop-master should resolve to the correct IP address:

#127.0.1.1 hadoop-worker1 hadoop-worker1
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
#::1 ip6-localhost ip6-loopback
#fe00::0 ip6-localnet
#ff00::0 ip6-mcastprefix
#ff02::1 ip6-allnodes
#ff02::2 ip6-allrouters
#ff02::3 ip6-allhosts
165.22.19.161 hadoop-master
165.22.19.154 hadoop-worker1
165.22.19.158 hadoop-worker2
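
As a quick sanity check, name resolution can be verified from each node (a minimal sketch of the kind of commands I'd run; output omitted):

getent hosts hadoop-master
ping -c 1 hadoop-master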

I also checked the configuration by browsing to the worker node's web interface at hadoop-worker1:9864 and looking at what was actually loaded:

<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop-master</value>
<final>false</final>
<source>yarn-site.xml</source>
</property>
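
The same check can also be done from the command line against the daemon's /conf servlet (a sketch, assuming the NodeManager web UI on port 8042 exposes the standard servlet):

curl -s http://hadoop-worker1:8042/conf | grep yarn.resourcemanager.hostname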

What's more, when I run a YARN command from one of the worker nodes, it actually contacts the ResourceManager correctly:

hadoop@hadoop-worker1:/opt/hadoop$ yarn node --list
2019-06-15 18:47:42,119 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/165.22.19.161:8032
Total Nodes:3
Node-Id             Node-State Node-Http-Address       Number-of-Running-Containers
hadoop-worker2:40673            RUNNING hadoop-worker2:8042                                0
hadoop-worker1:41875            RUNNING hadoop-worker1:8042                                1
hadoop-master:40075             RUNNING hadoop-master:8042                                 0
hadoop@hadoop-worker1:/opt/hadoop$
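
Since the CLI clearly reads the right configuration, one thing worth checking is whether the shell and the submitted job could be picking up different configuration directories (a sketch; /opt/hadoop/etc/hadoop is assumed to be the config dir based on my install path):

echo $HADOOP_CONF_DIR
ls -l /opt/hadoop/etc/hadoop/yarn-site.xml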

I'm not sure what else to try. I suspect it has something to do with the streaming job not picking up the settings correctly. Any help would be greatly appreciated, as I've been stuck on this for two days.

Update: I added the -D yarn.resourcemanager.hostname=hadoop-master flag to the mapred streaming command, and it now seems to work.
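
For completeness, the working invocation looks roughly like this (the input/output paths and the mapper/reducer scripts are placeholders, not my actual job); the key detail is that the -D generic option has to come before the streaming-specific options:

mapred streaming \
  -D yarn.resourcemanager.hostname=hadoop-master \
  -files mapper.py,reducer.py \
  -input /user/hadoop/input \
  -output /user/hadoop/output \
  -mapper mapper.py \
  -reducer reducer.py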
