Cannot start CDH4 secondary namenode: Invalid URI for NameNode address



I have been trying to set up a CDH4 Hadoop installation. I have 12 machines, labeled hadoop01 - hadoop12, and the namenode, job tracker, and all data nodes start up fine. I can view dfshealth.jsp and see that it has found all the data nodes.

However, whenever I try to start the secondary namenode, it throws an exception:

Starting Hadoop secondarynamenode:                         [  OK  ]
starting secondarynamenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-secondarynamenode-hadoop02.dev.terapeak.com.out
Exception in thread "main" java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:324)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:312)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:305)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:222)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:186)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:578)
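The message means the SecondaryNameNode fell back to Hadoop's built-in default for fs.defaultFS, which is `file:///`, a URI with no authority (host) component, so no NameNode address can be derived from it. As a minimal illustration (standard-library Python, not Hadoop's own Java code) of why `file:///` fails this check while an `hdfs://` URI with a host passes:

```python
from urllib.parse import urlparse

# file:/// has an empty authority (netloc) component, so there is no host
# to contact as the NameNode; hdfs://10.100.20.168/ carries one.
print(repr(urlparse("file:///").netloc))               # ''
print(repr(urlparse("hdfs://10.100.20.168/").netloc))  # '10.100.20.168'
```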

Here is the hdfs-site.xml file on my secondary namenode:

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/data/1/dfs/nn</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>10.100.20.168:50070</value>
    <description>
      The address and the base port on which the dfs NameNode Web UI will listen.
      If the port is 0, the server will start on a free port.
    </description>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.check.period</name>
    <value>3600</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.txns</name>
    <value>40000</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/var/lib/hadoop-hdfs/cache</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.edits.dir</name>
    <value>/var/lib/hadoop-hdfs/cache</value>
  </property>
  <property>
    <name>dfs.namenode.num.checkpoints.retained</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.restart.recover</name>
    <value>true</value>
  </property>
</configuration>

Something seems to be wrong with the value given for dfs.namenode.http-address, but I'm not sure what. Should it start with http:// or hdfs://? I tried opening 10.100.20.168:50070 in lynx and it displayed a page. Any ideas?

It turned out I was missing the core-site.xml configuration on the secondary namenode. After adding it, the process started correctly.

core-site.xml:

<configuration>
 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://10.100.20.168/</value>
 </property>
</configuration>
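The SecondaryNameNode reads fs.defaultFS from core-site.xml (via Hadoop's Java Configuration class) and takes the NameNode host from the URI's authority. As a rough sketch of that lookup, assuming a hypothetical standard-library Python helper rather than Hadoop's actual implementation:

```python
import xml.etree.ElementTree as ET

# The core-site.xml body from above, inlined for illustration.
CORE_SITE = """\
<configuration>
 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://10.100.20.168/</value>
 </property>
</configuration>
"""

def parse_site_xml(text):
    """Parse a Hadoop-style *-site.xml body into a {name: value} dict."""
    root = ET.fromstring(text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

conf = parse_site_xml(CORE_SITE)
print(conf["fs.defaultFS"])  # hdfs://10.100.20.168/
```

If this file is absent on the secondary namenode, the lookup yields nothing and Hadoop substitutes the default `file:///`, producing exactly the "has no authority" error above.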

If you are running a single-node cluster, make sure you have set the HADOOP_PREFIX variable correctly, as described here: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

I ran into the same problem as you, and it was fixed by setting this variable.
