MapReduce job gets stuck when launched on Hadoop



I have set up a four-node Hadoop cluster. In the Hadoop web UI I can see that all datanodes and namenodes are up and running. However, when I run select count(*) from table_name; in Hive, the query gets stuck.

hive> select count(*) from test_hive2;
Query ID = dssbp_20160804124833_ff269da1-6b91-4e46-a1df-460603a5cb98
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
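
The output stops before Hive ever prints a job ID, which usually means the submission to the ResourceManager is hanging rather than the map/reduce tasks themselves. A rough way to confirm this from the YARN side (standard YARN CLI commands; the application IDs and states it prints will of course differ per cluster):

# List the applications YARN knows about; a query that never shows up here,
# or sits in ACCEPTED state forever, points at the ResourceManager.
yarn application -list -appStates ALL

# Check that all NodeManagers have registered with the ResourceManager.
yarn node -list -all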

The error that keeps appearing in the datanode NodeManager logs and in the Hive logs is:

2016-08-04 12:33:31,474 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode1/172.18.128.24:6005. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
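
The repeated retries suggest that nothing is answering on port 6005 of namenode1, or that a firewall is dropping the connection. A minimal check from both sides (assuming ss/netstat and telnet or nc are available on the hosts):

# On namenode1: is any process actually listening on port 6005?
ss -ltnp | grep 6005        # or: netstat -tlnp | grep 6005

# From a datanode: can that port be reached at all?
telnet 172.18.128.24 6005
nc -zv 172.18.128.24 6005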

Things I have already checked (the commands are sketched after this list):

1. I can telnet from the datanodes to the namenode.
2. hadoop put and get commands work.
3. I can create tables in Hive and load data into them.
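
Roughly the commands behind those checks; the port, file paths, and table schema below are illustrative placeholders, not the exact ones I used:

# 1. Port reachability from a datanode to the namenode (8020 is the default NameNode RPC port)
telnet namenode1 8020

# 2. HDFS round trip
hadoop fs -put /tmp/sample.txt /tmp/sample.txt
hadoop fs -get /tmp/sample.txt /tmp/sample_copy.txt

# 3. Create a Hive table and load data into it
hive -e "CREATE TABLE demo_tbl (id INT); LOAD DATA LOCAL INPATH '/tmp/sample.txt' INTO TABLE demo_tbl;"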

cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.18.128.24   namenode1 mycluster
172.18.128.25  namenode2
172.18.128.26  datanode1
172.18.128.27  datanode2

It would be very helpful if someone could suggest a possible solution.

Regards, Ranjan

I was able to resolve this issue: the ResourceManager had a problem, and the datanodes could not connect to port 6005 on 172.18.128.24.
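
For anyone hitting the same symptom: port 6005 is not one of the YARN defaults (8030/8031/8032), so in this cluster it was presumably set explicitly for one of the yarn.resourcemanager.* addresses in yarn-site.xml; which property exactly is an assumption here. The checks that narrow it down look roughly like this:

# Find which YARN property was bound to port 6005
grep -B2 -A2 "6005" $HADOOP_CONF_DIR/yarn-site.xml

# Confirm the ResourceManager process is actually running on namenode1
jps | grep ResourceManager

# Restart it if it is down (Hadoop 2.x sbin layout assumed)
$HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager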
