Background:
- I have installed a 3-node Cloudera Hadoop cluster on EC2 instances, and it is working correctly.
- A client program on my Windows machine loads data from my machine into HDFS.
Details:
My client program is developed in Java; it reads data from the local disk of my Windows machine and writes it to HDFS.
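The write path in my program is essentially the following (a simplified sketch; the NameNode address, user name, and file paths are placeholders for my real values):

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsUploader {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder NameNode endpoint; localhost:8020 assumes the SSH
            // tunnel described below forwards the NameNode RPC port.
            conf.set("fs.defaultFS", "hdfs://localhost:8020");

            // Connect as the "ubuntu" user (the user shown in the error).
            FileSystem fs = FileSystem.get(
                    URI.create("hdfs://localhost:8020"), conf, "ubuntu");

            // Copy a file from the Windows local disk into HDFS.
            fs.copyFromLocalFile(new Path("C:/data/features.json"),
                                 new Path("/user/ubuntu/features.json"));
            fs.close();
        }
    }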
To reach the cluster from Windows, I tried to create an SSH tunnel with PuTTY. I first tried to log in to the remote EC2 instance with my Windows username, which did not work; I can log in with the Unix username. Is this the expected behavior?
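I configured the tunnel in PuTTY under Connection > SSH > Tunnels; the equivalent OpenSSH command line would be roughly the following (the key file and host name are placeholders, and 8020 is the NameNode RPC port that appears in the log below):

    ssh -i my-ec2-key.pem -L 8020:localhost:8020 ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com

Note that this forwards only the NameNode RPC port; I have not forwarded the DataNode data-transfer ports (50010 by default), and I do not know whether my client needs to reach those directly as well.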
I am not sure whether I created the tunnel correctly, but after that, when I try to run my client program, it gives me the following error:
PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/features.json could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
6:32:45.711 PM INFO org.apache.hadoop.ipc.Server
IPC Server handler 13 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 108.161.91.186:54097: error: java.io.IOException: File /user/ubuntu/features.json could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
java.io.IOException: File /user/ubuntu/features.json could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1331)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2198)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:480)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)
Any ideas?
You can check the health of your HDFS cluster with hdfs fsck / -delete (fsck reports missing, corrupt, and under-replicated blocks; the -delete option removes the corrupt files it finds), and then rebalance your DataNodes.
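For example, run on one of the cluster nodes (the balancer threshold here is just an illustrative value):

    hdfs fsck /                    # report missing, corrupt, and under-replicated blocks
    hdfs fsck / -delete            # same check, but also delete the corrupt files it finds
    hdfs balancer -threshold 10    # move blocks until every DataNode is within 10% of average utilization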