While trying to use the PutHDFS processor on Apache NiFi 1.2.1, configured as follows:
Hadoop Configuration Resources: /usr/local/hadoop-2.7.0/etc/hadoop/core-site.xml, /usr/local/hadoop-2.7.0/etc/hadoop/hdfs-site.xml
Directory: /mydir
I ran into the following error.
Caused by: org.apache.hadoop.ipc.RemoteException: File /tweets/.381623121831518.json could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3067)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:722)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
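The key part of the message is "There are 0 datanode(s) running": the NameNode has no live DataNodes registered, so it cannot place even a single replica. On a single-node setup this is often caused by a clusterID mismatch between the NameNode and DataNode storage directories left over from an earlier format. As a quick check, you can compare the clusterID values in the two VERSION files (the relative paths below reuse the directory names from the resolution steps; substitute the ones from your own hdfs-site.xml):

$ grep clusterID namenode/current/VERSION
$ grep clusterID datanode/current/VERSION

If the two values differ, the DataNode refuses to join the cluster, which is exactly what the resolution below addresses.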
Resolution
I followed the procedure below to correct the issue:
Stop all services
$ cd $HADOOP_HOME
$ sbin/stop-all.sh
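On Hadoop 2.x, stop-all.sh still works but prints a deprecation warning; the equivalent per-service scripts are:

$ sbin/stop-dfs.sh
$ sbin/stop-yarn.sh

(The same applies to start-all.sh in the start step further below, with start-dfs.sh and start-yarn.sh.)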
Delete the NameNode and DataNode directories referenced in hdfs-site.xml (see the sketch after the commands for locating them)
$ rm -rf datanode
$ rm -rf namenode
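If you are unsure which directories these are, they come from the dfs.namenode.name.dir and dfs.datanode.data.dir properties in hdfs-site.xml; one way to print them (run from $HADOOP_HOME):

$ grep -A1 -e dfs.namenode.name.dir -e dfs.datanode.data.dir etc/hadoop/hdfs-site.xml

The -A1 shows the <value> line that follows each matching <name> line.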
Format the NameNode
$ hadoop namenode -format
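Note that on Hadoop 2.x the hadoop namenode form is deprecated; the equivalent command is:

$ hdfs namenode -format

Formatting assigns a fresh clusterID, which is why the old DataNode directory had to be removed first; otherwise the DataNode would again refuse to register.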
Start all Hadoop services
$ sbin/start-all.sh
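If the DataNode does not come up after this, its log usually says why (for example "Incompatible clusterIDs"); the logs live under $HADOOP_HOME/logs with the standard hadoop-<user>-datanode-<host>.log naming:

$ tail -n 50 logs/hadoop-*-datanode-*.log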
Verification:
Check that all expected services are running
bash-3.2# jps
61488 ResourceManager
57128 RunNiFi
61160 NameNode
61256 DataNode
57129 NiFi
61609 Jps
61371 SecondaryNameNode
61582 NodeManager
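jps only confirms the processes are alive; to confirm the DataNode has actually registered with the NameNode (which is what the original error was about), check the cluster report:

$ hdfs dfsadmin -report

The report should show one live DataNode with nonzero configured capacity.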
Check for files transferred to /mydir, the target directory specified in the PutHDFS processor's Directory property. The files should now appear in this directory:
$ hdfs dfs -ls /mydir
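Before re-running the NiFi flow, you can also confirm the fix with a manual write from the command line (test.txt here is just a throwaway name):

$ hdfs dfs -mkdir -p /mydir
$ echo "hello" | hdfs dfs -put - /mydir/test.txt
$ hdfs dfs -cat /mydir/test.txt

If the put succeeds, the replication error is gone and PutHDFS should work as well.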