While using the PutHDFS processor on Apache NiFi 1.2.1 with the following configuration:

Hadoop Configuration Resources: /usr/local/hadoop-2.7.0/etc/hadoop/core-site.xml, /usr/local/hadoop-2.7.0/etc/hadoop/hdfs-site.xml
Directory: /mydir

I ran into the error below.
Caused by: org.apache.hadoop.ipc.RemoteException: File /tweets/.381623121831518.json could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3067)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:722)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
1 Answer
Resolution

The root cause is stated in the exception itself: "There are 0 datanode(s) running", so the NameNode had no DataNode on which to place the block. I followed the steps below to fix the problem:
1. Stop all services.
2. Delete the NameNode and DataNode directories referenced in hdfs-site.xml.
3. Format the NameNode.
4. Start all Hadoop services.
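The steps above might look like the following shell session. This is a sketch, assuming a Hadoop 2.7.0 installation at /usr/local/hadoop-2.7.0 with the stock sbin scripts; the two storage paths are placeholders and must be replaced with the actual values of dfs.namenode.name.dir and dfs.datanode.data.dir from your hdfs-site.xml. Note that reformatting the NameNode destroys all data in HDFS.

```shell
# 1. Stop all Hadoop services (HDFS and YARN)
/usr/local/hadoop-2.7.0/sbin/stop-all.sh

# 2. Delete the NameNode and DataNode storage directories.
#    These paths are examples only -- use the directories configured
#    as dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml.
rm -rf /usr/local/hadoop-2.7.0/hdfs/namenode/*
rm -rf /usr/local/hadoop-2.7.0/hdfs/datanode/*

# 3. Format the NameNode (this erases all HDFS data)
/usr/local/hadoop-2.7.0/bin/hdfs namenode -format

# 4. Start all Hadoop services again
/usr/local/hadoop-2.7.0/sbin/start-all.sh
```

Deleting the storage directories before reformatting avoids the common follow-up failure where the DataNode refuses to start because its stored clusterID no longer matches the freshly formatted NameNode.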
Verification:

- Check that all services are running.
- Check /mydir (the directory specified in PutHDFS processor -> destination directory) for the transferred files. The transferred files should now appear in this directory.