Hadoop HDFS not working after deleting files from /tmp (even after reformatting)

xuo3flqw asked on 2021-05-29 in Hadoop

I mistakenly ran sudo rm -rf /tmp/*, and my Hadoop HDFS seems to be broken. I tried reformatting HDFS and restarting all the daemons, but unfortunately I still can't get it working: I can create folders in HDFS, but I can't copy any files into it with -copyFromLocal.
My Hadoop version: Hadoop 2.5.0-cdh5.3.5
It complains that I have no DataNode running:

copyFromLocal: File /user/vagrant/data/wikipedia/simple/part-00025.xml.bz2._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
17/04/17 08:17:55 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/vagrant/data/wikipedia/simple/part-00026.xml.bz2._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1503)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3109)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:616)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
        at org.apache.hadoop.ipc.Client.call(Client.java:1411)
        at org.apache.hadoop.ipc.Client.call(Client.java:1364)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391)
        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536)
copyFromLocal: File /user/vagrant/data/wikipedia/simple/part-00026.xml.bz2._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
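
For reference, the NameNode's own view of the cluster, independent of what systemd reports, can be checked with:

sudo -u hdfs hdfs dfsadmin -report

which prints capacity information and the list of live DataNodes; in a broken state like the one above it should presumably report 0 live datanodes, matching the error.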

However, when I check the DataNode daemon, it appears to be running:

$ sudo service hadoop-hdfs-datanode status
● hadoop-hdfs-datanode.service - LSB: Starts a Hadoop HDFS DataNode
   Loaded: loaded (/etc/init.d/hadoop-hdfs-datanode; bad; vendor preset: enabled)
   Active: active (exited) since Mon 2017-04-17 08:15:05 UTC; 7h ago
     Docs: man:systemd-sysv-generator(8)
  Process: 11676 ExecStop=/etc/init.d/hadoop-hdfs-datanode stop (code=exited, status=0/SUCCESS)
  Process: 11791 ExecStart=/etc/init.d/hadoop-hdfs-datanode start (code=exited, status=0/SUCCESS)

Apr 17 08:15:05 s201703-41 systemd[1]: Starting LSB: Starts a Hadoop HDFS DataNode...
Apr 17 08:15:05 s201703-41 hadoop-hdfs-datanode[11791]:  * Starting Hadoop HDFS DataNode (hadoop-hdfs-datanode):
Apr 17 08:15:05 s201703-41 su[11876]: Successful su for hdfs by root
Apr 17 08:15:05 s201703-41 su[11876]: + ??? root:hdfs
Apr 17 08:15:05 s201703-41 su[11876]: pam_unix(su:session): session opened for user hdfs by (uid=0)
Apr 17 08:15:05 s201703-41 systemd[1]: Started LSB: Starts a Hadoop HDFS DataNode.
Apr 17 08:15:05 s201703-41 hadoop-hdfs-datanode[11791]: starting datanode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-datanode-s2017
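Worth noting: for an LSB-wrapped service, "Active: active (exited)" only means the init script returned 0, not that the DataNode JVM is still alive. The log file named in the last line above should show why it exited; the exact filename depends on the hostname, so something like:

sudo tail -n 100 /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log

After a NameNode reformat, this log typically contains an "Incompatible clusterIDs" IOException, which points at the root cause described in the answer below.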

I did try stopping all the Hadoop daemons, reformatting HDFS, and then restarting the daemons, but I still get the same error message as above:

for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x stop ; done
sudo -u hdfs hdfs namenode -format
for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x start ; done

Here is my /etc/hadoop/conf/hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- generated by Chef for test-applicant, changes will be overwritten -->

<configuration>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>*****KEY (redacted)*****</value>
  </property>
  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>*****ID (redacted)*****</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/var/hdfs/dfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/var/hdfs/dfs/name</value>
  </property>
</configuration>

Can anyone tell me what else I can try to fix my HDFS?

sirbozc5

namenode -format only formats dfs.namenode.name.dir. The DataNode's storage directory, dfs.datanode.data.dir, still holds a VERSION file with the old clusterID, which no longer matches the freshly formatted NameNode, so the DataNode fails to register. You have to delete the DataNode's data directory manually before starting the services again.
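
Given the dfs.datanode.data.dir value in the hdfs-site.xml above (file:/var/hdfs/dfs/data), the recovery sequence would look roughly like this; the path is taken from the question's config, so adjust it if yours differs:

for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x stop ; done
# wipe the stale DataNode storage so it adopts the new clusterID on first start
sudo rm -rf /var/hdfs/dfs/data/*
sudo -u hdfs hdfs namenode -format
for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x start ; done

Afterwards, sudo -u hdfs hdfs dfsadmin -report should show a live DataNode again, and -copyFromLocal should work. Note that this wipes all HDFS block data, which is unavoidable at this point since the NameNode metadata has already been reformatted.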
