java.io.IOException: Premature EOF from inputStream on a Hadoop DataNode

lnlaulya · published 2021-05-29 · in Hadoop

One of my Hadoop DataNodes fails with the error below, and its status is reported as dead. The cluster has one NameNode and three DataNodes; two DataNodes are healthy, but one is dead.
Any suggestions would be appreciated. Thanks in advance.
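For context, the "Premature EOF from inputStream" in the trace below is thrown by Hadoop's IOUtils.readFully when the sending side closes the connection before a full packet has arrived. The standard-library java.io.DataInputStream.readFully fails the same way on a truncated stream; this minimal standalone sketch (not Hadoop code) reproduces the condition:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class PrematureEofDemo {
    // Returns true if the stream ends before `expected` bytes can be read --
    // the same condition Hadoop's IOUtils.readFully reports as
    // "java.io.IOException: Premature EOF from inputStream".
    static boolean prematureEof(byte[] data, int expected) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        try {
            in.readFully(new byte[expected]); // blocks until `expected` bytes or EOF
            return false;
        } catch (EOFException e) {
            return true; // peer "disconnected" mid-packet
        }
    }

    public static void main(String[] args) throws IOException {
        // A peer that drops the connection after 3 of 8 expected bytes triggers it:
        System.out.println(prematureEof(new byte[]{1, 2, 3}, 8)); // true
        // A complete packet reads cleanly:
        System.out.println(prematureEof(new byte[8], 8));         // false
    }
}
```

In the DataNode log this means the upstream writer (a client or a previous pipeline node) closed the socket mid-block, so the root cause is usually on the connection, not in the read itself.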

BP-565327583-10.0.0.68-1530188595210:blk_1073745647_4861, duration: 25367121816
2018-07-18 07:15:18,720 INFO datanode.DataNode: Receiving BP-565327583-10.0.0.68-1530188595210:blk_1073745648_4863 src: /10.0.0.111:39256 dest: /10.0.0.109:50010
2018-07-18 07:17:13,741 INFO impl.FsDatasetAsyncDiskService: Scheduling blk_1073745637_4851 file /var/lib/hadoop/current/datanode/current/BP-565327583-10.0.0.68-1530188595210/current/finalized/subdir0/subdir14/blk_1073745637 for deletion
2018-07-18 07:23:54,481 INFO datanode.DataNode: Exception for BP-565327583-10.0.0.68-1530188595210:blk_1073745643_4857
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
        at java.lang.Thread.run(Thread.java:748)
2018-07-18 07:23:54,481 INFO datanode.DataNode: Exception for BP-565327583-10.0.0.68-1530188595210:blk_1073745644_4858
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
        at java.lang.Thread.run(Thread.java:748)
2018-07-18 07:25:30,773 INFO datanode.DataNode: Receiving BP-565327583-10.0.0.68-1530188595210:blk_1073745654_4869 src: /10.0.0.36:57252 dest: /10.0.0.109:50010
2018-07-18 07:25:30,823 INFO datanode.DataNode: Receiving BP-565327583-10.0.0.68-1530188595210:blk_1073745653_4868 src: /10.0.0.111:41172 dest: /10.0.0.109:50010
2018-07-18 07:45:50,444 INFO impl.FsDatasetImpl: initReplicaRecovery: blk_1073745643_4857, recoveryId=4875, replica=ReplicaBeingWritten, blk_1073745643_4857, RBW
  getNumBytes()     = 2978271
  getBytesOnDisk()  = 2978271
  getVisibleLength()= 2978271
  getVolume()       = /var/lib/hadoop/current/datanode/current
  getBlockFile()    = /var/lib/hadoop/current/datanode/current/BP-565327583-10.0.0.68-1530188595210/current/rbw/blk_1073745643
  bytesAcked=2978271
  bytesOnDisk=2978271
2018-07-18 07:56:34,747 INFO datanode.DataNode: Exception for BP-565327583-10.0.0.68-1530188595210:blk_1073745646_4860
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
        at java.lang.Thread.run(Thread.java:748)
2018-07-22 07:39:05,567 INFO datanode.VolumeScanner: Now rescanning bpid BP-565327583-10.0.0.68-1530188595210 on volume /var/lib/hadoop/current/datanode, after more than 504 hour(s)
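A commonly suggested direction for repeated Premature EOF during writeBlock (an assumption to verify against this cluster, not a confirmed diagnosis): the DataNode dropping connections under load because it has exhausted its data-transfer threads. Raising dfs.datanode.max.transfer.threads (formerly dfs.datanode.max.xcievers) in hdfs-site.xml on the DataNodes is a frequent mitigation; the value below is only an example, and disk health and network connectivity on the dead node are worth checking as well:

```xml
<!-- hdfs-site.xml on each DataNode: example value, verify against your
     Hadoop version's default (4096 in recent releases) before changing -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value>
</property>
```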
