Accumulo tablet server error when scanning data

cig3rfwq · posted 2021-06-03 in Hadoop

I have an Accumulo setup with one master server and two tablet servers, holding a set of tables that store several million records. The problem is that whenever I scan a table to fetch some records, the tablet server log always throws this error:

2015-11-12 04:38:56,107 [hdfs.DFSClient] WARN : Failed to connect to /192.168.250.12:50010 for block, add to deadNodes and continue. java.io.IOException: Got error, status message opReadBlock BP-1881591466-192.168.1.111-1438767154643:blk_1073773956_33167 received exception java.io.IOException:  Offset 16320 and length 20 don't match block BP-1881591466-192.168.1.111-1438767154643:blk_1073773956_33167 ( blockLen 0 ), for OP_READ_BLOCK, self=/192.168.250.202:55915, remote=/192.168.250.12:50010, for file /accumulo/tables/1/default_tablet/F0000gne.rf, for pool BP-1881591466-192.168.1.111-1438767154643 block 1073773956_33167
java.io.IOException: Got error, status message opReadBlock BP-1881591466-192.168.1.111-1438767154643:blk_1073773956_33167 received exception java.io.IOException:  Offset 16320 and length 20 don't match block BP-1881591466-192.168.1.111-1438767154643:blk_1073773956_33167 ( blockLen 0 ), for OP_READ_BLOCK, self=/192.168.250.202:55915, remote=/192.168.250.12:50010, for file /accumulo/tables/1/default_tablet/F0000gne.rf, for pool BP-1881591466-192.168.1.111-1438767154643 block 1073773956_33167
        at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:456)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:424)
        at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:818)
        at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:697)
        at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:618)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:844)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:896)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:697)
        at java.io.DataInputStream.readShort(DataInputStream.java:312)
        at org.apache.accumulo.core.file.rfile.bcfile.Utils$Version.<init>(Utils.java:264)
        at org.apache.accumulo.core.file.rfile.bcfile.BCFile$Reader.<init>(BCFile.java:823)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.init(CachableBlockFile.java:246)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getBCFile(CachableBlockFile.java:257)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.access$100(CachableBlockFile.java:137)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader$MetaBlockLoader.get(CachableBlockFile.java:209)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getBlock(CachableBlockFile.java:313)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getMetaBlock(CachableBlockFile.java:368)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getMetaBlock(CachableBlockFile.java:137)
        at org.apache.accumulo.core.file.rfile.RFile$Reader.<init>(RFile.java:843)
        at org.apache.accumulo.core.file.rfile.RFileOperations.openReader(RFileOperations.java:79)
        at org.apache.accumulo.core.file.DispatchingFileFactory.openReader(DispatchingFileFactory.java:69)
        at org.apache.accumulo.tserver.tablet.Compactor.openMapDataFiles(Compactor.java:279)
        at org.apache.accumulo.tserver.tablet.Compactor.compactLocalityGroup(Compactor.java:322)
        at org.apache.accumulo.tserver.tablet.Compactor.call(Compactor.java:214)
        at org.apache.accumulo.tserver.tablet.Tablet._majorCompact(Tablet.java:1976)
        at org.apache.accumulo.tserver.tablet.Tablet.majorCompact(Tablet.java:2093)
        at org.apache.accumulo.tserver.tablet.CompactionRunner.run(CompactionRunner.java:44)
        at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
        at java.lang.Thread.run(Thread.java:745)

I think this is an HDFS-related problem rather than an Accumulo problem, so I checked the DataNode's logs and found the same message,

Offset 16320 and length 20 don't match block BP-1881591466-192.168.1.111-1438767154643:blk_1073773956_33167 ( blockLen 0 ), for OP_READ_BLOCK, self=/192.168.250.202:55915, remote=/192.168.250.12:50010, for file /accumulo/tables/1/default_tablet/F0000gne.rf, for pool BP-1881591466-192.168.1.111-1438767154643 block 1073773956_33167

but there it is logged as an INFO message. What I don't understand is why I am getting this error.
I can see that the block pool name of the file I am trying to access (BP-1881591466-192.168.1.111-1438767154643) contains an IP address (192.168.1.111) that matches neither server's IP address (self or remote). 192.168.1.111 is actually the old IP address of the Hadoop master server, but I have since changed it. I use domain names instead of IP addresses, so the only thing I changed was the hosts file on each machine in the cluster; none of the Hadoop/Accumulo configuration uses IP addresses. Does anyone know what the problem is? I have already spent days on this and still cannot figure it out.


ldioqlga1#

The error you are getting indicates that Accumulo cannot read part of one of its files from HDFS. The NameNode reports that the block is located on a particular DataNode (in your case, 192.168.250.12), but when Accumulo tries to read from that DataNode, the read fails.
This can indicate a corrupt block in HDFS or a transient network problem. You can try running hadoop fsck / (the exact command may vary depending on your version) to perform a health check of HDFS.
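For example, something along these lines (a sketch only; these fsck flags exist in Hadoop 2.x, and the file path is taken from the stack trace above):

# Overall health check, listing any blocks HDFS already knows are corrupt
hadoop fsck / -list-corruptfileblocks

# Inspect the specific RFile from the error: its blocks and the DataNodes
# the NameNode believes hold each replica
hadoop fsck /accumulo/tables/1/default_tablet/F0000gne.rf -files -blocks -locations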
Also, the IP address mismatch in the DataNode logs seems to suggest that the DataNode is confused about which HDFS block pool it belongs to. You should carefully check the DataNode's configuration, DNS, and /etc/hosts for anything unusual.
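A rough sketch of checks along those lines (the hostnames and the data directory path below are placeholders; substitute the actual hostnames from your hosts files and the value of dfs.datanode.data.dir):

# How HDFS currently resolves its NameNode(s), and what each name maps to now
hdfs getconf -namenodes
getent hosts hadoop-master tserver1 tserver2    # placeholder hostnames

# DataNodes as the NameNode currently sees them (addresses, state, capacity)
hdfs dfsadmin -report

# The block pool ID is fixed when the namespace is formatted, which is why the
# old NameNode IP still appears in BP-1881591466-192.168.1.111-...; it does not
# change when the NameNode's address changes
cat /data/hdfs/datanode/current/VERSION    # placeholder for dfs.datanode.data.dir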
