Hadoop "Spill failed" exception on an EC2 instance with 420 GB of instance storage

ojsjcaue · posted 2021-06-04 in Hadoop

I am using Hadoop 2.3.0 installed as a single-node cluster (pseudo-distributed mode) on a CentOS 6.4 Amazon EC2 instance with 420 GB of instance storage and 7.5 GB of RAM. My understanding is that a "Spill failed" exception occurs only when the node runs out of disk space, yet after running map/reduce jobs for only a short while (nowhere near 420 GB of data), I got the exception below.
I should mention that I moved the Hadoop installation on this node from an 8 GB EBS volume (where I originally installed it) to the 420 GB instance storage volume on the same node, and updated the $HADOOP_HOME environment variable and the other relevant properties accordingly to point at the instance storage volume, so Hadoop 2.3.0 is now fully contained on the 420 GB drive.
Nevertheless, I still see the exception below. Can you tell me what, besides running out of disk space, can cause a "Spill failed" exception?

2014-02-28 15:35:07,630 ERROR [IPC Server handler 12 on 58189] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1393591821307_0013_m_000000_0 - exited : 
java.io.IOException: Spill failed
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.checkSpillException(MapTask.java:1533)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1442)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:437)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Unknown Source)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for attempt_1393591821307_0013_m_000000_0_spill_26.out
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
    at org.apache.hadoop.mapred.YarnOutputFiles.getSpillFileForWrite(YarnOutputFiles.java:159)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1564)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$900(MapTask.java:853)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:1503)

2014-02-28 15:35:07,604 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:java.io.IOException: Spill failed
2014-02-28 15:35:07,605 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: Spill failed
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.checkSpillException(MapTask.java:1533)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1442)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:437)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Unknown Source)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for attempt_1393591821307_0013_m_000000_0_spill_26.out
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
    at org.apache.hadoop.mapred.YarnOutputFiles.getSpillFileForWrite(YarnOutputFiles.java:159)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1564)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$900(MapTask.java:853)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:1503)
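For reference, spill files are written under yarn.nodemanager.local-dirs, which defaults to ${hadoop.tmp.dir}/nm-local-dir, and hadoop.tmp.dir itself defaults to /tmp/hadoop-${user.name} on the root volume. A quick diagnostic sketch (assuming the Hadoop client scripts are on the PATH; the df path is illustrative):

    # Resolve where the temp/local directories actually point.
    hdfs getconf -confKey hadoop.tmp.dir

    # Check free space on the filesystem backing that directory.
    df -h /tmp   # substitute the directory printed above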
jckbn6z7 #1

I was able to resolve this by setting hadoop.tmp.dir to a location on the instance storage; by default it points to the EBS-backed root volume.
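A minimal core-site.xml sketch of that fix (the mount point /mnt/hadoop-tmp is an assumption; use wherever the instance store volume is actually mounted):

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/mnt/hadoop-tmp</value>
    </property>

Note that on a pseudo-distributed setup the HDFS name and data directories also default to paths under hadoop.tmp.dir, so after moving it you may need to re-format HDFS or migrate the existing dfs directories, then restart the daemons.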
