Odd error!! HDInsight Hadoop MapReduce fails with code 255

xqnpmsa8  posted 2021-06-02  in Hadoop

I'm using Microsoft Azure HDInsight with 1 head node and 1 data node.
If I run the MapReduce program I wrote against a small dataset (85 MB), everything works fine and I get the expected output in the container/blob. Larger files fail with the error below.
I've read a few articles saying that mapreduce.map.memory.mb should be set to "1024" so the mappers have more memory. Given that I have 190 GB of files to process, and no machine in the cluster has anywhere near that much memory, I don't see how this scales.
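For reference, mapreduce.map.memory.mb is normally passed as a Hadoop generic option on the streaming command line, before the streaming-specific arguments. This is only a minimal sketch; the jar location, input/output paths and mapper name below are placeholders, not the actual job:

# illustrative streaming invocation; adjust the jar path, paths and mapper to your job
hadoop jar hadoop-streaming.jar \
    -D mapreduce.map.memory.mb=1024 \
    -input wasb:///example/input \
    -output wasb:///example/output \
    -mapper mapper.exe \
    -file mapper.exe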
I'm sure I'm missing something small, but does anyone know 1) how I can fix this problem and 2) how I can scale the MapReduce process to large input files without hitting these errors?
With an 8 GB input file, I get the following error:

15/08/01 05:43:17 INFO mapreduce.Job:  map 56% reduce 0%
15/08/01 05:43:23 INFO mapreduce.Job:  map 57% reduce 0%
15/08/01 05:43:29 INFO mapreduce.Job:  map 58% reduce 0%
15/08/01 05:43:30 INFO mapreduce.Job: Task Id : attempt_1438405138600_0006_m_000010_0, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 255
        at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
        at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
        at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
        at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)

15/08/01 05:43:31 INFO mapreduce.Job:  map 57% reduce 0%
15/08/01 05:43:36 INFO mapreduce.Job:  map 58% reduce 0%
15/08/01 05:43:43 INFO mapreduce.Job:  map 59% reduce 0%
15/08/01 05:43:47 INFO mapreduce.Job:  map 60% reduce 0%
15/08/01 05:43:53 INFO mapreduce.Job:  map 61% reduce 0%
15/08/01 05:43:59 INFO mapreduce.Job:  map 62% reduce 0%
15/08/01 05:44:05 INFO mapreduce.Job:  map 63% reduce 0%
15/08/01 05:44:05 INFO mapreduce.Job: Task Id : attempt_1438405138600_0006_m_000010_1, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 255
        at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
        at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
        at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
        at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)

15/08/01 05:44:06 INFO mapreduce.Job:  map 61% reduce 0%
15/08/01 05:44:07 INFO mapreduce.Job:  map 62% reduce 0%
15/08/01 05:44:16 INFO mapreduce.Job:  map 63% reduce 0%
15/08/01 05:44:20 INFO mapreduce.Job:  map 64% reduce 0%
15/08/01 05:44:27 INFO mapreduce.Job:  map 65% reduce 0%
15/08/01 05:44:35 INFO mapreduce.Job:  map 66% reduce 0%
15/08/01 05:44:41 INFO mapreduce.Job:  map 69% reduce 0%
15/08/01 05:44:48 INFO mapreduce.Job:  map 70% reduce 0%
15/08/01 05:44:48 INFO mapreduce.Job: Task Id : attempt_1438405138600_0006_m_000010_2, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 255
        at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
        at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
        at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
        at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)

15/08/01 05:44:49 INFO mapreduce.Job:  map 68% reduce 0%
15/08/01 05:44:50 INFO mapreduce.Job:  map 71% reduce 0%
15/08/01 05:45:02 INFO mapreduce.Job:  map 72% reduce 0%
15/08/01 05:45:05 INFO mapreduce.Job:  map 75% reduce 0%
15/08/01 05:45:13 INFO mapreduce.Job:  map 76% reduce 0%
15/08/01 05:45:18 INFO mapreduce.Job:  map 77% reduce 0%
15/08/01 05:45:21 INFO mapreduce.Job:  map 78% reduce 0%
15/08/01 05:45:23 INFO mapreduce.Job:  map 100% reduce 100%
15/08/01 05:45:28 INFO mapreduce.Job: Job job_1438405138600_0006 failed with state FAILED due to: Task failed task_1438405138600_0006_m_000010
Job failed as tasks failed. failedMaps:1 failedReduces:0

15/08/01 05:45:28 INFO mapreduce.Job: Counters: 35
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=605590393
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                WASB: Number of bytes read=5915160278
                WASB: Number of bytes written=0
                WASB: Number of read operations=0
                WASB: Number of large read operations=0
                WASB: Number of write operations=0
        Job Counters
                Failed map tasks=4
                Killed map tasks=3
                Launched map tasks=18
                Other local map tasks=3
                Rack-local map tasks=15
                Total time spent by all maps in occupied slots (ms)=1441061
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=1441061
                Total vcore-seconds taken by all map tasks=1441061
                Total megabyte-seconds taken by all map tasks=1475646464
        Map-Reduce Framework
                Map input records=16375
                Map output records=319985
                Map output bytes=5051193751
                Map output materialized bytes=604451353
                Input split bytes=1210
                Combine input records=0
                Spilled Records=319985
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=2836
                CPU time spent (ms)=1003820
                Physical memory (bytes) snapshot=9417678848
                Virtual memory (bytes) snapshot=13221376000
                Total committed heap usage (bytes)=11603542016
        File Input Format Counters
                Bytes Read=5915148288
15/08/01 05:45:28 ERROR streaming.StreamJob: Job not Successful!
Streaming Command Failed!

jdgnovmf 1#

First: 1024 is 1 GB of RAM. An A1 is the standard size people usually use for testing, which has 1.75 GB, and below that is the A0 with 768 MB. That said, this doesn't look like a memory problem to me. In that case I would expect an error more like this:
"Container [pid=container_1406552545451_0009_01_000002, containerID=container_234132_0001_01_000001] is running beyond physical memory limits. Current usage: 569.1 MB of 512 MB physical memory used; 970.1 MB of 1.0 GB virtual memory used. Killing container."
This error output looks to me like a job configuration problem. Have you made sure your build targets x64 and that "Prefer 32-bit" is unchecked? See this thread on MSDN: https://social.msdn.microsoft.com/forums/en-us/d79befb1-be5d-4c5a-bb05-30ea9fccc475/hdinsight-mapreduce-fails-with-pipemapredwaitoutputthreads-subprocess-failed-with-code-255?forum=hdinsight
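Whatever the root cause turns out to be, exit code 255 here is the exit status of the streaming subprocess itself, so the real error message usually ends up in the failed task attempt's stderr rather than in the Java stack trace above. A sketch of how to pull it, assuming log aggregation is enabled on the cluster; the application id is derived from the job id job_1438405138600_0006 in the output above:

# dump the aggregated container logs for the failed application
yarn logs -applicationId application_1438405138600_0006 > app_logs.txt
# then search for the failed attempt, e.g. attempt_1438405138600_0006_m_000010_0,
# and read the stderr section of its container

The same stderr is also reachable by drilling into the failed map attempts from the YARN ResourceManager / JobHistory web UI.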
