Container memory error: Hadoop

Asked by cedebl8k on 2021-06-04, tagged Hadoop

Can someone tell me what I need to change in my YARN configuration? When I run a Hadoop Streaming MR job (a Python script), I keep getting this error: Container is running beyond physical memory limits. Current usage: 3.0 GB of 3 GB physical memory used; 4.2 GB of 6.3 GB virtual memory used. Killing container. Please tell me what is going wrong here.

15/01/31 16:02:56 INFO mapreduce.Job:  map 0% reduce 0%
15/01/31 16:03:58 INFO mapreduce.Job: Task Id : attempt_1422733582475_0003_m_000008_0, Status : FAILED
Container [pid=22881,containerID=container_1422733582475_0003_01_000011] is running beyond physical memory limits. Current usage: 3.0 GB of 3 GB physical memory used; 4.2 GB of 6.3 GB virtual memory used. Killing container.
Dump of the process-tree for container_1422733582475_0003_01_000011 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 22887 22881 22881 22881 (java) 704 42 1435754496 105913 /usr/java/jdk1.7.0_67-cloudera/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.net.preferIPv4Stack=true -Xmx825955249 -Djava.io.tmpdir=/var/yarn/nm/usercache/sravisha/appcache/application_1422733582475_0003/container_1422733582475_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1422733582475_0003/container_1422733582475_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 134.68.246.152 54472 attempt_1422733582475_0003_m_000008_0 11
        |- 22928 22887 22881 22881 (python) 4326 881 2979037184 691955 python /var/yarn/nm/usercache/sravisha/appcache/application_1422733582475_0003/container_1422733582475_0003_01_000011/./methratio.py -r -g
        |- 22881 22878 22881 22881 (bash) 0 0 108654592 303 /bin/bash -c /usr/java/jdk1.7.0_67-cloudera/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Djava.net.preferIPv4Stack=true -Xmx825955249 -Djava.io.tmpdir=/var/yarn/nm/usercache/sravisha/appcache/application_1422733582475_0003/container_1422733582475_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1422733582475_0003/container_1422733582475_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 134.68.246.152 54472 attempt_1422733582475_0003_m_000008_0 11 1>/var/log/hadoop-yarn/container/application_1422733582475_0003/container_1422733582475_0003_01_000011/stdout 2>/var/log/hadoop-yarn/container/application_1422733582475_0003/container_1422733582475_0003_01_000011/stderr

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

15/01/31 16:04:10 INFO mapreduce.Job:  map 4% reduce 0%
15/01/31 16:04:10 INFO mapreduce.Job: Task Id : attempt_1422733582475_0003_m_000007_0, Status : FAILED
Container [pid=17533,containerID=container_1422733582475_0003_01_000007] is running beyond physical memory limits. Current usage: 3.0 GB of 3 GB physical memory used; 4.2 GB of 6.3 GB virtual memory used. Killing container.
Dump of the process-tree for container_1422733582475_0003_01_000007 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 17539 17533 17533 17533 (java) 715 54 1438490624 105230 /usr/java/jdk1.7.0_67-cloudera/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.net.preferIPv4Stack=true -Xmx825955249 -Djava.io.tmpdir=/var/yarn/nm/usercache/sravisha/appcache/application_1422733582475_0003/container_1422733582475_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1422733582475_0003/container_1422733582475_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 134.68.246.152 54472 attempt_1422733582475_0003_m_000007_0 7
        |- 17583 17539 17533 17533 (python) 4338 1749 2979041280 688205 python /var/yarn/nm/usercache/sravisha/appcache/application_1422733582475_0003/container_1422733582475_0003_01_000007/./methratio.py -r -g
        |- 17533 17531 17533 17533 (bash) 0 0 108654592 294 /bin/bash -c /usr/java/jdk1.7.0_67-cloudera/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Djava.net.preferIPv4Stack=true -Xmx825955249 -Djava.io.tmpdir=/var/yarn/nm/usercache/sravisha/appcache/application_1422733582475_0003/container_1422733582475_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1422733582475_0003/container_1422733582475_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 134.68.246.152 54472 attempt_1422733582475_0003_m_000007_0 7 1>/var/log/hadoop-yarn/container/application_1422733582475_0003/container_1422733582475_0003_01_000007/stdout 2>/var/log/hadoop-yarn/container/application_1422733582475_0003/container_1422733582475_0003_01_000007/stderr

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
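For context, the RSSMEM_USAGE(PAGES) column in the dumps above accounts for the "3.0 GB of 3 GB" figure: the JVM stays near its ~788 MiB -Xmx, but the Python child alone holds about 2.6 GiB of resident memory. A minimal sketch of that arithmetic, assuming the usual 4 KiB Linux page size (the process names and page counts are copied from the first dump):

# rss_check.py -- convert RSSMEM_USAGE(PAGES) from the container dump to bytes.
# The 4096-byte page size is an assumption (standard on Linux x86-64);
# verify with `getconf PAGESIZE` on the NodeManager host.
PAGE_SIZE = 4096

rss_pages = {
    "java (pid 22887)": 105913,    # YarnChild JVM
    "python (pid 22928)": 691955,  # methratio.py streaming mapper
    "bash (pid 22881)": 303,       # container launch shell
}

total = 0
for name, pages in rss_pages.items():
    rss_bytes = pages * PAGE_SIZE
    total += rss_bytes
    print(f"{name}: {rss_bytes / 2**30:.2f} GiB")

print(f"total: {total / 2**30:.2f} GiB")  # ~3.04 GiB, just over the 3 GiB limit

So the container is killed because the streaming script, not the JVM heap, pushes the process tree past mapreduce.map.memory.mb.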

My configuration:

  <property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Djava.net.preferIPv4Stack=true -Xmx825955249</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Djava.net.preferIPv4Stack=true -Xmx825955249</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Djava.net.preferIPv4Stack=true -Xmx825955249</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>3072</value>
  </property>
  <property>
    <name>mapreduce.map.cpu.vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
  </property>
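Note the mismatch in the settings above: -Xmx825955249 caps the JVM heap at roughly 788 MiB, while mapreduce.map.memory.mb grants the whole container 3072 MiB, and the dumps show the Python child alone using about 2.6 GiB. The limit is therefore exceeded no matter how small the JVM heap is. A hedged sketch of one way to give the mappers headroom (the 6144 MB and -Xmx1024m values are illustrative guesses, not settings verified on this cluster; the container size must cover the JVM heap plus the Python process's resident memory):

  <property>
    <name>mapreduce.map.memory.mb</name>
    <!-- illustrative: must cover JVM heap + streaming child RSS -->
    <value>6144</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <!-- keep the heap well below the container limit -->
    <value>-Djava.net.preferIPv4Stack=true -Xmx1024m</value>
  </property>

Alternatively, reduce the memory footprint of methratio.py itself, since it, not the JVM, is the main consumer.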

Answer 1 (2eafrhcq):

I ran into the same problem. I solved it by switching the JDK from Oracle JDK 8 to Oracle JDK 7.
