I am running Hadoop 2.2 in pseudo-distributed mode on a CentOS 6.4 laptop with 8 GB of RAM.
Whenever I submit a job, I get an error indicating that virtual memory usage has exceeded the allowed limit, as shown below.
I have already changed the ratio yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml to 10 (10 × 1 GB), but the virtual memory limit has not grown beyond the default 2.1 GB, and as the error message below shows, the container is being killed.
Can anyone tell me whether there are any other settings I need to change? Thanks in advance!
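For reference, the ratio change described above would normally be made in yarn-site.xml as follows (a sketch; the value 10 matches the attempt described in the question):

```xml
<!-- yarn-site.xml: raise the virtual-to-physical memory ratio
     used by the NodeManager's virtual memory check -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>10</value>
</property>
```

Note that this property is read by the NodeManager, so it only takes effect after the NodeManager is restarted.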
Error message:
INFO mapreduce.Job: Task Id : attempt_1388632710048_0009_m_000000_2, Status : FAILED
Container [pid=12013,containerID=container_1388632710048_0009_01_000004] is running beyond virtual memory limits. Current usage: 544.9 MB of 1 GB physical memory used; 14.5 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1388632710048_0009_01_000004 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12077 12018 12013 12013 (phantomjs) 16 2 1641000960 6728 /usr/local/bin/phantomjs --webdriver=15358 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
|- 12013 882 12013 12013 (bash) 1 0 108650496 305 /bin/bash -c /usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498 attempt_1388632710048_0009_m_000000_2 4 1>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stdout 2>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stderr
|- 12075 12018 12013 12013 (phantomjs) 17 1 1615687680 6539 /usr/local/bin/phantomjs --webdriver=29062 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
|- 12074 12018 12013 12013 (phantomjs) 16 2 1641000960 6727 /usr/local/bin/phantomjs --webdriver=5958 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
|- 12073 12018 12013 12013 (phantomjs) 17 2 1641000960 6732 /usr/local/bin/phantomjs --webdriver=31836 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
|- 12090 12018 12013 12013 (phantomjs) 16 2 1615687680 6538 /usr/local/bin/phantomjs --webdriver=24519 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
|- 12072 12018 12013 12013 (phantomjs) 16 1 1641000960 6216 /usr/local/bin/phantomjs --webdriver=10175 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
|- 12091 12018 12013 12013 (phantomjs) 17 1 1615687680 6036 /usr/local/bin/phantomjs --webdriver=5043 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
|- 12018 12013 12013 12013 (java) 996 41 820924416 79595 /usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498 attempt_1388632710048_0009_m_000000_2 4
|- 12078 12018 12013 12013 (phantomjs) 16 3 1615687680 6545 /usr/local/bin/phantomjs --webdriver=12650 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
|- 12079 12018 12013 12013 (phantomjs) 17 2 1642020864 7542 /usr/local/bin/phantomjs --webdriver=18444 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
Container killed on request. Exit code is 143
1 Answer
Have you tried changing
yarn.scheduler.maximum-allocation-mb
or
yarn.nodemanager.resource.memory-mb
in yarn-site.xml?
I think these properties will help. Check the default values of the properties whose keys contain "memory".
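A minimal sketch of the two suggested settings in yarn-site.xml (the values below are illustrative choices for an 8 GB machine, not taken from the question):

```xml
<configuration>
  <!-- Largest memory allocation a single container request may ask for -->
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value> <!-- illustrative value -->
  </property>
  <!-- Total physical memory the NodeManager may hand out to containers -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value> <!-- illustrative value -->
  </property>
</configuration>
```

These bound how much memory YARN will grant per container and per node; they are separate from the vmem-pmem ratio, which only scales the virtual memory check applied to an already-granted allocation.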