Tuning Hadoop YARN for low-resource machines

pcrecxhr · posted 2021-05-29 in Hadoop

I want to test a cluster made up of a few machines, each with 2 cores and 256 MB of RAM. Following the Cloudera tuning tutorial, I tried to make Hadoop 2.6.0 aware of my low-resource NodeManagers (Ubuntu 14.04). I ended up with the following configuration:
mapred-site.xml:

<configuration>
        <property>
                <name>mapred.job.tracker</name>
                <value>hadoop-master:54311</value>
        </property>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>hadoop-master:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>hadoop-master:19888</value>
        </property>
        <property>
                <name>mapred.task.profile</name>
                <value>true</value>
        </property>
        <property>
                <name>mapreduce.map.memory.mb</name>
                <value>200</value>
        </property>
        <property>
                <name>mapreduce.reduce.memory.mb</name>
                <value>200</value>
        </property>
        <property>
                <name>mapreduce.map.java.opts.max.heap</name>
                <value>160</value>
        </property>
        <property>
                <name>mapreduce.reduce.java.opts.max.heap</name>
                <value>160</value>
        </property>
</configuration>
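
A side note for readers: mapreduce.map.java.opts.max.heap and mapreduce.reduce.java.opts.max.heap are Cloudera Manager property names; a plain Apache Hadoop 2.6.0 install reads mapreduce.map.java.opts and mapreduce.reduce.java.opts instead, so the 160 MB heap caps above may have no effect there. A minimal equivalent sketch, assuming a vanilla Hadoop install rather than Cloudera Manager:

<property>
        <name>mapreduce.map.java.opts</name>
        <!-- sketch only: mirrors the 160 MB heap intent of the Cloudera-style setting above -->
        <value>-Xmx160m</value>
</property>
<property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx160m</value>
</property>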

yarn-site.xml:

<configuration>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
                <value> org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
                <name>yarn.nodemanager.resource.memory-mb</name>
                <value>200</value>
        </property>
        <property>
                <name>yarn.nodemanager.resource.cpu-vcores</name>
                <value>2</value>
        </property>
        <property>
                <name>yarn.scheduler.minimum-allocation-mb</name>
                <value>100</value>
        </property>
        <property>
                <name>yarn.scheduler.maximum-allocation-mb</name>
                <value>200</value>
        </property>
        <property>
                <name>yarn.scheduler.increment-allocation-mb</name>
                <value>100</value>
        </property>
        <property>
                <name>yarn.scheduler.maximum-allocation-vcores</name>
                <value>2</value>
        </property>
        <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>hadoop-master</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>hadoop-master:8025</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>hadoop-master:8030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.address</name>
                <value>hadoop-master:8050</value>
        </property>
        <property>
            <name>yarn.nodemanager.remote-app-log-dir</name>
            <value>/app-logs</value>
        </property> 
        <property>
                <name>yarn.nodemanager.local-dirs</name>
                <value>file:///usr/local/hadoop/local</value>
        </property>
        <property>
                <name>yarn.app.mapreduce.am.resource.mb</name>
                <value>200</value>
        </property> 
</configuration>

However, when I try to run the small pi estimation example, I get the following error:

yarn jar hadoop-mapreduce-examples-2.6.0.jar pi 1 1
Number of Maps  = 1
Samples per Map = 1
Wrote input for Map #0
Starting Job
16/01/28 19:23:24 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/10.0.3.100:8050
16/01/28 19:23:25 INFO input.FileInputFormat: Total input paths to process : 1
16/01/28 19:23:25 INFO mapreduce.JobSubmitter: number of splits:1
16/01/28 19:23:26 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1454008935455_0001
16/01/28 19:23:26 INFO impl.YarnClientImpl: Submitted application application_1454008935455_0001
16/01/28 19:23:26 INFO mapreduce.Job: The url to track the job: http://hadoop-master:8088/proxy/application_1454008935455_0001/
16/01/28 19:23:26 INFO mapreduce.Job: Running job: job_1454008935455_0001
16/01/28 19:23:34 INFO mapreduce.Job: Job job_1454008935455_0001 running in uber mode : false
16/01/28 19:23:34 INFO mapreduce.Job:  map 0% reduce 0%
16/01/28 19:23:34 INFO mapreduce.Job: Job job_1454008935455_0001 failed with state FAILED due to: Application application_1454008935455_0001 failed 2 times due to AM Container for appattempt_1454008935455_0001_000002 exited with  exitCode: -103
For more detailed output, check application tracking page:http://hadoop-master:8088/proxy/application_1454008935455_0001/Then, click on links to logs of each attempt.
Diagnostics: Container [pid=847,containerID=container_1454008935455_0001_02_000001] is running beyond virtual memory limits. Current usage: 210.8 MB of 200 MB physical memory used; 1.3 GB of 420.0 MB virtual memory used. Killing container.
Dump of the process-tree for container_1454008935455_0001_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 855 847 847 847 (java) 466 16 1410424832 53695 /usr/lib/jvm/java-7-openjdk-i386/jre/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/usr/local/hadoop/logs/userlogs/application_1454008935455_0001/container_1454008935455_0001_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 
    |- 847 845 847 847 (bash) 0 0 5431296 276 /bin/bash -c /usr/lib/jvm/java-7-openjdk-i386/jre/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/usr/local/hadoop/logs/userlogs/application_1454008935455_0001/container_1454008935455_0001_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/usr/local/hadoop/logs/userlogs/application_1454008935455_0001/container_1454008935455_0001_02_000001/stdout 2>/usr/local/hadoop/logs/userlogs/application_1454008935455_0001/container_1454008935455_0001_02_000001/stderr  

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
16/01/28 19:23:34 INFO mapreduce.Job: Counters: 0
Job Finished in 9.962 seconds
java.io.FileNotFoundException: File does not exist: hdfs://hadoop-master:9000/user/hduser/QuasiMonteCarlo_1454009003268_765740795/out/reduce-out
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1750)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1774)
    at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
    at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Is there something wrong with this configuration, or is Hadoop simply not designed for resources this low? I am only doing this to learn.

jmo0nnb3 · answer #1

Yes, you are going to run into trouble with resources that low. For testing purposes, disable the memory checks:

<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>

<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

As for yarn.scheduler.minimum-allocation-mb, it could probably be even lower, because the memory actually reserved is allocated in increments of that value: e.g. if you set it to 100 and a container requests 101, YARN will round it up to 200. The vmem check is unreliable and, IMHO, should be disabled in YARN by default.
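
One more thing the process dump above shows: the MRAppMaster JVM was started with -Xmx1024m, which comes from yarn.app.mapreduce.am.command-opts (its stock default is -Xmx1024m), so the ApplicationMaster alone overruns a 200 MB container before any map task even runs. A minimal sketch, assuming you want to stay within the memory checks, is to cap the AM heap so it fits the 200 MB set in yarn.app.mapreduce.am.resource.mb:

<property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <!-- sketch: keep the AM heap well inside the 200 MB AM container -->
    <value>-Xmx160m</value>
</property>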
