Hadoop 2.6 and 2.7 Apache TeraSort on 500 GB or 1 TB

ecbunoof · posted 2021-06-02 · in Hadoop

The job runs and the reducers start; map progress goes from 0 to 100%, and then it fails as follows:

15/05/12 07:21:27 INFO terasort.TeraSort: starting
15/05/12 07:21:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/12 07:21:29 INFO input.FileInputFormat: Total input paths to process : 18000

Spent 1514ms computing base-splits.
Spent 109ms computing TeraScheduler splits.
Computing input splits took 1624ms
Sampling 10 splits of 18000
Making 1 from 100000 sampled records
Computing parititions took 315ms
Spent 1941ms computing partitions.
15/05/12 07:21:30 INFO client.RMProxy: Connecting to ResourceManager at n1/192.168.2.1:8032
15/05/12 07:21:31 INFO mapreduce.JobSubmitter: number of splits:18000
15/05/12 07:21:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1431389162125_0001
15/05/12 07:21:31 INFO impl.YarnClientImpl: Submitted application application_1431389162125_0001
15/05/12 07:21:31 INFO mapreduce.Job: The url to track the job: http://n1:8088/proxy/application_1431389162125_0001/
15/05/12 07:21:31 INFO mapreduce.Job: Running job: job_1431389162125_0001
15/05/12 07:21:37 INFO mapreduce.Job: Job job_1431389162125_0001 running in uber mode : false
15/05/12 07:21:37 INFO mapreduce.Job:  map 0% reduce 0%
15/05/12 07:21:47 INFO mapreduce.Job:  map 1% reduce 0%
15/05/12 07:22:01 INFO mapreduce.Job:  map 2% reduce 0%
15/05/12 07:22:13 INFO mapreduce.Job:  map 3% reduce 0%
15/05/12 07:22:25 INFO mapreduce.Job:  map 4% reduce 0%
15/05/12 07:22:38 INFO mapreduce.Job:  map 5% reduce 0%
15/05/12 07:22:50 INFO mapreduce.Job:  map 6% reduce 0%
15/05/12 07:23:02 INFO mapreduce.Job:  map 7% reduce 0%
15/05/12 07:23:15 INFO mapreduce.Job:  map 8% reduce 0%
15/05/12 07:23:27 INFO mapreduce.Job:  map 9% reduce 0%
15/05/12 07:23:40 INFO mapreduce.Job:  map 10% reduce 0%
15/05/12 07:23:52 INFO mapreduce.Job:  map 11% reduce 0%
15/05/12 07:24:02 INFO mapreduce.Job:  map 100% reduce 100%
15/05/12 07:24:06 INFO mapreduce.Job: Job job_1431389162125_0001 failed with state FAILED due to: Task failed task_1431389162125_0001_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1

This happens with the default configuration, and it fails every time.
I have tried inserting all kinds of settings into the XML configuration files to track this down, but the problem persists: the job only fails once the reduce phase starts.

2ic8powd 1#

YARN handles resource management and also provides for batch workloads (MapReduce) running alongside real-time workloads.
Memory can be configured at the YARN container level as well as at the mapper and reducer level. Memory is requested in increments of the YARN container size, and mapper and reducer tasks run inside containers.
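
As a rough illustration, here is a minimal yarn-site.xml sketch of the container-level settings; the sizes are assumptions for a node with about 8 GB available to YARN, not values taken from the question:

<!-- yarn-site.xml: container-level memory limits (example values, assumed) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value> <!-- total memory YARN may allocate on this node -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value> <!-- smallest container; requests are rounded up in these increments -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value> <!-- largest single container that can be requested -->
</property>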

mapreduce.map.memory.mb and mapreduce.reduce.memory.mb

The parameters above set the memory ceiling for a map or reduce task; if a task subscribes to more memory than its ceiling, the corresponding container is killed.
They determine the maximum memory that can be allocated to mapper and reducer tasks respectively. For example, a mapper is bounded by the upper limit defined in the configuration parameter mapreduce.map.memory.mb.
However, if the value of yarn.scheduler.minimum-allocation-mb is greater than mapreduce.map.memory.mb, then yarn.scheduler.minimum-allocation-mb takes precedence and a container of that size is handed out.
These parameters need to be set with care: set incorrectly, they can cause poor performance or out-of-memory errors.
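
A matching mapred-site.xml sketch, with the reducer given twice the mapper's container since this TeraSort job dies in the reduce phase; the exact numbers are assumptions and should be tuned to the cluster:

<!-- mapred-site.xml: per-task memory ceilings (example values, assumed) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value> <!-- container size requested for each map task -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value> <!-- container size requested for each reduce task -->
</property>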

mapreduce.reduce.java.opts and mapreduce.map.java.opts

The values of these properties must be smaller than the upper limits defined by mapreduce.map.memory.mb and mapreduce.reduce.memory.mb respectively, since the JVM heap has to fit within the memory allocated to the map/reduce task.
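
A sketch of the corresponding heap settings, keeping each JVM heap at roughly 80% of its container so the remainder covers non-heap JVM overhead; the 80% rule of thumb and the values are assumptions:

<!-- mapred-site.xml: JVM heap must fit inside the container ceilings above -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx819m</value> <!-- ~80% of mapreduce.map.memory.mb (1024) -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1638m</value> <!-- ~80% of mapreduce.reduce.memory.mb (2048) -->
</property>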
