"Cannot allocate memory" error - there is insufficient memory for the Java Runtime Environment to continue

qfe3c7zg · asked 2021-05-30 · in Hadoop

I have been trying to process 1000 images (134 MB) with HIPI, following the compute-average tutorial from the Getting Started page on the website. The program runs fine with 500 images and with 100 images. With 1000 images, the HIB is created quickly, but I cannot run the MapReduce job. It says there is insufficient memory for the Java Runtime Environment to continue. I have 64 GB of free space on my hard disk, and I have already updated mapred-site.xml with:

<property>
 <name>mapred.child.java.opts</name>
 <value>-Xmx8192m</value>
</property>
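Worth noting: on Hadoop 2.x, `mapred.child.java.opts` is the deprecated 1.x property name, and the log below shows the job running under the `LocalJobRunner`, where per-task settings from mapred-site.xml may not take effect at all. A possible alternative configuration (a sketch only, assuming Hadoop 2.x; the heap values are illustrative, not a verified fix) would set the map and reduce heaps separately:

```xml
<!-- Hadoop 2.x property names; mapred.child.java.opts is the 1.x form. -->
<!-- Values are illustrative assumptions, not a verified fix. -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx2048m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```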

Still, I cannot get it to run:

15/06/20 07:41:42 INFO mapreduce.Job: Job job_local2138318905_0001 running in uber mode : false
15/06/20 07:41:42 INFO mapreduce.Job:  map 0% reduce 0%
15/06/20 07:41:47 INFO mapred.LocalJobRunner: map > map
15/06/20 07:41:48 INFO mapreduce.Job:  map 9% reduce 0%
15/06/20 07:41:50 INFO mapred.LocalJobRunner: map > map
15/06/20 07:41:51 INFO mapreduce.Job:  map 15% reduce 0%
15/06/20 07:41:53 INFO mapred.LocalJobRunner: map > map
15/06/20 07:41:54 INFO mapreduce.Job:  map 20% reduce 0%

**OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000ef867000, 102338560, 0) failed; error='Cannot allocate memory' (errno=12)**

There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 102338560 bytes for committing reserved memory.
An error report file with more information is saved as:
/home/ubuntu/hipi-release/hs_err_pid10440.log
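For reference, errno=12 (ENOMEM) means the operating system refused to commit physical memory (RAM plus swap), so free disk space is irrelevant here; the allocation that failed is only about 98 MB. A quick check (a sketch, assuming a Linux host where `free` is available):

```shell
# errno=12 comes from the OS: it could not commit RAM/swap, not disk space.
# Show available physical memory and swap in MB:
free -m

# The allocation that failed was 102338560 bytes, i.e. roughly 98 MB:
echo $((102338560 / 1024 / 1024)) "MB"   # → 97 MB
```

If `free -m` shows little available memory, a large `-Xmx` can make the problem worse rather than better, since each child JVM tries to reserve that much heap.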
