I'm new to Spark and trying to run it on my Hadoop node (CentOS 7) in VMware: 2 GB RAM, 20 GB disk, 1 CPU.
I get this error message:
[root@xie1 spark]# bin/spark-shell
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 716177408, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 716177408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /opt/spark/hs_err_pid79417.log
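(The log shows the JVM failing to commit about 716 MB, which matches the initial allocation for spark-shell's default 1 GB driver heap; on a 2 GB VM that is often too much. A minimal workaround sketch, assuming the stock Spark launch scripts, is to request a smaller driver heap; 512m is an assumed value to tune, not a recommendation from the original post:)

```shell
# Ask for a smaller driver heap so the JVM's initial memory commit
# fits inside what this 2 GB VM can actually spare.
# 512m is an assumption; adjust to the machine's free memory.
bin/spark-shell --driver-memory 512m
```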
After searching on Google, I checked the memory as shown below:
[root@xie1 spark]# cat /proc/meminfo
MemTotal: 1868688 kB
MemFree: 76428 kB
MemAvailable: 80840 kB
Buffers: 68 kB
Cached: 92172 kB
SwapCached: 189260 kB
Active: 1158888 kB
Inactive: 426036 kB
Active(anon): 1108308 kB
Inactive(anon): 389960 kB
Active(file): 50580 kB
Inactive(file): 36076 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 2097148 kB
SwapFree: 282684 kB
Dirty: 76 kB
Writeback: 0 kB
AnonPages: 1303360 kB
Mapped: 42176 kB
Shmem: 5596 kB
Slab: 95580 kB
SReclaimable: 34792 kB
SUnreclaim: 60788 kB
KernelStack: 19616 kB
PageTables: 32960 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 3031492 kB
Committed_AS: 6290644 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 173440 kB
VmallocChunk: 34359561216 kB
HardwareCorrupted: 0 kB
AnonHugePages: 319488 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 102272 kB
DirectMap2M: 1994752 kB
DirectMap1G: 0 kB
What does "VmallocTotal: 34359738367 kB" mean? It exceeds the 20 GB of disk I allocated to the VM. Can VmallocTotal be tuned in any way? I guess Spark needs more resources than my current VM provides; what can I do, and how?
Thank you very much.
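(For context on the number itself: VmallocTotal is neither RAM nor disk. On x86-64 it is the size of the kernel's vmalloc virtual address range, a constant of the architecture, so it is not something to tune. A quick arithmetic check on the value from the output above:)

```shell
vmalloc_kib=34359738367                 # VmallocTotal from /proc/meminfo
bytes=$(( (vmalloc_kib + 1) * 1024 ))   # the reported value is one KiB short of a power of two
echo "$(( bytes / 1024 ** 4 )) TiB"     # prints "32 TiB": address space, not memory
```

The real pressure is visible elsewhere in the same output: Committed_AS (6290644 kB) already far exceeds CommitLimit (3031492 kB), and SwapFree is down to about 276 MB, so an allocation failure is expected on this VM.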