Cassandra CompressedRandomAccessReader out-of-memory error

krugob8w · posted 2021-06-10 in Cassandra
java.lang.OutOfMemoryError: Java heap space
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:73) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:48) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.createPooledReader(CompressedPoolingSegmentedFile.java:95) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:62) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1779) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.db.columniterator.SimpleSliceReader.<init>(SimpleSliceReader.java:57) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:42) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:185) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:273) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1925) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1758) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:342) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:57) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1494) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2179) ~[apache-cassandra-2.1.5.jar:2.1.5]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_79]
at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-2.1.5.jar:2.1.5]

Given the exception above, I'm not entirely sure how or why this problem arises in Cassandra. One hypothesis is that it happens during data synchronization and repartitioning.
The setup is a 3-node cluster with a 4 GB max heap per node. The error tends to show up after a node has been running for a long time (say, 100 days).
The Java version in use is 1.7 (1.7.0_79, as shown in the stack trace).
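
As a first check, it can help to confirm that the node's JVM is really running with the intended 4 GB ceiling. The sketch below is a minimal, hypothetical helper (HeapCheck is not part of Cassandra) that prints the configured and current heap figures through the standard MemoryMXBean API; the same numbers can usually be read from the node itself via JMX or nodetool info.

// HeapCheck.java - hypothetical standalone helper; run it with the same -Xmx flags as the Cassandra node
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        long mb = 1024L * 1024L;
        // max: the -Xmx ceiling the JVM was actually started with
        System.out.println("heap max       (MB): " + heap.getMax() / mb);
        // committed: memory currently reserved by the JVM for the heap
        System.out.println("heap committed (MB): " + heap.getCommitted() / mb);
        // used: live plus not-yet-collected objects at this instant
        System.out.println("heap used      (MB): " + heap.getUsed() / mb);
    }
}

If the reported max is well below 4096 MB, the heap setting is not taking effect; if it is 4096 MB and usage sits near the ceiling on long-running nodes, the OOM is more likely gradual heap pressure than a misconfiguration.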

No answers yet.

