Spark executor: Managed memory leak detected

ih99xse1 · posted 2021-06-08 in Kafka

I am deploying Spark jobs on a Mesos cluster (client mode). I have three servers available to run the jobs. However, after a while (a few days), I get this error:

15/11/03 19:55:50 ERROR Executor: Managed memory leak detected; size = 33554432 bytes, TID = 387939
15/11/03 19:55:50 ERROR Executor: Exception in task 2.1 in stage 6534.0 (TID 387939)
java.io.FileNotFoundException: /tmp/blockmgr-3acec504-4a55-4aa8-a3e5-dda97ce5d055/03/temp_shuffle_cb37f147-c055-4014-a6ae-fd505cb49f57 (Too many open files)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
    at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:88)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.insertAll(BypassMergeSortShuffleWriter.java:110)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
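(For context, the deployment described above, client mode against a Mesos master, roughly corresponds to a configuration like the sketch below. The master URL, application name, and memory setting are placeholders, not values from the actual job.)

    import org.apache.spark.SparkConf

    // Minimal sketch of a client-mode Mesos configuration. In client mode the
    // driver runs on the machine that submits the job, while executors run on
    // the Mesos agents. All concrete values here are hypothetical.
    val conf = new SparkConf()
      .setAppName("kafka-to-mongo-streaming")            // placeholder app name
      .setMaster("mesos://zk://mesos-master:2181/mesos") // hypothetical Mesos master URL
      .set("spark.executor.memory", "4g")                // example value only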

When this happens, all streaming batches get queued and show up as "processing" in the streaming UI (4042/streaming/). None of them can make progress until I manually restart the Spark job and submit it again.
My Spark job simply reads data from Kafka and performs some updates against Mongo (quite a few update queries go through, but I have configured the Spark Streaming batch duration to around 5 minutes, so that should not be causing the problem).
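For illustration, a minimal sketch of what a job with this shape might look like using the Spark 1.x direct Kafka API (which matches the KafkaRDD classes in the error below). The broker address, object name, and the Mongo update step are assumptions; only the topic name is taken from the error message further down.

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Minutes, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    // Sketch of the job described above: a direct Kafka stream with a
    // roughly 5-minute batch duration whose records drive update queries
    // against Mongo. Broker address and the Mongo update logic are placeholders.
    object KafkaToMongoJob {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("kafka-to-mongo-streaming")
        val ssc  = new StreamingContext(conf, Minutes(5)) // ~5 minute batches

        val kafkaParams = Map("metadata.broker.list" -> "kafka-broker:9092") // hypothetical broker
        val topics      = Set("bid_inventory") // topic name as it appears in the Kafka error below

        val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
          ssc, kafkaParams, topics)

        stream.foreachRDD { rdd =>
          rdd.foreachPartition { records =>
            // Open one Mongo connection per partition and issue the update
            // queries here (omitted -- this is where the many updates happen).
            records.foreach { case (_, value) =>
              // e.g. collection.update(queryFor(value), updateFor(value)) -- placeholder
            }
          }
        }

        ssc.start()
        ssc.awaitTermination()
      }
    }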
After a while, since no batch can succeed, the Spark Kafka reader also starts showing errors:

ERROR Executor: Exception in task 5.3 in stage 7561.0 (TID 392220)
org.apache.spark.SparkException: Couldn't connect to leader for topic bid_inventory 9: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
    at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator$$anonfun$connectLeader$1.apply(KafkaRDD.scala:164)
    at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator$$anonfun$connectLeader$1.apply(KafkaRDD.scala:164)

But once I restart the job, everything starts working fine again.
Does anyone know why this happens? Thanks.
