Spark Streaming Kafka job stuck in "Processing" stage

7xllpg7q · asked 2021-06-07 · in Kafka

I have a streaming job that reads from Kafka (1-minute batches) and, after some operations, posts the results to an HTTP endpoint. Every few hours it gets stuck in the "Processing" stage and starts queueing up jobs:
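For context, here is a minimal sketch of the kind of job described, assuming the old spark-streaming-kafka 0.8 direct-stream API (which matches the org.apache.spark.streaming.kafka.KafkaRDD frames in the trace below); the broker list, topic name, and postToEndpoint helper are placeholders, not the original job:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaToHttpJob {
  def main(args: Array[String]): Unit = {
    // 1-minute batch interval, as described in the question
    val ssc = new StreamingContext(new SparkConf().setAppName("kafka-to-http"), Seconds(60))

    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092") // placeholder broker
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("events")) // placeholder topic

    stream.foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        // publish each record to the HTTP endpoint
        records.foreach { case (_, value) => postToEndpoint(value) }
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }

  // placeholder for the HTTP publish step described in the question
  def postToEndpoint(payload: String): Unit = ()
}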

After checking the running executors (on the application UI page), I saw that only 1 of the 6 executors showed 2 "active tasks".

Taking a thread dump for that executor shows 2 threads in the "Executor task launch worker" thread pool (the task source). Both threads are stuck with the same stack trace:

Full stack trace:

java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:502)
java.net.InetAddress.checkLookupTable(InetAddress.java:1393)
java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1310)
java.net.InetAddress.getAllByName0(InetAddress.java:1276)
java.net.InetAddress.getAllByName(InetAddress.java:1192)
java.net.InetAddress.getAllByName(InetAddress.java:1126)
java.net.InetAddress.getByName(InetAddress.java:1076)
java.net.InetSocketAddress.<init>(InetSocketAddress.java:220)
kafka.network.BlockingChannel.liftedTree1$1(BlockingChannel.scala:59)
kafka.network.BlockingChannel.connect(BlockingChannel.scala:49)
kafka.consumer.SimpleConsumer.connect(SimpleConsumer.scala:44)
kafka.consumer.SimpleConsumer.getOrMakeConnection(SimpleConsumer.scala:151)
kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:69)
kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator.fetchBatch(KafkaRDD.scala:193)
org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator.getNext(KafkaRDD.scala:209)
org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:161)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
org.apache.spark.scheduler.Task.run(Task.scala:85)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)

This looked like a JDK bug that was supposedly fixed in JDK 7, so I made sure I am running "1.8.0_101 (Oracle Corporation)". I tried adding the following to the command line (as suggested there), but it did not resolve the issue:

-Djava.net.preferIPv4Stack=true -Dnetworkaddress.cache.ttl=60
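One caveat worth noting here (my addition, not from the original post): on Oracle/OpenJDK, networkaddress.cache.ttl is read as a Java security property, not a system property, so the -D flag above is silently ignored; the system-property fallback the built-in resolver does honor is sun.net.inetaddr.ttl. A minimal sketch of setting the security property in code, which must run before the first name lookup (e.g. early in main):

import java.security.Security

// networkaddress.cache.ttl is defined in the JRE's java.security file,
// so -D on the command line does not set it; it has to go through
// java.security.Security before any lookup happens.
Security.setProperty("networkaddress.cache.ttl", "60")

// The system-property equivalent the resolver honors is:
//   -Dsun.net.inetaddr.ttl=60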

Does anyone have ideas on how to debug/fix this?

  • EDIT: Renamed the question to remove the misleading JDK cause.
wkftcu5l · answer 1#

It turned out to be a kernel-level bug: https://bugzilla.redhat.com/show_bug.cgi?id=1209433. Linux kernel 4.0.6 resolves it, and the hosts my workers run on are RHEL with kernel 3.5.6. Hopefully this will no longer be an issue once we deploy on the newer CentOS machines with kernel 4.5.
I found that whenever it got stuck, it was inside checkLookupTable or lookupAllHostAddr, both of which are native (JNI) calls into the underlying OS.
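As a debugging aid only (a hypothetical helper, not part of this answer): the native lookup itself cannot be interrupted from Java, but wrapping InetAddress.getByName in a task with a bounded wait at least makes a hung resolver show up as a quick timeout instead of a frozen executor thread. DnsProbe and resolveWithTimeout are names invented for this sketch:

import java.net.InetAddress
import java.util.concurrent.{Callable, Executors, TimeUnit, TimeoutException}

object DnsProbe {
  private val pool = Executors.newCachedThreadPool()

  // Resolve a hostname with a hard upper bound on the wait. The caller
  // gets control back even if the underlying JNI call never returns.
  def resolveWithTimeout(host: String, millis: Long): Option[InetAddress] = {
    val task = pool.submit(new Callable[InetAddress] {
      def call(): InetAddress = InetAddress.getByName(host)
    })
    try Some(task.get(millis, TimeUnit.MILLISECONDS))
    catch {
      case _: TimeoutException =>
        task.cancel(true) // best effort; the native call may keep running
        None
    }
  }
}

// Usage, e.g. to probe a broker host from a stuck worker:
//   DnsProbe.resolveWithTimeout("broker1", 2000)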
