Timeout when trying to start the Flink JobMaster for a checkpointed job

ecbunoof · Posted 2021-06-21 in Flink
Follow (0) | Answers (1) | Views (341)

I'm trying to have Flink recover from checkpoints. For the most part this seems to work, but after being deployed to our staging environment for about a week, the JobManager has started crash-looping because of a timeout while trying to start the "job master" for a job.
I'm using Flink 1.7.2 deployed in high-availability mode with ZooKeeper 3.4.9-1757313, purely to facilitate checkpoint recovery. I run a single JobManager on Kubernetes as a stateful set. Something must have caused the server to crash, and when it came back up the code that starts the job master(s) for the (presumably) recovered jobs appears to be failing.
I've seen this once before; clearing out all of Flink's ZooKeeper entries (rmr /flink) and then restarting the Flink cluster "fixed" the issue. A sketch of that cleanup follows below.
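For reference, here is a minimal sketch of that ZooKeeper cleanup in Python using the kazoo client (the quorum address and the /flink root path are taken from the configuration below; this wipes all HA and checkpoint metadata, so it is a last resort rather than a real fix):

# clear_flink_zk.py: wipe Flink's HA metadata from ZooKeeper (destructive!)
# Assumes the kazoo client library is installed (pip install kazoo).
from kazoo.client import KazooClient

zk = KazooClient(hosts="zookeeper:2181")  # high-availability.zookeeper.quorum
zk.start()
if zk.exists("/flink"):                   # high-availability.zookeeper.path.root
    zk.delete("/flink", recursive=True)   # same effect as rmr /flink in zkCli
zk.stop()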
Here is the Flink configuration:

    blob.server.port: 6124
    blob.storage.directory: s3://...
    high-availability: zookeeper
    high-availability.zookeeper.quorum: zookeeper:2181
    high-availability.zookeeper.path.root: /flink
    high-availability.storageDir: s3://...
    high-availability.jobmanager.port: 6070
    jobmanager.archive.fs.dir: s3://...
    state.backend: rocksdb
    state.backend.fs.checkpointdir: s3://...
    state.checkpoints.dir: s3://...
    state.checkpoints.num-retained: 2
    web.log.path: /var/log/flink.log
    web.upload.dir: /var/flink-recovery/flink-web-upload
    zookeeper.sasl.disable: true
    s3.access-key: __S3_ACCESS_KEY_ID__
    s3.secret-key: __S3_SECRET_KEY__

Here are the container ports on the Flink jobmaster stateful set:

ports:
- containerPort: 8081
  name: ui
- containerPort: 6123
  name: rpc
- containerPort: 6124
  name: blob
- containerPort: 6125
  name: query
- containerPort: 9249
  name: prometheus
- containerPort: 6070
  name: ha

I expected Flink to recover successfully from the checkpoints in S3, but instead the JobManager crashes on startup with the following stack trace:

2019-06-18 14:02:05,123 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Fatal error occurred in the cluster entrypoint.
org.apache.flink.util.FlinkException: JobMaster for job f13131ca883d6cf92f69a52cff4f1017 failed.
    at org.apache.flink.runtime.dispatcher.Dispatcher.jobMasterFailed(Dispatcher.java:759)
    at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$startJobManagerRunner$6(Dispatcher.java:339)
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
    at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:332)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:158)
    at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:70)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:142)
    at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
    at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
    at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
    at akka.actor.ActorCell.invoke(ActorCell.scala:495)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
    at akka.dispatch.Mailbox.run(Mailbox.scala:224)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.util.FlinkException: Could not start the job manager.
    at org.apache.flink.runtime.jobmaster.JobManagerRunner.lambda$verifyJobSchedulingStatusAndStartJobManager$2(JobManagerRunner.java:340)
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
    at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/jobmanager_2#-806528277]] after [10000 ms]. Sender[null] sent message of type "org.apache.flink.runtime.rpc.messages.UnfencedMessage".
    at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)
    at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
    at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
    at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
    at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
    at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
    at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
    at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
    at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
    ... 1 more

I'm really at a loss here. I don't know much about Flink's internals, so the exception doesn't mean much to me. Any leads would be greatly appreciated.
EDIT: I've been digging through the Flink source code. This exception is thrown after a leader is elected, while it tries to restore its job graphs from the checkpoint information stored in ZooKeeper. Pinning down exactly where the exception comes from is quite awkward, since it's all wrapped up in futures and Akka. My guess is that it happens after the JobManager spins up the JobMaster sub-process to schedule the job graph. Somewhat speculative, but I think the JobManager is asking its JobMaster for the status of the new job while the JobMaster thread has deadlocked (perhaps it has died instead, though I would expect a stack trace in that case), and so the ask times out. It looks like a real doozy.
NB: the UnfencedMessage that the ask was for is used locally within the JobManager (which is consistent with the receiving actor in the exception being the JobManager), so we can rule out a network misconfiguration between the JobManager and the TaskManagers.
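If the problem really were nothing more than the default 10 s Akka ask timeout being too tight during recovery (rather than the deadlock speculated above), one mitigation to try would be raising it in the Flink configuration, in the same style as the config block above, for example:

    akka.ask.timeout: 60 s

This only widens the window, though, and as the answer below shows it would not have addressed the actual root cause.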

Answer 1 (baubqpgj):

I was uploading jars to Flink before execution using the /jars/upload endpoint. It seems Flink's performance tanks when it has too many jars uploaded: all of the endpoints become unresponsive, including the /jobs/<job_id> endpoint, and loading the job graph overview in the Flink UI took 1-2 minutes. I imagine this REST endpoint uses the same Akka actor as the JobManager, and I must have hit a tipping point where this started causing the timeouts. I've cut the number of jars from 30-odd down to just the 4 most recent versions, and Flink is responsive again.
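As a housekeeping aid, here is a minimal Python sketch of that jar cleanup, assuming the standard Flink REST endpoints GET /jars and DELETE /jars/:jarid, the requests library, and a JobManager reachable on the UI port 8081 from the ports list above (FLINK_URL and KEEP are illustrative names, not anything from the original post):

# prune_flink_jars.py: keep only the N most recently uploaded jars
import requests

FLINK_URL = "http://localhost:8081"  # adjust to your JobManager address (assumption)
KEEP = 4                             # number of most recent jars to keep

jars = requests.get(f"{FLINK_URL}/jars").json()["files"]
jars.sort(key=lambda j: j["uploaded"], reverse=True)  # newest first

for jar in jars[KEEP:]:
    print(f"deleting {jar['name']} ({jar['id']})")
    requests.delete(f"{FLINK_URL}/jars/{jar['id']}").raise_for_status()

Running this periodically (or after each deploy) keeps the upload directory small enough that the web endpoints stay responsive.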
