Flink DataStream API consumes Kafka data, then throws an exception [closed]

xqkwcwgp asked on 2023-06-21 in Apache
Follow (0) | Answers (1) | Views (128)

**Closed.** This question needs debugging details. It is not currently accepting answers.

Edit the question to include the desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.

Closed 7 hours ago.

```
2023-06-14 20:25:25,930 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: op_log (2/3) (211ed06233b4390f125c3672360ab285_feca28aff5a3958840bee985ee7de4d3_1_2) switched from RUNNING to FAILED on container_e20_1685601734703_0108_01_000013 @ bmd3-20.ihuman.com (dataPort=35833).
java.lang.RuntimeException: One or more fetchers have encountered exception
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:261) ~[flink-connector-files-1.17.1.jar:1.17.1]
    at org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:169) ~[flink-connector-files-1.17.1.jar:1.17.1]
    at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:131) ~[flink-connector-files-1.17.1.jar:1.17.1]
    at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:417) ~[flink-dist-1.17.1.jar:1.17.1]
    at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68) ~[flink-dist-1.17.1.jar:1.17.1]
    at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) ~[flink-dist-1.17.1.jar:1.17.1]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:550) ~[flink-dist-1.17.1.jar:1.17.1]
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231) ~[flink-dist-1.17.1.jar:1.17.1]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:839) ~[flink-dist-1.17.1.jar:1.17.1]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:788) ~[flink-dist-1.17.1.jar:1.17.1]
    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952) ~[flink-dist-1.17.1.jar:1.17.1]
    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:931) ~[flink-dist-1.17.1.jar:1.17.1]
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745) ~[flink-dist-1.17.1.jar:1.17.1]
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562) ~[flink-dist-1.17.1.jar:1.17.1]
    at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_231]
Caused by: org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Fetch position FetchPosition{offset=1972976704, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[bmd3-21.ihuman.com:9092 (id: 323 rack: null)], epoch=0}} is out of range for partition dw_dwd_op_log-0
    at org.apache.kafka.clients.consumer.internals.Fetcher.handleOffsetOutOfRange(Fetcher.java:1405) ~[kafka-clients-3.2.3.jar:?]
    at org.apache.kafka.clients.consumer.internals.Fetcher.initializeCompletedFetch(Fetcher.java:1357) ~[kafka-clients-3.2.3.jar:?]
    at org.apache.kafka.clients.consumer.internals.Fetcher.collectFetch(Fetcher.java:658) ~[kafka-clients-3.2.3.jar:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1313) ~[kafka-clients-3.2.3.jar:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1242) ~[kafka-clients-3.2.3.jar:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1215) ~[kafka-clients-3.2.3.jar:?]
    at org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.fetch(KafkaPartitionSplitReader.java:101) ~[flink-connector-kafka-1.17.1.jar:1.17.1]
    at org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58) ~[flink-connector-files-1.17.1.jar:1.17.1]
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:162) ~[flink-connector-files-1.17.1.jar:1.17.1]
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:114) ~[flink-connector-files-1.17.1.jar:1.17.1]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_231]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_231]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_231]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_231]
    ... 1 more
2023-06-14 20:25:25,931 INFO  org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Removing registered reader after failure for subtask 1 (#2) of source Source: op_log.
2023-06-14 20:25:25,932 INFO  org.apache.flink.runtime.jobmaster.JobMaster [] - 51 tasks will be restarted to recover the failed task
```

[Image: Flink job graph; the red box marks the problematic Kafka source]
I originally thought this was a Flink version problem, but after upgrading from 1.13.3 to 1.17.1 the same exception was thrown again, so that did not solve it. I then suspected it was caused by Kafka offsets and checkpoint commits, so I set the Kafka source to exactly-once, but that did not work either. Can anyone suggest a good way to resolve this? Thanks.


h43kikqp1#

As far as I know, this error occurs when the client (in this case, the Flink application) tries to consume from a specific offset, but that data is no longer available on the Kafka broker. This usually happens because the broker's retention period for those records has expired, so there is nothing left to fetch.
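Building on that explanation, a common mitigation is to let the source fall back to a valid offset instead of failing the job. The sketch below shows one way this could look with Flink's `KafkaSource` (Flink 1.17); it is an illustration, not the asker's actual code. The broker address and topic are taken from the question's log, while the group id is a placeholder. Note the trade-off: resetting to EARLIEST re-reads everything still retained, while LATEST skips the records that expired.

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;

public class OpLogSourceSketch {
    public static KafkaSource<String> build() {
        return KafkaSource.<String>builder()
                .setBootstrapServers("bmd3-21.ihuman.com:9092") // broker from the log
                .setTopics("dw_dwd_op_log")                     // topic from the log
                .setGroupId("op-log-consumer")                  // placeholder group id
                // Start from the committed offsets, but reset to EARLIEST when a
                // committed offset is out of range (e.g. removed by retention).
                // This also sets the consumer's auto.offset.reset accordingly,
                // so an invalid position mid-run is reset instead of failing:
                .setStartingOffsets(
                        OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
    }
}
```

If the job restores from an old checkpoint/savepoint whose offsets have already aged out, the reset strategy alone may not be enough; in that case you may need to start without that state or explicitly choose `OffsetsInitializer.earliest()`/`latest()`.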
