Kafka GroupMetadataManager: deleting expired offsets

Asked by 5uzkadbs on 2021-06-04 in Kafka

In our Elastic Stack architecture, we use Kafka as the message broker between Filebeat and Logstash. It is a 3-node cluster. Recently we started seeing error logs in the Kafka server.log, and data stopped being processed into Logstash. We looked up and tried the various solutions suggested in other posts, but none of them seem to resolve the issue. Restarting the cluster appears to fix it temporarily, but the problem keeps recurring.
Here are the logs from the Kafka server.log:

```

[2020-10-08 00:20:15,927] INFO Deleted offset index /opt/kafka/ELK/kafka_2.11-2.1.1/data/__consumer_offsets-6/00000000000000000000.index.deleted. (kafka.log.LogSegment)
[2020-10-08 00:20:15,927] INFO Deleted time index /opt/kafka/ELK/kafka_2.11-2.1.1/data/__consumer_offsets-6/00000000000000000000.timeindex.deleted. (kafka.log.LogSegment)
[2020-10-08 00:20:15,943] INFO Deleted log /opt/kafka/ELK/kafka_2.11-2.1.1/data/__consumer_offsets-6/00000000000007535366.log.deleted. (kafka.log.LogSegment)
[2020-10-08 00:20:15,943] INFO Deleted offset index /opt/kafka/ELK/kafka_2.11-2.1.1/data/__consumer_offsets-6/00000000000007535366.index.deleted. (kafka.log.LogSegment)
[2020-10-08 00:20:15,943] INFO Deleted time index /opt/kafka/ELK/kafka_2.11-2.1.1/data/__consumer_offsets-6/00000000000007535366.timeindex.deleted. (kafka.log.LogSegment)
[2020-10-08 00:20:16,597] INFO [Log partition=__consumer_offsets-18, dir=/opt/kafka/ELK/kafka_2.11-2.1.1/data] Deleting segment 0 (kafka.log.Log)
[2020-10-08 00:20:16,597] INFO [Log partition=__consumer_offsets-18, dir=/opt/kafka/ELK/kafka_2.11-2.1.1/data] Deleting segment 7536598 (kafka.log.Log)
[2020-10-08 00:20:16,597] INFO Deleted log /opt/kafka/ELK/kafka_2.11-2.1.1/data/__consumer_offsets-18/00000000000000000000.log.deleted. (kafka.log.LogSegment)
[2020-10-08 00:20:16,597] INFO Deleted offset index /opt/kafka/ELK/kafka_2.11-2.1.1/data/__consumer_offsets-18/00000000000000000000.index.deleted. (kafka.log.LogSegment)
[2020-10-08 00:20:16,597] INFO Deleted time index /opt/kafka/ELK/kafka_2.11-2.1.1/data/__consumer_offsets-18/00000000000000000000.timeindex.deleted. (kafka.log.LogSegment)
[2020-10-08 00:20:16,610] INFO Deleted log /opt/kafka/ELK/kafka_2.11-2.1.1/data/__consumer_offsets-18/00000000000007536598.log.deleted. (kafka.log.LogSegment)
[2020-10-08 00:20:16,611] INFO Deleted offset index /opt/kafka/ELK/kafka_2.11-2.1.1/data/__consumer_offsets-18/00000000000007536598.index.deleted. (kafka.log.LogSegment)
[2020-10-08 00:20:16,611] INFO Deleted time index /opt/kafka/ELK/kafka_2.11-2.1.1/data/__consumer_offsets-18/00000000000007536598.timeindex.deleted. (kafka.log.LogSegment)
[2020-10-08 03:36:11,464] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 3 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-10-08 03:36:11,469] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(__consumer_offsets-17, __consumer_offsets-32, __consumer_offsets-14, __consumer_offsets-44, __consumer_offsets-47, __consumer_offsets-29, __consumer_offsets-11, __consumer_offsets-41, __consumer_offsets-26, __consumer_offsets-23, __consumer_offsets-8, __consumer_offsets-38, __consumer_offsets-20, __consumer_offsets-5, __consumer_offsets-2, __consumer_offsets-35) (kafka.server.ReplicaFetcherManager)
[2020-10-08 03:36:11,482] INFO [ReplicaFetcherManager on broker 1] Added fetcher to broker BrokerEndPoint(id=0, host=xx.xx.xxx.xx:9092) for partitions Map(__consumer_offsets-8 -> (offset=856, leaderEpoch=655), __consumer_offsets-35 -> (offset=0, leaderEpoch=652), __consumer_offsets-41 -> (offset=0, leaderEpoch=650), __consumer_offsets-23 -> (offset=5032045, leaderEpoch=651), __consumer_offsets-47 -> (offset=4631, leaderEpoch=652), __consumer_offsets-38 -> (offset=0, leaderEpoch=652), __consumer_offsets-17 -> (offset=1497, leaderEpoch=654), __consumer_offsets-11 -> (offset=2519880, leaderEpoch=652), __consumer_offsets-2 -> (offset=13582, leaderEpoch=653), __consumer_offsets-14 -> (offset=13007953, leaderEpoch=648), __consumer_offsets-20 -> (offset=7860975, leaderEpoch=648), __consumer_offsets-44 -> (offset=0, leaderEpoch=649), __consumer_offsets-5 -> (offset=49214, leaderEpoch=653), __consumer_offsets-26 -> (offset=0, leaderEpoch=651), __consumer_offsets-29 -> (offset=0, leaderEpoch=652), __consumer_offsets-32 -> (offset=2174221, leaderEpoch=637)) (kafka.server.ReplicaFetcherManager)
[2020-10-08 03:36:11,483] INFO [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
[2020-10-08 03:36:11,484] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 19 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-10-08 03:36:11,487] INFO [GroupMetadataManager brokerId=1] Scheduling unloading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager)
[2020-10-08 03:36:11,487] INFO [GroupMetadataManager brokerId=1] Scheduling unloading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager)
[2020-10-08 03:36:11,487] INFO [GroupMetadataManager brokerId=1] Scheduling unloading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager)
[2020-10-08 03:36:11,487] INFO [GroupMetadataManager brokerId=1] Scheduling unloading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager)
[2020-09-24 05:04:47,642] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-09-24 05:14:47,642] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-09-24 05:24:47,642] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-09-24 05:28:45,099] INFO [ProducerStateManager partition=nextgen-canada-0] Writing producer snapshot at offset 8291365 (kafka.log.ProducerStateManager)
[2020-09-24 05:28:45,101] INFO [Log partition=nextgen-canada-0, dir=/opt/kafka/ELK/kafka_2.11-2.1.1/data] Rolled new log segment at offset 8291365 in 4 ms. (kafka.log.Log)
[2020-09-24 05:34:47,642] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-09-24 05:44:47,642] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-09-24 05:54:47,642] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
```
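For context: the segment deletions and the periodic "Removed 0 expired offsets" messages above are, as I understand it, the routine cleanup of the internal __consumer_offsets topic. How long committed offsets are kept, and how often the expiry check runs (the 10-minute cadence visible in the logs), are controlled by broker settings roughly like the sketch below. The values shown are what I believe are the Kafka 2.1 defaults, not something we have tuned:

```properties
# server.properties (broker config) - believed defaults for Kafka 2.1, shown for reference only

# How long to retain committed consumer offsets (7 days)
offsets.retention.minutes=10080

# How often the GroupMetadataManager scans for expired offsets (10 minutes,
# matching the interval between "Removed 0 expired offsets" log lines)
offsets.retention.check.interval.ms=600000
```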

These are the producer (filebeat) logs:

```
INFO kafka/log.go:53 kafka message: client/metadata found some partitions to be leaderless
2020-11-29T20:04:44.371-0600    INFO    kafka/log.go:53 client/metadata retrying after 250ms... (1 attempts remaining)

2020-11-29T20:04:44.622-0600    INFO    kafka/log.go:53 client/metadata fetching metadata for [ignioscheduler1] from broker 30.25.178.102:9094

2020-11-29T20:04:44.623-0600    INFO    kafka/log.go:53 kafka message: client/metadata found some partitions to be leaderless
2020-11-29T20:04:54.750-0600    INFO    kafka/log.go:53 client/metadata fetching metadata for [ignioscheduler1] from broker 30.25.178.101:9093

2020-11-29T20:04:54.751-0600    INFO    kafka/log.go:53 kafka message: client/metadata found some partitions to be leaderless
2020-11-29T20:04:54.753-0600    INFO    kafka/log.go:53 client/metadata retrying after 250ms... (3 attempts remaining)

2020-11-29T20:04:55.004-0600    INFO    kafka/log.go:53 client/metadata fetching metadata for [ignioscheduler1] from broker 30.25.178.101:9093

2020-11-29T20:04:55.005-0600    INFO    kafka/log.go:53 kafka message: client/metadata found some partitions to be leaderless
2020-11-29T20:04:55.006-0600    INFO    kafka/log.go:53 client/metadata retrying after 250ms... (2 attempts remaining)

2020-11-29T20:04:55.259-0600    INFO    kafka/log.go:53 client/metadata fetching metadata for [ignioscheduler1] from broker 30.25.178.101:9093

2020-11-29T20:04:55.260-0600    INFO    kafka/log.go:53 kafka message: client/metadata found some partitions to be leaderless
2020-11-29T20:04:55.261-0600    INFO    kafka/log.go:53 client/metadata retrying after 250ms... (1 attempts remaining)

2020-11-29T20:04:55.512-0600    INFO    kafka/log.go:53 client/metadata fetching metadata for [ignioscheduler1] from broker 30.25.178.102:9094

2020-11-29T20:04:55.513-0600    INFO    kafka/log.go:53 kafka message: client/metadata found some partitions to be leaderless
2020-11-29T20:04:55.514-0600    INFO    kafka/log.go:53 client/metadata fetching metadata for [ignioscheduler1] from broker 30.25.178.102:9094

2020-11-29T20:04:55.515-0600    INFO    kafka/log.go:53 kafka message: client/metadata found some partitions to be leaderless
2020-11-29T20:04:55.516-0600    INFO    kafka/log.go:53 client/metadata retrying after 250ms... (3 attempts remaining)

2020-11-29T20:04:55.767-0600    INFO    kafka/log.go:53 client/metadata fetching metadata for [ignioscheduler1] from broker 30.25.178.101:9093

2020-11-29T20:04:55.768-0600    INFO    kafka/log.go:53 kafka message: client/metadata found some partitions to be leaderless
2020-11-29T20:04:55.770-0600    INFO    kafka/log.go:53 client/metadata retrying after 250ms... (2 attempts remaining)

2020-11-29T20:04:56.021-0600    INFO    kafka/log.go:53 client/metadata fetching metadata for [ignioscheduler1] from broker 30.25.178.101:9093

2020-11-29T20:04:56.022-0600    INFO    kafka/log.go:53 kafka message: client/metadata found some partitions to be leaderless
2020-11-29T20:04:56.023-0600    INFO    kafka/log.go:53 client/metadata retrying after 250ms... (1 attempts remaining)

2020-11-29T20:04:56.274-0600    INFO    kafka/log.go:53 client/metadata fetching metadata for [ignioscheduler1] from broker 30.25.178.101:9093

2020-11-29T20:04:56.275-0600    INFO    kafka/log.go:53 kafka message: client/metadata found some partitions to be leaderless
2020-11-29T20:05:02.437-0600
```
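The retry countdowns above ("3 attempts remaining" ... "1 attempts remaining", 250 ms apart) come from Filebeat's Kafka output. A minimal sketch of the relevant section, reconstructed from the broker addresses and topic name visible in the log lines above (this is illustrative, not our verbatim config):

```yaml
# filebeat.yml - illustrative sketch reconstructed from the logs above
output.kafka:
  hosts: ["30.25.178.101:9093", "30.25.178.102:9094"]
  topic: "ignioscheduler1"
  # Number of publish retries before dropping/re-queueing; appears to match
  # the "(3 attempts remaining)" countdown in the logs
  max_retries: 3
```

The repeated "found some partitions to be leaderless" messages suggest the client keeps fetching metadata while the brokers have no elected leader for some partitions, which fits the ISR/ZooKeeper errors shown below.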

Today we received 'isr-expiration' error logs in the Kafka server.log:

```
[2020-11-29 23:29:20,432] INFO [Partition __consumer_offsets-37 broker=2] Expanding ISR from 2,1 to 2,1,0 (kafka.cluster.Partition)
[2020-11-29 23:29:20,662] INFO [Partition __consumer_offsets-20 broker=2] Expanding ISR from 2,1 to 2,1,0 (kafka.cluster.Partition)
[2020-11-29 23:29:20,666] INFO [Partition __consumer_offsets-10 broker=2] Expanding ISR from 2,1 to 2,1,0 (kafka.cluster.Partition)
[2020-11-29 23:29:20,769] INFO [Partition __consumer_offsets-6 broker=2] Expanding ISR from 2,1 to 2,1,0 (kafka.cluster.Partition)
ERROR Uncaught exception in scheduled task 'isr-expiration' (kafka.utils.KafkaScheduler)
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /brokers/topics/topicname/partitions/0/state
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:130)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
        at kafka.zookeeper.AsyncResponse.resultException(ZooKeeperClient.scala:539)
        at kafka.zk.KafkaZkClient.conditionalUpdatePath(KafkaZkClient.scala:717)
        at kafka.utils.ReplicationUtils$.updateLeaderAndIsr(ReplicationUtils.scala:33)
        at kafka.cluster.Partition.kafka$cluster$Partition$$updateIsr(Partition.scala:969)
        at kafka.cluster.Partition$$anonfun$2.apply$mcZ$sp(Partition.scala:642)
        at kafka.cluster.Partition$$anonfun$2.apply(Partition.scala:633)
        at kafka.cluster.Partition$$anonfun$2.apply(Partition.scala:633)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
        at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:259)
        at kafka.cluster.Partition.maybeShrinkIsr(Partition.scala:632)
        at kafka.server.ReplicaManager$$anonfun$kafka$server$ReplicaManager$$maybeShrinkIsr$2$$anonfun$apply$43.apply(ReplicaManager.scala:1349)
        at kafka.server.ReplicaManager$$anonfun$kafka$server$ReplicaManager$$maybeShrinkIsr$2$$anonfun$apply$43.apply(ReplicaManager.scala:1349)
        at scala.Option.foreach(Option.scala:257)
        at kafka.server.ReplicaManager$$anonfun$kafka$server$ReplicaManager$$maybeShrinkIsr$2.apply(ReplicaManager.scala:1349)
        at kafka.server.ReplicaManager$$anonfun$kafka$server$ReplicaManager$$maybeShrinkIsr$2.apply(ReplicaManager.scala:1348)
        at scala.collection.Iterator$class.foreach(Iterator.scala:891)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at kafka.server.ReplicaManager.kafka$server$ReplicaManager$$maybeShrinkIsr(ReplicaManager.scala:1348)
        at kafka.server.ReplicaManager$$anonfun$2.apply$mcV$sp(ReplicaManager.scala:323)
        at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:114)
```
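The SessionExpiredException in this stack trace means the broker's ZooKeeper session timed out (commonly caused by long GC pauses or network interruptions), so the 'isr-expiration' task could not update the partition state in ZooKeeper. One mitigation we are considering is raising the session timeout in the broker config; a sketch, assuming the Kafka 2.1 default of 6000 ms is in effect (the raised value below is an assumption for illustration, not a verified recommendation):

```properties
# server.properties (broker config) - illustrative values, not verified for our cluster

# Default in Kafka 2.1 is 6000 ms; raising it gives brokers more headroom
# to survive GC pauses or brief network hiccups without losing the session
zookeeper.session.timeout.ms=18000
zookeeper.connection.timeout.ms=18000
```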

Any help or insights on the above would be appreciated.
Thank you.
