Kafka producer: Got error produce response NOT_ENOUGH_REPLICAS with correlation id on topic-partition mytopic-events-0, retrying

3duebb1j  posted 2023-10-15  in  Apache

We are seeing this warning 10 times per second in the debezium-mysql-connector logs:

[kafka-producer-network-thread | connect-distributed-offsets] 
WARN  org.apache.kafka.clients.producer.internals.Sender - 
[Producer clientId=connect-distributed-offsets] Got error produce response with 
correlation id 34626 on topic-partition 
debezium-events-offset-dev-topic-events2-0, retrying (2147449048 attempts left). 
Error: NOT_ENOUGH_REPLICAS

Here is the producer-side configuration:

org.apache.kafka.clients.producer.ProducerConfig - Idempotence will be disabled because acks is set to 1, not set to 'all'.
org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
acks = 1
auto.include.jmx.reporter = true
batch.size = 32768
bootstrap.servers = [b-1.eventsmskafka.qgfgzh.c14.kafka.us-east-1.amazonaws.com:9092, b-2.eventsmskafka.qgfgzh.c14.kafka.us-east-1.amazonaws.com:9092, b-3.eventsmskafka.qgfgzh.c14.kafka.us-east-1.amazonaws.com:9092]
buffer.memory = 1048576
client.dns.lookup = use_all_dns_ips
client.id = debezium-cdc-events2-schemahistory
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 10000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.adaptive.partitioning.enable = true
partitioner.availability.timeout.ms = 0
partitioner.class = null
partitioner.ignore.keys = false
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 1
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer

Is there any way to fix this error?


9jyewag01#

The producer config shown above is not the one you need to look at (its client id does not match the one in your logs).
You need to look at the Debezium / Kafka Connect worker configuration, specifically offset.storage.topic=connect-distributed-offsets. The error means this topic does not have enough healthy in-sync replicas (see also offset.storage.replication.factor).
Check the ISR list with:

kafka-topics --bootstrap-server <broker>:9092 --describe --topic connect-distributed-offsets
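
If the topic's in-sync replica count is below the broker-side min.insync.replicas, every produce request with acks=all fails with NOT_ENOUGH_REPLICAS. As a sketch (broker address taken from the bootstrap.servers list above; on AWS MSK the cluster default min.insync.replicas is typically 2 on a 3-broker cluster), you can compare the two values like this:

```shell
BOOTSTRAP=b-1.eventsmskafka.qgfgzh.c14.kafka.us-east-1.amazonaws.com:9092

# Effective topic config, including min.insync.replicas inherited from the cluster default
kafka-configs --bootstrap-server "$BOOTSTRAP" \
  --entity-type topics --entity-name connect-distributed-offsets \
  --describe --all

# Replica assignment and ISR per partition
kafka-topics --bootstrap-server "$BOOTSTRAP" \
  --describe --topic connect-distributed-offsets
```

A topic created with replication factor 1 can never satisfy min.insync.replicas=2, which matches the failure described here.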

4szc88ey2#

The error disappeared after we increased the offsets topic's replication factor to 3 and its partition count to 4. The error occurred while both the replication factor and partition count were set to 1. We got the idea of increasing the replica count from the discussion of Debezium with AWS MSK NOT_ENOUGH_REPLICAS.
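
For reference, the fix described in this answer corresponds to the following Kafka Connect worker settings (a sketch of connect-distributed.properties; the topic name comes from the logs above, and the property names are standard Connect worker settings). Note that these only take effect when Connect creates the topic; an existing single-replica topic has to be reassigned, e.g. with kafka-reassign-partitions.

```properties
# Internal offsets topic used by the Connect worker (name from the logs above)
offset.storage.topic=connect-distributed-offsets
# Replicate across all 3 brokers so the ISR can satisfy min.insync.replicas=2
offset.storage.replication.factor=3
offset.storage.partitions=4
```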
