Spring Boot properties for the Kafka producer:
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.client-id=bam
# spring.kafka.producer.acks= # Number of acknowledgments the producer requires the leader to have received before considering a request complete.
spring.kafka.producer.batch-size=0
spring.kafka.producer.bootstrap-servers=localhost:9092
# spring.kafka.producer.buffer-memory= # Total bytes of memory the producer can use to buffer records waiting to be sent to the server.
spring.kafka.producer.client-id=bam-producer
spring.kafka.consumer.auto-offset-reset=earliest
# spring.kafka.producer.compression-type= # Compression type for all data generated by the producer.
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
# spring.kafka.producer.retries= # When greater than zero, enables retrying of failed sends.
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
# spring.kafka.properties.*= # Additional properties used to configure the client.
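The message is sent with Spring's KafkaTemplate, roughly as follows (a minimal sketch added for illustration, not the original poster's exact code; the topic name bam is an assumption inferred from the partition bam-0 in the stack trace below):

// Minimal sketch of the send path, assuming the KafkaTemplate<String, String>
// auto-configured by Spring Boot from the spring.kafka.* properties above.
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class BamSender {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public BamSender(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void send(String message) {
        // Fails with the TimeoutException below when no broker is reachable at bootstrap-servers
        kafkaTemplate.send("bam", message);
    }
}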
When I try to send a message to a Kafka topic, I get the following exception:
Caused by: org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for bam-0 due to 30004 ms has passed since last append
at org.springframework.kafka.core.KafkaTemplate$1.onCompletion(KafkaTemplate.java:255)
at org.apache.kafka.clients.producer.internals.RecordBatch.done(RecordBatch.java:109)
at org.apache.kafka.clients.producer.internals.RecordBatch.maybeExpire(RecordBatch.java:160)
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortExpiredBatches(RecordAccumulator.java:245)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:212)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135)
... 1 more
I don't understand why I am getting this exception. Can anyone help?
2 Answers
cotxawn71#
The producer is timing out while trying to send the message. I notice you are using localhost in bootstrap-servers; make sure a broker is actually running locally and listening on port 9092.
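One way to verify that is with Kafka's Java AdminClient (a hedged sketch added for illustration, not part of the original answer; it assumes kafka-clients 0.11 or later is on the classpath):

// Sketch: check whether a broker answers at localhost:9092.
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class BrokerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Throws a TimeoutException if no broker responds within 10 seconds
            admin.describeCluster().nodes().get(10, TimeUnit.SECONDS);
            System.out.println("Broker reachable on localhost:9092");
        }
    }
}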
bvjxkvbb2#
The problem was solved by setting advertised.listeners in server.properties to PLAINTEXT://:9092.
Note: Kafka is deployed on AWS.
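For reference, the relevant broker setting looks like this (a sketch of server.properties based on the answer above; the commented alternative hostname is a placeholder, not a value from the original answer):

# server.properties on the Kafka broker
# Leaving the host empty makes the broker advertise its own canonical hostname to clients.
advertised.listeners=PLAINTEXT://:9092
# If producers outside the broker's network cannot resolve that hostname, an externally
# reachable address can be advertised instead (placeholder):
# advertised.listeners=PLAINTEXT://your-broker-public-hostname:9092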