I am using Kafka v0.10.1.1 with Spring Boot.
I want to send a message to the Kafka topic mobile-user using the producer code below. The topic mobile-user has 5 partitions and a replication factor of 2. My Kafka settings are attached at the end of the question.
package com.service;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

import com.shephertz.karma.constant.Constants;
import com.shephertz.karma.exception.KarmaException;
import com.shephertz.karma.util.Utils;

/**
 * @author Prakash Pandey
 */
@Service
public class NotificationSender {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private static Logger LOGGER = LoggerFactory.getLogger(NotificationSender.class);

    // Send Message
    public void sendMessage(String topicName, String message) throws KarmaException {
        LOGGER.debug("========topic Name===== " + topicName + "=========message=======" + message);
        ListenableFuture<SendResult<String, String>> result = kafkaTemplate.send(topicName, message);
        result.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                LOGGER.info("sent message='{}' to partition={} with offset={}", message,
                        result.getRecordMetadata().partition(), result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                LOGGER.error(Constants.PRODUCER_MESSAGE_EXCEPTION.getValue() + Utils.getStackTrace(ex));
            }
        });
        LOGGER.debug("Payload sent to kafka");
        LOGGER.debug("topic: " + topicName + ", payload: " + message);
    }
}
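For reference, the question does not show how the KafkaTemplate bean itself is configured. Below is a minimal sketch of wiring it explicitly with spring-kafka; the class name, broker addresses, and property values are assumptions for illustration, not the asker's actual setup.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        // Placeholder broker list; this must point at the Kafka brokers, not at ZooKeeper.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // max.block.ms caps how long send() may block waiting for metadata; the "after 5000 ms"
        // in the error corresponds to the 5000 ms configured in the properties below.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 60000);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}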
Problem:
I am able to send messages to Kafka successfully, but sometimes I get the following error:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 5000 ms.
2017-10-25 06:21:48, [ERROR] [karma-unified-notification-dispatcher - NotificationDispatcherSender - onFailure:43] Exception in sending message to kafka for queryorg.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 5000 ms.
at org.springframework.kafka.core.KafkaTemplate$1.onCompletion(KafkaTemplate.java:255)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:486)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436)
at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.send(DefaultKafkaProducerFactory.java:156)
at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:241)
at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:151)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 5000 ms.
Kafka properties:
spring.kafka.producer.retries=5
spring.kafka.producer.batch-size=1000
spring.kafka.producer.request.timeout.ms=60000
spring.kafka.producer.linger.ms=10
spring.kafka.producer.acks=1
spring.kafka.producer.buffer-memory=33554432
spring.kafka.producer.max.block.ms=5000
spring.kafka.topic.retention=86400000
spring.zookeeper.hosts=192.20.1.19:2181,10.20.1.20:2181,10.20.1.26:2181
spring.kafka.session.timeout=30000
spring.kafka.connection.timeout=10000
spring.kafka.topic.partition=5
spring.kafka.message.replication=2
spring.kafka.listener.concurrency=1
spring.kafka.listener.poll-timeout=3000
spring.kafka.consumer.auto-commit-interval=1000
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.max-poll-records=200
spring.kafka.consumer.max-poll-interval-ms=300000
It would be very helpful if you could help me resolve this issue. Thank you.
Please note: I do not get this error every time. I am able to produce messages to the kafka-topic and consume them successfully with the consumer. The error above occurs roughly once per 1000 successfully produced messages.
1 Answer
Change the default bootstrap servers property:
to yours:
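(The property snippets from the original answer are not shown above. As a minimal sketch, assuming Spring Boot's spring-kafka auto-configuration, overriding the default localhost:9092 with the actual broker addresses would look like the line below; the host names and port are placeholders, not values from the question.)

spring.kafka.bootstrap-servers=broker1:9092,broker2:9092,broker3:9092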