We have an existing Spring MVC application (SAP Hybris) into which we want to integrate Kafka using KafkaTemplate. I have configured the Kafka template in XML as follows:
<bean id="kafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">
<constructor-arg ref="producerFactory"/>
</bean>
<bean id="producerFactory" class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
<constructor-arg>
<map>
<entry key="bootstrap.servers" value-type="java.lang.String" value="${spring.kafka.bootstrap-servers}" />
<entry key="key.serializer" value-type="java.lang.Class" value="org.apache.kafka.common.serialization.StringSerializer" />
<entry key="value.serializer" value-type="java.lang.Class" value="org.apache.kafka.common.serialization.StringSerializer" />
</map>
</constructor-arg>
</bean>
`spring.kafka.bootstrap-servers` is configured as `localhost:9092`. Note that I cannot use Spring Boot or annotation-based configuration; I can only use XML-based configuration.
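For clarity, the XML wiring above corresponds to roughly the following programmatic construction. This is only a sketch of what the XML produces (the class name here is made up); in the real application the beans stay in XML:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

public class KafkaTemplateWiringSketch
{
    public static KafkaTemplate<String, String> buildTemplate()
    {
        // Same entries as the <map> passed to DefaultKafkaProducerFactory in the XML.
        final Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        final ProducerFactory<String, String> producerFactory = new DefaultKafkaProducerFactory<>(props);
        return new KafkaTemplate<>(producerFactory);
    }
}
```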
Here is my sample code from the controller:
@RequestMapping(method = RequestMethod.GET)
public String doRegister(final Model model) throws CMSItemNotFoundException
{
    String message = "Dummy Message: " + Math.random();
    String topic = "Dummy_Topic";
    kafkaTemplate.send(topic, message);
    return "pages/register";
}
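For debugging, the result of the send can be inspected directly in the controller instead of relying only on the LoggingProducerListener output. This is just a sketch, assuming a spring-kafka version whose send() returns a ListenableFuture and that the controller already has a logger (called LOG here):

```java
import java.util.concurrent.TimeUnit;

import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;

// ... inside doRegister(), replacing the plain kafkaTemplate.send(topic, message) call.
// LOG stands in for whatever logger the controller already uses.
final ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, message);
future.addCallback(
        result -> LOG.info("Sent to partition " + result.getRecordMetadata().partition()),
        ex -> LOG.error("Send failed", ex));

// Or, for a quick synchronous check while debugging only:
// future.get(10, TimeUnit.SECONDS);
```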
I am able to send and receive messages from the command-line clients against my local Kafka setup, but when I try to send a message from the Spring MVC application I get the following error:
ERROR [hybrisHTTP16] [LoggingProducerListener] Exception thrown when sending a message with key='null' and payload='Dummy Message: 0.03670242785185063' to topic Dummy_Topic:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
I do not receive any message on the command-line consumer that is listening on this topic.
Here are the Kafka producer settings as logged by the application (a standalone metadata check with these settings is sketched after the list):
acks = 1
batch.size = 16384
bootstrap.servers = [localhost:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
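Note that max.block.ms = 60000 above matches the 60000 ms after which the metadata update fails, so the producer spends the whole allowed time waiting for topic metadata. A minimal standalone check of that metadata fetch, run from the same machine with the same settings, might look like this (a sketch only; the class name is made up and the topic is assumed to be Dummy_Topic as in the controller):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class MetadataCheck
{
    public static void main(final String[] args)
    {
        final Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Fail fast instead of waiting the default 60 s for metadata.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "10000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props))
        {
            // partitionsFor() performs the same metadata fetch that times out in the application.
            System.out.println(producer.partitionsFor("Dummy_Topic"));
        }
    }
}
```

If this check also times out, the problem lies in broker reachability rather than in the Spring XML wiring; if it succeeds, the difference is something in the application's runtime (classpath, resolved properties, network/security settings).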
++++++ EDIT ++++++ My server.properties:
broker.id=0
listeners=PLAINTEXT://:9092
host.name=localhost
advertised.host.name= localhost
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
log.dir=D:/kafka_2.11-0.9.0.0/kafka_2.11-0.9.0.0/data
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
+++++ END EDIT +++++
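Since the command-line clients work on the same host, it may also be worth confirming from the application's JVM that the broker is reachable and that the host it advertises back to clients (per the listeners / advertised.host.name settings above) resolves from the client side. Below is a sketch using the Kafka AdminClient; it assumes the kafka-clients jar on the application classpath is 0.11 or newer (consistent with the producer settings logged earlier) and that the broker accepts requests from that client version:

```java
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class BrokerReachabilityCheck
{
    public static void main(final String[] args) throws Exception
    {
        final Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props))
        {
            // Prints the host:port each broker advertises back to clients; this is the
            // address the producer actually uses after the initial bootstrap connection.
            for (final Node node : admin.describeCluster().nodes().get(10, TimeUnit.SECONDS))
            {
                System.out.println("Broker " + node.id() + " advertises " + node.host() + ":" + node.port());
            }
        }
    }
}
```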
I cannot figure out why it does not work from my application when it works fine with the command-line Kafka producer. Please help me with this.
Thanks.