Error reading field 'topic_metadata': Error reading array of size 873589, only 41 bytes available

wvmv3b1j  asked on 2021-06-07  in  Kafka
Follow (0) | Answers (1) | Views (372)

I installed Logstash version 5.2.2 by downloading the zip file onto a freshly installed Ubuntu VM.
I created a sample configuration file, logstash-sample.conf, with the following entries:

input{
        stdin{ }
}
output{
        stdout{ }
}

Running the command bin/logstash -f logstash-sample.conf works perfectly.
Then, on the same Ubuntu machine, I installed Kafka following exactly the procedure described at https://www.digitalocean.com/community/tutorials/how-to-install-apache-kafka-on-ubuntu-14-04, up to and including step 7.
I then modified logstash-sample.conf to contain the following:

input {
        kafka{
                bootstrap_servers => "localhost:9092"
                topics => ["TutorialTopic"]
        }
}
output {
        stdout { codec => rubydebug }
}

This time I got the following error:
sample@sample-virtualbox:~/Downloads/logstash-5.2.2$ bin/logstash -f logstash-sample.conf

Sending Logstash's logs to /home/rs-switch/Downloads/logstash-5.2.2/logs which is now configured via log4j2.properties
[2017-03-07T00:26:25,629][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-03-07T00:26:25,650][INFO ][logstash.pipeline        ] Pipeline main started
[2017-03-07T00:26:26,039][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
log4j:WARN No appenders could be found for logger (org.apache.kafka.clients.consumer.ConsumerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "Ruby-0-Thread-14: /home/rs-switch/Downloads/logstash-5.2.2/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.6/lib/logstash/inputs/kafka.rb:229" org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 873589, only 41 bytes available
        at org.apache.kafka.common.protocol.types.Schema.read(org/apache/kafka/common/protocol/types/Schema.java:73)
        at org.apache.kafka.clients.NetworkClient.parseResponse(org/apache/kafka/clients/NetworkClient.java:380)
        at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(org/apache/kafka/clients/NetworkClient.java:449)
        at org.apache.kafka.clients.NetworkClient.poll(org/apache/kafka/clients/NetworkClient.java:269)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:360)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:224)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:192)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:163)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java:179)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(org/apache/kafka/clients/consumer/KafkaConsumer.java:974)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(org/apache/kafka/clients/consumer/KafkaConsumer.java:938)
        at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
        at RUBY.thread_runner(/home/rs-switch/Downloads/logstash-5.2.2/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.6/lib/logstash/inputs/kafka.rb:239)
        at java.lang.Thread.run(java/lang/Thread.java:745)
[2017-03-07T00:26:28,742][WARN ][logstash.agent           ] stopping pipeline {:id=>"main"}

Can anyone help me fix this? I have been stuck trying to set up ELK for the past few weeks with no success.


uurv41yg #1

You most likely have a version mismatch causing this problem. Check the compatibility matrix in the Logstash Kafka input plugin documentation.
The link you followed installs Kafka version 0.8.2.1, which will not work with a Kafka 0.10 client. Kafka does version checking and maintains backward compatibility, but only when the broker is newer than the client, which is not the case here. I would recommend installing a current version of Kafka: there have been huge improvements since 0.8 that you would miss out on if you tried downgrading Logstash instead.
Have a look at the Confluent Platform quickstart for an easy way to get started.
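One quick way to confirm which broker version the tutorial installed (assuming its default install path of ~/kafka) is to look at the jar names under libs/: Kafka core jars follow the kafka_&lt;scala-version&gt;-&lt;kafka-version&gt;.jar naming convention. A sketch, using a hypothetical filename for illustration:

```shell
# List the Kafka core jar to see the broker version, e.g.:
#   ls ~/kafka/libs/ | grep '^kafka_'

# Extracting the version from such a filename with shell parameter
# expansion (the filename below is a hypothetical example):
jar="kafka_2.9.2-0.8.2.1.jar"
version="${jar#kafka_*-}"   # strip the "kafka_<scala-version>-" prefix
version="${version%.jar}"   # strip the ".jar" suffix
echo "$version"             # prints 0.8.2.1
```

If that prints something in the 0.8.x range, it confirms the broker is older than the 0.10 client bundled with logstash-input-kafka 5.1.6, matching the error above.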
