java.io.EOFException: null — node -1 disconnected

dtcbnfnu · posted 2021-06-07 · in Kafka
Follow (0) | Answers (1) | Views (1107)

I'm trying to set up the ELK stack with one of my Logstash filters on my local machine.
I have an input file that goes into a Kafka queue and is parsed by my filter, with the output going to Elasticsearch. When I run test.sh to run the Logstash filter against the input file, this is what `logstash --debug` shows. I'm not sure what this error could be; all my settings use localhost and the default ports. Any guidance would be appreciated, because this error doesn't tell me much.

[2019-01-23T15:09:27,263][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Initialize connection to node localhost:2181 (id: -1 rack: null) for sending metadata request
[2019-01-23T15:09:27,264][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Initiating connection to node localhost:2181 (id: -1 rack: null)
[2019-01-23T15:09:27,265][DEBUG][org.apache.kafka.common.network.Selector] [Consumer clientId=logstash-0, groupId=test-retry] Created socket with SO_RCVBUF = 342972, SO_SNDBUF = 146988, SO_TIMEOUT = 0 to node -1
[2019-01-23T15:09:27,265][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Completed connection to node -1. Fetching API versions.
[2019-01-23T15:09:27,265][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Initiating API versions fetch from node -1.
[2019-01-23T15:09:27,266][DEBUG][org.apache.kafka.common.network.Selector] [Consumer clientId=logstash-0, groupId=test-retry] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException: null
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) ~[kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:335) ~[kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:296) ~[kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:562) ~[kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:498) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:427) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:271) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:161) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:243) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:314) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1181) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115) [kafka-clients-2.0.1.jar:?]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_172]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_172]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_172]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_172]
    at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:423) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:290) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:28) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:90) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:145) [jruby-complete-9.1.13.0.jar:?]
    at usr.local.Cellar.logstash.$6_dot_5_dot_4.libexec.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_kafka_minus_8_dot_2_dot_1.lib.logstash.inputs.kafka.RUBY$block$thread_runner$1(/usr/local/Cellar/logstash/6.5.4/libexec/vendor/bundle/jruby/2.3.0/gems/logstash-input-kafka-8.2.1/lib/logstash/inputs/kafka.rb:253) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:145) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:71) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.runtime.Block.call(Block.java:124) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.RubyProc.call(RubyProc.java:289) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.RubyProc.call(RubyProc.java:246) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:104) [jruby-complete-9.1.13.0.jar:?]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]
[2019-01-23T15:09:27,267][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Node -1 disconnected.
[2019-01-23T15:09:27,267][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,318][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
...(the same "Give up sending metadata request since no node is available" line repeats every ~50 ms for about one second)...
[2019-01-23T15:09:28,340][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Initialize connection to node localhost:2181 (id: -1 rack: null) for sending metadata request
xt0899hw (Answer #1)

You have Logstash configured to connect to Zookeeper, not Kafka:

Initialize connection to node localhost:2181

Make sure `bootstrap_servers` points at the Kafka broker, which for you is localhost:9092.
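For reference, a minimal sketch of what the kafka input in the Logstash pipeline config should look like. The topic name here is an assumption (it isn't shown in the question); the `group_id` is taken from the `groupId=test-retry` visible in the debug log:

```ruby
input {
  kafka {
    # Point at the Kafka broker itself, NOT Zookeeper (port 2181).
    bootstrap_servers => "localhost:9092"
    topics            => ["my-topic"]     # hypothetical topic name
    group_id          => "test-retry"     # consumer group seen in the debug log
  }
}
```

With `bootstrap_servers` on 2181, the consumer opens a TCP connection to Zookeeper, which doesn't speak the Kafka protocol and closes the socket during the API-versions handshake, producing exactly the `java.io.EOFException: null` followed by "Node -1 disconnected" seen above.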
