Invalid HTTP host: Kafka Elasticsearch sink connector

u0njafvf · posted 2021-06-04 in Kafka

I'm trying to use Elasticsearch as the database for my application, with Kafka Connect sitting in between. Kafka Connect, Elasticsearch (version 7), and my application all run as containers on the same network. When I access Elasticsearch from inside the Kafka Connect container it works, but the connector keeps throwing the error below and I can't figure out what exactly is wrong:
Error:

container_standalone    | java.lang.IllegalArgumentException: Invalid HTTP host: elasticsearch:9200/
container_standalone    |   at org.apache.http.HttpHost.create(HttpHost.java:123)
container_standalone    |   at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.lambda$getClientConfig$0(JestElasticsearchClient.java:201)
container_standalone    |   at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
container_standalone    |   at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
container_standalone    |   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
container_standalone    |   at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
container_standalone    |   at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
container_standalone    |   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
container_standalone    |   at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
container_standalone    |   at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.getClientConfig(JestElasticsearchClient.java:201)
container_standalone    |   at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.<init>(JestElasticsearchClient.java:149)
container_standalone    |   at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.<init>(JestElasticsearchClient.java:142)
container_standalone    |   at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.start(ElasticsearchSinkTask.java:122)
container_standalone    |   at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.start(ElasticsearchSinkTask.java:51)
container_standalone    |   at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:300)
container_standalone    |   at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:189)
container_standalone    |   at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
container_standalone    |   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
container_standalone    |   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
container_standalone    |   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
container_standalone    |   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
container_standalone    |   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
container_standalone    |   at java.lang.Thread.run(Thread.java:748)
container_standalone    | [2020-09-23 09:47:09,366] ERROR WorkerSinkTask{id=elasticsearch-sink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:180)
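
The top frame of the trace is org.apache.http.HttpHost.create, which splits the connection URL into scheme, host, and port and throws this IllegalArgumentException when the port portion is not a plain number. As a hedged illustration only (this is plain Apache HttpCore behavior, not the connector's own code, and the exact message can differ between HttpCore versions), a stray trailing slash after the port reproduces a very similar failure:

import org.apache.http.HttpHost;

public class HostParseCheck {
    public static void main(String[] args) {
        // Parses cleanly: scheme "http", host "elasticsearch", port 9200.
        System.out.println(HttpHost.create("http://elasticsearch:9200"));

        // With a trailing slash the port substring becomes "9200/", which is not
        // a number, so create() throws IllegalArgumentException with a message
        // along the lines of "Invalid HTTP host: elasticsearch:9200/".
        try {
            System.out.println(HttpHost.create("http://elasticsearch:9200/"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}

So it is worth checking the connection.url value the worker actually loads for a trailing slash or other stray characters after the port.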

Connector configuration file:

name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=vehicle
topic.index=test-vehicle
connection.url=http://elasticsearch:9200
connection.username=username
connection.password=password
type.name=log
key.ignore=true
schema.ignore=true
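
Relating the parser behavior above back to the config: each host listed in connection.url is the string handed to HttpHost.create, so every entry must be a plain scheme://host:port value. A hedged sketch of variants (hypothetical, not copied from the file above):

# Parses: numeric port with nothing after it
connection.url=http://elasticsearch:9200
# Multiple hosts go in as a comma-separated list
# connection.url=http://elasticsearch:9200,http://other-es:9200
# A form like the one in the error message (trailing slash after the port) fails to parse
# connection.url=http://elasticsearch:9200/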

elastic.yml file:

cluster.name: "docker-cluster"
network.host: 0.0.0.0
xpack.license.self_generated.type: trial
xpack.security.enabled: false
xpack.monitoring.collection.enabled: true
