Cannot connect from the Logstash Docker container to the Kafka Docker container

cpjpxq1n · posted 2021-06-04 · in Kafka

I am trying to connect from a Logstash Docker container to a Kafka Docker container, but I keep getting the following message:

Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.

My docker-compose.yml file is:

version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
    networks:
      - elk
    depends_on:
      - kafka

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000"
      - "9600:9600"
    links:
      - kafka
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

  zookeeper:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    container_name: zookeeper
    command: [
      "sh", "-c",
      "bin/zookeeper-server-start.sh config/zookeeper.properties"
    ]
    ports:
      - "2181:2181"
    networks:
      - elk
    environment:
      LOG_DIR: /tmp/logs

  kafka:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    command: [
      "sh", "-c",
      "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}"
    ]
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    networks:
      - elk
    environment:
      LOG_DIR: "/tmp/logs"
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:

My logstash.conf file is:

input {
    kafka{
        bootstrap_servers => "kafka:9092"
        topics => ["logs"]
    }
}

## Add your filters / logstash plugins configuration here

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "changeme"
    }
}

All my containers run fine, and I can send messages to the Kafka topic from outside the containers.
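The error in the question comes down to name resolution: the broker advertises localhost:9092, and every client resolves localhost against its own network namespace. A minimal illustration of that resolution behavior (plain Python, no Kafka required):

```python
import socket

# "localhost" always resolves to the loopback address of whoever asks.
# Inside the Logstash container, that loopback is the Logstash container
# itself -- not the Kafka container -- so the connection is refused.
addr = socket.gethostbyname("localhost")
print(addr)  # 127.0.0.1 -- the caller's own loopback, never another container
```

This is why the broker must advertise an address that is meaningful from the client's point of view, which is what the answers below address.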

2w3kk1z5 · answer 1#

You can use the host machine's IP address in Kafka's advertised listeners, so that both your Docker services and clients running outside the Docker network can reach it:

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://${HOST_IP}:9092
KAFKA_LISTENERS: PLAINTEXT://${HOST_IP}:9092

For reference, you can read through https://rmoff.net/2018/08/02/kafka-listeners-explained/

e4eetjau · answer 2#

The Kafka advertised listeners should be defined like this:

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
KAFKA_LISTENERS: PLAINTEXT://kafka:9092
c9x0cxw0 · answer 3#

You need to define the listener based on a hostname at which it can be resolved from the client. If the listener is localhost, then the client (Logstash) will try to resolve it as localhost from its own container, hence the error.
I've written about this in detail here, but in essence you need this (each listener must have a distinct name, and each name is mapped back to a protocol):

KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT

Then any container on the Docker network reaches Kafka at kafka:29092, so the Logstash config becomes

bootstrap_servers => "kafka:29092"

Any client on the host machine itself will continue to use localhost:9092.
You can see this in action in this Docker Compose file: https://github.com/confluentinc/demo-scene/blob/master/build-a-streaming-pipeline/docker-compose.yml#l40
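The reason the advertised address matters at all is that Kafka clients connect in two steps: the bootstrap address is used only for the first metadata request, after which the client reconnects using whatever address the broker advertised for the listener it arrived on. A toy sketch of that handshake (plain Python dicts, not the real wire protocol; the listener names are illustrative):

```python
# Toy model of Kafka's bootstrap handshake. The addresses mirror the
# dual-listener setup discussed in this thread; this is a sketch of the
# behavior, not a real Kafka client.

ADVERTISED_LISTENERS = {
    "PLAINTEXT": "kafka:29092",         # for clients on the Docker network
    "PLAINTEXT_HOST": "localhost:9092", # for clients on the host machine
}

def address_after_bootstrap(listener_name):
    # Step 1: the client dials whatever bootstrap address it was given.
    # Step 2: the broker replies with the advertised address for the
    #         listener the client arrived on; ALL later produce/consume
    #         traffic uses that advertised address, not the bootstrap one.
    return ADVERTISED_LISTENERS[listener_name]

# Logstash (inside Docker) bootstraps via kafka:29092 and is told kafka:29092:
print(address_after_bootstrap("PLAINTEXT"))       # kafka:29092
# A producer on the host bootstraps via localhost:9092 and is told localhost:9092:
print(address_after_bootstrap("PLAINTEXT_HOST"))  # localhost:9092
```

With a single listener advertising localhost:9092, step 2 hands the containerized client its own loopback address, which reproduces the exact error from the question.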
