ECONNREFUSED when connecting Kibana to Elasticsearch with docker compose

ltqd579y  asked on 2023-04-03  in Kibana

I am trying to connect Kibana to Elasticsearch using docker-compose, but I get the error: Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED XXX:9200
This is my docker-compose:

version: "2.2"

services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - network.host=0.0.0.0
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1

  kibana:
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - elasticsearch.ssl.verificationMode=none
      - SERVER_HOST=0.0.0.0
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
    
  fscrawler:
    image: dadoonet/fscrawler:2.10-SNAPSHOT
    container_name: fscrawler
    restart: always
    volumes:
      - ./data:/tmp/es:ro
      - ./config:/root/.fscrawler
      - ./logs:/usr/share/fscrawler/logs
    depends_on:
      - es01
    ports:
      - 8080:8080
    command: fscrawler job_name --restart --rest

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  kibanadata:
    driver: local

This is my .env:

# THIS FILE IS AUTOMATICALLY GENERATED FROM /contrib/src/main/resources/xxx DIR.

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=changeme

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=changeme

# Version of Elastic products
STACK_VERSION=8.6.2

# Set the cluster name
CLUSTER_NAME=docker-cluster

# Set to 'basic' or 'trial' to automatically start the 30-day trial
#LICENSE=basic
LICENSE=trial

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200

# Port to expose Kibana to the host
KIBANA_PORT=5601

# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824

# Project namespace (defaults to the current folder name if not set)
COMPOSE_PROJECT_NAME=fscrawler

Can anyone help me?
Thanks.
I have tried binding to 0.0.0.0, disabling SSL and security, and configuring networks, but none of it worked.
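
For what it is worth, one quick way to narrow an ECONNREFUSED like this down is to check whether Elasticsearch is reachable at all, both from the host and from inside the Kibana container (a diagnostic sketch; the kibana image ships curl, which the healthcheck above already relies on):

# From the host: is Elasticsearch answering on the mapped port?
curl -s http://localhost:9200

# From inside the Kibana container: can it reach es01 over the compose network?
docker compose exec kibana curl -s http://es01:9200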

gzjq41n4 1#

What is missing from your docker-compose is:

environment:
  - discovery.type=single-node

If you are looking for a simple compose yaml, you can use the docker-compose.yml below:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.8
    container_name: elasticsearch-ui
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.8
    container_name: kibana-ui
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch

Save the file as docker-compose.yml and run docker-compose up -d.
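
Once the containers are up, the connection can be verified from the host (a minimal check, assuming the default port mappings in the file above):

# Elasticsearch answers with its cluster and version JSON on 9200
curl -s http://localhost:9200

# Kibana answers on 5601 once it has finished starting (this can take a minute)
curl -s -I http://localhost:5601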

mkh04yzy 2#

I found a solution that got Elasticsearch working with Kibana and fscrawler for me.
docker-compose:

version: "2.2"

services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01
      - bootstrap.memory_lock=true
      - network.host=0.0.0.0
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1

  kibana:
    image: docker.elastic.co/kibana/kibana:8.6.2
    container_name: kibana-ui
    ports:
      - 5601:5601
    depends_on:
      - es01
    
  fscrawler:
    image: dadoonet/fscrawler:2.10-SNAPSHOT
    container_name: fscrawler
    restart: always
    volumes:
      - ./data:/tmp/es:ro
      - ./config:/root/.fscrawler
      - ./logs:/usr/share/fscrawler/logs
    depends_on:
      - es01
    ports:
      - 8080:8080
    command: fscrawler job_name --restart --rest

volumes:
  esdata01:
    driver: local
  kibanadata:
    driver: local

fscrawler configuration:

name: "job_name"
fs:
  indexed_chars: 100%
  lang_detect: true
  continue_on_error: true
  ocr:
    language: "eng"
    enabled: true
    pdf_strategy: "ocr_and_text"
elasticsearch:
  nodes:
    - url: "http://es01:9200"
  username: "elastic"
  password: "changeme"
  ssl_verification: false
rest:
  url: "http://fscrawler:8080"

When I start docker-compose, everything works. To connect Kibana to Elasticsearch, I could not generate an enrollment token (SSL is disabled), so I configured it manually with http://ipcontainerelastic:9200.
This solution is not recommended (since SSL security is disabled), but for testing only it can be good enough.
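
Rather than hard-coding the Elasticsearch container IP, Kibana could also be pointed at the es01 service name over the default compose network, using plain http since security is disabled (a sketch on top of the compose file above, not part of the original setup):

  kibana:
    image: docker.elastic.co/kibana/kibana:8.6.2
    container_name: kibana-ui
    environment:
      # http, not https, because xpack.security and SSL are disabled on es01
      - ELASTICSEARCH_HOSTS=http://es01:9200
    ports:
      - 5601:5601
    depends_on:
      - es01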
Thanks for your answer, it helped me a lot :)
