Confluent Kafka Connect Docker container issue

zbwhf8kr asked on 2021-06-08 in Kafka

I am using the following docker-compose snippet:

connect:
    image: confluentinc/cp-kafka-connect:latest
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'kafka:9092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_PLUGIN_PATH: /usr/share/java
      CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'

The container seems to start up fine, but when I try to add an HDFS sink connector through the Connect container's REST API:

curl -s -X POST -H 'Content-Type: application/json' --data \
@confluent_hdfs.json http://localhost:8083/connectors

where the confluent_hdfs.json file contains:

{
  "name": "hdfs-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "test",
    "hdfs.url": "hdfs://localhost:9000",
    "flush.size": "1000",
    "name": "hdfs-sink"
  }
}
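The same submission can be scripted instead of using curl. Below is a minimal Python sketch; the `build_hdfs_sink_config` and `submit_connector` helpers are my own illustrative additions (not part of the original post), built on the standard library only:

```python
import json
import urllib.request


def build_hdfs_sink_config(name, topics, hdfs_url, flush_size=1000, tasks_max=1):
    """Build the same payload as confluent_hdfs.json above."""
    return {
        "name": name,
        "config": {
            "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
            "tasks.max": str(tasks_max),
            "topics": topics,
            "hdfs.url": hdfs_url,
            "flush.size": str(flush_size),
            "name": name,
        },
    }


def submit_connector(payload, base_url="http://localhost:8083"):
    """POST the config to the Connect REST API, like the curl call above."""
    req = urllib.request.Request(
        base_url + "/connectors",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_hdfs_sink_config("hdfs-sink", "test", "hdfs://localhost:9000")
# submit_connector(payload)  # requires a running Connect worker on port 8083
```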

I get a 500 HTTP response. Inspecting the Connect container's logs shows:

WARN /connectors (org.eclipse.jetty.server.HttpChannel)
javax.servlet.ServletException: javax.servlet.ServletException:
org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: 
io/confluent/connect/hdfs/HdfsSinkConnectorConfig

Searching around this issue, I found the following post:
https://github.com/confluentinc/kafka-connect-hdfs/issues/273
which suggests the plugin path is wrong. However, as far as I can tell I have set it correctly to /usr/share/java, and I can also see the correctly configured symlinks that the post mentions.
Furthermore, when making the request:

curl http://localhost:8083/connector-plugins

I see the following response:

[
{"class":"io.confluent.connect.hdfs.HdfsSinkConnector","type":"sink","version":"4.1.1"},
{"class":"io.confluent.connect.hdfs.tools.SchemaSourceConnector","type":"source","version":"1.1.1-cp1"},
{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"1.1.1-cp1"},
{"class":"org.apache.kafka.connect.file.FileStreamSourceConnector","type":"source","version":"1.1.1-cp1"}
]
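One useful sanity check before POSTing is to confirm the connector class appears in this listing. A small sketch (the `find_plugin` helper is hypothetical; the plugin list is copied verbatim from the response above):

```python
import json

# Response copied from GET /connector-plugins above
plugins_json = """[
{"class":"io.confluent.connect.hdfs.HdfsSinkConnector","type":"sink","version":"4.1.1"},
{"class":"io.confluent.connect.hdfs.tools.SchemaSourceConnector","type":"source","version":"1.1.1-cp1"},
{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"1.1.1-cp1"},
{"class":"org.apache.kafka.connect.file.FileStreamSourceConnector","type":"source","version":"1.1.1-cp1"}
]"""


def find_plugin(plugins, connector_class):
    """Return the first plugin entry whose class matches, or None."""
    return next((p for p in plugins if p["class"] == connector_class), None)


plugins = json.loads(plugins_json)
hdfs = find_plugin(plugins, "io.confluent.connect.hdfs.HdfsSinkConnector")
# The class IS listed, which means this is not a simple "plugin missing"
# problem: plugin scanning found the connector jar, but a class it depends
# on failed to load at runtime (NoClassDefFoundError rather than
# ClassNotFoundException) -- consistent with a broken plugin installation.
```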

So I am really not sure whether I am missing something in the compose file, or something else here?

o2rvlv0m

o2rvlv0m1#

Thanks to dawsaw: working through the example you suggested, I realised the problem was with the connector plugin I had installed, which mounts the connector folder as a volume. Unfortunately I had mounted the connector into the wrong location in the Connect container, which apparently broke the container's ability to run correctly.
What worked for me in the end was:

connect:
    image: confluentinc/cp-kafka-connect:4.1.1
    container_name: connect
    restart: always
    ports:
      - "8083:8083"
    depends_on:
      - zookeeper
      - kafka
    volumes:
      - $PWD/confluentinc-kafka-connect-rabbitmq-1.0.0-preview:/usr/share/java/confluentinc-kafka-connect-rabbitmq-1.0.0-preview
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"
      CONNECT_REST_ADVERTISED_HOST_NAME: "connect"
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: "connect"
      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status
      CONNECT_REPLICATION_FACTOR: 1
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_PLUGIN_PATH: "/usr/share/java"
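The key detail in the fix is where the volume is mounted: each connector must sit in its own subdirectory directly under one of the `plugin.path` entries. If you would rather keep third-party connectors out of /usr/share/java, `plugin.path` accepts a comma-separated list of directories. A sketch (the `my-connector` directory name and the second path entry are hypothetical examples, not from the original post):

```yaml
volumes:
  - $PWD/my-connector:/usr/share/confluent-hub-components/my-connector
environment:
  CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
```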

Thanks again for your help with this, and apologies for the poor example snippet I originally posted.
