Kafka Connect error: java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException

daupos2t · posted 2021-06-04 in Kafka

I am trying to connect MySQL and Kafka using a connector.
When I run bin/connect-standalone.sh config/connect-standalone.properties test.config, the following error appears:
[2019-11-20 06:02:05,219] ERROR Failed to create job for test.config (org.apache.kafka.connect.cli.ConnectStandalone:110)
[2019-11-20 06:02:05,219] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:121)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector config {"config"={, "database.user"="root", "database.port"="3306", "include.schema.changes"="true", "database.server.name"="asgard", "connector.class"="io.debezium.connector.mysql.MySqlConnector", "database.history.kafka.topic"="dbhistory.demo", "database.server.id"="42", "name"="mysql-source-demo-customers", "database.hostname"="localhost", {=, "database.password"="dsm1234", }=, "database.history.kafka.bootstrap.servers"="localhost:9092", "table.whitelist"="demo.customers", }=} contains no connector type
    at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
    at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:118)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector config {"config"={, "database.user"="root", "database.port"="3306", "include.schema.changes"="true", "database.server.name"="asgard", "connector.class"="io.debezium.connector.mysql.MySqlConnector", "database.history.kafka.topic"="dbhistory.demo", "database.server.id"="42", "name"="mysql-source-demo-customers", "database.hostname"="localhost", {=, "database.password"="dsm1234", }=, "database.history.kafka.bootstrap.servers"="localhost:9092", "table.whitelist"="demo.customers", }=} contains no connector type
    at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:287)
    at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:192)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:115)
[2019-11-20 06:02:05,221] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:66)
[2019-11-20 06:02:05,221] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:241)
[2019-11-20 06:02:05,224] INFO Stopped http_8083@2a7686a7{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:341)
[2019-11-20 06:02:05,225] INFO node0 Stopped scavenging (org.eclipse.jetty.server.session:167)
[2019-11-20 06:02:05,226] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:258)
[2019-11-20 06:02:05,226] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:98)
[2019-11-20 06:02:05,226] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:194)
[2019-11-20 06:02:05,226] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:66)
[2019-11-20 06:02:05,226] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:215)
[2019-11-20 06:02:05,227] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:115)
[2019-11-20 06:02:05,227] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:71)
Here is my test.config:

{
  "name": "mysql-source-demo-customers",
  "config": {
      "connector.class": "io.debezium.connector.mysql.MySqlConnector",
      "database.hostname": "localhost",
      "database.port": "3306",
      "database.user": "root",
      "database.password": "dsm1234",
      "database.server.id": "42",
      "database.server.name": "asgard",
      "table.whitelist": "demo.customers",
      "database.history.kafka.bootstrap.servers": "localhost:9092",
      "database.history.kafka.topic": "dbhistory.demo" ,
      "include.schema.changes": "true"
  }
}

Here is my connect-standalone.properties:

bootstrap.servers=localhost:9092

# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=true

offset.storage.file.filename=/tmp/connect.offsets

# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000

plugin.path=/home/ec2-user/share/confluent-hub-components

The error log says "contains no connector type".
I found a similar question on Stack Overflow and followed it, but it either did not work for me or was not relevant to my case (I have asked another, separate question about plugin.path).
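Before changing anything else, the plugin.path side can be ruled out with a quick check: with a Connect worker actually running (for example one started with the working properties file from the answer below), the REST API's connector-plugins endpoint lists every connector class the worker discovered on plugin.path. This is only a sketch and assumes the REST interface is on its default port 8083, as in the log above:

# ask the running Connect worker which connector classes it found on plugin.path
curl -s http://localhost:8083/connector-plugins
# the response should include io.debezium.connector.mysql.MySqlConnector;
# if it does not, plugin.path is the problem rather than the config file format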

roejwanj · answer 1

When I changed test.config to the format below, it worked (JSON format -> plain properties format).
test.config:

name=mysql-source-demo-customers
tasks.max=1
connector.class=io.debezium.connector.mysql.MySqlConnector
database.hostname=localhost
database.port=3306
database.user=root
database.password=dsm1234
database.server.id=1234
database.server.name=jin
table.whitelist=demo.customers
database.history.kafka.bootstrap.servers=localhost:9092
database.history.kafka.topic=dbhistory.demo
include.schema.changes=true

I also tried renaming test.config to test.properties, but changing the file extension does not affect anything.
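That makes sense: connect-standalone.sh parses every connector file on its command line as a Java properties file, regardless of extension, which is why the JSON braces show up as bogus keys ({= and }=) in the error message and validation fails with "contains no connector type". The JSON layout of the original test.config is the shape the Connect REST API expects instead. As a rough sketch, assuming a Connect worker is running with its REST interface on the default port 8083 (as shown in the log), the same JSON file could be submitted through the REST API:

# register the connector from the existing JSON file
# (assumes a Connect worker is already listening on port 8083)
curl -s -X POST -H "Content-Type: application/json" \
     --data @test.config \
     http://localhost:8083/connectors

# confirm the connector was created and check its state
curl -s http://localhost:8083/connectors/mysql-source-demo-customers/status

Either way works; the properties file keeps everything on the standalone command line, while the REST approach is what distributed mode uses and lets you add or update connectors without restarting the worker.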
