Confluent Kafka Connect JdbcSourceTask: java.sql.SQLException: Java heap space

yfjy0ee7 · published 2021-06-07 in Kafka

I am trying to use mode timestamp with MySQL, but when I do, no topic is created in the Kafka queue, and there are no error logs either.
Below are the connector properties I am using:

{
  "name": "jdbc_source_mysql_reqistrations_local",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://localhost:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://localhost:8081",
    "tasks.max": "5",
    "connection.url": "jdbc:mysql://localhost:3306/prokafka?zeroDateTimeBehavior=ROUND&user=kotesh&password=kotesh",
    "poll.interval.ms": "100000000",
    "query": "SELECT Language, matriid, DateUpdated from usersdata.user",
    "mode": "timestamp",
    "timestamp.column.name": "DateUpdated",
    "validate.non.null": "false",
    "batch.max.rows": "10",
    "topic.prefix": "mysql-local-"
  }
}
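For context: in `timestamp` mode the JDBC source connector wraps the configured `query` and appends its own `WHERE` clause on `timestamp.column.name`, which is why the configured query must not already contain a `WHERE` clause. A simplified Python sketch of the statement the connector ends up issuing (an illustration of the idea, not the connector's actual code):

```python
# Simplified sketch of how a JDBC source connector in timestamp mode
# wraps the user-supplied query (illustrative only, not Confluent's code).

def build_timestamp_query(base_query: str, ts_column: str) -> str:
    # The connector appends a WHERE clause bounded by the last stored
    # offset and the current time, so the configured query itself must
    # not contain a WHERE clause of its own.
    return (
        f"{base_query} WHERE {ts_column} > ? AND {ts_column} < ? "
        f"ORDER BY {ts_column} ASC"
    )

query = "SELECT Language, matriid, DateUpdated from usersdata.user"
print(build_timestamp_query(query, "DateUpdated"))
```

Each poll then only fetches rows whose `DateUpdated` is newer than the last committed offset.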

Starting it:

./bin/confluent load jdbc_source_mysql_registration_local -d /home/prokafka/config-json/kafka-connect-jdbc-local-mysql.json

{
  "name": "jdbc_source_mysql_reqistrations_local",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://localhost:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://localhost:8081",
    "tasks.max": "5",
    "connection.url": "jdbc:mysql://localhost:3306/prokafka?zeroDateTimeBehavior=ROUND&user=kotesh&password=kotesh",
    "poll.interval.ms": "100000000",
    "query": "SELECT Language, matriid, DateUpdated from usersdata.users",
    "mode": "timestamp",
    "timestamp.column.name": "DateUpdated",
    "validate.non.null": "false",
    "batch.max.rows": "10",
    "topic.prefix": "mysql-local-",
    "name": "jdbc_source_mysql_reqistrations_local"
  },
  "tasks": [
    {
      "connector": "jdbc_source_mysql_reqistrations_local",
      "task": 0
    }
  ],
  "type": null
}
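Note that the name passed to `confluent load` (`jdbc_source_mysql_registration_local`) differs from the `name` inside the JSON file (`jdbc_source_mysql_reqistrations_local`). A small sanity check like the following (a hypothetical helper, not part of any Confluent tooling) can catch such mismatches and missing keys before loading:

```python
import json

# Hypothetical pre-flight check for a Kafka Connect connector config
# file; key names are taken from the config shown above.
REQUIRED_KEYS = {"connector.class", "mode", "topic.prefix"}

def check_connector_config(text: str, expected_name: str) -> list:
    problems = []
    doc = json.loads(text)
    if doc.get("name") != expected_name:
        problems.append(
            f"name mismatch: file says {doc.get('name')!r}, "
            f"CLI uses {expected_name!r}"
        )
    cfg = doc.get("config", {})
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        problems.append(f"missing config keys: {sorted(missing)}")
    if cfg.get("mode") == "timestamp" and "timestamp.column.name" not in cfg:
        problems.append("timestamp mode requires timestamp.column.name")
    return problems
```

Running it against the file above with the CLI name `jdbc_source_mysql_registration_local` would flag the name mismatch.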

bq9c1y66

SQLException: Java heap space
It looks like you are loading more data than the Connect worker can handle, and you have to increase the heap size.
For example, increase it to 6 GB (or more).
I have not tried this with the Confluent CLI, but based on the code, the following should work:

confluent stop connect 
export CONNECT_KAFKA_HEAP_OPTS="-Xmx6g"
confluent start connect

If memory on this machine is limited, run Connect separately from the MySQL database, the Kafka broker, ZooKeeper, the Schema Registry, and so on.
