How do I tell the Debezium MySQL source connector to stop re-taking snapshots of existing tables into the Kafka topic?

wljmcqd8 · asked on 2021-06-04 · in Kafka

I am using the Debezium MySQL CDC source connector to move a database from MySQL to Kafka. The connector works fine except for the snapshot, which behaves strangely: the connector took the first snapshot successfully, then went down a few hours later because of a heap-memory limit (that is not the issue here). I paused the connector, stopped the workers on the cluster, fixed the problem, and started the workers again... The connector now runs fine, but it is taking the snapshot again! It looks like the connector does not resume from where it left off. I suspect something is wrong on my side. I am using Debezium 0.9.5.
I changed snapshot.mode=initial to initial_only, but it did not help.
Connector properties:

{
  "properties": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "snapshot.locking.mode": "minimal",
    "errors.log.include.messages": "false",
    "table.blacklist": "mydb.someTable",
    "include.schema.changes": "true",
    "database.jdbc.driver": "com.mysql.cj.jdbc.Driver",
    "database.history.kafka.recovery.poll.interval.ms": "100",
    "poll.interval.ms": "500",
    "heartbeat.topics.prefix": "__debezium-heartbeat",
    "binlog.buffer.size": "0",
    "errors.log.enable": "false",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "snapshot.fetch.size": "100000",
    "errors.retry.timeout": "0",
    "database.user": "kafka_readonly",
    "database.history.kafka.bootstrap.servers": "bootstrap:9092",
    "internal.database.history.ddl.filter": "DROP TEMPORARY TABLE IF EXISTS .+ /\\* generated by server \\*/,INSERT INTO mysql.rds_heartbeat2\\(.*\\) values \\(.*\\) ON DUPLICATE KEY UPDATE value \u003d .*,FLUSH RELAY LOGS.*,flush relay logs.*",
    "heartbeat.interval.ms": "0",
    "header.converter": "org.apache.kafka.connect.json.JsonConverter",
    "autoReconnect": "true",
    "inconsistent.schema.handling.mode": "fail",
    "enable.time.adjuster": "true",
    "gtid.new.channel.position": "latest",
    "ddl.parser.mode": "antlr",
    "database.password": "pw",
    "name": "mysql-cdc-replication",
    "errors.tolerance": "none",
    "database.history.store.only.monitored.tables.ddl": "false",
    "gtid.source.filter.dml.events": "true",
    "max.batch.size": "2048",
    "connect.keep.alive": "true",
    "database.history": "io.debezium.relational.history.KafkaDatabaseHistory",
    "snapshot.mode": "initial_only",
    "connect.timeout.ms": "30000",
    "max.queue.size": "8192",
    "tasks.max": "1",
    "database.history.kafka.topic": "history-topic",
    "snapshot.delay.ms": "0",
    "database.history.kafka.recovery.attempts": "100",
    "tombstones.on.delete": "true",
    "decimal.handling.mode": "double",
    "snapshot.new.tables": "parallel",
    "database.history.skip.unparseable.ddl": "false",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "table.ignore.builtin": "true",
    "database.whitelist": "mydb",
    "bigint.unsigned.handling.mode": "long",
    "database.server.id": "6022",
    "event.deserialization.failure.handling.mode": "fail",
    "time.precision.mode": "adaptive_time_microseconds",
    "errors.retry.delay.max.ms": "60000",
    "database.server.name": "host",
    "database.port": "3306",
    "database.ssl.mode": "disabled",
    "database.serverTimezone": "UTC",
    "task.class": "io.debezium.connector.mysql.MySqlConnectorTask",
    "database.hostname": "host",
    "database.server.id.offset": "10000",
    "connect.keep.alive.interval.ms": "60000",
    "include.query": "false"
  }
}
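With snapshot.mode=initial the MySQL connector takes a snapshot only when no offsets exist for it yet; if the stored offsets are lost (for example, the offsets topic was recreated or the connector name changed), it snapshots again. Note also that initial_only takes a snapshot and then stops without streaming binlog changes, which is usually not what you want for ongoing replication. Below is a minimal sketch of switching the mode via the Kafka Connect REST API; the host, port, and truncated config are assumptions, so adjust them to your cluster and use the full property set shown above.

```python
import json

# Abbreviated stand-in for the full connector config above (assumption:
# in practice you would send the complete property set, not this subset).
config = {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "name": "mysql-cdc-replication",
    "snapshot.mode": "initial",
}

# schema_only skips the data snapshot entirely and starts streaming from
# the current binlog position, reading only the schema.
config["snapshot.mode"] = "schema_only"

payload = json.dumps(config)

# The actual update would be a PUT to the Connect REST API, e.g.:
#   curl -X PUT -H "Content-Type: application/json" \
#        --data "$PAYLOAD" \
#        http://localhost:8083/connectors/mysql-cdc-replication/config
print(payload)
```

Keep in mind that schema_only means rows that existed before the switch are never emitted, so it only fits cases where the initial snapshot already landed in Kafka.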

jv4diomz #1

I can confirm Gunnar's answer. I ran into problems during a snapshot and had to restart the whole snapshot process. The connector currently does not support resuming a snapshot from a given point. Your configuration looks fine to me. Hope this helps.
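Whether the connector resumes or re-snapshots is decided by the offsets Kafka Connect stores for it, so one way to diagnose the behaviour above is to inspect the stored offset record for the connector. The sketch below parses a sample record; the field names follow the shape Debezium's MySQL connector writes, but verify against your own offsets topic, since the exact fields can differ between versions.

```python
import json

# Example value of an offset record as it might appear in the Connect
# offsets topic (connect-offsets by default). This record is illustrative,
# not taken from a real cluster.
raw_offset = ('{"ts_sec": 1622790000, "file": "mysql-bin.000042", '
              '"pos": 154, "snapshot": true}')

offset = json.loads(raw_offset)

if offset.get("snapshot"):
    # snapshot=true marks an offset written mid-snapshot: after a restart
    # the connector discards it and starts the snapshot over.
    state = "snapshot in progress - will restart from scratch"
elif "file" in offset and "pos" in offset:
    # A plain binlog position means the snapshot finished; the connector
    # resumes streaming from this file/position.
    state = f"resumes from {offset['file']}:{offset['pos']}"
else:
    state = "no usable offset - full snapshot will run"

print(state)
```

If the last record for your connector still carries the snapshot flag, a restart re-running the whole snapshot is the expected behaviour, matching the answer above.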
