Creating a table from a topic throws a serialization exception: "Size of data received by LongDeserializer is not 8"

Asked by jucafojl on 2021-06-04 in Kafka

Environment (Docker):


image: confluentinc/cp-zookeeper:latest        # 5.5.1
image: wurstmeister/kafka:latest               # 2.13-2.6.0
image: confluentinc/cp-schema-registry:latest  # 5.5.1
image: confluentinc/cp-kafka-connect:latest    # 5.5.1
image: confluentinc/ksqldb-server:latest       # 0.11.0

The Kafka topic is populated by Kafka Connect (using Debezium).
When I query it with select * from user emit changes, most of the rows show up, but some are missing.
I looked at the ksqldb-server logs and found error messages like these:

ksqldb-server      | [2020-08-29 12:44:23,008] ERROR {"type":0,"deserializationError":{"errorMessage":"Error deserializing DELIMITED message from topic: pa.new_pa.user","recordB64":null,"cause":["Size of data received by LongDeserializer is not 8"],"topic":"pa.new_pa.user"},"recordProcessingError":null,"productionError":null} (processing.CTAS_USER2_0.KsqlTopic.Source.deserializer:44)
ksqldb-server      | [2020-08-29 12:44:23,008] WARN Exception caught during Deserialization, taskId: 0_0, topic: pa.new_pa.user, partition: 0, offset: 23095 (org.apache.kafka.streams.processor.internals.StreamThread:36)
ksqldb-server      | org.apache.kafka.common.errors.SerializationException: Error deserializing DELIMITED message from topic: pa.new_pa.user
ksqldb-server      | Caused by: org.apache.kafka.common.errors.SerializationException: Size of data received by LongDeserializer is not 8
ksqldb-server      | [2020-08-29 12:44:23,009] WARN stream-thread [_confluent-ksql-default_query_CTAS_USER2_0-6637e2a8-c417-49fa-bb65-d0d1a5205af1-StreamThread-1] task [0_0] Skipping record due to deserialization error. topic=[pa.new_pa.user] partition=[0] offset=[23095] (org.apache.kafka.streams.processor.internals.RecordDeserializer:88)
ksqldb-server      | org.apache.kafka.common.errors.SerializationException: Error deserializing DELIMITED message from topic: pa.new_pa.user
ksqldb-server      | Caused by: org.apache.kafka.common.errors.SerializationException: Size of data received by LongDeserializer is not 8
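
(For reference: if the ksqlDB processing log stream is enabled via ksql.logging.processing.topic.auto.create and ksql.logging.processing.stream.auto.create, the same deserialization errors can also be queried from ksqlDB itself; KSQL_PROCESSING_LOG is the default stream name, and this is only a sketch.)

SELECT * FROM KSQL_PROCESSING_LOG EMIT CHANGES;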

I consumed the message at offset 23095 myself, and it looks fine:

[2020-08-29 13:24:12,021] INFO [Consumer clientId=consumer-console-consumer-37294-1, groupId=console-consumer-37294] Subscribed to partition(s): pa.new_pa.user-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2020-08-29 13:24:12,026] INFO [Consumer clientId=consumer-console-consumer-37294-1, groupId=console-consumer-37294] Seeking to offset 23095 for partition pa.new_pa.user-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2020-08-29 13:24:12,570] INFO [Consumer clientId=consumer-console-consumer-37294-1, groupId=console-consumer-37294] Cluster ID: rdsgvpoESzer6IAxQDlLUA (org.apache.kafka.clients.Metadata)
{"id":8191,"parent_id":{"long":8184},"upper_id":0,"username":"app0623c","domain":43,"role":1,"modified_at":1598733553000,"blacklist_modified_at":{"long":1598733768000},"tied_at":{"long":1598733771000},"name":"test","enable":1,"is_default":0,"bankrupt":0,"locked":0,"tied":0,"checked":0,"failed":0,"last_login":{"long":1598733526000},"last_online":{"long":1598733532000},"last_ip":{"bytes":"ÿÿ\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000"},"last_country":{"string":"MY"},"last_city_id":0}

Here is my source connector configuration, together with the table definition:

CREATE SOURCE CONNECTOR `pa_source_unwrap` WITH(
    "connector.class" = 'io.debezium.connector.mysql.MySqlConnector',
    "tasks.max" = '1',
    "database.hostname" = 'docker.for.mac.host.internal',
    "database.port" = '3306',
    "database.user" = 'root',
    "database.password" = 'xxxxxxx',
    "database.service.id" = '10001',
    "database.server.name" = 'pa',
    "database.whitelist" = 'new_pa',
    "table.whitelist" = 'new_pa.user, new_pa.user_created, new_pa.cash',
    "database.history.kafka.bootstrap.servers" = 'kafka:9092',
    "database.history.kafka.topic" = 'schema-changes.pa',
    "transforms" = 'unwrap',
    "transforms.unwrap.type" = 'io.debezium.transforms.ExtractNewRecordState',
    "transforms.unwrap.delete.handling.mode" = 'drop',
    "transforms.unwrap.drop.tombstones" = 'true',
    "key.converter" = 'io.confluent.connect.avro.AvroConverter',
    "value.converter" = 'io.confluent.connect.avro.AvroConverter',
    "key.converter.schema.registry.url" = 'http://schema-registry:8081',
    "value.converter.schema.registry.url" = 'http://schema-registry:8081',
    "key.converter.schemas.enable" = 'true',
    "value.converter.schemas.enable" = 'true'
);

CREATE TABLE user (`id` BIGINT PRIMARY KEY) WITH (
    KAFKA_TOPIC = 'pa.new_pa.user',
    VALUE_FORMAT = 'AVRO'
);
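
To confirm which formats ksqlDB attached to the table, DESCRIBE EXTENDED can be used (a sketch; it reports the backing topic and serde details):

DESCRIBE EXTENDED user;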

Topic schemas (auto-generated):

Key:
{
  "connect.name": "pa.new_pa.user.Key",
  "fields": [
    {
      "name": "id",
      "type": "long"
    }
  ],
  "name": "Key",
  "namespace": "pa.new_pa.user",
  "type": "record"
}
Value:
{
  "connect.name": "pa.new_pa.user.Value",
  "fields": [
    {
      "name": "id",
      "type": "long"
    },
    {
      "default": null,
      "name": "parent_id",
      "type": [
        "null",
        "long"
      ]
    },
    {
      "default": 0,
      "name": "upper_id",
      "type": {
        "connect.default": 0,
        "type": "long"
      }
    },
    {
      "name": "username",
      "type": "string"
    },
    {
      "name": "domain",
      "type": "int"
    },
    {
      "name": "role",
      "type": {
        "connect.type": "int16",
        "type": "int"
      }
    },
    {
      "name": "modified_at",
      "type": {
        "connect.name": "io.debezium.time.Timestamp",
        "connect.version": 1,
        "type": "long"
      }
    },
    {
      "default": null,
      "name": "blacklist_modified_at",
      "type": [
        "null",
        {
          "connect.name": "io.debezium.time.Timestamp",
          "connect.version": 1,
          "type": "long"
        }
      ]
    },
    {
      "default": null,
      "name": "tied_at",
      "type": [
        "null",
        {
          "connect.name": "io.debezium.time.Timestamp",
          "connect.version": 1,
          "type": "long"
        }
      ]
    },
    {
      "name": "name",
      "type": "string"
    },
    {
      "name": "enable",
      "type": {
        "connect.type": "int16",
        "type": "int"
      }
    },
    {
      "name": "is_default",
      "type": {
        "connect.type": "int16",
        "type": "int"
      }
    },
    {
      "name": "bankrupt",
      "type": {
        "connect.type": "int16",
        "type": "int"
      }
    },
    {
      "name": "locked",
      "type": {
        "connect.type": "int16",
        "type": "int"
      }
    },
    {
      "name": "tied",
      "type": {
        "connect.type": "int16",
        "type": "int"
      }
    },
    {
      "name": "checked",
      "type": {
        "connect.type": "int16",
        "type": "int"
      }
    },
    {
      "name": "failed",
      "type": {
        "connect.type": "int16",
        "type": "int"
      }
    },
    {
      "default": null,
      "name": "last_login",
      "type": [
        "null",
        {
          "connect.name": "io.debezium.time.Timestamp",
          "connect.version": 1,
          "type": "long"
        }
      ]
    },
    {
      "default": null,
      "name": "last_online",
      "type": [
        "null",
        {
          "connect.name": "io.debezium.time.Timestamp",
          "connect.version": 1,
          "type": "long"
        }
      ]
    },
    {
      "default": null,
      "name": "last_ip",
      "type": [
        "null",
        "bytes"
      ]
    },
    {
      "default": null,
      "name": "last_country",
      "type": [
        "null",
        "string"
      ]
    },
    {
      "name": "last_city_id",
      "type": "long"
    }
  ],
  "name": "Value",
  "namespace": "pa.new_pa.user",
  "type": "record"
}
Answer 1 (nue99wik):

The id field has type BIGINT.
I tried changing the configuration to key.converter = org.apache.kafka.connect.storage.LongConverter (as referenced on the ksqlDB microsite), and got the error: LongConverter could not be found.
With key.converter = org.apache.kafka.connect.converters.LongConverter (ref) the connector fails with:

kafka-connect      | [2020-09-03 06:48:25,194] ERROR WorkerSourceTask{id=pa_source_avro2-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
kafka-connect      | org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
kafka-connect      |    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
kafka-connect      |    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
kafka-connect      |    at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:294)
kafka-connect      |    at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:323)
kafka-connect      |    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:247)
kafka-connect      |    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
kafka-connect      |    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
kafka-connect      |    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
kafka-connect      |    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
kafka-connect      |    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
kafka-connect      |    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
kafka-connect      |    at java.lang.Thread.run(Thread.java:748)

I then used a JSON key converter together with the Avro value converter, and created these tables:

CREATE SOURCE CONNECTOR `pa_source_avro3` WITH (
    "connector.class" = 'io.debezium.connector.mysql.MySqlConnector',
    "tasks.max" = '1',
    "database.hostname" = 'docker.for.mac.host.internal',
    "database.port" = '3306',
    "database.user" = 'root',
    "database.password" = '6881703',
    "database.service.id" = '10001',
    "database.server.name" = 'pa3',
    "database.whitelist" = 'new_pa',
    "table.whitelist" = 'new_pa.user, new_pa.user_created, new_pa.cash',
    "database.history.kafka.bootstrap.servers" = 'kafka:9092',
    "database.history.kafka.topic" = 'schema-changes.pa3',
    "transforms" = 'unwrap',
    "transforms.unwrap.type" = 'io.debezium.transforms.ExtractNewRecordState',
    "key.converter" = 'org.apache.kafka.connect.json.JsonConverter',
    "value.converter" = 'io.confluent.connect.avro.AvroConverter',
    "key.converter.schema.registry.url" = 'http://schema-registry:8081',
    "value.converter.schema.registry.url" = 'http://schema-registry:8081',
    "key.converter.schemas.enable" = 'false',
    "value.converter.schemas.enable" = 'true',
    "include.schema.changes" = 'true'
);

CREATE TABLE user_with_string_key (`row_id` STRING PRIMARY KEY) WITH (
    KAFKA_TOPIC = 'pa3.new_pa.user',
    VALUE_FORMAT = 'AVRO'
);

CREATE TABLE user_with_bigint_key (`row_id` BIGINT PRIMARY KEY) WITH (
    KAFKA_TOPIC = 'pa3.new_pa.user',
    VALUE_FORMAT = 'AVRO'
);

With this setup I get all of the data from MySQL, but the two tables return different row_id values. The reason is probably the KAFKA key format: the key bytes on the topic are now the JSON text {"id":1} (produced by the JsonConverter), so a STRING key returns that text as-is, while a BIGINT key appears to reinterpret those same 8 bytes as an integer, which produces the large garbage value below.

String:
{
  "row_id": "{\"id\":1}",
  "ID": 1,
  "PARENT_ID": null,
  "UPPER_ID": 0,
  "USERNAME": "bmw999",
  "DOMAIN": 1,
  "ROLE": 3,
  "MODIFIED_AT": 1532017653000,
  "BLACKLIST_MODIFIED_AT": null,
  "TIED_AT": null,
  "NAME": "bmw999",
  "ENABLE": 1,
  "IS_DEFAULT": 0,
  "BANKRUPT": 1,
  "LOCKED": 1,
  "TIED": 0,
  "CHECKED": 0,
  "FAILED": 0,
  "LAST_LOGIN": 1539806130000,
  "LAST_ONLINE": 1508259330000,
  "LAST_COUNTRY": null,
  "LAST_CITY_ID": 0
}
BigInt:
{
  "row_id": 8872770094665183000,
  "ID": 1,
  "PARENT_ID": null,
  "UPPER_ID": 0,
  "USERNAME": "bmw999",
  "DOMAIN": 1,
  "ROLE": 3,
  "MODIFIED_AT": 1532017653000,
  "BLACKLIST_MODIFIED_AT": null,
  "TIED_AT": null,
  "NAME": "bmw999",
  "ENABLE": 1,
  "IS_DEFAULT": 0,
  "BANKRUPT": 1,
  "LOCKED": 1,
  "TIED": 0,
  "CHECKED": 0,
  "FAILED": 0,
  "LAST_LOGIN": 1539806130000,
  "LAST_ONLINE": 1508259330000,
  "LAST_COUNTRY": null,
  "LAST_CITY_ID": 0
}
Answer 2 (krugob8w):

The problem here is that your key is in Avro, and ksqlDB currently only supports KAFKA-formatted keys (as of version 0.12).
Avro keys are being actively worked on: #4461 added support for Avro primitives, and #4997 extends this to support a single key column within an Avro record (as you have here).
You are setting the key format to Avro with this configuration:

"key.converter" = 'io.confluent.connect.avro.AvroConverter',

And your SQL:

CREATE TABLE user (`id` BIGINT PRIMARY KEY) WITH (
    KAFKA_TOPIC = 'pa.new_pa.user',
    VALUE_FORMAT = 'AVRO'
);

sets the VALUE_FORMAT to AVRO, but the key format is always KAFKA at the moment. Hence you can use:

"key.converter" = 'org.apache.kafka.connect.converters.IntegerConverter',

…to convert your key into the correct format. More information on the right converters to use for the KAFKA format can be found on the ksqlDB microsite.
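
As the attempt above shows, pointing LongConverter directly at the Debezium key fails, most likely because the record key is a struct containing the id field rather than a bare long. A hedged sketch of one common workaround (not part of the original answer): extract the id field from the key with Kafka Connect's ExtractField$Key transform before converting it, e.g. by merging properties like these into the CREATE SOURCE CONNECTOR definition (the extractKey alias is just an illustrative name):

    -- only the key-related properties are shown; everything else stays as in pa_source_unwrap
    "transforms" = 'unwrap,extractKey',
    "transforms.extractKey.type" = 'org.apache.kafka.connect.transforms.ExtractField$Key',
    "transforms.extractKey.field" = 'id',
    "key.converter" = 'org.apache.kafka.connect.converters.LongConverter',

Records already written with Avro keys keep their old key bytes, so the topic would need to be repopulated (or a new database.server.name used) before the ksqlDB table reads cleanly.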
