Kafka Connect RegexRouter transform exits with an unrecoverable exception

lpwwtiir · published 2021-06-07 in Kafka

I built a Kafka pipeline to copy SQL Server tables to S3.
During the sink phase, I am trying to rename the topics with the RegexRouter transform:

"transforms":"dropPrefix",      
    "transforms.dropPrefix.type":"org.apache.kafka.connect.transforms.RegexRouter",  
    "transforms.dropPrefix.regex":"SQLSERVER-TEST-(.*)",  
    "transforms.dropPrefix.replacement":"$1"

The sink fails with the following message:

org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:586)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
    at io.confluent.connect.s3.S3SinkTask.put(S3SinkTask.java:188)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:564)
    ... 10 more

If the transform is removed, the pipeline works fine.
This docker-compose file reproduces the problem:

version: '2'
services:

  smtproblem-zookeeper:
    image: zookeeper
    container_name: smtproblem-zookeeper
    ports:
      - "2181:2181"

  smtproblem-kafka:
    image: confluentinc/cp-kafka:5.0.0
    container_name: smtproblem-kafka
    ports:
      - "9092:9092"
    links:
      - smtproblem-zookeeper
      - smtproblem-minio
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: smtproblem-zookeeper:2181/kafka
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://smtproblem-kafka:9092
      KAFKA_CREATE_TOPICS: "_schemas:3:1:compact"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  smtproblem-schema_registry:
    image: confluentinc/cp-schema-registry:5.0.0
    container_name: smtproblem-schema-registry
    ports:
      - "8081:8081"
    links:
      - smtproblem-kafka
      - smtproblem-zookeeper
    environment:
      SCHEMA_REGISTRY_HOST_NAME: http://smtproblem-schema_registry:8081
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://smtproblem-kafka:9092
      SCHEMA_REGISTRY_GROUP_ID: schema_group

  smtproblem-kafka-connect:
    image: confluentinc/cp-kafka-connect:5.0.0
    container_name: smtproblem-kafka-connect
    command: bash -c "wget -P /usr/share/java/kafka-connect-jdbc http://central.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/6.4.0.jre8/mssql-jdbc-6.4.0.jre8.jar && /etc/confluent/docker/run"
    ports:
      - "8083:8083"
    links:
      - smtproblem-zookeeper
      - smtproblem-kafka
      - smtproblem-schema_registry
      - smtproblem-minio
    environment:
      CONNECT_BOOTSTRAP_SERVERS: smtproblem-kafka:9092
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: "connect_group"
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 1000
      CONNECT_CONFIG_STORAGE_TOPIC: "connect_config"
      CONNECT_OFFSET_STORAGE_TOPIC: "connect_offsets"
      CONNECT_STATUS_STORAGE_TOPIC: "connect_status"

      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1

      CONNECT_KEY_CONVERTER: "io.confluent.connect.avro.AvroConverter"
      CONNECT_VALUE_CONVERTER: "io.confluent.connect.avro.AvroConverter"

      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: "http://smtproblem-schema_registry:8081"
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: "http://smtproblem-schema_registry:8081"

      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_REST_ADVERTISED_HOST_NAME: "smtproblem-kafka_connect"

      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR
      CONNECT_PLUGIN_PATH: "/usr/share/java"

      AWS_ACCESS_KEY_ID: localKey
      AWS_SECRET_ACCESS_KEY: localSecret

  smtproblem-minio:
    image: minio/minio:edge
    container_name: smtproblem-minio
    ports:
      - "9000:9000"
    entrypoint: sh
    command: -c 'mkdir -p /data/datalake && minio server /data'
    environment:
      MINIO_ACCESS_KEY: localKey
      MINIO_SECRET_KEY: localSecret
    volumes:
      - "./minioData:/data"

  smtproblem-sqlserver:
    image: microsoft/mssql-server-linux:2017-GA
    container_name: smtproblem-sqlserver
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Azertyu&"
    ports:
      - "1433:1433"

Create a database in the SQL Server container:

$ sudo docker exec -it smtproblem-sqlserver bash

# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Azertyu&'

Create the test database:

create database TEST
GO
use TEST
GO
CREATE TABLE TABLE_TEST (id INT, name NVARCHAR(50), quantity INT, cbMarq INT NOT NULL IDENTITY(1,1), cbModification smalldatetime DEFAULT (getdate()))
GO
INSERT INTO TABLE_TEST VALUES (1, 'banana', 150, 1); INSERT INTO TABLE_TEST VALUES (2, 'orange', 154, 2);
GO

exit
exit

Create the source connector:

curl -X PUT http://localhost:8083/connectors/sqlserver-TEST-source-bulk/config -H 'Content-Type: application/json' -H 'Accept: application/json' -d '{
  "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
  "connection.password": "Azertyu&",
  "validate.non.null": "false",
  "tasks.max": "3",
  "table.whitelist": "TABLE_TEST",
  "mode": "bulk",
  "topic.prefix": "SQLSERVER-TEST-",
  "connection.user": "SA",
  "connection.url": "jdbc:sqlserver://smtproblem-sqlserver:1433;database=TEST"
}'

Create the sink connector:

curl -X PUT http://localhost:8083/connectors/sqlserver-TEST-sink/config -H 'Content-Type: application/json' -H 'Accept: application/json' -d '{
  "topics": "SQLSERVER-TEST-TABLE_TEST",
  "topics.dir": "TABLE_TEST",
  "s3.part.size": 5242880,
  "storage.class": "io.confluent.connect.s3.storage.S3Storage",
  "tasks.max": 3,
  "schema.compatibility": "NONE",
  "s3.region": "us-east-1",
  "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
  "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
  "s3.bucket.name": "datalake",
  "store.url": "http://smtproblem-minio:9000",
  "flush.size": 1,
  "transforms": "dropPrefix",
  "transforms.dropPrefix.type": "org.apache.kafka.connect.transforms.RegexRouter",
  "transforms.dropPrefix.regex": "SQLSERVER-TEST-(.*)",
  "transforms.dropPrefix.replacement": "$1"
}'

The error shows up in the Kafka Connect UI, and also via a curl status command:

curl -X GET http://localhost:8083/connectors/sqlserver-TEST-sink/status

Thanks for your help.

mkshixfv #1

So, if we debug this, we can see what it is trying to do...

There is a HashMap keyed with the original topic name (SQLSERVER-TEST-TABLE_TEST-0), but the transform has already been applied to the record (TABLE_TEST-0), so when it looks up the "new" topic name, it cannot find the S3 writer for that TopicPartition.
The map therefore returns null, and the subsequent .buffer(record) call throws an NPE.
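
Schematically, the failure looks like this (a minimal, self-contained sketch of the failure mode, not the actual S3SinkTask code; the real writer map is keyed by TopicPartition objects rather than strings):

import java.util.HashMap;
import java.util.Map;

public class NpeSketch {
    // Stand-in for the connector's internal TopicPartitionWriter
    static class TopicPartitionWriter {
        void buffer(String record) { /* buffer the record for upload */ }
    }

    public static void main(String[] args) {
        Map<String, TopicPartitionWriter> topicPartitionWriters = new HashMap<>();

        // open(): writers are registered under the ORIGINAL topic-partition name
        topicPartitionWriters.put("SQLSERVER-TEST-TABLE_TEST-0", new TopicPartitionWriter());

        // put(): the record's topic has already been renamed by RegexRouter,
        // so the lookup no longer matches anything in the map
        TopicPartitionWriter writer = topicPartitionWriters.get("TABLE_TEST-0"); // returns null
        writer.buffer("some record"); // NullPointerException thrown here
    }
}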
I had a similar use case before (writing multiple topics into a single S3 path) and ended up having to write a custom partitioner, e.g. class MyPartitioner extends DefaultPartitioner.
If you build a JAR with custom code like that, put it under /usr/share/java/kafka-connect-storage-common, then edit the connector's partitioner.class config, and it should work as expected; see the sketch below.
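
A minimal sketch of such a partitioner (assuming the kafka-connect-storage-common API that ships with Connect 5.0.x; the package name and the hard-coded prefix are just for illustration):

package com.example; // hypothetical package

import io.confluent.connect.storage.partitioner.DefaultPartitioner;

// Sketch: instead of renaming the topic with RegexRouter, strip the prefix
// from the S3 path only, so the sink's internal writer lookup (which is
// keyed by the original topic) keeps working.
public class MyPartitioner<T> extends DefaultPartitioner<T> {

    @Override
    public String generatePartitionedPath(String topic, String encodedPartition) {
        // Records keep their original topic name; only the directory
        // layout written to S3 changes.
        String stripped = topic.replaceFirst("SQLSERVER-TEST-", "");
        return super.generatePartitionedPath(stripped, encodedPartition);
    }
}

Then drop the dropPrefix transform from the sink config and set "partitioner.class": "com.example.MyPartitioner" instead.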
I'm not sure I would call this a "bug", as such, because walking back up the call stack, there is no way to get a reference to the regex transform at the point where the topicPartitionWriters are declared with the source topic name.
If anything, the storage connector configuration should allow a separate regex transform that edits the encodedPartition (the path where the files are written).
