Kafka Connect with JdbcSourceConnector fails to create tasks (connector is RUNNING, but no tasks are running)

Asked by o4tp2gmn on 2021-06-04 in Kafka

I regularly create Kafka Connect connectors from JdbcSourceConnector based on a query; the connector is created successfully and reports a status of "RUNNING", yet no tasks are created. In my container's console log I see no sign of trouble: no errors, no warnings, no explanation of why a task was never started. I can get other connectors to work, but occasionally one just won't.
How can I get more information to troubleshoot when a connector fails to create a running task?
I'll post an example connector configuration below.
I'm using Kafka Connect 5.4.1-ccs.
The connector configuration (it's an Oracle database behind the JDBC URL):

{
    "name": "FiscalYear",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "tasks.max": 1,
        "connection.url": "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=myhost.example.com)(PORT=1521))(LOAD_BALANCE=OFF)(FAILOVER=OFF)(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=MY_DB_PRI)(UR=A)))",
        "connection.user":"myuser",
        "connection.password":"mypass",
        "mode": "timestamp",
        "timestamp.column.name": "MAINT_TS",
        "topic.prefix": "MyTeam.MyTopicName",
        "poll.interval.ms": 5000,
        "value.converter" : "org.apache.kafka.connect.json.JsonConverter",
        "value.converter.schemas.enable": "false",
        "numeric.mapping": "best_fit",

        "_comment": "The query is wrapped in `select * from ()` so that JdbcSourceConnector can automatically append a WHERE clause.",
        "query": "SELECT * FROM (SELECT fy_nbr, min(fy_strt_dt) fy_strt_dt, max(fy_end_dt) fy_end_dt FROM myuser.fsc_dt fd WHERE fd.fy_nbr >= 2020 and fd.fy_nbr < 2022 group by fy_nbr)/* outer query must have no WHERE clause so that the source connector can append one of its own */"
    }
}
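
(For reference, a configuration like the one above is submitted to the worker's REST API with a call along these lines; the hostname, port, and file name are placeholders rather than my exact values:)

curl -s -X POST http://localhost:8083/connectors \
     -H "Content-Type: application/json" \
     -d @FiscalYear.json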

And here is the Dockerfile that builds my worker:

FROM confluentinc/cp-kafka-connect:latest

# each "CONNECT_" env var refers to a Kafka Connect setting; e.g. CONNECT_REST_PORT refers to setting rest.port

# see also https://docs.confluent.io/current/connect/references/allconfigs.html

ENV CONNECT_BOOTSTRAP_SERVERS="d.mybroker.example.com:9092"
ENV CONNECT_REST_PORT="8083"
ENV CONNECT_GROUP_ID="MyGroup2" 

ENV CONNECT_CONFIG_STORAGE_TOPIC="MyTeam.ConnectorConfig" 
ENV CONNECT_OFFSET_STORAGE_TOPIC="MyTeam.ConnectorOffsets" 
ENV CONNECT_STATUS_STORAGE_TOPIC="MyTeam.ConnectorStatus" 

ENV CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" 
ENV CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" 

ENV CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter"  
ENV CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" 

ENV CONNECT_LOG4J_ROOT_LOGLEVEL="INFO"

COPY ojdbcDrivers /usr/share/java/kafka-connect-jdbc

(I also set the REST hostname environment variable via a Helm chart, so it is not set above.)
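
(For completeness, a minimal sketch of how an image like this might be built and run locally; the image tag, port mapping, and the CONNECT_REST_ADVERTISED_HOST_NAME value are illustrative assumptions, since the real deployment is driven by the Helm chart:)

docker build -t my-connect-worker .
docker run -d --name connect-worker -p 8083:8083 \
     -e CONNECT_REST_ADVERTISED_HOST_NAME="connect-worker" \
     my-connect-worker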
After it spins up, I create the connector and then fetch it from the REST "/status" endpoint:

{"name":"FiscalYear","connector":{"state":"RUNNING","worker_id":"10.1.2.3:8083"},"tasks":[],"type":"source"}

Answer 1 (xj3cbfub):

How can I get more information to troubleshoot when a connector fails to create a running task?
I would raise the logging level of your Kafka Connect worker. Since you're on Apache Kafka 2.4 under the hood, you can do this dynamically, which is very handy. Issue this REST API call to the Kafka Connect worker:

curl -X PUT http://localhost:8083/admin/loggers/io.confluent \
     -H "Content-Type:application/json" -d '{"level": "TRACE"}'

This turns logging for every Confluent connector up to TRACE. The call also returns the list of individual loggers, from which you can pick and choose, dialing specific loggers up or down as needed. For example:

curl -X PUT http://localhost:8083/admin/loggers/io.confluent.connect.jdbc.dialect.DatabaseDialects \
     -H "Content-Type:application/json" -d '{"level": "INFO"}'

Ref: https://rmoff.net/2020/01/16/changing-the-logging-level-for-kafka-connect-dynamically/
