Troubleshooting the HDFS sink connector

yizd12fk · posted 2021-05-29 in Hadoop

The setup is a Kerberized Hadoop cluster, with the Kafka Connect instance started on an edge node. In standalone mode everything works. In distributed mode the connector is added according to the logs, but as soon as it is checked with a REST call, no connector is returned and no data is written from the Kafka topic to HDFS.
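A minimal sketch of the REST endpoints worth querying when a connector seems to disappear after being submitted. The worker URL and connector name below are assumptions (port 8083 is the Connect default); adjust them to your edge node:

```shell
#!/bin/sh
# Assumed worker URL; override via the environment if your setup differs.
CONNECT_URL="${CONNECT_URL:-http://localhost:8083}"
CONNECTOR="xxx_connector_test_topic_2"

# Endpoints to check when a POSTed connector never shows up:
echo "$CONNECT_URL/connectors"                    # list all connectors
echo "$CONNECT_URL/connectors/$CONNECTOR/status"  # connector and task state
echo "$CONNECT_URL/connectors/$CONNECTOR/tasks"   # task assignments

# Query them with curl, e.g.:
#   curl -s "$CONNECT_URL/connectors"
#   curl -s "$CONNECT_URL/connectors/$CONNECTOR/status"
```

The `/status` endpoint is the most useful one here: a connector that was accepted but whose tasks failed (e.g. during Kerberos login) shows up there with a `FAILED` state and a stack trace, which a plain list call does not reveal.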

2019-05-16T12:22:53.657 TRACE xxx connector dev Submitting connector config write request xxx_connector_test_topic_2 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:529)
2019-05-16T12:22:53.661 TRACE xxx connector dev Retrieving loaded class 'io.confluent.connect.hdfs.HdfsSinkConnector' from 'PluginClassLoader{pluginLocation=file:/data/home/u_rw_xxx/kafka-connect/confluent-4.1.1/share/java/kafka-connect-hdfs/}' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:325)
2019-05-16T12:22:53.661 DEBUG xxx connector dev Getting plugin class loader for connector: 'io.confluent.connect.hdfs.HdfsSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:107)
2019-05-16T12:22:53.665 TRACE xxx connector dev Class 'org.apache.kafka.connect.storage.StringConverter' not found. Delegating to parent (org.apache.kafka.connect.runtime.isolation.PluginClassLoader:100)
2019-05-16T12:22:53.666 TRACE xxx connector dev Retrieving loaded class 'org.apache.kafka.connect.storage.StringConverter' from 'sun.misc.Launcher$AppClassLoader@764c12b6' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:325)
2019-05-16T12:22:53.696 TRACE xxx connector dev Handling connector config request xxx_connector_test_topic_2 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:538)
2019-05-16T12:22:53.697 TRACE xxx connector dev Submitting connector config xxx_connector_test_topic_2 false [] (org.apache.kafka.connect.runtime.distributed.DistributedHerder:550)
2019-05-16T12:22:53.697 DEBUG xxx connector dev Writing connector configuration {connector.class=io.confluent.connect.hdfs.HdfsSinkConnector, tasks.max=1, topics=xxx_test_topic_2, hadoop.conf.dir=/usr/hdp/current/hadoop-client/conf/, hdfs.url=/dev/src/xxx/kk/land/test/, hdfs.authentication.kerberos=true, connect.hdfs.principal=u_rw_xxx@XXXHDP1.YYYY.ZZ, connect.hdfs.keytab=/data/home/u_rw_xxx/u_rw_xxx.keytab, hdfs.namenode.principal=nn/_HOST@XXXHDP1.YYYY.ZZ, hive.integration=false, hive.database=dev_src_xxx_data, partitioner.class=io.confluent.connect.hdfs.partitioner.TimeBasedPartitioner, format.class=io.confluent.connect.hdfs.json.JsonFormat, key.converter=org.apache.kafka.connect.storage.StringConverter, key.converter.schemas.enable=false, value.converter=org.apache.kafka.connect.storage.StringConverter, value.converter.schemas.enable=false, flush.size=100, rotate.interval.ms=60000, partition.duration.ms=300000, path.format='day'=YYYYMMdd, locale=DE, timezone=UTC, name=xxx_connector_test_topic_2} for connector xxx_connector_test_topic_2 configuration (org.apache.kafka.connect.storage.KafkaConfigBackingStore:294)
2019-05-16T12:22:53.993 INFO xxx connector dev 127.0.0.1 - - [16/May/2019:10:22:53 +0000] "POST /connectors/ HTTP/1.1" 201 1092  469 (org.apache.kafka.connect.runtime.rest.RestServer:60)

Since a REST call to list the connectors does not return any connector, its creation apparently failed. I would expect an error message for this, or at least a warning.
So adding the connector via the REST API fails, which looks very similar to this Confluent CLI issue.
Any input on further troubleshooting is welcome. Thanks in advance!
P.S.:
As shown in the log, Confluent connector version 4.1.1 is used with the JsonFormat class, so the data is written to HDFS serialized with the StringConverter.
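One thing worth ruling out in distributed mode: the worker stores connector configs in internal Kafka topics rather than locally, so a POST can be accepted with 201 (as in the log above) and still never materialize if those topics are misconfigured or unwritable. A sketch of the relevant worker properties, with illustrative placeholder values rather than values taken from this setup:

```properties
# connect-distributed.properties (illustrative values, not from the question)
bootstrap.servers=broker1:9092
group.id=connect-cluster-xxx

# Internal topics; they must exist or be auto-creatable by the worker.
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status

# The config topic in particular should be a single-partition,
# compacted topic; a wrong cleanup policy or partition count can
# make submitted connector configs silently fail to take effect.
config.storage.replication.factor=3
offset.storage.replication.factor=3
status.storage.replication.factor=3
```

Checking the broker-side settings of `connect-configs` (partition count and `cleanup.policy`) and the worker log for errors while reading these topics back would be a reasonable next step.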

No answers yet.

