Configuring logs for the Kafka Connect distributed connector (connectDistributed.out)

m0rkklqb · posted 2021-06-08 in Kafka

Currently, two kinds of Kafka Connect logs are being collected: `connect-rest.log.2018-07-01-21`, `connect-rest.log.2018-07-01-22`, ..., and `connectDistributed.out`. The problem is that I don't know how to configure the `connectDistributed.out` file for Kafka Connect. Here is sample output from that file:

```
[2018-07-11 08:42:40,798] INFO WorkerSinkTask{id=elasticsearch-sink-connector-0}
Committing offsets asynchronously using sequence number 216:
{test-1=OffsetAndMetadata{offset=476028, metadata=''},
test-0=OffsetAndMetadata{offset=478923, metadata=''},
test-2=OffsetAndMetadata{offset=477944, metadata=''}}
(org.apache.kafka.connect.runtime.WorkerSinkTask:325)
[2018-07-11 08:43:40,798] INFO WorkerSinkTask{id=elasticsearch-sink-connector-0}
Committing offsets asynchronously using sequence number 217:
{test-1=OffsetAndMetadata{offset=476404, metadata=''},
test-0=OffsetAndMetadata{offset=479241, metadata=''},
test-2=OffsetAndMetadata{offset=478316, metadata=''}}
(org.apache.kafka.connect.runtime.WorkerSinkTask:325)
```

Since no logging options are configured for it, the file keeps growing over time. Today it reached 20 GB and I had to manually empty the file. So my question is: how can I configure `connectDistributed.out`? I already configure logging options for other components, such as the Kafka broker logs.
Below are some of the Kafka Connect-related logging configurations under `confluent-4.1.0/etc/kafka` that I am using.

log4j.properties:

```
log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Change the two lines below to adjust ZK client logging
log4j.logger.org.I0Itec.zkclient.ZkClient=INFO
log4j.logger.org.apache.zookeeper=INFO

# Change the two lines below to adjust the general broker logging level (output to server.log and stdout)
log4j.logger.kafka=INFO
log4j.logger.org.apache.kafka=INFO

# Change to DEBUG or TRACE to enable request logging
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false

# Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ to TRACE for additional output
# related to the handling of requests
# log4j.logger.kafka.network.Processor=TRACE, requestAppender
# log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
# log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false

log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false

log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false

log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false

# Access denials are logged at INFO level, change to DEBUG to also log allowed accesses
log4j.logger.kafka.authorizer.logger=INFO, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
```

connect-log4j.properties:

```
log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.org.I0Itec.zkclient=ERROR
log4j.logger.org.reflections=ERROR

log4j.appender.kafkaConnectRestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaConnectRestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaConnectRestAppender.File=/home/ec2-user/logs/connect-rest.log
log4j.appender.kafkaConnectRestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaConnectRestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.logger.org.apache.kafka.connect.runtime.rest=INFO, kafkaConnectRestAppender
log4j.additivity.org.apache.kafka.connect.runtime.rest=false
```
Answer from ekqde3dh:
The `connectDistributed.out` file only exists when daemon mode is used, e.g.

```
connect-distributed -daemon connect-distributed.properties
```

Reason: in the `kafka-run-class` script, `CONSOLE_OUTPUT_FILE` is set to `connectDistributed.out`:

```
# Launch mode
if [ "x$DAEMON_MODE" = "xtrue" ]; then
  nohup $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &
else
  ...
```
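In other words, in daemon mode everything the log4j `ConsoleAppender` prints to stdout is redirected into `connectDistributed.out`, which is why the file grows without bound when the root logger only writes to the console.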


#### Option 1: Load a custom log4j properties file

You can update the `KAFKA_LOG4J_OPTS` environment variable to point at any log4j properties file you create before starting Connect (see the example below):

```
$ export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file://path/to/connect-log4j-new.properties"
$ connect-distributed connect-distributed.properties
```

Note: `-daemon` is not used here.
If you no longer have a `ConsoleAppender` in the log4j properties, there is almost no console output and the process just appears to hang, so it is best to `nohup` it.
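For example, a minimal sketch of that approach (the properties path is a placeholder):

```
# Point Connect at the custom log4j config, then background it with nohup.
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:///path/to/connect-log4j-new.properties"
# Assumes the custom config writes to files, so console output can be discarded.
nohup connect-distributed connect-distributed.properties > /dev/null 2>&1 &
```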
The default log4j config is named `connect-log4j.properties`; in the Confluent Platform, it lives in the `etc/kafka/` folder. This is what it looks like by default:

```
log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
```

To cap the log file size, you would need to change the root logger to write to a `RollingFileAppender` instead of the `ConsoleAppender`, but I prefer to use a `DailyRollingFileAppender`.
Here is an example:

```
log4j.rootLogger=INFO, stdout, FILE

log4j.appender.FILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.FILE.DatePattern='.'yyyy-MM-dd
log4j.appender.FILE.File=/var/log/kafka-connect/connect.log
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.org.I0Itec.zkclient=ERROR
log4j.logger.org.reflections=ERROR
```
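Since capping the file size is mentioned above, here is a minimal sketch of that size-based variant using the standard log4j 1.x `RollingFileAppender` (the path and limits are placeholder values):

```
log4j.rootLogger=INFO, FILE

# Size-based rolling: keeps at most 10 backups of 100 MB each and discards the oldest.
log4j.appender.FILE=org.apache.log4j.RollingFileAppender
log4j.appender.FILE.File=/var/log/kafka-connect/connect.log
log4j.appender.FILE.MaxFileSize=100MB
log4j.appender.FILE.MaxBackupIndex=10
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=[%d] %p %m (%c)%n
```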

Answer from hfsqlsce:

I don't have enough reputation to comment (but I can answer...), so I just want to point out, in response to cricket's answer, that I would not use `DailyRollingFileAppender`. The documentation says it is inadvisable due to synchronization and data-loss issues. Instead, I would use a `RollingFileAppender` combined with a `TimeBasedRollingPolicy`. I noticed this after some strange behavior when using `DailyRollingFileAppender` with Kafka Connect. You will need to include the log4j "extras" JAR on the classpath to make this work.
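A minimal sketch of what that could look like, assuming the `apache-log4j-extras` JAR is on the classpath (the path and file-name pattern are placeholders):

```
log4j.rootLogger=INFO, FILE

# RollingFileAppender from the log4j "extras" companion, not the standard log4j one.
log4j.appender.FILE=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.FILE.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
# Rolls daily; the .gz suffix tells the policy to compress rolled files.
log4j.appender.FILE.rollingPolicy.FileNamePattern=/var/log/kafka-connect/connect.%d{yyyy-MM-dd}.log.gz
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=[%d] %p %m (%c)%n
```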
