I can't get Kafka Connect to send the kafka.consumer:* and kafka.connect:* metrics to Graphite. The full list of these metrics is here: https://kafka.apache.org/documentation/#connect_monitoring
So far I have built the jar from https://github.com/apakulov/kafka-graphite and added it as /usr/share/java/kafka-graphite-clients-0.10.2.jar, and I have updated the worker configuration by POSTing it to the Connect REST API (a sketch of that call follows the config below):
{
  "metric.reporters": "org.apache.kafka.common.metrics.GraphiteReporter",
  "kafka.graphite.metrics.reporter.enabled": "true",
  "kafka.graphite.metrics.host": "my.graphite.host",
  "kafka.graphite.metrics.prefix": "my.prefix",
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "tasks.max": "10",
  "topics": "topic1,topic2",
  "s3.region": "us-east-1",
  "s3.bucket.name": "my-bucket",
  "s3.part.size": "5242880",
  "s3.compression.type": "gzip",
  "timezone": "UTC",
  "rotate.schedule.interval.ms": "900000",
  "flush.size": "1000000",
  "storage.class": "io.confluent.connect.s3.storage.S3Storage",
  "format.class": "io.confluent.connect.s3.format.bytearray.ByteArrayFormat",
  "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
  "schema.compatibility": "NONE",
  "name": "s3-sink"
}
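For reference, this is a minimal sketch of that REST call using Python's requests library, assuming the worker's REST interface listens on the default localhost:8083, the JSON above is saved locally as s3-sink.json (a hypothetical filename), and the connector name s3-sink is taken from the config's "name" field:

import json
import requests

# Load the connector config shown above (hypothetical local filename).
with open("s3-sink.json") as f:
    config = json.load(f)

# PUT /connectors/<name>/config creates or updates the connector.
resp = requests.put(
    "http://localhost:8083/connectors/s3-sink/config",
    json=config,
)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))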
I can't seem to find the Graphite reporter, though, and I don't know why. Isn't this the right way to get these metrics into Graphite?
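To narrow down which side is failing, one thing I can check is whether anything at all shows up in Graphite under the configured prefix. A minimal sketch using graphite-web's /metrics/find endpoint, assuming graphite-web is reachable over plain HTTP on my.graphite.host:

import requests

# Ask graphite-web for any metric nodes under the configured prefix.
resp = requests.get(
    "http://my.graphite.host/metrics/find",
    params={"query": "my.prefix.*"},
    timeout=10,
)
resp.raise_for_status()
for node in resp.json():
    print(node["id"])  # any output here means the reporter is publishing something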