I am using the HTTP Sink connector configuration below, but it still sends records one at a time. It should be sending the data in batches of 50 messages.
```json
{
  "name": "HTTPSinkConnector_1",
  "config": {
    "topics": "topic_1",
    "tasks.max": "1",
    "connector.class": "io.confluent.connect.http.HttpSinkConnector",
    "http.api.url": "http://localhost/messageHandler",
    "request.method": "POST",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "confluent.topic.bootstrap.servers": "kafka:19092",
    "confluent.topic.replication.factor": "1",
    "batching.enabled": true,
    "batch.max.size": 50,
    "reporter.bootstrap.servers": "kafka:19092",
    "reporter.result.topic.name": "success-responses",
    "reporter.result.topic.replication.factor": "1",
    "reporter.error.topic.name": "error-responses",
    "reporter.error.topic.replication.factor": "1",
    "request.body.format": "json"
  }
}
```
Can anyone suggest whether any other property is missing?
1 answer
The HTTP Sink connector does not batch requests for messages that contain different Kafka header values.
https://docs.confluent.io/kafka-connectors/http/current/overview.html#features
The workaround is:
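The answer is cut off here. One workaround consistent with the limitation described above, assuming the downstream endpoint does not need the Kafka headers, is to strip the headers in the connector with the built-in `DropHeaders` transform (available in Apache Kafka 3.0+), so that all records end up with identical (empty) headers and can be grouped into one batch. The header names below (`traceId`, `origin`) are hypothetical placeholders; list whatever header names your producers actually set:

```json
{
  "transforms": "dropAllHeaders",
  "transforms.dropAllHeaders.type": "org.apache.kafka.connect.transforms.DropHeaders",
  "transforms.dropAllHeaders.headers": "traceId,origin"
}
```

Alternatively, if the headers are required downstream, ensure the producers set identical header values for all records that should land in the same batch.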