Logstash Kafka input performance / configuration tuning

kcwpcxri · posted 2021-06-07 in Kafka

I am using Logstash to ship data from Kafka to Elasticsearch, and I am getting the following error:

WARN org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Auto offset commit failed for group kafka-es-sink: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured session.timeout.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.

I have tried tuning the session timeout (30000) and max poll records (250).
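For reference, the relevant part of my pipeline config looks roughly like this (the bootstrap servers, topic name, and schema path are placeholders; the tuned values are the ones mentioned above):

input {
  kafka {
    bootstrap_servers  => "kafka1:9092,kafka2:9092"  # placeholder hosts
    topics             => ["my-avro-topic"]          # placeholder topic
    group_id           => "kafka-es-sink"            # group from the warning above
    consumer_threads   => 5                          # per Logstash instance
    session_timeout_ms => 30000                      # raised from the 10000 default
    max_poll_records   => 250                        # lowered from the 500 default
    codec => avro { schema_uri => "/path/to/schema.avsc" }  # assumes logstash-codec-avro
  }
}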
The topic produces 1000 events per second in Avro format. It has 10 partitions (2 servers), and there are two Logstash instances with 5 consumer threads each, i.e. exactly one consumer thread per partition.
I have no problems with other topics that carry 100-300 events per second.
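As I read the warning, the time between poll() calls depends on how quickly the filter and output stages drain each batch, so the pipeline settings in logstash.yml probably matter as much as the consumer settings; a sketch with illustrative values, not what I actually run:

# logstash.yml
pipeline.workers: 8       # threads running the filter/output stages
pipeline.batch.size: 125  # events per worker batch; smaller batches mean
                          # more frequent polls at some cost in throughput
pipeline.batch.delay: 50  # ms to wait for a batch to fill before flushing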
I think it must be a configuration issue, because on the same topic a second connector between Kafka and Elasticsearch works fine (Confluent's Kafka Connect Elasticsearch sink).
The main purpose is to compare Kafka Connect and Logstash as connectors. Maybe someone has some experience with this too?
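For comparison, a minimal standalone config for the Confluent Elasticsearch sink mentioned above would look along these lines (all values are placeholders; only the connector class is the real one):

name=es-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=10
topics=my-avro-topic
connection.url=http://es-host:9200
type.name=_doc
key.ignore=true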

No answers yet.

