Kafka consumer message processing order with concurrency

wgx48brx, published 2021-06-07 in Kafka

I have a producer writing to a Kafka topic with 100 partitions. It selects the partition based on the user ID, so each user's messages must be processed in the order they were submitted to the queue.
The consuming service runs 2-10 instances, each with the following in its configuration:

spring.cloud.stream.bindings.input.consumer.concurrency=10
spring.cloud.stream.bindings.input.consumer.partitioned=true
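
For reference, a minimal sketch of the producer side's per-user partition selection, assuming a plain Kafka producer; the topic name "orders" and the hashing scheme are illustrative assumptions, not the original code:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.util.Properties;

    public class UserPartitionedSender {
        private static final int PARTITION_COUNT = 100;
        private final KafkaProducer<String, byte[]> producer;

        public UserPartitionedSender(Properties props) {
            this.producer = new KafkaProducer<>(props);
        }

        public void send(String userId, byte[] payload) {
            // The same user ID always maps to the same partition, so Kafka
            // preserves that user's submission order within the partition.
            int partition = Math.floorMod(userId.hashCode(), PARTITION_COUNT);
            producer.send(new ProducerRecord<>("orders", partition, userId, payload));
        }
    }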

I recently noticed that although a consumer starts handling a partition's messages in order, sometimes a later message finishes before an earlier one, simply because it is quicker to process.
It is important to me to keep the service's current throughput, and since I am not familiar with Spring Cloud Stream's threading model, I would like to ask: what is the best way to ensure that one user's message is processed only after the previous one has completed?
-- EDIT --
As requested, here are the relevant parameters.
Kafka version: 0.10.2.1
Spring Cloud Stream version: 1.1.0.RELEASE
Binder parameters:

spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=false
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOnError=true
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
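
With autoCommitOffset=false the Kafka binder exposes an acknowledgment header instead of committing offsets automatically, so a listener that commits manually might look like the following minimal sketch (the Sink binding and handler names are assumptions, not the original code):

    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.annotation.StreamListener;
    import org.springframework.cloud.stream.messaging.Sink;
    import org.springframework.kafka.support.Acknowledgment;
    import org.springframework.kafka.support.KafkaHeaders;
    import org.springframework.messaging.handler.annotation.Header;

    @EnableBinding(Sink.class)
    public class ManualAckConsumer {

        @StreamListener(Sink.INPUT)
        public void handle(byte[] payload,
                           @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment ack) {
            process(payload);   // finish the message completely first...
            ack.acknowledge();  // ...then commit its offset
        }

        private void process(byte[] payload) {
            // business logic (hypothetical placeholder)
        }
    }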

Consumer configuration printed to the console:

2018-12-11 09:56:51,975 [RMI TCP Connection(6)-127.0.0.1] INFO  [AbstractConfig::logAll] - ConsumerConfig values: 
    metric.reporters = []
    metadata.max.age.ms = 300000
    partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
    reconnect.backoff.ms = 50
    sasl.kerberos.ticket.renew.window.factor = 0.8
    max.partition.fetch.bytes = 1048576
    bootstrap.servers = [localhost:9092]
    ssl.keystore.type = JKS
    enable.auto.commit = true
    sasl.mechanism = GSSAPI
    interceptor.classes = null
    exclude.internal.topics = true
    ssl.truststore.password = null
    client.id = consumer-11
    ssl.endpoint.identification.algorithm = null
    max.poll.records = 2147483647
    check.crcs = true
    request.timeout.ms = 40000
    heartbeat.interval.ms = 3000
    auto.commit.interval.ms = 5000
    receive.buffer.bytes = 65536
    ssl.truststore.type = JKS
    ssl.truststore.location = null
    ssl.keystore.password = null
    fetch.min.bytes = 1
    send.buffer.bytes = 131072
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    group.id = 
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    ssl.trustmanager.algorithm = PKIX
    ssl.key.password = null
    fetch.max.wait.ms = 500
    sasl.kerberos.min.time.before.relogin = 60000
    connections.max.idle.ms = 540000
    session.timeout.ms = 30000
    metrics.num.samples = 2
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    ssl.protocol = TLS
    ssl.provider = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.keystore.location = null
    ssl.cipher.suites = null
    security.protocol = PLAINTEXT
    ssl.keymanager.algorithm = SunX509
    metrics.sample.window.ms = 30000
    auto.offset.reset = latest

Producer configuration printed to the console:

2018-12-11 09:56:52,439 [-kafka-listener-1] INFO  [AbstractConfig::logAll] - ProducerConfig values: 
    metric.reporters = []
    metadata.max.age.ms = 300000
    reconnect.backoff.ms = 50
    sasl.kerberos.ticket.renew.window.factor = 0.8
    bootstrap.servers = [localhost:9092]
    ssl.keystore.type = JKS
    sasl.mechanism = GSSAPI
    max.block.ms = 60000
    interceptor.classes = null
    ssl.truststore.password = null
    client.id = producer-5
    ssl.endpoint.identification.algorithm = null
    request.timeout.ms = 30000
    acks = 1
    receive.buffer.bytes = 32768
    ssl.truststore.type = JKS
    retries = 0
    ssl.truststore.location = null
    ssl.keystore.password = null
    send.buffer.bytes = 131072
    compression.type = none
    metadata.fetch.timeout.ms = 60000
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    buffer.memory = 33554432
    timeout.ms = 30000
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    ssl.trustmanager.algorithm = PKIX
    block.on.buffer.full = false
    ssl.key.password = null
    sasl.kerberos.min.time.before.relogin = 60000
    connections.max.idle.ms = 540000
    max.in.flight.requests.per.connection = 5
    metrics.num.samples = 2
    ssl.protocol = TLS
    ssl.provider = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    batch.size = 16384
    ssl.keystore.location = null
    ssl.cipher.suites = null
    security.protocol = PLAINTEXT
    max.request.size = 1048576
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    ssl.keymanager.algorithm = SunX509
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    linger.ms = 0

Answer 1, by 8wigbo56:

The partitions are distributed across the container threads.
If the container concurrency is 10 and you have 20 partitions, each consumer (thread) is typically assigned 2 partitions.
This guarantees delivery order within each partition.
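
A hedged sketch of the practical implication, assuming the work stays on the listener thread; the executor hand-off shown in the comment is a hypothetical anti-pattern that would break per-user ordering:

    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.annotation.StreamListener;
    import org.springframework.cloud.stream.messaging.Sink;

    @EnableBinding(Sink.class)
    public class InOrderConsumer {

        @StreamListener(Sink.INPUT)
        public void handle(byte[] payload) {
            // executor.submit(() -> process(payload)); // hypothetical hand-off:
            //                                          // would break ordering
            process(payload); // synchronous: the next message from this
                              // partition starts only after this one completes
        }

        private void process(byte[] payload) {
            // business logic (hypothetical placeholder)
        }
    }

In other words, with concurrency=10 each listener thread consumes its assigned partitions strictly sequentially, so per-user order holds as long as the handler does not dispatch work to its own thread pool.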
