Spring Integration Kafka: configure the consumer to receive messages from a specific partition

u5rb5r59 · posted 2021-06-08 in Kafka

I have started using Spring Integration Kafka in my project, and I can produce and consume messages from Kafka. Now I want to produce messages to a specific partition and consume messages from a specific partition.
For example, I want to produce messages to partition 3, and the consumer should receive messages only from partition 3.
My topic currently has 8 partitions, and I can already produce messages to a specific partition, but I have not found a way to configure the consumer so it receives messages only from a specific partition.
So any suggestion on how to configure the consumer with Spring Integration Kafka, or on what I need to change in my KafkaConsumer.java class so that it receives messages only from a specific partition, would be appreciated.
Thanks.
Here is my code:
kafka-producer-context.xml

<int:publish-subscribe-channel id="inputToKafka" />

<int-kafka:outbound-channel-adapter
    id="kafkaOutboundChannelAdapter" kafka-producer-context-ref="kafkaProducerContext"
    auto-startup="true" order="1" channel="inputToKafka" />
<int-kafka:producer-context id="kafkaProducerContext"
    producer-properties="producerProps">
    <int-kafka:producer-configurations>
        <int-kafka:producer-configuration 
            broker-list="127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094"
            async="true" topic="testTopic"
            key-class-type="java.lang.String" 
            key-encoder="encoder"
            value-class-type="java.lang.String" 
            value-encoder="encoder"
            partitioner="partitioner"
            compression-codec="default" />
    </int-kafka:producer-configurations>
</int-kafka:producer-context>

<util:properties id="producerProps">
    <prop key="queue.buffering.max.ms">500</prop>
    <prop key="topic.metadata.refresh.interval.ms">3600000</prop>
    <prop key="queue.buffering.max.messages">10000</prop>
    <prop key="retry.backoff.ms">100</prop>
    <prop key="message.send.max.retries">2</prop>
    <prop key="send.buffer.bytes">5242880</prop>
    <prop key="socket.request.max.bytes">104857600</prop>
    <prop key="socket.receive.buffer.bytes">1048576</prop>
    <prop key="socket.send.buffer.bytes">1048576</prop>
    <prop key="request.required.acks">1</prop>
</util:properties>

<bean id="encoder"
    class="org.springframework.integration.kafka.serializer.common.StringEncoder" />

<bean id="partitioner" class="org.springframework.integration.kafka.support.DefaultPartitioner"/>

<task:executor id="taskExecutor" pool-size="5"
    keep-alive="120" queue-capacity="500" />

KafkaProducer.java

public class KafkaProducer {

    private static final Logger logger = LoggerFactory
            .getLogger(KafkaProducer.class);

    private static final String TOPIC = "testTopic";

    @Autowired
    private MessageChannel inputToKafka;

    public void sendMessage(String message) {
        try {
            inputToKafka.send(MessageBuilder.withPayload(message)
                    .setHeader(KafkaHeaders.TOPIC, TOPIC)
                    // route this message to partition 3 of the topic
                    .setHeader(KafkaHeaders.PARTITION_ID, 3).build());
        } catch (Exception e) {
            logger.error(String.format(
                    "Failed to send [ %s ] to topic %s ", message, TOPIC), e);
        }
    }
}
kafka-consumer-context.xml

<int:channel id="inputFromKafka">
    <int:dispatcher task-executor="kafkaMessageExecutor" />
</int:channel>

<int-kafka:zookeeper-connect id="zookeeperConnect"
    zk-connect="127.0.0.1:2181" zk-connection-timeout="6000"
    zk-session-timeout="6000" zk-sync-time="2000" />

<int-kafka:inbound-channel-adapter
    id="kafkaInboundChannelAdapter" kafka-consumer-context-ref="consumerContext"
    auto-startup="true" channel="inputFromKafka">
    <int:poller fixed-delay="10" time-unit="MILLISECONDS"
        max-messages-per-poll="5" />
</int-kafka:inbound-channel-adapter>

<bean id="consumerProperties"
    class="org.springframework.beans.factory.config.PropertiesFactoryBean">
    <property name="properties">
        <props>
            <prop key="auto.offset.reset">smallest</prop>
            <prop key="socket.receive.buffer.bytes">1048576</prop>
            <prop key="fetch.message.max.bytes">5242880</prop>
            <prop key="auto.commit.interval.ms">1000</prop>
        </props>
    </property>
</bean>

<int-kafka:consumer-context id="consumerContext"
    consumer-timeout="1000" zookeeper-connect="zookeeperConnect"
    consumer-properties="consumerProperties">
    <int-kafka:consumer-configurations>
        <int-kafka:consumer-configuration
            group-id="defaultGrp" max-messages="20000">
            <int-kafka:topic id="testTopic" streams="3" />
        </int-kafka:consumer-configuration>
    </int-kafka:consumer-configurations>
</int-kafka:consumer-context>

<task:executor id="kafkaMessageExecutor" pool-size="0-10"
    keep-alive="120" queue-capacity="500" />

<int:outbound-channel-adapter channel="inputFromKafka"
    ref="kafkaConsumer" method="processMessage" />

KafkaConsumer.java

public class KafkaConsumer {

    private static final Logger log = LoggerFactory
            .getLogger(KafkaConsumer.class);

    @Autowired
    KafkaService kafkaService;

    public void processMessage(Map<String, Map<Integer, List<byte[]>>> msgs) {
        // payload is a map of topic -> (partition -> list of raw messages)
        for (Map.Entry<String, Map<Integer, List<byte[]>>> entry : msgs.entrySet()) {
            log.debug("Topic: " + entry.getKey());
            Map<Integer, List<byte[]>> messages = entry.getValue();
            log.debug("\n****Partitions:\n");
            for (Integer partition : messages.keySet()) {
                log.debug("p: " + partition);
            }
            log.debug("\n**************\n");
            for (List<byte[]> list : messages.values()) {
                for (byte[] bytes : list) {
                    String message = new String(bytes);
                    log.debug("Message: " + message);
                    try {
                        kafkaService.receiveMessage(message);
                    } catch (Exception e) {
                        log.error(String.format("Failed to process message %s",
                                message), e);
                    }
                }
            }
        }
    }
}

So here is my problem: when I produce a message to partition 3 (or to any other partition), KafkaConsumer always receives it. What I want is for KafkaConsumer to receive messages only from partition 3, and not from the other partitions.
Thanks again.

s4n0splo 1#

You need to use the message-driven channel adapter.
As a variant, KafkaMessageListenerContainer has a constructor that accepts an org.springframework.integration.kafka.core.Partition array argument to specify the topic/partition pairs to consume from.
You need to wire the listener container with that constructor and reference it from the adapter's listener-container attribute.
We will update the README with an example.
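
Below is a minimal sketch of what that wiring might look like next to the consumer context above. It assumes the spring-integration-kafka 1.x API referenced in this answer; the class names, constructor arguments, and the listener-container attribute may differ in your version, so verify them against your dependency.

<!-- Sketch only: consume from partition 3 of testTopic via a listener container. -->
<bean id="kafkaConnectionFactory"
    class="org.springframework.integration.kafka.core.DefaultConnectionFactory">
    <constructor-arg>
        <!-- reuses the zookeeperConnect bean defined above -->
        <bean class="org.springframework.integration.kafka.core.ZookeeperConfiguration">
            <constructor-arg ref="zookeeperConnect" />
        </bean>
    </constructor-arg>
</bean>

<bean id="messageListenerContainer"
    class="org.springframework.integration.kafka.listener.KafkaMessageListenerContainer">
    <constructor-arg ref="kafkaConnectionFactory" />
    <constructor-arg>
        <array>
            <!-- Partition(topic, id): restricts consumption to partition 3 only -->
            <bean class="org.springframework.integration.kafka.core.Partition">
                <constructor-arg value="testTopic" />
                <constructor-arg value="3" />
            </bean>
        </array>
    </constructor-arg>
</bean>

<int-kafka:message-driven-channel-adapter
    id="kafkaMessageDrivenAdapter"
    channel="inputFromKafka"
    listener-container="messageListenerContainer"
    auto-startup="true" />

With this in place, the message-driven adapter should deliver only records from the partitions listed in the container, so the poller-based inbound-channel-adapter and consumer-context shown above would no longer be needed for this consumer. Note that the message-driven adapter emits one Spring Integration message per Kafka record rather than the batched topic/partition map produced by the polling adapter, so the processMessage handler would need to be adjusted accordingly.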
