How do I write a Kafka client in Java that consumes messages from multiple brokers?

j13ufse2  posted on 2021-06-07 in Kafka

I am looking for a Java client (Kafka consumer) that consumes messages from multiple brokers. Any suggestions are welcome.
Below is the code that publishes messages to multiple brokers using a simple partitioner.
The topic was created with replication factor "2" and "3" partitions.

public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster)
{
    List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
    int numPartitions = partitions.size();
    logger.info("Number of Partitions " + numPartitions);
    if (keyBytes == null) 
    {
        int nextValue = counter.getAndIncrement();
        List<PartitionInfo> availablePartitions = cluster.availablePartitionsForTopic(topic);
        if (availablePartitions.size() > 0) 
        {
            int part = toPositive(nextValue) % availablePartitions.size();
            int selectedPartition = availablePartitions.get(part).partition();
            logger.info("Selected partition is " + selectedPartition);
            return selectedPartition;
        } 
        else 
        {
            // no partitions are available, give a non-available partition
            return toPositive(nextValue) % numPartitions;
        }
    } 
    else 
    {
        // hash the keyBytes to choose a partition
        return toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

}
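As a side note on the partitioner above, Kafka's `Utils.toPositive` simply clears the sign bit, so `toPositive(hash) % numPartitions` always yields a valid partition index even when the hash is negative (where `Math.abs` would overflow for `Integer.MIN_VALUE`). A minimal self-contained sketch of that arithmetic, with a plain `int` standing in for the `Utils.murmur2(keyBytes)` result:

```java
public class PartitionMathDemo {
    // Same trick as org.apache.kafka.common.utils.Utils.toPositive:
    // clear the sign bit instead of calling Math.abs,
    // which would overflow for Integer.MIN_VALUE.
    static int toPositive(int number) {
        return number & 0x7fffffff;
    }

    // "hash" stands in for Utils.murmur2(keyBytes) in the real partitioner.
    static int choosePartition(int hash, int numPartitions) {
        return toPositive(hash) % numPartitions;
    }

    public static void main(String[] args) {
        int numPartitions = 3;
        // A negative hash still maps to a valid partition in 0..2.
        System.out.println(choosePartition(-12345, numPartitions));
        // Math.abs(Integer.MIN_VALUE) is still negative; the mask is safe.
        System.out.println(toPositive(Integer.MIN_VALUE)); // 0
    }
}
```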

public void publishMessage(String message, String topic)
{
    Producer<String, String> producer = null;
    try
    {
        producer = new KafkaProducer<>(producerConfigs());
        // use the topic parameter; previously this.topic was used and the parameter was ignored
        logger.info("Topic to publish the message -- " + topic);
        for (int i = 0; i < 10; i++)
        {
            producer.send(new ProducerRecord<String, String>(topic, message));
            logger.info("Message published successfully");
        }
    }
    catch (Exception e)
    {
        logger.error("Exception occurred " + e.getMessage());
    }
    finally
    {
        if (producer != null) // guard: construction may have thrown before assignment
        {
            producer.close();
        }
    }
}

public Map<String, Object> producerConfigs() 
{
    loadPropertyFile();
    Map<String, Object> propsMap = new HashMap<>();
    propsMap.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
    propsMap.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    propsMap.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    propsMap.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, SimplePartitioner.class);
    propsMap.put(ProducerConfig.ACKS_CONFIG, "1");
    return propsMap;
}

public Map<String, Object> consumerConfigs() {
    Map<String, Object> propsMap = new HashMap<>();
    logger.info("properties.getBootstrap() " + properties.getBootstrap());
    propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, properties.getBootstrap());
    propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, properties.getAutocommit());
    propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, properties.getTimeout());
    propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, properties.getGroupid());
    propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, properties.getAutooffset());
    return propsMap;
}

@KafkaListener(id = "ID1", topics = "${config.topic}", group = "${config.groupid}")
public void listen(ConsumerRecord<?, ?> record) 
{
    logger.info("Message Consumed " + record);
    logger.info("Partition From which Record is Received " + record.partition());
    this.message = record.value().toString();   
}

bootstrap.servers=localhost:9092,localhost:9093,localhost:9094

e0bqpujr


The number of Kafka broker nodes in the cluster has nothing to do with the consumer logic. The nodes in the cluster are only relevant for fault tolerance and for the bootstrap process. Placing messages into different topic partitions based on some custom logic does not affect the consumer logic either. Even if there is only a single consumer, that consumer will consume messages from all partitions of the topics it subscribes to. I would suggest you test your code against a Kafka cluster with a single broker node...

ljo96ir5


If you use the regular Java consumer, it will automatically read from multiple brokers. You do not need to write any special code. Simply subscribe to the topic(s) you want to consume, and the consumer will automatically connect to the appropriate brokers. You only provide a single "entry point" broker -- the client figures out all the other brokers of the cluster automatically.
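To illustrate this, here is a minimal plain-Java consumer sketch (the topic name `my-topic` and group id `demo-group` are placeholders). It lists only one broker in `bootstrap.servers`, yet the client discovers the rest of the cluster from the metadata it fetches, and the poll loop receives records from all three partitions. It assumes `kafka-clients` on the classpath and a running cluster, so it is not runnable standalone:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MultiBrokerConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // A single entry point is enough; the client learns about the
        // brokers on 9093 and 9094 from the cluster metadata it fetches.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Records from all partitions (0..2) arrive in this one loop,
                    // regardless of which broker leads each partition.
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```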
