Committing Kafka offsets from a PySpark Streaming job

2exbekwf · published 2021-06-08 in Kafka

According to the documentation, a (Scala) Spark Streaming application can commit offsets back to Kafka. I want to achieve the same from PySpark,
or at least store the Kafka partition and offset in an external data store (an RDBMS, etc.).
However, the PySpark API for Kafka integration only exposes RDD[(offset, value)] rather than RDD[ConsumerRecord] (as in Scala). Is there a way to get (topic, partition, offset) from the Python RDD? Or from anywhere else?


ajsxfq5m1#

We can handle offsets in several ways. One approach is to store the offset value at a ZooKeeper path each time a batch of data is processed successfully, and to read that value back when the stream is created again. A code snippet is below.

from kazoo.client import KazooClient
from pyspark.streaming.kafka import KafkaUtils, TopicAndPartition

ZOOKEEPER_SERVERS = "127.0.0.1:2181"

def get_zookeeper_instance():
    # Lazily create one KazooClient shared across the driver process.
    if 'KazooSingletonInstance' not in globals():
        globals()['KazooSingletonInstance'] = KazooClient(ZOOKEEPER_SERVERS)
        globals()['KazooSingletonInstance'].start()
    return globals()['KazooSingletonInstance']

def save_offsets(rdd):
    # Persist the until-offset of each processed batch to ZooKeeper.
    zk = get_zookeeper_instance()
    for offset in rdd.offsetRanges():
        path = f"/consumers/{var_topic_src_name}"
        zk.ensure_path(path)
        zk.set(path, str(offset.untilOffset).encode())

# On (re)start, read the last saved offset back from ZooKeeper.
# var_topic_src_name, var_kafka_parms_src, ssc, serializer and handler
# are assumed to be defined elsewhere in the job.
zk = get_zookeeper_instance()
var_offset_path = f'/consumers/{var_topic_src_name}'

try:
    var_offset = int(zk.get(var_offset_path)[0])
except Exception:
    print("The Spark Streaming job started for the first time; offset defaults to zero")
    var_offset = 0

var_partition = 0
topicpartion = TopicAndPartition(var_topic_src_name, var_partition)
fromoffset = {topicpartion: var_offset}
print(fromoffset)

kvs = KafkaUtils.createDirectStream(ssc,
                                    [var_topic_src_name],
                                    var_kafka_parms_src,
                                    valueDecoder=serializer.decode_message,
                                    fromOffsets=fromoffset)
kvs.foreachRDD(handler)
kvs.foreachRDD(save_offsets)
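The save/restore logic above boils down to encoding the until-offset as bytes on write and parsing it back as an integer on restart (with a fallback of zero on the very first run). A minimal stand-alone sketch of that round trip, using a plain dict in place of the ZooKeeper node (the paths and function names here are illustrative, not part of any API):

```python
# Hypothetical in-memory stand-in for the ZooKeeper node, to illustrate
# the encode/decode round trip used by save_offsets above.
store = {}

def save_offset(path, until_offset):
    # Mirrors zk.set(path, str(offset.untilOffset).encode())
    store[path] = str(until_offset).encode()

def load_offset(path):
    # Mirrors int(zk.get(path)[0]), falling back to 0 on the first run
    # when no node exists yet.
    try:
        return int(store[path])
    except KeyError:
        return 0

save_offset("/consumers/my_topic", 42)
print(load_offset("/consumers/my_topic"))   # restored offset: 42
print(load_offset("/consumers/other_topic"))  # first run: 0
```

The key detail is that ZooKeeper stores raw bytes, so the offset must be serialized with `str(...).encode()` and deserialized with `int(...)`; `zk.get()` returns a `(data, stat)` tuple, which is why the real code indexes `[0]`.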

Regards,
Karthikeyan Rasipalayam Durairaj
