Exception in KafkaSpout

mwg9r5ms asked on 2021-06-24 in Storm

I am getting the following exception in my Storm topology:

java.lang.NoSuchMethodError: org.apache.kafka.common.network.NetworkSend.<init>(Ljava/lang/String;[Ljava/nio/ByteBuffer;)V
    at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:41) ~[stormjar.jar:?]
    at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:44) ~[stormjar.jar:?]
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:112) ~[stormjar.jar:?]
    at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98) ~[stormjar.jar:?]
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) ~[stormjar.jar:?]
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) ~[stormjar.jar:?]
    at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79) ~[stormjar.jar:?]
    at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:81) ~[stormjar.jar:?]
    at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:71) ~[stormjar.jar:?]
    at org.apache.storm.kafka.PartitionManager.<init>(PartitionManager.java:135) ~[stormjar.jar:?]
    at org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:110) ~[stormjar.jar:?]
    at org.apache.storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:71) ~[stormjar.jar:?]
    at org.apache.storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135) ~[stormjar.jar:?]
    at org.apache.storm.daemon.executor$fn__10727$fn__10742$fn__10773.invoke(executor.clj:654) ~[storm-core-1.2.2.jar:1.2.2]
    at org.apache.storm.util$async_loop$fn__553.invoke(util.clj:484) [storm-core-1.2.2.jar:1.2.2]
    at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]

POM configuration:

<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <!-- <version>0.10.0</version> -->
    <version>1.2.2</version>
    <scope>provided</scope>
    <exclusions>
        <exclusion>
            <artifactId>log4j-core</artifactId>
            <groupId>org.apache.logging.log4j</groupId>
        </exclusion>
        <exclusion>
            <artifactId>log4j-api</artifactId>
            <groupId>org.apache.logging.log4j</groupId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka</artifactId>
    <!-- <version>0.10.0</version> -->
    <version>1.2.2</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>1.0.1</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
    </exclusions>
</dependency>

I am using the deprecated storm-kafka library. If that is the cause of the above exception, please let me know how to create a Kafka spout using the storm-kafka-client library and pass a custom scheme to it.
Thanks.


cclgggtu1#

Can you try replacing the kafka_2.11 artifact with the org.apache.kafka:kafka-clients dependency?
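A sketch of that dependency change, on the assumption that you also migrate from storm-kafka to storm-kafka-client (the 1.2.2 version is chosen to match storm-core 1.2.2; pick the kafka-clients version that matches your brokers):

```xml
<!-- storm-kafka-client replaces the deprecated storm-kafka spout. -->
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka-client</artifactId>
    <version>1.2.2</version>
</dependency>
<!-- kafka-clients replaces the Scala kafka_2.11 artifact, whose old
     SimpleConsumer API is what triggers the NoSuchMethodError above. -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.0.1</version>
</dependency>
```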
There is documentation on how to use storm-kafka-client on the Storm site at https://storm.apache.org/releases/2.0.0-SNAPSHOT/storm-kafka-client.html, and an example at https://github.com/apache/storm/blob/master/examples/storm-kafka-client-examples/src/main/java/org/apache/storm/kafka/spout/KafkaSpoutTopologyMainNamedTopics.java
In particular, what you want is a RecordTranslator.

ByTopicRecordTranslator<String, String> trans = new ByTopicRecordTranslator<>(
    (r) -> new Values(r.topic(), r.partition(), r.offset(), r.key(), r.value()),
    new Fields("topic", "partition", "offset", "key", "value"), TOPIC_0_1_STREAM);
trans.forTopic(TOPIC_2,
    (r) -> new Values(r.topic(), r.partition(), r.offset(), r.key(), r.value()),
    new Fields("topic", "partition", "offset", "key", "value"), TOPIC_2_STREAM);
return KafkaSpoutConfig.builder(bootstrapServers, new String[]{TOPIC_0, TOPIC_1, TOPIC_2})
    .setProp(ConsumerConfig.GROUP_ID_CONFIG, "kafkaSpoutTestGroup")
    .setRetry(getRetryService())
    .setRecordTranslator(trans)
    .setOffsetCommitPeriodMs(10_000)
    .setFirstPollOffsetStrategy(EARLIEST)
    .setMaxUncommittedOffsets(250)
    .build();

As an example, this will emit the topic, partition, offset, key and value of each record in the listed fields, and will emit tuples from TOPIC_2 to a different stream than the other subscribed topics. If you don't need different schemes for different topics, you can use a SimpleRecordTranslator instead.
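The core of a record translator is just a function from a consumer record to the list of tuple values the spout emits. Below is a dependency-free sketch of that idea, runnable without Storm on the classpath; the `Record` class and `translate` method are illustrative stand-ins, not the real Storm or Kafka API:

```java
import java.util.Arrays;
import java.util.List;

public class TranslatorSketch {

    /** Stand-in for Kafka's ConsumerRecord, to keep the sketch self-contained. */
    static class Record {
        final String topic;
        final int partition;
        final long offset;
        final String key;
        final String value;

        Record(String topic, int partition, long offset, String key, String value) {
            this.topic = topic;
            this.partition = partition;
            this.offset = offset;
            this.key = key;
            this.value = value;
        }
    }

    /** Maps one consumer record to the tuple values the spout would emit --
     *  the same shape as the lambdas passed to ByTopicRecordTranslator above. */
    static List<Object> translate(Record r) {
        return Arrays.asList(r.topic, r.partition, r.offset, r.key, r.value);
    }

    public static void main(String[] args) {
        List<Object> values = translate(new Record("TOPIC_0", 3, 42L, "k", "v"));
        System.out.println(values); // prints [TOPIC_0, 3, 42, k, v]
    }
}
```

In storm-kafka-client the same lambda-plus-Fields pair is what you hand to SimpleRecordTranslator or ByTopicRecordTranslator; the Fields list must line up one-to-one with the values the function returns.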
