KafkaSpout throws NoClassDefFoundError for log4j

Asked by nbnkbykc on 2021-06-08 in Kafka

For some reason, when I try to run my topology on a Storm cluster, I get the following error:

java.lang.NoClassDefFoundError: Could not initialize class org.apache.log4j.Log4jLoggerFactory
  at org.apache.log4j.Logger.getLogger(Logger.java:39)
  at kafka.utils.Logging$class.logger(Logging.scala:24)
  at kafka.consumer.SimpleConsumer.logger$lzycompute(SimpleConsumer.scala:30)
  at kafka.consumer.SimpleConsumer.logger(SimpleConsumer.scala:30)
  at kafka.utils.Logging$class.info(Logging.scala:67)
  at kafka.consumer.SimpleConsumer.info(SimpleConsumer.scala:30)
  at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:75)
  at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:69)
  at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:128)
  at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79)
  at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:77)
  at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:67)
  at storm.kafka.PartitionManager.<init>(PartitionManager.java:83)
  at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98)
  at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69)
  at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135)
  at backtype.storm.daemon.executor$fn__3373$fn__3388$fn__3417.invoke(executor.clj:565)
  at backtype.storm.util$async_loop$fn__464.invoke(util.clj:463)
  at clojure.lang.AFn.run(AFn.java:24)
  at java.lang.Thread.run(Thread.java:745)

What is the problem, and how can I fix it?
These are the dependencies I am including:

<dependencies>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-core</artifactId>
        <version>0.9.5</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.8.2-beta</version>
    </dependency>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-kafka</artifactId>
        <version>0.9.5</version>
    </dependency>
    <dependency>
        <groupId>org.codehaus.jackson</groupId>
        <artifactId>jackson-mapper-asl</artifactId>
        <version>1.9.11</version>
    </dependency>
    <dependency>
        <groupId>org.java-websocket</groupId>
        <artifactId>Java-WebSocket</artifactId>
        <version>1.3.0</version>
    </dependency>
    <dependency>
        <groupId>org.twitter4j</groupId>
        <artifactId>twitter4j-core</artifactId>
        <version>[3.0,)</version>
    </dependency>
    <dependency>
        <groupId>org.twitter4j</groupId>
        <artifactId>twitter4j-stream</artifactId>
        <version>[3.0,)</version>
    </dependency>
</dependencies>

dl5txlt9 1#

I ran into the same problem when I included Kafka the same way you did. Based on the exception that was thrown:

SLF4J: Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path, preempting StackOverflowError.
SLF4J: See also http://www.slf4j.org/codes.html#log4jDelegationLoop for more details.

I included Kafka in a different way. You can include Kafka like this:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>${kafka.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
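
With these exclusions only one SLF4J binding should remain on the classpath. A quick way to double-check is to run mvn dependency:tree on the topology project and confirm that slf4j-log4j12 and log4j no longer appear under the kafka_2.10 dependency.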

vptzau2j 2#

The problem is that you should include Kafka in the following way:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.1.1</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>

The reason is the following:
Note that the zookeeper and log4j dependencies are excluded to prevent version conflicts with Storm's own dependencies. Kafka's calls to the log4j API are still resolved at runtime, because Storm's classpath provides log4j-over-slf4j (see the release notes quoted in the next answer), which routes them to SLF4J.


70gysomp 3#

Are you using the "correct" logging framework?
From https://storm.apache.org/2013/12/08/storm090-released.html:
Logging changes
Another important change in 0.9.0 has to do with logging. Storm has essentially switched over to the SLF4J API (backed by a Logback logger implementation). Some of Storm's dependencies rely on the log4j API, so Storm currently depends on log4j-over-slf4j.
These changes have implications for existing topologies and topology components that use the log4j API.
In general, Storm topologies and topology components should use the SLF4J API for logging where possible.
If you are not using the same logging framework as Storm, you will need to include the logging library you use in the jar file that is deployed along with your topology code.
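
For reference, here is a minimal sketch of logging through the SLF4J API from a topology component, following the recommendation above. The bolt itself is just an illustrative placeholder; only the org.slf4j.Logger / LoggerFactory usage is the point:

import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

public class LoggingBolt extends BaseRichBolt {
    // Log through the SLF4J API; on a Storm 0.9.x worker this is backed by Logback.
    private static final Logger LOG = LoggerFactory.getLogger(LoggingBolt.class);

    private OutputCollector collector;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        LOG.info("LoggingBolt prepared");
    }

    @Override
    public void execute(Tuple input) {
        LOG.debug("received tuple: {}", input);
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // This illustrative bolt emits nothing.
    }
}

Written this way, the component does not use the log4j API at all, so it does not matter which log4j bridge ends up on the classpath.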
