Deploying a Storm topology with Spark dependencies

iovurdzv · posted 2021-06-21 in Storm

I am trying to deploy a Storm topology (version 1.0.0) that has a Spark dependency (version 1.6.1). The topology works fine on a local cluster, but fails when it is submitted to a real cluster. I know that Spark and Storm both need log4j-related libs, so if the pom file is modified like this:

<!-- Apache Spark -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>${spark.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>

the following error occurs:

java.lang.NoSuchMethodError: org.apache.log4j.Logger.setLevel(Lorg/apache/log4j/Level;)V
at org.apache.spark.util.AkkaUtils$$anonfun$org$apache$spark$util$AkkaUtils$$doCreateActorSystem$1.apply(AkkaUtils.scala:75) ~[stormjar.jar:?]
at org.apache.spark.util.AkkaUtils$$anonfun$org$apache$spark$util$AkkaUtils$$doCreateActorSystem$1.apply(AkkaUtils.scala:75) ~[stormjar.jar:?]
at scala.Option.map(Option.scala:145) ~[stormjar.jar:?]
at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:75) ~[stormjar.jar:?]
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53) ~[stormjar.jar:?]
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:52) ~[stormjar.jar:?]
at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1988) ~[stormjar.jar:?]
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141) ~[stormjar.jar:?]
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1979) ~[stormjar.jar:?]
at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:55) ~[stormjar.jar:?]
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:266) ~[stormjar.jar:?]
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193) ~[stormjar.jar:?]
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288) ~[stormjar.jar:?]
at org.apache.spark.SparkContext.<init>(SparkContext.scala:457) ~[stormjar.jar:?]
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59) ~[stormjar.jar:?]
at ufrn.imd.engsoft.storm.SentimentAnalyserBolt.prepare(SentimentAnalyserBolt.java:106) ~[stormjar.jar:?]
at org.apache.storm.daemon.executor$fn__8226$fn__8239.invoke(executor.clj:795) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:482) [storm-core-1.0.0.jar:1.0.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_65]
2016-04-22 00:29:06.285 o.a.s.util [ERROR] Halting process: ("Worker died")

If the Spark dependency has no exclusions at all, the following error occurs instead:

java.lang.ExceptionInInitializerError
at org.apache.log4j.Logger.getLogger(Logger.java:39) ~[log4j-over-slf4j-1.6.6.jar:1.6.6]
at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:75) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:52) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1988) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1979) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:55) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:266) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.SparkContext.<init>(SparkContext.scala:457) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at ufrn.imd.engsoft.storm.SentimentAnalyserBolt.prepare(SentimentAnalyserBolt.java:106) ~[stormjar.jar:?]
at org.apache.storm.daemon.executor$fn__8226$fn__8239.invoke(executor.clj:795) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:482) [storm-core-1.0.0.jar:1.0.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_65]
Caused by: java.lang.IllegalStateException: Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path, preempting StackOverflowError. See also http://www.slf4j.org/codes.html#log4jDelegationLoop for more details.
at org.apache.log4j.Log4jLoggerFactory.<clinit>(Log4jLoggerFactory.java:49) ~[log4j-over-slf4j-1.6.6.jar:1.6.6]
... 18 more

In both cases above, the Storm dependency is as follows:

<!-- Apache Storm -->
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>${storm.version}</version>
    <scope>provided</scope>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>

The Storm folder contains the following jars:

- log4j-api-2.1
- log4j-core-2.1
- log4j-over-slf4j-1.6.6
- log4j-slf4j-impl-2.1
- slf4j-api-1.7.7
- slf4j-log4j12

Maybe getting Spark to use the jars above would solve the problem, but I have not found any clue how to do that. Can anyone help me with this issue?
Thanks (:

hivapdat · answer #1

log4j only defines a logging interface, while log4j-over-slf4j and slf4j-log4j12 are concrete implementations of that interface. Since Storm finds both implementations, it does not know which one to use (in the case where you do not exclude anything).
In the "exclude" case, however, you excluded both the interface and the concrete implementation, which is why you get the error about the missing interface method. Try excluding only the implementation, not the interface.
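
To make that concrete, here is a minimal sketch of this suggestion applied to the Spark dependency from the question: only the slf4j-log4j12 binding (a concrete implementation) is excluded, while log4j itself is kept so that classes such as org.apache.log4j.Logger remain available. This is only an illustration of the advice above, not a tested configuration; the exact set of exclusions may differ for your setup.

<!-- Apache Spark: exclude only the implementation (slf4j-log4j12), keep log4j -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>${spark.version}</version>
    <exclusions>
        <!-- drop the slf4j-to-log4j binding that clashes with Storm's log4j-over-slf4j -->
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <!-- note: no exclusion for log4j:log4j here -->
    </exclusions>
</dependency>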
