Mongo and Spark partitioning failure

nr7wwzry · Posted 2021-05-24 in Spark

I created a Spring Boot application that fetches data from a Mongo collection and computes some statistics. When I originally built the application, I used Spring Boot 2.1.6.RELEASE and Spring Cloud Greenwich.SR2 for the cloud utilities I needed (Config Server, Eureka, and so on). I am currently trying to upgrade to Spring Boot 2.3.0.RELEASE and Hoxton.SR3, and when I run the application I get an error about the Mongo partitioner (I use the default partitioner, meaning I do not explicitly set one in the Spark conf).
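For reference, the dependency setup looks roughly like the following. This is a sketch reconstructed from the versions mentioned above and the jar names in the stack trace; the exact pom layout is an assumption, not the asker's actual file:

```xml
<!-- Sketch only: layout reconstructed from the versions in the question
     and the jar names in the stack trace below. -->
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.3.0.RELEASE</version>
</parent>

<properties>
    <!-- Upgraded from Greenwich.SR2 -->
    <spring-cloud.version>Hoxton.SR3</spring-cloud.version>
</properties>

<dependencies>
    <!-- Spark core, Scala 2.11 build (spark-core_2.11-2.4.7 in the trace) -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.4.7</version>
    </dependency>
    <!-- Mongo Spark connector (mongo-spark-connector_2.11-2.4.2 in the trace).
         Note: this connector depends on the 3.x mongo-java-driver
         (mongo-java-driver-3.12.5 in the trace), while Spring Boot 2.3.x
         manages the 4.x MongoDB driver, so the two may conflict on the
         classpath. -->
    <dependency>
        <groupId>org.mongodb.spark</groupId>
        <artifactId>mongo-spark-connector_2.11</artifactId>
        <version>2.4.2</version>
    </dependency>
</dependencies>
```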
Here is the error trace:

WARNING: Partitioning failed.
-----------------------------

Partitioning using the 'DefaultMongoPartitioner$' failed.

Please check the stacktrace to determine the cause of the failure or check the Partitioner API documentation.
Note: Not all partitioners are suitable for all toplogies and not all partitioners support views.%n

-----------------------------

2020-10-02 17:07:55.518 ERROR 23119 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Stopping container due to an Error

java.lang.NoSuchMethodError: com.mongodb.ConnectionString.getThreadsAllowedToBlockForConnectionMultiplier()Ljava/lang/Integer;
    at com.mongodb.MongoClientURI.getOptions(MongoClientURI.java:372) ~[mongo-java-driver-3.12.5.jar:na]
    at com.mongodb.Mongo.createCluster(Mongo.java:733) ~[mongo-java-driver-3.12.5.jar:na]
    at com.mongodb.Mongo.<init>(Mongo.java:315) ~[mongo-java-driver-3.12.5.jar:na]
    at com.mongodb.MongoClient.<init>(MongoClient.java:342) ~[mongo-java-driver-3.12.5.jar:na]
    at com.mongodb.spark.connection.DefaultMongoClientFactory.create(DefaultMongoClientFactory.scala:49) ~[mongo-spark-connector_2.11-2.4.2.jar:2.4.2]
    at com.mongodb.spark.connection.MongoClientCache.acquire(MongoClientCache.scala:55) ~[mongo-spark-connector_2.11-2.4.2.jar:2.4.2]
    at com.mongodb.spark.MongoConnector.acquireClient(MongoConnector.scala:242) ~[mongo-spark-connector_2.11-2.4.2.jar:2.4.2]
    at com.mongodb.spark.MongoConnector.withMongoClientDo(MongoConnector.scala:155) ~[mongo-spark-connector_2.11-2.4.2.jar:2.4.2]
    at com.mongodb.spark.MongoConnector.withDatabaseDo(MongoConnector.scala:174) ~[mongo-spark-connector_2.11-2.4.2.jar:2.4.2]
    at com.mongodb.spark.MongoConnector.hasSampleAggregateOperator(MongoConnector.scala:237) ~[mongo-spark-connector_2.11-2.4.2.jar:2.4.2]
    at com.mongodb.spark.rdd.partitioner.DefaultMongoPartitioner.partitions(DefaultMongoPartitioner.scala:33) ~[mongo-spark-connector_2.11-2.4.2.jar:2.4.2]
    at com.mongodb.spark.rdd.MongoRDD.getPartitions(MongoRDD.scala:135) ~[mongo-spark-connector_2.11-2.4.2.jar:2.4.2]
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273) ~[spark-core_2.11-2.4.7.jar:2.4.7]
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269) ~[spark-core_2.11-2.4.7.jar:2.4.7]
    at scala.Option.getOrElse(Option.scala:121) ~[scala-library-2.11.12.jar:na]
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:269) ~[spark-core_2.11-2.4.7.jar:2.4.7]
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126) ~[spark-core_2.11-2.4.7.jar:2.4.7]
    at org.apache.spark.rdd.RDD.count(RDD.scala:1213) ~[spark-core_2.11-2.4.7.jar:2.4.7]
    at org.apache.spark.api.java.JavaRDDLike$class.count(JavaRDDLike.scala:455) ~[spark-core_2.11-2.4.7.jar:2.4.7]
    at org.apache.spark.api.java.AbstractJavaRDDLike.count(JavaRDDLike.scala:45) ~[spark-core_2.11-2.4.7.jar:2.4.7]
    at com.deathstar.Datahouse.service.SparkService.printStatistics(SparkService.java:55) ~[classes/:na]
    at com.deathstar.Datahouse.service.SparkService.lambda$startSpark$0(SparkService.java:45) ~[classes/:na]
    at java.util.Arrays$ArrayList.forEach(Arrays.java:3880) ~[na:1.8.0_242]
    at com.deathstar.Datahouse.service.SparkService.startSpark(SparkService.java:45) ~[classes/:na]
    at com.deathstar.Datahouse.service.KafkaConsumer.listenHistory(KafkaConsumer.java:28) ~[classes/:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_242]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_242]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_242]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_242]
    at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:171) ~[spring-messaging-5.2.6.RELEASE.jar:5.2.6.RELEASE]
    at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:120) ~[spring-messaging-5.2.6.RELEASE.jar:5.2.6.RELEASE]
    at org.springframework.kafka.listener.adapter.HandlerAdapter.invoke(HandlerAdapter.java:48) ~[spring-kafka-2.5.0.RELEASE.jar:2.5.0.RELEASE]
    at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:334) ~[spring-kafka-2.5.0.RELEASE.jar:2.5.0.RELEASE]
    at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:86) ~[spring-kafka-2.5.0.RELEASE.jar:2.5.0.RELEASE]
    at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:51) ~[spring-kafka-2.5.0.RELEASE.jar:2.5.0.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:1834) ~[spring-kafka-2.5.0.RELEASE.jar:2.5.0.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:1817) ~[spring-kafka-2.5.0.RELEASE.jar:2.5.0.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1760) ~[spring-kafka-2.5.0.RELEASE.jar:2.5.0.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:1701) ~[spring-kafka-2.5.0.RELEASE.jar:2.5.0.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1599) ~[spring-kafka-2.5.0.RELEASE.jar:2.5.0.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1330) ~[spring-kafka-2.5.0.RELEASE.jar:2.5.0.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1062) ~[spring-kafka-2.5.0.RELEASE.jar:2.5.0.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:970) ~[spring-kafka-2.5.0.RELEASE.jar:2.5.0.RELEASE]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_242]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_242]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_242]

Any help would be greatly appreciated! Thanks in advance!

No answers yet!

