Problem with MesosSchedulerDriver when using PySpark

j8yoct9x · posted 2021-06-26 in Mesos

I'm having trouble getting PySpark to connect to Mesos. I'm trying to run Jupyter inside DC/OS, and I want Jupyter to connect to Spark so that it can use Mesos resources.
Running in local mode works fine, but as soon as I try to use Spark on Mesos I get an error. The error appears almost immediately, and it makes no difference whether the Spark dispatcher is running or not.
I'm using Spark 2.3.2 (also tested with 2.4.0, same problem). The Mesos files are present inside the Spark distribution, and the path to the libmesos library has also been provided. What am I doing wrong?
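For reference, the libmesos path is normally handed to Spark through the MESOS_NATIVE_JAVA_LIBRARY environment variable before the Py4J gateway starts the JVM; a minimal sketch of that kind of setup (the paths below are placeholders, not necessarily the ones on my nodes) looks like this:

import os

# Placeholder locations -- adjust to wherever libmesos.so and Spark actually live
os.environ['MESOS_NATIVE_JAVA_LIBRARY'] = '/usr/lib/libmesos.so'
os.environ['SPARK_HOME'] = os.path.expanduser('~/spark-2.3.2-bin-hadoop2.6')

import pyspark  # import only after the environment variables are in place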
Here is part of the code:

import pyspark
conf = pyspark.SparkConf()

# Enable logging

conf.set('spark.eventLog.enabled', True)
conf.set('spark.eventLog.dir', '/tmp/')

# Use all cores on all machines

conf.set('spark.num.executors', 1)
conf.set('spark.executor.memory', '4g')
conf.set('spark.executor.cores', 1)

# Set the parent

# conf.set('spark.master', 'local[8]')

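# In DC/OS, leader.mesos is the Mesos-DNS name that resolves to the current Mesos master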
conf.set('spark.master', 'mesos://leader.mesos:5050')
conf.getAll()

sc = pyspark.SparkContext(appName="ETL processor", conf = conf)
sc
from timeit import default_timer as timer

# Parallelize making all labels in Spark

start = timer()
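# N_PARTITIONS and example_method are defined in earlier notebook cells (not shown here)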
sc.parallelize(list(range(0,N_PARTITIONS)), numSlices=3).map(lambda x: example_method(x)).collect()
sc.stop()
end = timer()

Here is the full error message:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-117-56fddbf03013> in <module>
     21 conf.getAll()
     22 
---> 23 sc = pyspark.SparkContext(appName="ETL processor", conf = conf)
     24 sc
     25 from timeit import default_timer as timer

~/spark-2.3.2-bin-hadoop2.6/python/pyspark/context.py in __init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls)
    116         try:
    117             self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,
--> 118                           conf, jsc, profiler_cls)
    119         except:
    120             # If an error occurs, clean up in order to allow future SparkContext creation:

~/spark-2.3.2-bin-hadoop2.6/python/pyspark/context.py in _do_init(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, jsc, profiler_cls)
    178 
    179         # Create the Java SparkContext through Py4J
--> 180         self._jsc = jsc or self._initialize_context(self._conf._jconf)
    181         # Reset the SparkConf to the one actually used by the SparkContext in JVM.
    182         self._conf = SparkConf(_jconf=self._jsc.sc().conf())

~/spark-2.3.2-bin-hadoop2.6/python/pyspark/context.py in _initialize_context(self, jconf)
    288         Initialize SparkContext in function to allow subclass specific initialization
    289         """
--> 290         return self._jvm.JavaSparkContext(jconf)
    291 
    292     @classmethod

spark-2.3.2-bin-hadoop2.6/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1523         answer = self._gateway_client.send_command(command)
   1524         return_value = get_return_value(
-> 1525             answer, self._gateway_client, None, self._fqn)
   1526 
   1527         for temp_arg in temp_args:

spark-2.3.2-bin-hadoop2.6/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.lang.NoClassDefFoundError: Could not initialize class org.apache.mesos.MesosSchedulerDriver
    at org.apache.spark.scheduler.cluster.mesos.MesosSchedulerUtils$class.createSchedulerDriver(MesosSchedulerUtils.scala:105)
    at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.createSchedulerDriver(MesosCoarseGrainedSchedulerBackend.scala:54)
    at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.start(MesosCoarseGrainedSchedulerBackend.scala:207)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:238)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)

No answers yet.

