spark-submit does not work with a jar in HDFS

flseospp · posted 2021-05-27 in Hadoop

My setup is:
Apache Spark 2.4.4
Hadoop 2.7.4
My application jar is located in HDFS.
My spark-submit looks like this:

/software/spark-2.4.4-bin-hadoop2.7/bin/spark-submit \
--class com.me.MyClass --master spark://host2.local:7077 \
--deploy-mode cluster \
hdfs://host2.local:9000/apps/myapps.jar

I get this error:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.tracing.SpanReceiverHost.get(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;)Lorg/apache/hadoop/tracing/SpanReceiverHost;
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:634)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2598)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2632)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2614)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
    at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveGlobPaths$2.apply(DependencyUtils.scala:144)
    at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveGlobPaths$2.apply(DependencyUtils.scala:139)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
    at org.apache.spark.deploy.DependencyUtils$.resolveGlobPaths(DependencyUtils.scala:139)
    at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveAndDownloadJars$1.apply(DependencyUtils.scala:61)
    at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveAndDownloadJars$1.apply(DependencyUtils.scala:64)
    at scala.Option.map(Option.scala:146)
    at org.apache.spark.deploy.DependencyUtils$.resolveAndDownloadJars(DependencyUtils.scala:60)
    at org.apache.spark.deploy.worker.DriverWrapper$.setupDependencies(DriverWrapper.scala:96)
    at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:60)
    at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)

Is there any way to solve this? Thank you.


7z5jn7bk1#

--deploy-mode cluster would help in this case. Getting the jar onto the cluster nodes is taken care of by YARN.
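
To make that concrete, here is a minimal sketch of the suggested submit, assuming the cluster also runs YARN (the class name, host, and HDFS jar path are taken from the question); with --master yarn in cluster mode, YARN localizes the hdfs:// application jar for the driver and executors:

# same jar path as in the question; YARN ships it to the driver/executor nodes
/software/spark-2.4.4-bin-hadoop2.7/bin/spark-submit \
  --class com.me.MyClass \
  --master yarn \
  --deploy-mode cluster \
  hdfs://host2.local:9000/apps/myapps.jar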


zaqlnxep2#

There is no need to move the jar onto the cluster; you can run it from your local account as long as it has the right permissions. After the application is built, transfer the .jar to your Unix user account and grant it executable permission. See the spark-submit below:

spark-submit --master yarn --deploy-mode cluster --queue default \
  --files "<full path to the properties file>" \
  --driver-memory 4g --num-executors 8 --executor-cores 1 --executor-memory 4g \
  --class "<main class name>" \
  "<full path to the jar transferred to the local unix id>"

You can add other spark-submit configuration parameters if needed. Note that on installations where multiple Spark versions coexist, you may have to use spark2-submit instead of spark-submit.
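
As a sketch of this answer's workflow, assuming a hypothetical local path /home/sparkuser/myapps.jar (the HDFS path and class name are from the question): pull the jar out of HDFS, set the permissions the answer mentions, then submit from the local path:

# /home/sparkuser/myapps.jar is a hypothetical local path for illustration
hdfs dfs -get hdfs://host2.local:9000/apps/myapps.jar /home/sparkuser/myapps.jar
chmod +x /home/sparkuser/myapps.jar

spark-submit --master yarn --deploy-mode cluster \
  --class com.me.MyClass \
  /home/sparkuser/myapps.jar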
