Spark-submit error when running a JAR from Azure Databricks

rbl8hiat · posted 2021-05-27 · in Spark

I am trying to issue a spark-submit from the Azure Databricks job scheduler and am currently hitting the error below, which says that file:/tmp/spark-events does not exist. I need some pointers on whether this directory has to be created in the Azure Blob storage location (my storage layer) or in the Azure DBFS location.
The link below covers the same error, but it is not clear from it where to create the directory when the spark-submit comes from the Azure Databricks job scheduler:
SparkContext error - file not found: /tmp/spark-events does not exist
Error:

OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Warning: Ignoring non-Spark config property: eventLog.rolloverIntervalSeconds
Exception in thread "main" java.lang.ExceptionInInitializerError
    at com.dta.dl.ct.qm.hbase.reverse.pipeline.HBaseVehicleMasterLoad.main(HBaseVehicleMasterLoad.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.FileNotFoundException: File file:/tmp/spark-events does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:97)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:580)
    at com.dta.dl.ct.qm.hbase.reverse.pipeline.HBaseVehicleMasterLoad$.<init>(HBaseVehicleMasterLoad.scala:32)
    at com.dta.dl.ct.qm.hbase.reverse.pipeline.HBaseVehicleMasterLoad$.<clinit>(HBaseVehicleMasterLoad.scala)
    ... 13 more

hm2xizp9 · answer #1

You need to create this folder on the driver node and on the workers. One way to do that is a global init script that adds the property spark.history.fs.logDirectory to the spark-defaults.conf file. Make sure the folder defined by that property exists and is accessible from the driver node.
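A minimal sketch of that approach, assuming the default local path file:/tmp/spark-events shown in the stack trace; the script name is hypothetical, and the path should match whatever your event-log properties point at:

    #!/bin/bash
    # Hypothetical Databricks global init script (e.g. registered as
    # create-spark-events.sh among the workspace's global init scripts).
    # Databricks runs it on the driver and on every worker at cluster
    # start, so the folder exists on all nodes before the SparkContext's
    # EventLoggingListener starts.
    mkdir -p /tmp/spark-events

With the folder created on every node, point the event-log settings at it in the cluster's Spark config, for example spark.eventLog.dir file:/tmp/spark-events and spark.history.fs.logDirectory file:/tmp/spark-events. The FileNotFoundException above is thrown when EventLoggingListener.start resolves that path on the driver's local filesystem and finds nothing.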
