Spark job fails because it cannot find the Hadoop core-site.xml

xuo3flqw · posted 2021-05-27 in Spark

I am trying to run a Spark job and get the following error when the driver starts:

16/05/17 14:21:42 ERROR SparkContext: Error initializing SparkContext.
java.io.FileNotFoundException: Added file file:/var/lib/mesos/slave/slaves/0c080f97-9ef5-48a6-9e11-cf556dfab9e3-S1/frameworks/5c37bb33-20a8-4c64-8371-416312d810da-0002/executors/driver-20160517142123-0183/runs/802614c4-636c-4873-9379-b0046c44363d/core-site.xml does not exist.
    at org.apache.spark.SparkContext.addFile(SparkContext.scala:1364)
    at org.apache.spark.SparkContext.addFile(SparkContext.scala:1340)
    at org.apache.spark.SparkContext$$anonfun$15.apply(SparkContext.scala:491)
    at org.apache.spark.SparkContext$$anonfun$15.apply(SparkContext.scala:491)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:491)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
    at com.spark.test.SparkJobRunner.main(SparkJobRunner.java:56)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

I am running Spark on several servers that are part of my Mesos cluster (not sure that is the right way to do this, but it is what I am doing), and I am also running Hadoop on those servers. I started the Spark master on one server and Spark slaves on the others. I have three applications; the details do not matter much, but there is a UI where users can launch Spark jobs, which puts the job on a Kafka queue; a launcher that creates the Spark job using SparkLauncher (see the code below); and a Spark driver that connects to the Kafka queue and processes the requests sent from the UI. The UI and the launcher are running. Spark itself runs as its own process on the cluster, and the driver connects to it to run the jobs. EDIT: I have uploaded hdfs-site.xml, core-site.xml and spark-env.sh to Hadoop and point to them in the Spark context:

SparkConf conf = new SparkConf()
                .setAppName(config.getString(SPARK_APP_NAME))
                .setMaster(sparkMaster)
                .setExecutorEnv("HADOOP_USER_NAME", config.getString(HADOOP_USER, ""))
                .set("spark.mesos.uris", "<hadoop node>:9000/config/core-site.xml,<hadoop node>:9000/config/hdfs-site.xml") 
                .set("spark.files", "core-site.xml,hdfs-site.xml,spark-env.sh") 
                .set("spark.mesos.coarse", "true")
                .set("spark.cores.max", config.getString(SPARK_CORES_MAX))
                .set("spark.driver.memory", config.getString(SPARK_DRIVER_MEMORY))
                .set("spark.driver.extraJavaOptions", config.getString(SPARK_DRIVER_EXTRA_JAVA_OPTIONS, ""))
                .set("spark.executor.memory", config.getString(SPARK_EXECUTOR_MEMORY))
                .set("spark.executor.extraJavaOptions", config.getString(SPARK_EXECUTOR_EXTRA_JAVA_OPTIONS))
                .set("spark.executor.uri", hadoopPath);

Here is the code that launches the driver:

SparkLauncher launcher = new SparkLauncher()
            .setMaster(<my spark/mesos master>)
            .setDeployMode("cluster")
            .setSparkHome("/home/spark")
            .setAppResource(<hdfs://path/to/a/spark.jar>)
            .setMainClass(<my main class>);
handle = launcher.startApplication();

I am sure I am doing something wrong; I just cannot figure out what. I am new to Spark, Hadoop and Mesos, so feel free to point out anything I have got wrong.

qqrboqgw1#

My problem was that I had not set HADOOP_CONF_DIR in $SPARK_HOME/spark-env.sh on each server in the cluster. Once I set that, my Spark job started correctly. I also realized I did not need to include the core-site.xml, hdfs-site.xml or spark-env.sh files in the SparkConf, so I removed the line that sets "spark.files".
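
For reference, a minimal sketch of that change, assuming the Hadoop configuration files live under /etc/hadoop/conf (the path is an assumption; adjust it to wherever your core-site.xml and hdfs-site.xml actually are):

# Added to $SPARK_HOME/spark-env.sh on every node in the cluster.
# /etc/hadoop/conf is an assumed location for the Hadoop config files.
export HADOOP_CONF_DIR=/etc/hadoop/conf

With HADOOP_CONF_DIR set, Spark reads core-site.xml and hdfs-site.xml from that directory on each node, which is why shipping those files through the "spark.files" setting is no longer necessary.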
