Running against a remote Spark cluster from a local IDE

cmssoen2 · published 2021-06-02 in Hadoop

We have a kerberized cluster with Spark running on YARN. Currently we write our Spark code locally in Scala, build a fat JAR, copy it to the cluster, and run spark-submit. Instead, I would like to write the Spark code on my local PC and have it run directly against the cluster. Is there a straightforward way to do this? The Spark docs don't seem to describe such a mode.
FYI, my local machine is running Windows and the cluster is running CDH.


mbjcgjjk · Answer 1

While cricket007's answer works for spark-submit, here is how I run against a remote cluster using IntelliJ:
First, make sure the JARs on the client and server sides are identical. Since we are using CDH 5.7.1, I made sure all of my JARs came from that specific distribution.
Set HADOOP_CONF_DIR and YARN_CONF_DIR as described in cricket007's answer. Set "spark.yarn.principal" and "spark.yarn.keytab" in the Spark conf.
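As an illustration of that setting, here is a minimal sketch (the app name, principal, and keytab path are hypothetical placeholders; HADOOP_CONF_DIR and YARN_CONF_DIR are environment variables set in your shell or IDE run configuration, not on the SparkConf):

import org.apache.spark.SparkConf

// Minimal sketch of the Kerberos-related Spark settings; all values are placeholders.
val conf = new SparkConf()
  .setAppName("remote-ide-job")                        // hypothetical app name
  .setMaster("yarn-client")                            // driver runs locally, executors on YARN
  .set("spark.yarn.principal", "user@EXAMPLE.COM")     // your Kerberos principal
  .set("spark.yarn.keytab", "C:/keytabs/user.keytab")  // local path to the matching keytab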
If you are connecting to HDFS, make sure the following exclusion rule is set in build.sbt:

libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.6.0-cdh5.7.1" excludeAll ExclusionRule(organization = "javax.servlet")

Make sure the spark-launcher and spark-yarn JARs are listed in build.sbt:

libraryDependencies += "org.apache.spark" %% "spark-launcher" % "1.6.0-cdh5.7.1"

libraryDependencies += "org.apache.spark" %% "spark-yarn" % "1.6.0-cdh5.7.1"

Find the CDH JARs on the server and copy them to a known location on HDFS. Add the following lines to your code:

final val CDH_JAR_PATH = "/opt/cloudera/parcels/CDH/jars"

final val hadoopJars: Seq[String] = Seq[String](
"hadoop-annotations-2.6.0-cdh5.7.1.jar"
, "hadoop-ant-2.6.0-cdh5.7.1.jar"
, "hadoop-ant-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-archive-logs-2.6.0-cdh5.7.1.jar"
, "hadoop-archives-2.6.0-cdh5.7.1.jar"
, "hadoop-auth-2.6.0-cdh5.7.1.jar"
, "hadoop-aws-2.6.0-cdh5.7.1.jar"
, "hadoop-azure-2.6.0-cdh5.7.1.jar"
, "hadoop-capacity-scheduler-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-common-2.6.0-cdh5.7.1.jar"
, "hadoop-core-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-datajoin-2.6.0-cdh5.7.1.jar"
, "hadoop-distcp-2.6.0-cdh5.7.1.jar"
, "hadoop-examples-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-examples.jar"
, "hadoop-extras-2.6.0-cdh5.7.1.jar"
, "hadoop-fairscheduler-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-gridmix-2.6.0-cdh5.7.1.jar"
, "hadoop-gridmix-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-hdfs-2.6.0-cdh5.7.1.jar"
, "hadoop-hdfs-nfs-2.6.0-cdh5.7.1.jar"
, "hadoop-kms-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-app-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-common-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-core-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-hs-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-jobclient-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-nativetask-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-shuffle-2.6.0-cdh5.7.1.jar"
, "hadoop-nfs-2.6.0-cdh5.7.1.jar"
, "hadoop-openstack-2.6.0-cdh5.7.1.jar"
, "hadoop-rumen-2.6.0-cdh5.7.1.jar"
, "hadoop-sls-2.6.0-cdh5.7.1.jar"
, "hadoop-streaming-2.6.0-cdh5.7.1.jar"
, "hadoop-streaming-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-tools-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-yarn-api-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-applications-distributedshell-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-client-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-common-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-registry-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-common-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-nodemanager-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-resourcemanager-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-web-proxy-2.6.0-cdh5.7.1.jar"
, "hbase-hadoop2-compat-1.2.0-cdh5.7.1.jar"
, "hbase-hadoop-compat-1.2.0-cdh5.7.1.jar")

final val sparkJars: Seq[String] = Seq[String](
"spark-1.6.0-cdh5.7.1-yarn-shuffle.jar",
"spark-assembly-1.6.0-cdh5.7.1-hadoop2.6.0-cdh5.7.1.jar",
"spark-avro_2.10-1.1.0-cdh5.7.1.jar",
"spark-bagel_2.10-1.6.0-cdh5.7.1.jar",
"spark-catalyst_2.10-1.6.0-cdh5.7.1.jar",
"spark-core_2.10-1.6.0-cdh5.7.1.jar",
"spark-examples-1.6.0-cdh5.7.1-hadoop2.6.0-cdh5.7.1.jar",
"spark-graphx_2.10-1.6.0-cdh5.7.1.jar",
"spark-hive_2.10-1.6.0-cdh5.7.1.jar",
"spark-launcher_2.10-1.6.0-cdh5.7.1.jar",
"spark-mllib_2.10-1.6.0-cdh5.7.1.jar",
"spark-network-common_2.10-1.6.0-cdh5.7.1.jar",
"spark-network-shuffle_2.10-1.6.0-cdh5.7.1.jar",
"spark-repl_2.10-1.6.0-cdh5.7.1.jar",
"spark-sql_2.10-1.6.0-cdh5.7.1.jar",
"spark-streaming-flume-sink_2.10-1.6.0-cdh5.7.1.jar",
"spark-streaming-flume_2.10-1.6.0-cdh5.7.1.jar",
"spark-streaming-kafka_2.10-1.6.0-cdh5.7.1.jar",
"spark-streaming_2.10-1.6.0-cdh5.7.1.jar",
"spark-unsafe_2.10-1.6.0-cdh5.7.1.jar",
"spark-yarn_2.10-1.6.0-cdh5.7.1.jar")

/** Builds a ':'-separated classpath string, prefixing each jar name with pathPrefix.
  * foldLeft leaves a leading ':' which drop(1) removes. */
def getClassPath(jarNames: Seq[String], pathPrefix: String): String =
  jarNames.foldLeft("")((cp, name) => s"$cp:$pathPrefix/$name").drop(1)
Add the following lines when creating your SparkConf:

.set("spark.driver.extraClassPath", getClassPath(sparkJars ++ hadoopJars, CDH_JAR_PATH))
.set("spark.executor.extraClassPath", getClassPath(sparkJars ++ hadoopJars, CDH_JAR_PATH))
.set("spark.yarn.jars", "hdfs://$YOUR_MACHINE/PATH_TO_JARS/*")

Your program should now run.


anauzrmj · Answer 2

Assuming you have the correct packages on your classpath (most easily set up by sbt, Maven, etc.), you should be able to run spark-submit from anywhere. The --master flag is the main piece that really determines how the job is distributed. One thing to consider is whether your local machine is blocked off from the YARN cluster by a firewall or other network isolation (for example, because you don't want random people running applications against your cluster).
On your local machine, you will need the Hadoop configuration files from your cluster, and you will need to set up the $SPARK_HOME/conf directory to accommodate some Hadoop-related settings.
From the Spark on YARN page:
Ensure that HADOOP_CONF_DIR or YARN_CONF_DIR points to the directory which contains the (client side) configuration files for the Hadoop cluster. These configs are used to write to HDFS and connect to the YARN ResourceManager. The configuration contained in this directory will be distributed to the YARN cluster so that all containers used by the application use the same configuration.
These values are read from $SPARK_HOME/conf/spark-env.sh. Since you are kerberized, see Long-Running Spark Applications:
For long-running applications, such as Spark Streaming jobs, to write to HDFS, you must configure Kerberos authentication for Spark, and pass the Spark principal and keytab to the spark-submit script using the --principal and --keytab parameters.
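
This answer uses the spark-submit CLI; purely as an illustration, here is a hedged sketch of the programmatic equivalent using the SparkLauncher API (which the other answer already pulls in via the spark-launcher dependency). Every path, the principal, and the class name are hypothetical placeholders, and HADOOP_CONF_DIR / YARN_CONF_DIR are assumed to be set in the environment as described above:

import org.apache.spark.launcher.SparkLauncher

// Rough programmatic counterpart to:
//   spark-submit --master yarn-cluster --principal ... --keytab ... myapp-assembly.jar
// All values below are placeholders, not taken from the answer.
val process = new SparkLauncher()
  .setSparkHome("C:/spark")                                 // local Spark distribution matching the cluster version
  .setAppResource("target/scala-2.10/myapp-assembly.jar")   // your application fat JAR
  .setMainClass("com.example.MyApp")
  .setMaster("yarn-cluster")
  .setConf("spark.yarn.principal", "user@EXAMPLE.COM")
  .setConf("spark.yarn.keytab", "/home/user/user.keytab")
  .launch()

process.waitFor()   // block until the spark-submit child process exits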
