Scala version: 2.11.12
Spark version: 2.4.0
EMR release: emr-5.23.0
I get the following when running the spark-submit command below against an Amazon EMR cluster:
spark-submit --class etl.SparkDataProcessor --master yarn --deploy-mode cluster --conf spark.yarn.appMasterEnv.ETL_NAME=foo --conf spark.yarn.appMasterEnv.ETL_SPARK_MASTER=yarn --conf spark.yarn.appMasterEnv.ETL_AWS_ACCESS_KEY_ID=123 --conf spark.yarn.appMasterEnv.ETL_AWS_SECRET_ACCESS_KEY=abc MY-Tool.jar
Exception:
ERROR ApplicationMaster: Uncaught exception:
java.lang.IllegalStateException: User did not initialize spark context!
at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:485)
at org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$runImpl(ApplicationMaster.scala:305)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply$mcV$sp(ApplicationMaster.scala:245)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply(ApplicationMaster.scala:245)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply(ApplicationMaster.scala:245)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:773)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
at org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:772)
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:244)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:797)
at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
How the Spark session is created (where sparkMaster = yarn):
import org.apache.log4j.Logger // assuming log4j, which ships with Spark
import org.apache.spark.sql.SparkSession

lazy val spark: SparkSession = {
  val logger: Logger = Logger.getLogger("etl")
  val sparkAppName = EnvConfig.ETL_NAME
  val sparkMaster = EnvConfig.ETL_SPARK_MASTER
  val sparkInstance = SparkSession
    .builder()
    .appName(sparkAppName)
    .master(sparkMaster)
    .getOrCreate()
  // Route s3:// URIs to the s3a implementation and pass the AWS credentials
  val hadoopConf = sparkInstance.sparkContext.hadoopConfiguration
  hadoopConf.set("fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
  hadoopConf.set("fs.s3a.access.key", EnvConfig.ETL_AWS_ACCESS_KEY_ID)
  hadoopConf.set("fs.s3a.secret.key", EnvConfig.ETL_AWS_SECRET_ACCESS_KEY)
  logger.info("Created My SparkSession")
  logger.info(s"Spark Application Name: $sparkAppName")
  logger.info(s"Spark Master: $sparkMaster")
  sparkInstance
}
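For reference, EnvConfig is not shown in the question; a minimal sketch of what it presumably looks like, assuming the spark.yarn.appMasterEnv.* settings surface as environment variables on the driver (which is what that prefix does in yarn-cluster mode):

object EnvConfig {
  // Values passed with --conf spark.yarn.appMasterEnv.NAME=value are
  // exported as environment variables in the YARN ApplicationMaster,
  // which hosts the driver in cluster mode.
  private def required(name: String): String =
    sys.env.getOrElse(name, sys.error(s"Missing environment variable: $name"))

  val ETL_NAME: String                  = required("ETL_NAME")
  val ETL_SPARK_MASTER: String          = required("ETL_SPARK_MASTER")
  val ETL_AWS_ACCESS_KEY_ID: String     = required("ETL_AWS_ACCESS_KEY_ID")
  val ETL_AWS_SECRET_ACCESS_KEY: String = required("ETL_AWS_SECRET_ACCESS_KEY")
}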
Update:
I have determined that, due to the application logic, there were cases in which we never initialized the Spark session at all. Because of this, when the cluster shut down it apparently also tried to do something with the session (presumably close it), and that failed. Now that I have fixed that issue, the application runs but never actually completes. Currently, when running in cluster mode, it seems to hang at a particular section involving Spark:
val data: DataFrame = spark.read
.option("header", "true")
.option("inferSchema", "true")
.csv(s"s3://$csvPath/$fileKey")
.toDF()
20/03/16 18:38:35 INFO Client: Application report for application_1584324418613_0031 (state: RUNNING)
3 Answers
q8l4jmvw1#
AFAIK, EnvConfig.ETL_AWS_ACCESS_KEY_ID and ETL_AWS_SECRET_ACCESS_KEY are not being populated, since a SparkSession cannot be instantiated with null or empty values. Try printing and debugging those values. Also make sure you are reading properties passed via --conf spark.xxx in the intended way, as in the example; I hope you are following it. Once you have verified that, the example should work.
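For instance, a small guard before building the session (a sketch; requireNonEmpty is a hypothetical helper) would surface a missing or empty credential immediately instead of failing later:

// Fail fast with a clear message rather than letting the Hadoop
// configuration silently accept a null or empty credential.
def requireNonEmpty(name: String, value: String): String = {
  require(value != null && value.trim.nonEmpty, s"$name is not set or is empty")
  value
}

requireNonEmpty("ETL_AWS_ACCESS_KEY_ID", EnvConfig.ETL_AWS_ACCESS_KEY_ID)
requireNonEmpty("ETL_AWS_SECRET_ACCESS_KEY", EnvConfig.ETL_AWS_SECRET_ACCESS_KEY)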
Another point: along with --master yarn or --master local[*], adding --conf spark.driver.port=20002 may resolve the issue, where 20002 is an arbitrary port. It looks as if the ApplicationMaster waits for that particular port for some time, retries for a while, and finally fails. I gathered this from reading Spark's ApplicationMaster code, which notes:
"This a bit hacky, but we need to wait until the spark.driver.port property has been set by the Thread executing the user class."
You can try this and let me know how it goes.
Further reading: Apache Spark : How to change the port the Spark driver listens to
j8ag8udp2#
In my case (after resolving the application issues), I needed to include the core and task node types when deploying in cluster mode.
ryoqjall3#
I suggest initializing the SparkSession when the application starts, rather than lazily, as sketched below.
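A minimal sketch of that idea, under assumed structure (the try/finally and stop() call are illustrative, not the asker's actual code): build the session unconditionally in main so the YARN ApplicationMaster always sees an initialized SparkContext, and stop it explicitly so the application can finish:

import org.apache.spark.sql.SparkSession

object SparkDataProcessor {
  def main(args: Array[String]): Unit = {
    // Create the session eagerly and unconditionally: in yarn-cluster mode
    // the ApplicationMaster throws "User did not initialize spark context!"
    // if no code path ever creates the SparkContext. --master is already
    // supplied by spark-submit, so it is not set here.
    val spark = SparkSession.builder()
      .appName(EnvConfig.ETL_NAME)
      .getOrCreate()
    try {
      // ... ETL logic ...
    } finally {
      spark.stop() // allow the YARN application to finish cleanly
    }
  }
}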