Unable to restart Spark2 Thrift Server, spark-shell, and spark-sql after changing the CapacityScheduler resource calculator from the default to DominantResourceCalculator in Ambari

oaxa6hgo · posted 2021-05-27 · in Spark

I changed the YARN CapacityScheduler's resource calculator from the default (DefaultResourceCalculator) to DominantResourceCalculator in Ambari, then restarted YARN. After that the Spark2 Thrift Server was down; I tried to restart it both from Ambari and with start-thriftserver.sh, and both attempts failed.
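For context, this switch corresponds to the following property in the Capacity Scheduler configuration (in Ambari it is edited as key=value text under YARN → Configs → Scheduler); the default value is org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator:

    yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DominantResourceCalculator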

Traceback (most recent call last):
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py", line 57, in check_process_status
    sudo.kill(pid, 0)
  File "/usr/lib/ambari-agent/lib/resource_management/core/sudo.py", line 180, in kill
    os.kill(pid, signal)
OSError: [Errno 3] No such process
The above exception was the cause of the following exception:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_thrift_server.py", line 85, in <module>
    SparkThriftServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_thrift_server.py", line 53, in start
    spark_service('sparkthriftserver', upgrade_type=upgrade_type, action='start')
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_service.py", line 165, in spark_service
    check_process_status(status_params.spark_thrift_server_pid_file)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py", line 61, in check_process_status
    raise ComponentIsNotRunning()
resource_management.core.exceptions.ComponentIsNotRunning
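The traceback above only says that Ambari found no live process at the recorded PID; the actual startup failure is logged by the Thrift Server itself and by YARN. A minimal way to pull the real error (the log path is a typical HDP 3.x default and may differ on your cluster; the application ID is a placeholder):

    # Thrift Server's own log (typical HDP location; adjust to your install)
    tail -n 200 /var/log/spark2/spark-*-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-*.out

    # Find the failed/killed YARN application, then fetch its aggregated logs
    yarn application -list -appStates FAILED,KILLED
    yarn logs -applicationId <application_id>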

Submitting jobs with spark-submit also fails, as does launching spark-shell or spark-sql:

spark-sql --master yarn  --driver-memory 2g  --executor-cores 2 --num-executors 5 --executor-memory 4g

The error message is as follows:

Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
        at com.im30.idmapping.idmapping_etl.userlog2hive$.main(userlog2hive.scala:21)
        at com.im30.idmapping.idmapping_etl.userlog2hive.main(userlog2hive.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
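"Yarn application has already ended!" on the client side only means the application master exited or was killed before the SparkContext could attach; the real reason is on the YARN side. One plausible cause, given the timing (an assumption, not confirmed by the logs shown here): DefaultResourceCalculator schedules on memory alone, while DominantResourceCalculator also enforces vCores, so a queue or node whose vcore settings were never tuned can suddenly reject containers. For scale, the spark-sql command above asks for 5 executors × 2 vcores = 10 vcores and 5 × (4 g + ~0.4 g overhead) ≈ 22 g of executor memory, plus a small default 1-vcore application master in client mode; none of that is unusual, which points at the scheduler configuration rather than an oversized request. A quick sanity check of the limits that only start to matter under DominantResourceCalculator (standard YARN properties; the config path is the usual HDP default):

    # Per-container and per-node vcore limits
    grep -A1 -E 'maximum-allocation-vcores|minimum-allocation-vcores' /etc/hadoop/conf/yarn-site.xml
    grep -A1 'yarn.nodemanager.resource.cpu-vcores' /etc/hadoop/conf/yarn-site.xml

    # Per-node capacities as the ResourceManager actually sees them
    yarn node -list -showDetails

If the cluster needs to come back quickly, reverting yarn.scheduler.capacity.resource-calculator to org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator and restarting YARN (then the Spark2 services) should restore the previous behavior while the root cause is investigated.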

No answers yet.
