Apache Spark kills all of the user's processes on completion

j13ufse2 · published 2021-06-02 in Hadoop

I am running Spark 2.0.1 on Hadoop YARN 2.7.
After a Spark application finishes on the remote server, all of my user's running processes (including the SSH session) are killed.
What is happening? I get no error message.
Some log messages:

starting yarn daemons
17/05/12 09:06:47 INFO spark.SparkContext: Running Spark version 2.0.1
17/05/12 09:06:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/05/12 09:06:47 INFO spark.SecurityManager: Changing view acls to: b6p364aa
17/05/12 09:06:47 INFO spark.SecurityManager: Changing modify acls to: b6p364aa
17/05/12 09:06:47 INFO spark.SecurityManager: Changing view acls groups to: 
17/05/12 09:06:47 INFO spark.SecurityManager: Changing modify acls groups to: 
17/05/12 09:06:47 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(b6p364aa); groups with view permissions: Set(); users  with modify permissions: Set(b6p364aa); groups with modify permissions: Set()
17/05/12 09:06:48 INFO util.Utils: Successfully started service 'sparkDriver' on port 39656.
17/05/12 09:06:48 INFO spark.SparkEnv: Registering MapOutputTracker
17/05/12 09:06:48 INFO crail.CrailShuffleManager: crail shuffle started
17/05/12 09:06:48 INFO spark.SparkEnv: Registering BlockManagerMaster
17/05/12 09:06:48 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-f722164a-8e53-4873-9fe7-18c27fb26f91
17/05/12 09:06:48 INFO memory.MemoryStore: MemoryStore started with capacity 2.2 GB
17/05/12 09:06:48 INFO spark.SparkEnv: Registering OutputCommitCoordinator
17/05/12 09:06:48 INFO util.log: Logging initialized @2695ms
17/05/12 09:06:48 INFO server.Server: jetty-9.2.z-SNAPSHOT
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1573919c{/jobs,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@52c991e{/jobs/json,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@-4463d758{/jobs/job,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@-6670f6de{/jobs/job/json,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@-385b3531{/stages,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@479aa38a{/stages/json,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@62d90055{/stages/stage,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6fbd8ab3{/stages/stage/json,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@-6e223477{/stages/pool,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@24236be1{/stages/pool/json,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@-4bd6b5c9{/storage,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@61096f5f{/storage/json,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@-2e568315{/storage/rdd,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@-949a1ee{/storage/rdd/json,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4c515906{/environment,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@71a62530{/environment/json,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@39cf172c{/executors,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@552b0c6d{/executors/json,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@20b3f3e2{/executors/threadDump,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@33f91a5f{/executors/threadDump/json,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@-4c35d70c{/static,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7f700012{/,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@-5c9d764f{/api,null,AVAILABLE}
17/05/12 09:06:48 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@34cb1b19{/stages/stage/kill,null,AVAILABLE}
17/05/12 09:06:48 INFO server.ServerConnector: Started ServerConnector@ac706061{HTTP/1.1}{0.0.0.0:4040}
17/05/12 09:06:48 INFO server.Server: Started @2823ms
17/05/12 09:06:48 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
17/05/12 09:06:48 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://IP:4040
17/05/12 09:06:48 INFO spark.SparkContext: Added JAR file:/home/b6p364aa/softwares/spark-2.0.1/examples/target/scala-2.11/jars/spark-examples_2.11-2.0.1.jar at spark://IP:39656/jars/spark-examples_2.11-2.0.1.jar with timestamp 1494594408970
17/05/12 09:06:49 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/05/12 09:06:49 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
17/05/12 09:06:49 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
17/05/12 09:06:49 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
17/05/12 09:06:49 INFO yarn.Client: Setting up container launch context for our AM
17/05/12 09:06:49 INFO yarn.Client: Setting up the launch environment for our AM container
17/05/12 09:06:49 INFO yarn.Client: Preparing resources for our AM container
17/05/12 09:06:49 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
17/05/12 09:06:52 INFO yarn.Client: Source and destination file systems are the same. Not copying file:/tmp/spark-23be88b0-2c14-4c2a-aecc-be4af1c343b4/__spark_libs__4322647043635433398.zip
17/05/12 09:06:52 INFO yarn.Client: Uploading resource file:/tmp/spark-23be88b0-2c14-4c2a-aecc-be4af1c343b4/__spark_conf__1131208928944056566.zip -> file:/home/b6p364aa/.sparkStaging/application_1494593860686_0001/__spark_conf__.zip
17/05/12 09:06:52 INFO spark.SecurityManager: Changing view acls to: b6p364aa
17/05/12 09:06:52 INFO spark.SecurityManager: Changing modify acls to: b6p364aa
17/05/12 09:06:52 INFO spark.SecurityManager: Changing view acls groups to: 
17/05/12 09:06:52 INFO spark.SecurityManager: Changing modify acls groups to: 
17/05/12 09:06:52 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(b6p364aa); groups with view permissions: Set(); users  with modify permissions: Set(b6p364aa); groups with modify permissions: Set()
17/05/12 09:06:52 INFO yarn.Client: Submitting application application_1494593860686_0001 to ResourceManager
17/05/12 09:06:52 INFO impl.YarnClientImpl: Submitted application application_1494593860686_0001
17/05/12 09:06:52 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1494593860686_0001 and attemptId None
17/05/12 09:06:53 INFO yarn.Client: Application report for application_1494593860686_0001 (state: ACCEPTED)
17/05/12 09:06:53 INFO yarn.Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1494594412558
     final status: UNDEFINED
     tracking URL: http://p95a04:8088/proxy/application_1494593860686_0001/
     user: b6p364aa
17/05/12 09:06:54 INFO yarn.Client: Application report for application_1494593860686_0001 (state: ACCEPTED)
17/05/12 09:06:55 INFO yarn.Client: Application report for application_1494593860686_0001 (state: ACCEPTED)
17/05/12 09:06:56 INFO yarn.Client: Application report for application_1494593860686_0001 (state: ACCEPTED)
17/05/12 09:06:57 INFO yarn.Client: Application report for application_1494593860686_0001 (state: ACCEPTED)
17/05/12 09:06:58 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
17/05/12 09:06:58 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> p95a04, PROXY_URI_BASES -> http://p95a04:8088/proxy/application_1494593860686_0001), /proxy/application_1494593860686_0001
17/05/12 09:06:58 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
17/05/12 09:06:58 INFO yarn.Client: Application report for application_1494593860686_0001 (state: RUNNING)
17/05/12 09:06:58 INFO yarn.Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: IP
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1494594412558
     final status: UNDEFINED
     tracking URL: http://p95a04:8088/proxy/application_1494593860686_0001/
     user: b6p364aa
17/05/12 09:06:58 INFO cluster.YarnClientSchedulerBackend: Application application_1494593860686_0001 has started running.
17/05/12 09:06:58 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35846.
17/05/12 09:06:58 INFO netty.NettyBlockTransferService: Server created on IP:35846
17/05/12 09:06:58 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, IP, 35846)
17/05/12 09:06:58 INFO storage.BlockManagerMasterEndpoint: Registering block manager IP:35846 with 2.2 GB RAM, BlockManagerId(driver, IP, 35846)
17/05/12 09:06:58 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, IP, 35846)
17/05/12 09:06:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@-78de132b{/metrics/json,null,AVAILABLE}
17/05/12 09:07:03 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (IP:48124) with ID 1
17/05/12 09:07:03 INFO storage.BlockManagerMasterEndpoint: Registering block manager p95a04.pbm.ihost.com:37573 with 1048.8 MB RAM, BlockManagerId(1, p95a04.pbm.ihost.com, 37573)
17/05/12 09:07:04 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (IP:48140) with ID 2
17/05/12 09:07:04 INFO storage.BlockManagerMasterEndpoint: Registering block manager p95a04.pbm.ihost.com:37523 with 1048.8 MB RAM, BlockManagerId(2, p95a04.pbm.ihost.com, 37523)
17/05/12 09:07:04 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
17/05/12 09:07:04 WARN spark.SparkContext: Use an existing SparkContext, some configuration may not take effect.
17/05/12 09:07:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@-2a1bc6a3{/SQL,null,AVAILABLE}
17/05/12 09:07:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2804d375{/SQL/json,null,AVAILABLE}
17/05/12 09:07:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1789226{/SQL/execution,null,AVAILABLE}
17/05/12 09:07:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@-54d402c4{/SQL/execution/json,null,AVAILABLE}
17/05/12 09:07:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@483fd6{/static/sql,null,AVAILABLE}
17/05/12 09:07:04 INFO internal.SharedState: Warehouse path is '/home/b6p364aa/softwares/spark-2.0.1/spark-warehouse'.
17/05/12 09:07:05 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:38
17/05/12 09:07:05 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 10 output partitions
17/05/12 09:07:05 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
17/05/12 09:07:05 INFO scheduler.DAGScheduler: Parents of final stage: List()
17/05/12 09:07:05 INFO scheduler.DAGScheduler: Missing parents: List()
17/05/12 09:07:05 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
17/05/12 09:07:05 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1832.0 B, free 2.2 GB)
17/05/12 09:07:05 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1169.0 B, free 2.2 GB)
17/05/12 09:07:05 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on IP:35846 (size: 1169.0 B, free: 2.2 GB)
17/05/12 09:07:05 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1012
17/05/12 09:07:05 INFO scheduler.DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34)
17/05/12 09:07:05 INFO cluster.YarnScheduler: Adding task set 0.0 with 10 tasks
17/05/12 09:07:05 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, p95a04.pbm.ihost.com, partition 0, PROCESS_LOCAL, 5456 bytes)
17/05/12 09:07:05 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, p95a04.pbm.ihost.com, partition 1, PROCESS_LOCAL, 5456 bytes)
17/05/12 09:07:05 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, p95a04.pbm.ihost.com, partition 2, PROCESS_LOCAL, 5456 bytes)
17/05/12 09:07:05 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, p95a04.pbm.ihost.com, partition 3, PROCESS_LOCAL, 5456 bytes)
17/05/12 09:07:05 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 0 on executor id: 2 hostname: p95a04.pbm.ihost.com.
17/05/12 09:07:05 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 2 on executor id: 2 hostname: p95a04.pbm.ihost.com.
17/05/12 09:07:05 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 1 on executor id: 1 hostname: p95a04.pbm.ihost.com.
17/05/12 09:07:05 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 3 on executor id: 1 hostname: p95a04.pbm.ihost.com.
17/05/12 09:07:05 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on p95a04.pbm.ihost.com:37523 (size: 1169.0 B, free: 1048.8 MB)
17/05/12 09:07:05 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on p95a04.pbm.ihost.com:37573 (size: 1169.0 B, free: 1048.8 MB)
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, p95a04.pbm.ihost.com, partition 4, PROCESS_LOCAL, 5456 bytes)
17/05/12 09:07:06 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 4 on executor id: 1 hostname: p95a04.pbm.ihost.com.
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 775 ms on p95a04.pbm.ihost.com (1/10)
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, p95a04.pbm.ihost.com, partition 5, PROCESS_LOCAL, 5456 bytes)
17/05/12 09:07:06 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 5 on executor id: 1 hostname: p95a04.pbm.ihost.com.
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 788 ms on p95a04.pbm.ihost.com (2/10)
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, p95a04.pbm.ihost.com, partition 6, PROCESS_LOCAL, 5456 bytes)
17/05/12 09:07:06 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 6 on executor id: 2 hostname: p95a04.pbm.ihost.com.
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 831 ms on p95a04.pbm.ihost.com (3/10)
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, p95a04.pbm.ihost.com, partition 7, PROCESS_LOCAL, 5456 bytes)
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 798 ms on p95a04.pbm.ihost.com (4/10)
17/05/12 09:07:06 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 7 on executor id: 2 hostname: p95a04.pbm.ihost.com.
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, p95a04.pbm.ihost.com, partition 8, PROCESS_LOCAL, 5456 bytes)
17/05/12 09:07:06 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 8 on executor id: 1 hostname: p95a04.pbm.ihost.com.
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 50 ms on p95a04.pbm.ihost.com (5/10)
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, p95a04.pbm.ihost.com, partition 9, PROCESS_LOCAL, 5456 bytes)
17/05/12 09:07:06 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 9 on executor id: 1 hostname: p95a04.pbm.ihost.com.
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 50 ms on p95a04.pbm.ihost.com (6/10)
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 64 ms on p95a04.pbm.ihost.com (7/10)
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 63 ms on p95a04.pbm.ihost.com (8/10)
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 44 ms on p95a04.pbm.ihost.com (9/10)
17/05/12 09:07:06 INFO scheduler.TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 48 ms on p95a04.pbm.ihost.com (10/10)
17/05/12 09:07:06 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool 
17/05/12 09:07:06 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 0.922 s
17/05/12 09:07:06 INFO scheduler.DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 1.111249 s
Pi is roughly 3.1436431436431436
17/05/12 09:07:06 INFO server.ServerConnector: Stopped ServerConnector@ac706061{HTTP/1.1}{0.0.0.0:4040}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@34cb1b19{/stages/stage/kill,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@-5c9d764f{/api,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7f700012{/,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@-4c35d70c{/static,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@33f91a5f{/executors/threadDump/json,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@20b3f3e2{/executors/threadDump,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@552b0c6d{/executors/json,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@39cf172c{/executors,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@71a62530{/environment/json,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4c515906{/environment,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@-949a1ee{/storage/rdd/json,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@-2e568315{/storage/rdd,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@61096f5f{/storage/json,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@-4bd6b5c9{/storage,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@24236be1{/stages/pool/json,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@-6e223477{/stages/pool,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6fbd8ab3{/stages/stage/json,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@62d90055{/stages/stage,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@479aa38a{/stages/json,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@-385b3531{/stages,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@-6670f6de{/jobs/job/json,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@-4463d758{/jobs/job,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@52c991e{/jobs/json,null,UNAVAILABLE}
17/05/12 09:07:06 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1573919c{/jobs,null,UNAVAILABLE}
17/05/12 09:07:06 INFO ui.SparkUI: Stopped Spark web UI at http://IP:4040
17/05/12 09:07:06 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
17/05/12 09:07:06 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
17/05/12 09:07:06 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
17/05/12 09:07:06 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
17/05/12 09:07:06 INFO cluster.YarnClientSchedulerBackend: Stopped
17/05/12 09:07:06 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/05/12 09:07:06 INFO crail.CrailShuffleManager: shutting down crail shuffle manager
17/05/12 09:07:06 INFO memory.MemoryStore: MemoryStore cleared
17/05/12 09:07:06 INFO storage.BlockManager: BlockManager stopped
17/05/12 09:07:06 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
17/05/12 09:07:06 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/05/12 09:07:06 INFO spark.SparkContext: Successfully stopped SparkContext
17/05/12 09:07:06 INFO util.ShutdownHookManager: Shutdown hook called
17/05/12 09:07:06 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-23be88b0-2c14-4c2a-aecc-be4af1c343b4

stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop

Answer 1 (dbf7pr2w):

I believe you may have submitted the application in yarn-client mode: it uses the YARN ResourceManager to allocate resources, and once the job finishes executing, those resources are deallocated. Killing the SSH session is strange, though.
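If the driver running inside your SSH session is part of the problem, one thing worth trying is submitting in cluster mode, so the driver runs inside a YARN container instead of your login session. A minimal sketch, assuming the example jar path shown in the log above (adjust paths and arguments to your install):

```shell
# Sketch only: run SparkPi in yarn-cluster mode so the driver process
# lives in a YARN container, not in the SSH login session.
# The jar path below is taken from the log above; the trailing "10"
# is the number of partitions SparkPi samples over.
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  /home/b6p364aa/softwares/spark-2.0.1/examples/target/scala-2.11/jars/spark-examples_2.11-2.0.1.jar \
  10
```

Note that the log also ends with "stopping yarn daemons", so it is worth checking whether the launch script itself stops the YARN daemons on exit; in either mode, spark-submit alone should not terminate unrelated processes in your session.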
