Running spark-shell results in "Connection refused"

bt1cpqcv · posted 2021-05-27 · in Hadoop

I am trying to run Spark on Hadoop.
When I run spark-shell, it fails with a ConnectException ("Connection refused"). The log is as follows:

ERROR cluster.YarnClientSchedulerBackend: The YARN application has already ended! It might have been
killed or the Application Master may have failed to start. Check the YARN application logs for more details.
19/10/22 15:13:36 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Application application_1571690458300_0002 failed 2 times due to
 Error launching appattempt_1571690458300_0002_000002. Got exception: java.net.ConnectException: 
Call From master/x.x.x.x (a correct IP) to ubuntu:43856 failed on connection exception: 
java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop
/ConnectionRefused

But when I run the WordCount example on YARN, everything works fine.
As the log shows, it is trying to connect to ubuntu:43856, which I believe is one of my slaves;
it should be slave1:43856 (since I set up the workers file). I think the problem is there, but
YARN on its own (without Spark) works fine.
The output of the yarn node -list command is:

Node-Id         Node-State    Node-Http-Address    Number-of-Running-Containers
ubuntu:43856    RUNNING       ubuntu:8042          0
ubuntu:37951    RUNNING       ubuntu:8042          0
ubuntu:34335    RUNNING       ubuntu:8042          0
ubuntu:46500    RUNNING       ubuntu:8042          0
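All four NodeManagers register under the same Node-Id hostname, ubuntu, which suggests every slave kept a default hostname, so the master may resolve "ubuntu" to the wrong machine (or to a loopback address) when launching the Application Master. A minimal sketch of one common fix, assuming the slaves should be named slave1..slave4 and using placeholder IPs (none of these names or addresses are from the original post):

```shell
# On each slave, give the machine a unique hostname
# (slave1 is an assumed name; repeat with slave2..slave4 on the others)
sudo hostnamectl set-hostname slave1

# On every node (master and all slaves), map each hostname to its real IP
# in /etc/hosts, so no hostname resolves to 127.0.0.1 / 127.0.1.1.
# Placeholder entries:
#   192.168.1.10  master
#   192.168.1.11  slave1
#   192.168.1.12  slave2
#   192.168.1.13  slave3
#   192.168.1.14  slave4

# Restart the NodeManagers, then check that each node now registers
# with its own hostname instead of "ubuntu"
yarn node -list
```

If the nodes still register as ubuntu after this, the yarn.nodemanager.hostname property in each slave's yarn-site.xml is another place worth checking.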

There are a lot of configuration files; let me know if one (or more) of them is needed.
Thanks in advance.

