Retrying connect to server 0.0.0.0/0.0.0.0:8032

tp5buhyn · posted 2021-05-27 · in Spark

I am running Spark on YARN. The Hadoop version is 3.1.1 and the Spark version is 2.3.2. The Hadoop cluster has 3 nodes.
When user a submits job1 to Spark, it runs fine.
But when user b submits job2, it fails with the error below.
Users a and b are on the same machine.

INFO RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
INFO Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
INFO Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
INFO Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
INFO Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
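No answer was posted, but one plausible reading of the log: when the Spark client prints `Connecting to ResourceManager at /0.0.0.0:8032`, it has not found a `yarn-site.xml` that defines `yarn.resourcemanager.address`, so it falls back to the default (0.0.0.0, port 8032). Since user a succeeds and user b fails on the same machine, a common cause is that user b's shell environment does not export `HADOOP_CONF_DIR`/`YARN_CONF_DIR`. A minimal sketch of that assumed fix, where `/opt/hadoop` is a placeholder path, not something stated in the question:

```shell
# Hedged sketch: export the Hadoop client configuration for user b so that
# spark-submit reads yarn.resourcemanager.address from yarn-site.xml instead
# of falling back to the default 0.0.0.0:8032.
# /opt/hadoop is a placeholder; use the actual Hadoop install location.
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
export YARN_CONF_DIR="$HADOOP_CONF_DIR"

# Then resubmit the job, e.g.:
# spark-submit --master yarn --deploy-mode cluster job2.py
```

To make this survive new shells, the exports would typically go into user b's `~/.bashrc` or profile; comparing `env | grep -i conf` between user a and user b should confirm whether this is actually the difference.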
