Setting the Mesos master's IP on the slaves

1szpjjfi · published 2021-06-21 in Mesos

I have three Mesos slave nodes and one master on 10.14.56.157, 10.14.56.159, 10.14.56.160 and 10.14.56.156 respectively. The machines are named worker1, worker2, worker3 and master.
I managed to set up the Mesos cluster correctly (I believe). The web UI at 10.0.0.4:5050 shows all three slaves. Then I run a Spark shell on the cluster. Initially everything works: the shell starts, the UI shows a new framework starting up, and so on. Then I try a simple test:

val numbers = sc.parallelize(1 to 1000000, 1000)

This works fine, and then:

numbers.count

Of course, this is where Spark actually does some work. It launches the tasks and sends them to the slaves (I can see this in the logs), but none of the tasks ever finish (status: LOST). Spark retries each task up to 4 times and eventually gives up. I looked at the logs on the slave machines (the sandbox link in the UI) and got the following output:

WARNING: Logging before InitGoogleLogging() is written to STDERR
I0227 13:47:59.842319 17015 fetcher.cpp:76] Fetching URI '/home/user01/spark-1.2.1-bin-hadoop1.tgz'
I0227 13:47:59.842658 17015 fetcher.cpp:179] Copying resource from '/home/user01/spark-1.2.1-bin-hadoop1.tgz' to '/tmp/mesos/slaves/20150226-160235-2620919306-5050-14323-1/frameworks/20150227-132220-2620919306-5050-30420-0001/executors/20150226-160235-2620919306-5050-14323-1/runs/1978f267-cb47-4a6c-bd1f-69e99c00ae13'
I0227 13:48:09.896682 17015 fetcher.cpp:64] Extracted resource '/tmp/mesos/slaves/20150226-160235-2620919306-5050-14323-1/frameworks/20150227-132220-2620919306-5050-30420-0001/executors/20150226-160235-2620919306-5050-14323-1/runs/1978f267-cb47-4a6c-bd1f-69e99c00ae13/spark-1.2.1-bin-hadoop1.tgz' into '/tmp/mesos/slaves/20150226-160235-2620919306-5050-14323-1/frameworks/20150227-132220-2620919306-5050-30420-0001/executors/20150226-160235-2620919306-5050-14323-1/runs/1978f267-cb47-4a6c-bd1f-69e99c00ae13'
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/02/27 13:48:11 INFO MesosExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
I0227 13:48:11.493357 17124 exec.cpp:132] Version: 0.20.1
I0227 13:48:11.496057 17142 exec.cpp:206] Executor registered on slave 20150226-160235-2620919306-5050-14323-1
15/02/27 13:48:11 INFO MesosExecutorBackend: Registered with Mesos as executor ID 20150226-160235-2620919306-5050-14323-1 with 1 cpus
15/02/27 13:48:11 INFO Executor: Starting executor ID 20150226-160235-2620919306-5050-14323-1 on host 10.14.56.160
15/02/27 13:48:11 INFO SecurityManager: Changing view acls to: user01
15/02/27 13:48:11 INFO SecurityManager: Changing modify acls to: user01
15/02/27 13:48:11 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user01); users with modify permissions: Set(user01)
15/02/27 13:48:12 INFO Slf4jLogger: Slf4jLogger started
15/02/27 13:48:12 INFO Remoting: Starting remoting
15/02/27 13:48:12 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor@10.14.56.160:42869]
15/02/27 13:48:12 INFO Utils: Successfully started service 'sparkExecutor' on port 42869.
15/02/27 13:48:12 INFO AkkaUtils: Connecting to MapOutputTracker: akka.tcp://sparkDriver@master:48886/user/MapOutputTracker
15/02/27 13:48:12 WARN Remoting: Tried to associate with unreachable remote address [akka.tcp://sparkDriver@master:48886]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: master: Name or service not known
akka.actor.ActorNotFound: Actor not found for:       ActorSelection[Anchor(akka.tcp://sparkDriver@master:48886/), Path(/user/MapOutputTracker)]
    at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:65)
    at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:63)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
    at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
    at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
    at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:110)
    at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
    at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:267)
    at akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:508)
    at akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:541)
    at akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:531)
    at akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:87)
    at akka.remote.EndpointWriter.postStop(Endpoint.scala:561)
    at akka.actor.Actor$class.aroundPostStop(Actor.scala:475)
    at akka.remote.EndpointActor.aroundPostStop(Endpoint.scala:415)
    at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:210)
    at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:172)
    at akka.actor.ActorCell.terminate(ActorCell.scala:369)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:462)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Exception in thread "Thread-1" I0227 13:48:12.364940 17142 exec.cpp:413] Deactivating the executor libprocess

The line with the error says: Tried to associate with unreachable remote address [akka.tcp://sparkDriver@master:48886].
It looks to me as if the slaves cannot resolve the name master to the master's IP. Is that right? If so, how do I change it to the actual IP? If not, how do I fix it? Thanks!

j0pj023g

What happens if you type ping master on one of the slaves? If that fails, that is your problem, and you can fix it by adding an entry to each slave's /etc/hosts file that points master to the correct IP.
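For example (a minimal sketch, assuming the master machine's IP is 10.14.56.156 as listed in the question), the entry added to each slave's /etc/hosts would look like:

10.14.56.156    master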
You could also try setting spark.driver.host when launching the Spark driver, to change the host name it advertises to the executors.
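A rough sketch of that second option, assuming the shell is launched on the master machine (10.14.56.156) and the Mesos master listens on port 5050; setting spark.driver.host makes the driver advertise an address the slaves can reach instead of the unresolvable hostname master:

spark-shell --master mesos://10.14.56.156:5050 --conf spark.driver.host=10.14.56.156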
