Running a Mesos cluster on Docker

kninwzqo · posted 2021-06-26 in Mesos

I have a Docker image called "ubuntu_mesos_spark" and I installed ZooKeeper on it. I changed the "zoo.cfg" file as follows. This is "zoo.cfg" on node1 (150.20.11.157):

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2187
dataDir=/var/lib/zookeeper
server.1=0.0.0.0:2888:3888
server.2=150.20.11.157:2888:3888
server.3=150.20.11.137:2888:3888

This is "zoo.cfg" on node2 (150.20.11.134):

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2187
dataDir=/var/lib/zookeeper
server.1=150.20.11.157:2888:3888
server.2=0.0.0.0:2888:3888
server.3=150.20.11.137:2888:3888

This is "zoo.cfg" on node3 (150.20.11.137):

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2187
dataDir=/var/lib/zookeeper
server.1=150.20.11.157:2888:3888
server.2=150.20.11.134:2888:3888
server.3=0.0.0.0:2888:3888

I also created a "myid" file in "/var/lib/zookeeper" on each node; for example, the myid file on "150.20.11.157" contains the id "1" (a sketch of this follows the list below). I installed Mesos and Spark in the same Docker image as well, and I have a Mesos cluster made up of these three nodes. I defined the IP addresses of the slave nodes in "spark/conf/slaves":

150.20.11.134
150.20.11.137
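
For reference, here is a minimal sketch of how those "myid" files can be created and the ensemble checked; the paths, ids and IPs are the ones quoted above, and it assumes the ZooKeeper install lives under /zookeeper-3.4.12 as in the compose files further down:

# write the per-node ZooKeeper id (run the matching line on the matching node)
echo 1 | sudo tee /var/lib/zookeeper/myid    # on 150.20.11.157 (server.1)
echo 2 | sudo tee /var/lib/zookeeper/myid    # on 150.20.11.134 (server.2)
echo 3 | sudo tee /var/lib/zookeeper/myid    # on 150.20.11.137 (server.3)

# once all three nodes are running, one should report "leader" and the others "follower"
/zookeeper-3.4.12/bin/zkServer.sh status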

I added these lines to "spark/conf/spark-env.sh":

export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI=/home/spark/program_file/spark-2.3.2-bin-hadoop2.7.tgz
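
These two variables are what Spark's Mesos support uses to find libmesos and the executor tarball. Purely as a hedged sketch (the application path is a placeholder, and /home/spark is the SPARK_HOME from this post), a job could later be submitted against the master like this:

# submit a Spark application to the Mesos master (app path is a placeholder)
/home/spark/bin/spark-submit \
  --master mesos://150.20.11.157:5050 \
  --conf spark.executor.uri=/home/spark/program_file/spark-2.3.2-bin-hadoop2.7.tgz \
  /path/to/app.py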

In addition, I added the following lines to "~/.bashrc":

export SPARK_HOME="/home/spark"
PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.7-src.zip:$PYTHONPATH
export PYSPARK_HOME=/usr/bin/python3.6
export PYSPARK_DRIVER_PYTHON=python3.6
export ZOO_LOG_DIR=/var/log/zookeeper
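
A quick sanity check (not part of the original setup) that these variables are picked up after re-sourcing the file might look like:

source ~/.bashrc
echo "$SPARK_HOME"       # expect /home/spark
echo "$PYTHONPATH"       # should include .../python and the py4j zip
python3.6 --version      # the interpreter named in PYSPARK_DRIVER_PYTHON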

I want to run the master on "150.20.11.157". My docker-compose file for it is:

version: '3.7'
services:
  zookeeper:
    image: ubuntu_mesos_spark
    command: /zookeeper-3.4.12/bin/zkServer.sh start
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2187
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 10
      ZOOKEEPER_SYNC_LIMIT: 5
      ZOOKEEPER_SERVERS: 0.0.0.0:2888:3888;150.20.11.134:2888:3888;150.20.11.137:2888:3888
    network_mode: host
    expose:
      - 2187
      - 2888
      - 3888
    ports:
      - 2187:2187
      - 2888:2888
      - 3888:3888

  master:
    image: ubuntu_mesos_spark
    command: bash -c "sleep 20; /home/mesos-1.7.0/build/bin/mesos-master.sh --ip=150.20.11.157 --work_dir=/var/run/mesos"
    restart: always
    depends_on:
      - zookeeper
    environment:
      - MESOS_HOSTNAME="150.20.11.157,150.20.11.134,150.20.11.137"
      - MESOS_QUORUM=1
      - MESOS_LOG_DIR=/var/log/mesos
    expose:
      - 5050
      - 4040
      - 7077
      - 8080
    ports:
      - 5050:5050
      - 4040:4040
      - 7077:7077
      - 8080:8080
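
(As a hedged diagnostic sketch, not part of the original post: mesos-master.sh can only bind --ip to an address that is visible inside the container, which can be checked roughly like this, assuming the iproute2 tools are available in the image:)

# does the container see the host address 150.20.11.157?
docker run --rm ubuntu_mesos_spark ip addr show                   # default bridge network
docker run --rm --network host ubuntu_mesos_spark ip addr show    # host network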

I run this compose file on the slave nodes ("150.20.11.134" and "150.20.11.137"):

version: '3.7'
services:
  zookeeper:
    image: ubuntu_mesos_spark
    command: /zookeeper-3.4.12/bin/zkServer.sh start
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 2187
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 10
      ZOOKEEPER_SYNC_LIMIT: 5
      ZOOKEEPER_SERVERS: 0.0.0.0:2888:3888;150.20.11.134:2888:3888;150.20.11.137:2888:3888
    network_mode: host
    expose:
      - 2187
      - 2888
      - 3888
    ports:
      - 2187:2187
      - 2888:2888
      - 3888:3888

  slave:
    image: ubuntu_mesos_spark
    command: bash -c "/home/mesos-1.7.0/build/bin/mesos-slave.sh --master=150.20.11.157:5050 --work_dir=/var/run/mesos --systemd_enable_support=false"
    restart: always
    privileged: true
    network_mode: host
    depends_on:
      - zookeeper
    environment:
      - MESOS_HOSTNAME="150.20.11.157,150.20.11.134,150.20.11.137"
      - MESOS_MASTER=150.20.11.157
      - MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins # also in Dockerfile
      - MESOS_CONTAINERIZERS=docker,mesos
      - MESOS_LOG_DIR=/var/log/mesos
      - MESOS_LOGGING_LEVEL=INFO
    expose:
      - 5051
    ports:
      - 5051:5051
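
(Again purely as a hedged sketch for later diagnosis, not from the original post: with network_mode: host the agent binds directly on the host, so a conflict on port 5051 and the presence of the docker CLI inside the image can be checked roughly like this:)

# is something already listening on the agent port 5051 on the host?
sudo ss -ltnp | grep 5051

# is the docker CLI available inside the image? (needed for MESOS_CONTAINERIZERS=docker,mesos)
docker run --rm ubuntu_mesos_spark which docker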

First I ran "sudo docker-compose up" on the master node, then on the slave nodes, but I got errors. On the master node the error is:
Starting marzieh-compose_zookeeper_1 ... done
Recreating marzieh-compose_master_1 ... done
Attaching to marzieh-compose_zookeeper_1, marzieh-compose_master_1
zookeeper_1  | ZooKeeper JMX enabled by default
zookeeper_1  | Using config: /zookeeper-3.4.12/bin/../conf/zoo.cfg
zookeeper_1  | Starting zookeeper ... STARTED
marzieh-compose_zookeeper_1 exited with code 0
master_1     | I0123 11:46:59.585522  7 logging.cpp:201] INFO level logging started!
master_1     | I0123 11:46:59.586066  7 main.cpp:242] Build: 2019-01-21 05:16:39 by
master_1     | I0123 11:46:59.586097  7 main.cpp:243] Version: 1.7.0
master_1     | F0123 11:46:59.587368  7 process.cpp:1115] Failed to initialize: Failed to bind on 150.20.11.157:5050: Cannot assign requested address
master_1     | *** Check failure stack trace: ***
master_1     |     @     0x7f505ce54b9c  google::LogMessage::Fail()
master_1     |     @     0x7f505ce54ae0  google::LogMessage::SendToLog()
master_1     |     @     0x7f505ce544b2  google::LogMessage::Flush()
master_1     |     @     0x7f505ce57770  google::LogMessageFatal::~LogMessageFatal()
master_1     |     @     0x7f505cd19ed1  process::initialize()
master_1     |     @     0x55fb7b12981a  main
master_1     |     @     0x7f504f0d0830  (unknown)
master_1     |     @     0x55fb7b128b9   _start
master_1     | bash: line 1:     7 Aborted (core dumped) /home/mesos-1.7.0/build/bin/mesos-master.sh --ip=150.20.11.157 --work_dir=/var/run/mesos
Also, when I ran "sudo docker-compose up" on the slave nodes, I got this error:
slave_1      | F0123 11:40:06.878793  1 process.cpp:1115] Failed to initialize: Failed to bind on 0.0.0.0:5051: Address already in use
slave_1      | *** Check failure stack trace: ***
slave_1      |     @     0x7fee9d319b9c  google::LogMessage::Fail()
slave_1      |     @     0x7fee9d319ae0  google::LogMessage::SendToLog()
slave_1      |     @     0x7fee9d3194b2  google::LogMessage::Flush()
slave_1      |     @     0x7fee9d31c770  google::LogMessageFatal::~LogMessageFatal()
slave_1      |     @     0x7fee9d1deed1  process::initialize()
slave_1      |     @     0x55e99f661784  main
slave_1      |     @     0x7fee8f595830  (unknown)
slave_1      |     @     0x55e99f65f139  _start
slave_1      | *** Aborted at 1548243606 (unix time) try "date -d @1548243606" if you are using GNU date ***
slave_1      | PC: @     0x7fee8f5ac196  (unknown)
slave_1      | *** SIGSEGV (@0x0) received by PID 1 (TID 0x7fee9f9f38c0) from PID 0; stack trace: ***
slave_1      |     @     0x7fee8fee8390  (unknown)
slave_1      |     @     0x7fee8f5ac196  (unknown)
slave_1      |     @     0x7fee9d32055b  google::DumpStackTraceAndExit()
slave_1      |     @     0x7fee9d319b9c  google::LogMessage::Fail()
slave_1      |     @     0x7fee9d319ae0  google::LogMessage::SendToLog()
slave_1      |     @     0x7fee9d3194b2  google::LogMessage::Flush()
slave_1      |     @     0x7fee9d31c770  google::LogMessageFatal::~LogMessageFatal()
slave_1      |     @     0x7fee9d1deed1  process::initialize()
slave_1      |     @     0x55e99f661784  main
slave_1      |     @     0x7fee8f595830  (unknown)
slave_1      |     @     0x55e99f65f139  _start
slave_1      | I0123 11:41:07.818897  1 logging.cpp:201] INFO level logging started!
slave_1      | I0123 11:41:07.819437  1 main.cpp:349] Build: 2019-01-21 05:16:39
slave_1      | I0123 11:41:07.819470  1 main.cpp:350] Version: 1.7.0
slave_1      | I0123 11:41:07.823354  1 resolver.cpp:69] Creating default secret resolver
slave_1      | E0123 11:41:07.927773  1 main.cpp:483] EXIT with status 1: Failed to create a containerizer: Could not create DockerContainerizer: Failed to create docker: Failed to get docker version: Failed to execute 'docker -H unix:///var/run/docker.sock --version': exited with status 127
I have searched a lot about this but could not figure it out. Could you tell me the right way to write docker-compose files to run a Mesos and Spark cluster on Docker?
Any help would be appreciated.
Thanks in advance.

laawzig2 · answer #1

The problem is solved. I changed the docker-compose files as follows, and the master and slaves came up without any problem.
The "docker-compose.yaml" on the master node is as follows:

version: '3.7'
services:
  zookeeper:
    image: ubuntu_mesos_spark_python3.6_client
    command: /home/zookeeper-3.4.12/bin/zkServer.sh start
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2188
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 10
      ZOOKEEPER_SYNC_LIMIT: 5
      ZOOKEEPER_SERVERS: 0.0.0.0:2888:3888;150.20.11.157:2888:3888
    network_mode: host
    expose:
      - 2188
      - 2888
      - 3888
    ports:
      - 2188:2188
      - 2888:2888
      - 3888:3888

  master:
    image: ubuntu_mesos_spark_python3.6_client
    command: bash -c "sleep 30; /home/mesos-1.7.0/build/bin/mesos-master.sh --ip=150.20.10.136 --work_dir=/var/run/mesos --hostname=x.x.x.x"  # hostname: the IP of the master node
    restart: always
    network_mode: host
    depends_on:
      - zookeeper
    environment:
      - MESOS_HOSTNAME="150.20.11.136"
      - MESOS_QUORUM=1
      - MESOS_LOG_DIR=/var/log/mesos
    expose:
      - 5050
      - 4040
      - 7077
      - 8080
    ports:
      - 5050:5050
      - 4040:4040
      - 7077:7077
      - 8080:8080
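
(A hedged sketch of how this stack can be brought up and the master checked; /master/state is the standard Mesos master HTTP endpoint, and the IP used here is the MESOS_HOSTNAME value from the file above:)

sudo docker-compose up -d
curl http://150.20.11.136:5050/master/state | head    # the web UI is served on the same port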

The "docker-compose.yaml" file on the slave nodes is as follows:

version: '3.7'
services:
  zookeeper:
    image: ubuntu_mesos_spark_python3.6_client
    command: /home/zookeeper-3.4.12/bin/zkServer.sh start
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 2188
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 10
      ZOOKEEPER_SYNC_LIMIT: 5
      ZOOKEEPER_SERVERS: 150.20.11.136:2888:3888;0.0.0.0:2888:3888
    network_mode: host
    expose:
      - 2188
      - 2888
      - 3888
    ports:
      - 2188:2188
      - 2888:2888
      - 3888:3888

  slave:
    image: ubuntu_mesos_spark_python3.6_client
    command: bash -c "sleep 30; /home/mesos-1.7.0/build/bin/mesos-slave.sh --master=150.20.11.136:5050 --work_dir=/var/run/mesos --systemd_enable_support=false"
    restart: always
    privileged: true
    network_mode: host
    depends_on:
      - zookeeper
    environment:
      - MESOS_HOSTNAME="150.20.11.157"
      #- MESOS_MASTER=172.28.10.136
      #- MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins # also in Dockerfile
      #- MESOS_CONTAINERIZERS=docker,mesos
      - MESOS_LOG_DIR=/var/log/mesos
      - MESOS_LOGGING_LEVEL=INFO
    expose:
      - 5051
    ports:
      - 5051:5051
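
(Similarly, a hedged sketch for confirming that an agent registered; /slaves is the standard master endpoint and 5051 is the agent port exposed above:)

curl http://150.20.11.136:5050/slaves       # agents as seen by the master
curl http://localhost:5051/state | head     # this node's agent state endpoint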

Then I ran "docker-compose up" on each node and they all came up without any problem.
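
(As a final hedged check, one could submit the Pi example that ships with the Spark binary distribution against the new master, assuming the examples directory is present under the SPARK_HOME used earlier in this post:)

/home/spark/bin/spark-submit --master mesos://150.20.11.136:5050 /home/spark/examples/src/main/python/pi.py 10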
