How can I quickly set up Spark on YARN across a cluster using a bash script?

bd1hkmkf · asked 2021-06-02 · in Hadoop

While teaching myself Hadoop and Spark, I have found the environment-setup stage very tedious: you have to stand up a small cluster and then repeat the installation and configuration of every dependency (Java, Hadoop, Spark) on every node. Is there a way to automate this with a bash script?

1szpjjfi 1#

I gave it a first shot, and bash is definitely up to the job. Once SSH is correctly set up across the cluster, a bash script like the one below is a good starting point. It still needs some polish, but anyway... The sample script below only covers Hadoop/YARN for now; it is a work in progress. The same approach can be used to install Java and Spark on all the nodes (see the sketch after the main script). I will update this answer once it's done ;)
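On the SSH prerequisite: passwordless login from the master to each worker can be bootstrapped with a short one-time sketch like this. The hduser account and the hadoop-slave* host names are assumptions carried over from the main script that follows:

# One-time bootstrap, run on the master as hduser.
# Generate a key pair only if one does not exist yet.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for x in hadoop-slave1 hadoop-slave2 hadoop-slave3
do
    ssh-copy-id "hduser@$x"   # prompts once per host for the password
done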

#!/bin/bash
# Install and configure Hadoop on every slave node over SSH.
for x in hadoop-slave1 hadoop-slave2 hadoop-slave3
do
    # A quoted heredoc delimiter ('EOF') keeps the local shell from
    # expanding the $VARIABLES below; the block runs verbatim remotely.
    ssh "$x" 'bash -s' <<'EOF'
cd ~
wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.3/hadoop-2.7.3.tar.gz
tar -xzvf hadoop-2.7.3.tar.gz
ln -s hadoop-2.7.3 hadoop
echo '# HADOOP' >> ~/.bashrc
echo 'export HADOOP_PREFIX=/home/hduser/hadoop' >> ~/.bashrc
echo 'export HADOOP_HOME=$HADOOP_PREFIX' >> ~/.bashrc
echo 'export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin' >> ~/.bashrc
echo 'export HADOOP_COMMON_HOME=$HADOOP_PREFIX' >> ~/.bashrc
echo 'export HADOOP_MAPRED_HOME=$HADOOP_PREFIX' >> ~/.bashrc
echo 'export HADOOP_HDFS_HOME=$HADOOP_PREFIX' >> ~/.bashrc
echo 'export YARN_HOME=$HADOOP_PREFIX' >> ~/.bashrc

mkdir -p ~/tmp   # create tmp dir used and configured in hadoop
# Work around an issue I got with the JAVA_HOME env var being lost
# in non-interactive shells: pin the current value in hadoop-env.sh.
echo "export $(env | grep ^JAVA_HOME)" >> ~/hadoop/etc/hadoop/hadoop-env.sh
EOF

    # Copy config files to the slaves. Assumes the master is already
    # set up - to be improved.
    scp ~/hadoop/etc/hadoop/core-site.xml hduser@"$x":~/hadoop/etc/hadoop
    scp ~/hadoop/etc/hadoop/hdfs-site.xml hduser@"$x":~/hadoop/etc/hadoop
    scp ~/hadoop/etc/hadoop/yarn-site.xml hduser@"$x":~/hadoop/etc/hadoop
done
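Following the same pattern, a sketch for pushing Spark onto the nodes could look like the loop below. The Spark version, the archive URL, and the SPARK_HOME path are assumptions (Spark 2.4.8 built for Hadoop 2.7 matches the Hadoop 2.7.3 install above); adjust them to whatever you actually run:

#!/bin/bash
# Sketch: install Spark on every node, mirroring the Hadoop loop above.
for x in hadoop-slave1 hadoop-slave2 hadoop-slave3
do
    ssh "$x" 'bash -s' <<'EOF'
cd ~
wget https://archive.apache.org/dist/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz
tar -xzf spark-2.4.8-bin-hadoop2.7.tgz
ln -s spark-2.4.8-bin-hadoop2.7 spark
echo '# SPARK' >> ~/.bashrc
echo 'export SPARK_HOME=/home/hduser/spark' >> ~/.bashrc
echo 'export PATH=$PATH:$SPARK_HOME/bin' >> ~/.bashrc
# Point Spark at the Hadoop config so spark-submit can find YARN.
echo 'export HADOOP_CONF_DIR=/home/hduser/hadoop/etc/hadoop' >> ~/.bashrc
EOF
done

Strictly speaking, for spark-on-yarn only the node you submit from needs the Spark binaries (YARN ships the jars to the executors), but installing them everywhere does no harm. With HADOOP_CONF_DIR set, running spark-submit --master yarn will pick up the cluster configuration.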
