I have the following bash script:
```
#!/bin/bash
cat /etc/hadoop/conf.my_cluster/slaves | \
while read CMD; do
ssh -o StrictHostKeyChecking=no ubuntu@$CMD "sudo service hadoop-0.20-mapreduce-tasktracker restart"
ssh -o StrictHostKeyChecking=no ubuntu@$CMD "sudo service hadoop-hdfs-datanode restart"
echo $CMD
done
```
`/etc/hadoop/conf.my_cluster/slaves` contains the IPs of the 5 slave machines. The `datanode` cannot communicate with the `jobtracker` server, so the workaround is to restart it. The output is:
```
ubuntu@domU-12-31-39-07-D6-DE:~$ ./test.sh
Warning: Permanently added '54.211.5.233' (ECDSA) to the list of known hosts.
 - Stopping Hadoop tasktracker:
stopping tasktracker
 - Starting Hadoop tasktracker:
starting tasktracker, logging to /var/log/hadoop-0.20-mapreduce/hadoop-hadoop-tasktracker-domU-12-31-39-06-8A-27.out
Warning: Permanently added '54.211.5.233' (ECDSA) to the list of known hosts.
 - Stopping Hadoop datanode:
stopping datanode
 - Starting Hadoop datanode:
starting datanode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-datanode-domU-12-31-39-06-8A-27.out
54.211.5.233
```
However, of the 5 IP addresses it should run against, only the first one is processed. How can I fix this?
1 Answer
Let's ask shellcheck:
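ShellCheck's relevant warning for this loop is SC2095: `ssh` inherits the loop's stdin, so the first `ssh` call reads the rest of the slaves file and `while read` has nothing left on the next iteration. A minimal sketch of the fix, assuming the same slaves file and service names as in the question, is to give each `ssh` its own stdin with `-n` (or `< /dev/null`) and feed the file to the loop by redirection:

```
#!/bin/bash
# Restart the MapReduce tasktracker and HDFS datanode on every slave.
# "ssh -n" redirects ssh's stdin from /dev/null so it cannot swallow
# the remaining lines of the slaves file that "read" still needs.
while read -r HOST; do
    ssh -n -o StrictHostKeyChecking=no "ubuntu@$HOST" "sudo service hadoop-0.20-mapreduce-tasktracker restart"
    ssh -n -o StrictHostKeyChecking=no "ubuntu@$HOST" "sudo service hadoop-hdfs-datanode restart"
    echo "$HOST"
done < /etc/hadoop/conf.my_cluster/slaves
```

Redirecting the file into the loop also removes the unnecessary `cat`; the behaviour is otherwise unchanged, so all five IPs should now be processed.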
That's it.