I started setting up the vagrant-cascading-hadoop-cluster GitHub project, but I ran into errors and the installation cannot finish.
When I run "vagrant up":
sina@linux:/media/sina/passport/vagrant-cascading-hadoop-cluster$ sudo vagrant up
Bringing machine 'hadoop1' up with 'virtualbox' provider...
Bringing machine 'hadoop2' up with 'virtualbox' provider...
Bringing machine 'hadoop3' up with 'virtualbox' provider...
Bringing machine 'master' up with 'virtualbox' provider...
==> hadoop1: Importing base box 'cascading-hadoop-base'...
==> hadoop1: Matching MAC address for NAT networking...
==> hadoop1: Setting the name of the VM: vagrant-cascading-hadoop-cluster_hadoop1_1409806559206_53275
==> hadoop1: Clearing any previously set network interfaces...
==> hadoop1: Preparing network interfaces based on configuration...
hadoop1: Adapter 1: nat
hadoop1: Adapter 2: hostonly
==> hadoop1: Forwarding ports...
hadoop1: 22 => 2222 (adapter 1)
==> hadoop1: Running 'pre-boot' VM customizations...
==> hadoop1: Booting VM...
==> hadoop1: Waiting for machine to boot. This may take a few minutes...
hadoop1: SSH address: 127.0.0.1:2222
hadoop1: SSH username: vagrant
hadoop1: SSH auth method: private key
==> hadoop1: Machine booted and ready!
==> hadoop1: Checking for guest additions in VM...
hadoop1: The guest additions on this VM do not match the installed version of
hadoop1: VirtualBox! In most cases this is fine, but in rare cases it can
hadoop1: prevent things such as shared folders from working properly. If you see
hadoop1: shared folder errors, please make sure the guest additions within the
hadoop1: virtual machine match the version of VirtualBox you have installed on
hadoop1: your host and reload your VM.
hadoop1:
hadoop1: Guest Additions Version: 4.2.0
hadoop1: VirtualBox Version: 4.3
==> hadoop1: Setting hostname...
==> hadoop1: Configuring and enabling network interfaces...
==> hadoop1: Mounting shared folders...
hadoop1: /vagrant => /media/sina/passport/vagrant-cascading-hadoop-cluster
hadoop1: /tmp/vagrant-puppet-1/manifests => /media/sina/passport/vagrant-cascading-hadoop-cluster/manifests
hadoop1: /tmp/vagrant-puppet-1/modules-0 => /media/sina/passport/vagrant-cascading-hadoop-cluster/modules
==> hadoop1: Running provisioner: puppet...
==> hadoop1: Running Puppet with datanode.pp...
==> hadoop1: stdin: is not a tty
==> hadoop1: warning: Could not retrieve fact fqdn
==> hadoop1: notice: /Stage[main]/Base/File[/etc/motd]/ensure: defined content as '{md5}0c3e6f224eb6cf6fbff62de3067eaef9'
==> hadoop1: notice: /Stage[main]/Hbase/File[/srv/zookeeper]/ensure: created
==> hadoop1: notice: /Stage[main]/Base/File[/root/.ssh]/ensure: created
==> hadoop1: notice: /Stage[main]/Base/File[/root/.ssh/config]/ensure: defined content as '{md5}880efd788ff2d77bf3989a13a9e0344a'
==> hadoop1: notice: /Stage[main]/Base/File[/root/.ssh/id_rsa.pub]/ensure: defined content as '{md5}622c3becafba74b1f4f1267436cbd28b'
==> hadoop1: notice: /Stage[main]/Base/Ssh_authorized_key[ssh_key]/ensure: created
==> hadoop1: notice: /Stage[main]/Base/Exec[apt-get update]/returns: executed successfully
==> hadoop1: notice: /Stage[main]/Base/Package[openjdk-6-jdk]/ensure: ensure changed 'purged' to 'present'
==> hadoop1: notice: /Stage[main]/Base/File[/root/.ssh/id_rsa]/ensure: defined content as '{md5}a9e4aa776fe92555716b7963488838f6'
==> hadoop1: notice: /Stage[main]/Avahi/Package[avahi-daemon]/ensure: ensure changed 'purged' to 'present'
==> hadoop1: notice: /Stage[main]/Avahi/File[/etc/avahi/avahi-daemon.conf]/content: content changed '{md5}bd8d4eda789abe26c48c1f1f74d19551' to '{md5}e45468ec4a7369471c5101403f5b8f87'
==> hadoop1: notice: /Stage[main]/Avahi/File[/etc/avahi/avahi-daemon.conf]/mode: mode changed '0644' to '0600'
==> hadoop1: notice: /Stage[main]/Hbase/File[/etc/profile.d/hbase-path.sh]/ensure: defined content as '{md5}06cf529d2063f3060bfca646dd2d1a18'
==> hadoop1: notice: /Stage[main]/Avahi/File[/etc/avahi/hosts]/content: content changed '{md5}186990ae1edac95a88dbef6a36a07716' to '{md5}c90385145a2d6900d7d027bd87cd8ff0'
==> hadoop1: notice: /Stage[main]/Avahi/File[/etc/avahi/hosts]/mode: mode changed '0644' to '0600'
==> hadoop1: notice: /Stage[main]/Avahi/Service[avahi-daemon]: Triggered 'refresh' from 4 events
==> hadoop1: notice: /Stage[main]/Hadoop/File[/etc/profile.d/hadoop-path.sh]/ensure: defined content as '{md5}da4327f03f22df21251fece99b4fda68'
==> hadoop1: notice: /Stage[main]/Hadoop/File[/tmp/verifier]/ensure: defined content as '{md5}ee3850511912c0b432c98426be818253'
==> hadoop1: err: /Stage[main]/Hadoop/Exec[download_grrr]/returns: change from notrun to 0 failed: Command exceeded timeout at /tmp/vagrant-puppet-1/modules-0/hadoop/manifests/init.pp:37
==> hadoop1: notice: /Stage[main]/Hadoop/Exec[download_checksum]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/Exec[download_checksum]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/Exec[download_hadoop]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/Exec[download_hadoop]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hbase/Exec[download_hbase]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hbase/Exec[download_hbase]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Avahi/File[/etc/hosts]/content: content changed '{md5}28728fdc2cb16bf53da7ba1988a7e978' to '{md5}c90385145a2d6900d7d027bd87cd8ff0'
==> hadoop1: notice: /Stage[main]/Avahi/File[/etc/hosts]/mode: mode changed '0644' to '0600'
==> hadoop1: notice: /Stage[main]/Hadoop/Exec[verify_tarball]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/Exec[verify_tarball]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hbase/Exec[unpack_hbase]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hbase/Exec[unpack_hbase]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hbase/File[/opt/hbase-0.96.2-hadoop2/conf/hbase-site.xml]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hbase/File[/opt/hbase-0.96.2-hadoop2/conf/hbase-site.xml]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/Exec[unpack_hadoop]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/Exec[unpack_hadoop]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hbase/File[/opt/hbase-0.96.2-hadoop2/conf/regionservers]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hbase/File[/opt/hbase-0.96.2-hadoop2/conf/regionservers]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/slaves]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/slaves]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/hdfs-site.xml]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/hdfs-site.xml]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/core-site.xml]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/core-site.xml]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/yarn-site.xml]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/yarn-site.xml]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/hadoop-env.sh]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/hadoop-env.sh]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/bin/stop-all.sh]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/bin/stop-all.sh]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/Exec[hadoop_conf_permissions]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/Exec[hadoop_conf_permissions]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/mapred-site.xml]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/mapred-site.xml]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/masters]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/masters]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/yarn-env.sh]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/yarn-env.sh]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/bin/start-all.sh]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/bin/start-all.sh]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hbase/File[/opt/hbase-0.96.2-hadoop2/conf/hbase-env.sh]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hbase/File[/opt/hbase-0.96.2-hadoop2/conf/hbase-env.sh]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/bin/prepare-cluster.sh]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/bin/prepare-cluster.sh]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/Group[hadoop]/ensure: created
==> hadoop1: notice: /Stage[main]/Hadoop/User[hdfs]/ensure: created
==> hadoop1: notice: /Stage[main]/Hadoop/File[/srv/hadoop/]/ensure: created
==> hadoop1: notice: /Stage[main]/Hadoop/File[/srv/hadoop/namenode]/ensure: created
==> hadoop1: notice: /Stage[main]/Hadoop/File[/srv/hadoop/datanode/]/ensure: created
==> hadoop1: notice: /Stage[main]/Hadoop/User[yarn]/ensure: created
==> hadoop1: notice: /Stage[main]/Hadoop/User[mapred]/ensure: created
==> hadoop1:
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs/mapred]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs/mapred]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs/yarn]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs/yarn]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs/hadoop]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs/hadoop]: Skipping because of failed dependencies
==> hadoop1: notice: Finished catalog run in 1838.19 seconds
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
It gives an error in the exec download_grrr:
==> hadoop1: err: /Stage[main]/Hadoop/Exec[download_grrr]/returns: change from notrun to 0 failed: Command exceeded timeout at /tmp/vagrant-puppet-1/modules-0/hadoop/manifests/init.pp:37
The exec command this error refers to is located in /modules/hadoop/manifests/init.pp:
exec { "download_grrr":
command => "wget --no-check-certificate http://raw.github.com/fs111/grrrr/master/grrr -O /tmp/grrr && chmod +x /tmp/grrr",
path => $path,
creates => "/tmp/grrr",
}
I downloaded the grrr file myself and it succeeded, so there is no problem with downloading the file itself.
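(For reference, the download can be reproduced with the same command the exec runs, copied from the resource above:)

wget --no-check-certificate http://raw.github.com/fs111/grrrr/master/grrr -O /tmp/grrr && chmod +x /tmp/grrr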
The grrr file contains:
#!/bin/bash
# author: André Kelpe <efeshunderelf at googlemail.com>
# licencse: Apache v2
GRRR_WGET_OPTIONS="--user-agent grrr/1.0"
# find out our region and yes, you can get this as csv file. How cool is that?
GEOIP_REGION=$(wget -qO- freegeoip.net/csv/ | tr '[A-Z]' '[a-z]' | tr -d '"'| awk -F, '{print $2}')
# classic confusion between geoip db and apache mirror list
if [ $GEOIP_REGION == "gb" ]; then
GEOIP_REGION=uk
fi
MIRRORLIST_FILE_NAME=$(mktemp)
# download the latest mirror list from apache. we ignore the last
# sync times and hope for the best...
wget -qO- http://www.apache.org/mirrors/mirrors.list | grep -v '^$' \
| grep http | grep -v ' 0$' | grep -v '^#' > $MIRRORLIST_FILE_NAME
# use US as the default region. apache does the same in their scripts...
REGION=us
# check if there is a mirror in our region
if grep -q " $GEOIP_REGION " $MIRRORLIST_FILE_NAME; then
REGION=$GEOIP_REGION
fi
# finally download it all
wget $GRRR_WGET_OPTIONS $(grep " $REGION " $MIRRORLIST_FILE_NAME | shuf | head -1 | awk '{print $3}')/$*
retval=$?
# clean up after ourselves.
rm $MIRRORLIST_FILE_NAME
exit $retval
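In short, grrr resolves a nearby Apache mirror from your GeoIP region and appends its argument to the mirror's URL. Based on the hadoop-2.3.0 and hbase-0.96.2 paths in the log above, the module presumably invokes it along these lines (the exact tarball path here is an illustrative assumption, not taken from the manifests):

# hypothetical invocation: fetch the Hadoop tarball from the nearest Apache mirror
/tmp/grrr hadoop/common/hadoop-2.3.0/hadoop-2.3.0.tar.gz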
So, because several other exec commands depend on the download_grrr exec, they are skipped due to the failed dependency. How can I resolve this error?
1 Answer
Usually a timeout means that downloading the file from the server took too long. You need to add
timeout => 0
or a sufficiently high value; Puppet's default timeout for an exec is 300 seconds. Since what it downloads is a fairly small shell script, though, there is probably a network problem with the URL it hits: you may have been rate-limited by GitHub when the command ran, or GitHub simply took a long time to respond.
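A minimal sketch of the adjusted resource in modules/hadoop/manifests/init.pp, assuming nothing changes except the added timeout parameter:

exec { "download_grrr":
    command => "wget --no-check-certificate http://raw.github.com/fs111/grrrr/master/grrr -O /tmp/grrr && chmod +x /tmp/grrr",
    path    => $path,
    creates => "/tmp/grrr",
    timeout => 0,   # 0 disables the 300-second default; any sufficiently large value works too
}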
The easiest thing would be to fix it manually, by doing something like the following:
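(A sketch assuming the VM name from the log above; the wget mirrors the exec's own command, and because the resource declares creates => "/tmp/grrr", Puppet treats the exec as satisfied once that file exists:)

# on the host: open a shell in the failed VM
sudo vagrant ssh hadoop1
# inside the VM: fetch grrr by hand, exactly as the exec would
wget --no-check-certificate http://raw.github.com/fs111/grrrr/master/grrr -O /tmp/grrr
chmod +x /tmp/grrr
exit
# back on the host: re-run the provisioner; download_grrr is now a no-op
sudo vagrant provision hadoop1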
I just cloned that repo and ran vagrant up, and it worked fine for me. It was probably a temporary network failure.