Launching Spark on Mesos 0.21.0 with Hadoop 2.3.0 fails on the slaves with "sh: 1: hadoop: not found"

Asked by 6uxekuva on 2021-05-30 · Hadoop

I am setting up Spark on Mesos 0.21.0 with Hadoop 2.3.0. When I try to run Spark from the master, I get the following error message in the stderr of the Mesos slave:
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1229 12:34:45.923665  8571 fetcher.cpp:76] Fetching URI 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz'
I1229 12:34:45.925240  8571 fetcher.cpp:105] Downloading resource from 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz' to '/tmp/mesos/slaves/20141226-161203-701475338-5050-6942-s0/frameworks/20141229-111020-701475338-5050-985-0001/executors/20141226-161203-701475338-5050-6942-s0/runs/8ef30e72-d8cf-4218-8a62-bccdf673b5aa/spark-1.2.0.tar.gz'
E1229 12:34:45.927089  8571 fetcher.cpp:109] HDFS copyToLocal failed: hadoop fs -copyToLocal 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz' '/tmp/mesos/slaves/20141226-161203-701475338-5050-6942-s0/frameworks/20141229-111020-701475338-5050-985-0001/executors/20141226-161203-701475338-5050-6942-s0/runs/8ef30e72-d8cf-4218-8a62-bccdf673b5aa/spark-1.2.0.tar.gz'
sh: 1: hadoop: not found
Failed to fetch: hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz
Failed to synchronize with slave (it's probably exited)
Interestingly, when I log in to the slave node and run the same command

hadoop fs -copyToLocal 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz' '/tmp/mesos/slaves/20141226-161203-701475338-5050-6942-s0/frameworks/20141229-111020-701475338-5050-985-0001/executors/20141226-161203-701475338-5050-6942-s0/runs/8ef30e72-d8cf-4218-8a62-bccdf673b5aa/spark-1.2.0.tar.gz'

it completes without any problem.
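One way to see why the two cases behave differently is to compare the PATH of the interactive shell with the PATH of the running mesos-slave daemon, since the fetcher invokes hadoop through a plain sh in the daemon's environment. The check below is only an illustrative sketch (Linux-specific; the pgrep pattern is an assumption, not something from the original post):

# PATH as seen by the interactive login shell
echo $PATH
# PATH as seen by the mesos-slave process (assumes a single mesos-slave process is running)
tr '\0' '\n' < /proc/$(pgrep -f mesos-slave | head -1)/environ | grep '^PATH='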

b1payxdu1#

When starting the Mesos slave, you have to specify the path to your Hadoop installation with the following parameter:

--hadoop_home=/path/to/hadoop

Without this it did not work for me, even though I had the HADOOP_HOME environment variable set.
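For example, a full slave start-up might look like the sketch below; the master address, work directory and Hadoop path are illustrative placeholders, not values taken from the question:

# start the Mesos slave, telling it where Hadoop lives so the fetcher can call hadoop fs
mesos-slave --master=10.170.207.41:5050 --work_dir=/var/lib/mesos --hadoop_home=/usr/local/hadoop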
