Why does a Spark job fail on Mesos with "hadoop: not found"?

2vuwiymt · posted 2021-06-26 in Mesos

I'm running Spark 1.6.1, Hadoop 2.6.4, and Mesos 0.28 on Debian 8.
When I try to submit a job to the Mesos cluster with spark-submit, the slaves fail with the following in their stderr logs:

I0427 22:35:39.626055 48258 fetcher.cpp:424] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/ad642fcf-9951-42ad-8f86-cc4f5a5cb408-S0\/hduser","items":[{"action":"BYP$
I0427 22:35:39.628031 48258 fetcher.cpp:379] Fetching URI 'hdfs://xxxxxxxxx:54310/sources/spark/SimpleEventCounter.jar'
I0427 22:35:39.628057 48258 fetcher.cpp:250] Fetching directly into the sandbox directory
I0427 22:35:39.628078 48258 fetcher.cpp:187] Fetching URI 'hdfs://xxxxxxx:54310/sources/spark/SimpleEventCounter.jar'
E0427 22:35:39.629243 48258 shell.hpp:93] Command 'hadoop version 2>&1' failed; this is the output:
sh: 1: hadoop: not found
Failed to fetch 'hdfs://xxxxxxx:54310/sources/spark/SimpleEventCounter.jar': Failed to create HDFS client: Failed to execute 'hadoop version 2>&1'; the command was e$
Failed to synchronize with slave (it's probably exited)
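
The "hadoop: not found" line is the root cause: before it can fetch an hdfs:// URI, the Mesos fetcher shells out to the hadoop client to build an HDFS client, and that command is failing on the slave. You can reproduce the check by hand on a failing slave (a diagnostic sketch; run it as the same user the agent runs under):

sh -c 'hadoop version 2>&1'
# On a broken slave this prints "sh: 1: hadoop: not found", matching the
# fetcher log above. If it prints a Hadoop version in your login shell but
# the fetcher still fails, it is the agent's environment that lacks the
# PATH entry, not yours.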

My jar file contains the Hadoop 2.6 binaries.
The path to the Spark executor/binary is given as an hdfs:// link.
My jobs never appear in the Frameworks tab, but they do show up as drivers in the "Queued" state, and they sit there until I shut down the spark-mesos-dispatcher.sh service.
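
For reference, the submission has this shape (a sketch only; the dispatcher host, port, and main class are placeholders, not values from the original post):

# Cluster-mode submit through the MesosClusterDispatcher.
# "dispatcher-host:7077" and the --class value are hypothetical.
spark-submit \
  --class SimpleEventCounter \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  hdfs://namenode:54310/sources/spark/SimpleEventCounter.jar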

6kkfgxo0 · Answer 1

I saw a very similar error, and I found that my problem was that HADOOP_HOME was not set on the Mesos agents. I added the following line to /etc/default/mesos-slave on each Mesos slave:

MESOS_HADOOP_HOME="/path/to/my/hadoop/install/folder/"

Edit: Hadoop must be installed on every slave, and /path/to/my/hadoop/install/folder is a local path on that slave.
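
Concretely, the agent-side change looks like this (a sketch; the install path is a placeholder, and the restart command assumes the stock Debian mesos-slave service):

# /etc/default/mesos-slave -- environment file read by the Mesos agent.
# MESOS_HADOOP_HOME corresponds to the agent's --hadoop_home flag; the
# fetcher uses it to locate bin/hadoop when resolving hdfs:// URIs.
MESOS_HADOOP_HOME="/usr/local/hadoop"

Restart the agent afterwards so the fetcher picks up the new environment:

sudo service mesos-slave restart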
