I have downloaded the hadoop source code from GitHub and compiled it with the native option:
mvn package -Pdist,native -DskipTests -Dtar -Dmaven.javadoc.skip=true
Then I copied the .dylib files to $HADOOP_HOME/lib:
cp -p hadoop-common-project/hadoop-common/target/hadoop-common-2.7.1/lib/native/*.dylib /usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/lib
I updated LD_LIBRARY_PATH and restarted hdfs:
echo $LD_LIBRARY_PATH
/usr/local/Cellar/hadoop/2.7.2/libexec/lib:
/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/lib:/Library/Java/JavaVirtualMachines/jdk1.8.0_92.jdk/Contents/Home//jre/lib
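(The export itself is not shown above; a minimal sketch, assuming the variable was set in the shell profile with exactly the paths from the echo output:)

# assumed setup, e.g. in ~/.bash_profile; paths copied from the echo output above
export LD_LIBRARY_PATH=/usr/local/Cellar/hadoop/2.7.2/libexec/lib:/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/lib:/Library/Java/JavaVirtualMachines/jdk1.8.0_92.jdk/Contents/Home//jre/lib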
(Note: this also means the answer to "Hadoop "Unable to load native hadoop library for your platform" error on docker-spark?" did not work for me...)
But checknative still uniformly returns false:
$stop-dfs.sh && start-dfs.sh && hadoop checknative
16/06/13 16:12:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [sparkbook]
sparkbook: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
16/06/13 16:12:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/06/13 16:12:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [sparkbook]
sparkbook: starting namenode, logging to /usr/local/Cellar/hadoop/2.7.2/libexec/logs/hadoop-macuser-namenode-sparkbook.out
localhost: starting datanode, logging to /usr/local/Cellar/hadoop/2.7.2/libexec/logs/hadoop-macuser-datanode-sparkbook.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/Cellar/hadoop/2.7.2/libexec/logs/hadoop-macuser-secondarynamenode-sparkbook.out
16/06/13 16:13:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/06/13 16:13:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Native library checking:
hadoop: false
zlib: false
snappy: false
lz4: false
bzip2: false
openssl: false
4 Answers
uurity8g1#
A couple of steps are missing from @andrewdotn's answer above:
1) For step (3), create the patch by pasting the posted text into a text file (e.g. "patch.txt"), then run git apply patch.txt
2) In addition to copying the files as directed by javadba, some applications also require setting:
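The snippet that followed was lost in this copy; given the JAVA_LIBRARY_PATH step in the answers below, it was presumably an export along these lines (the exact variable and path are assumptions):

# assumed setting; the directory matches where the question copied the .dylib files
export JAVA_LIBRARY_PATH=/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/lib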
bprjcwpo2#
To get this working on a fresh install of macOS 10.12, I had to do the following:
Install the build dependencies using Homebrew:
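The brew command itself was lost in this copy; a plausible set based on the build requirements in Hadoop's BUILDING.txt (the package list is an assumption):

# assumed dependency list; Hadoop 2.x also needs protobuf 2.5.0 exactly,
# which may require a versioned formula or a manual install
brew install cmake maven openssl snappy zlib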
Check out the hadoop source code:
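The checkout command was also elided; a sketch assuming the ASF mirror on GitHub and its rel/release-* tags (the exact tag is an assumption, substitute the release you need):

# clone the Apache mirror and check out a release tag
git clone https://github.com/apache/hadoop.git
cd hadoop
git checkout rel/release-2.7.2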
Apply the following patch to the build:
Build hadoop from source:
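Presumably the same native build invocation as in the question:

# build the distribution with native libraries, skipping tests and javadoc
mvn package -Pdist,native -DskipTests -Dtar -Dmaven.javadoc.skip=true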
Specify JAVA_LIBRARY_PATH when running hadoop:
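The example was elided here; a sketch of how that looks (hadoop-config.sh folds an exported JAVA_LIBRARY_PATH into -Djava.library.path; the path is an assumption matching the copy step above):

# run a hadoop command with the native lib dir on the JVM library path
JAVA_LIBRARY_PATH=/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/lib hadoop checknative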
azpvetkf3#
The required steps are to copy the *.dylib files from the git source build dir into $HADOOP_HOME/<common dir>/lib for your platform. For OS/X installed via brew, the copy command and the check that the required libs are now visible follow below:
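The commands were lost in this copy; a sketch consistent with the cp command in the question, followed by a listing to confirm (the ls step is illustrative):

# copy the freshly built native libs into the brew-installed hadoop
cp -p hadoop-common-project/hadoop-common/target/hadoop-common-2.7.1/lib/native/*.dylib /usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/lib
# confirm the dylibs are now in place
ls /usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/lib/*.dylib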
And now the hadoop checknative command works:
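The output was not preserved here; it would mirror the checknative listing from the question, now reporting true (illustrative only; which codecs report true depends on the build flags and installed libraries):

Native library checking:
hadoop:  true
zlib:    true
snappy:  true
lz4:     true
bzip2:   true
openssl: true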
z31licg04#
As an update to @andrewdotn's answer, here is the patch.txt file to use with hadoop 2.8.1: