I am running Spark 2.1.0, Hive 2.1.1, and Hadoop 2.7.3 on Ubuntu 16.04.
I downloaded the Spark project from GitHub and built the "without hadoop" version:
./dev/make-distribution.sh --name "hadoop2-without-hive" --tgz "-Pyarn,hadoop-provided,hadoop-2.7,parquet-provided"
When I run ./sbin/start-master.sh, I get the following exception:
Spark Command: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -cp /home/server/spark/conf/:/home/server/spark/jars/*:/home/server/hadoop/etc/hadoop/:/home/server/hadoop/share/hadoop/common/lib/:/home/server/hadoop/share/hadoop/common/:/home/server/hadoop/share/hadoop/mapreduce/:/home/server/hadoop/share/hadoop/mapreduce/lib/:/home/server/hadoop/share/hadoop/yarn/:/home/server/hadoop/share/hadoop/yarn/lib/ -Xmx1g org.apache.spark.deploy.master.Master --host ThinkPad-W550s-Lab --port 7077 --webui-port 8080
========================================
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/Logger
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
at java.lang.Class.getMethod0(Class.java:3018)
at java.lang.Class.getMethod(Class.java:1784)
at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 7 more
I edited SPARK_DIST_CLASSPATH according to the post "Where are hadoop jar files in hadoop2?":
export SPARK_DIST_CLASSPATH=~/hadoop/share/hadoop/common/lib:~/hadoop/share/hadoop/common:~/hadoop/share/hadoop/mapreduce:~/hadoop/share/hadoop/mapreduce/lib:~/hadoop/share/hadoop/yarn:~/hadoop/share/hadoop/yarn/lib
But I still get the same error. I can see the slf4j jar files under ~/hadoop/share/hadoop/common/lib.
How can I fix this error?
Thank you!
1 Answer
The "Hadoop free" build needs SPARK_DIST_CLASSPATH to be modified so that it includes Hadoop's package jars. The most convenient way to do this is by adding an entry in conf/spark-env.sh. See https://spark.apache.org/docs/latest/hadoop-provided.html
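For reference, the linked docs build SPARK_DIST_CLASSPATH from the output of the hadoop classpath command. A minimal sketch of such an entry, assuming the ~/hadoop installation path from the question:

# conf/spark-env.sh
# Let the Hadoop installation report its own classpath so Spark picks up all of
# Hadoop's jars, including the slf4j jars under share/hadoop/common/lib.
export SPARK_DIST_CLASSPATH=$(~/hadoop/bin/hadoop classpath)

With that entry in place, running ./sbin/start-master.sh again should find org.slf4j.Logger on the classpath.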