Integration of HBase 1.2.4 with Spark 2.1.0 and Hadoop 2.7.3 in fully distributed mode

a7qyws3x · posted 2021-06-10 in HBase

I am trying to integrate HBase with Spark. I attempted the integration in two ways, and both failed. First, I copied all the jars from HBase's lib folder into Spark's jars folder; where an HBase jar conflicted with a Spark jar, I kept the Spark version. I also added export SPARK_CLASSPATH=/usr/local/spark/spark-2.1.0/jars/* to spark-env.sh and export HADOOP_USER_CLASSPATH_FIRST=true to my .bashrc, but starting the Spark shell then fails with the following IncompatibleClassChangeError:

hduser@master:/usr/local/spark/spark-2.1.0/bin$ ./spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
    at jline.TerminalFactory.create(TerminalFactory.java:101)
    at jline.TerminalFactory.get(TerminalFactory.java:158)
    at jline.console.ConsoleReader.<init>(ConsoleReader.java:229)
    at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
    at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
    at scala.tools.nsc.interpreter.jline.JLineConsoleReader.<init>(JLineReader.scala:62)
    at scala.tools.nsc.interpreter.jline.InteractiveReader.<init>(JLineReader.scala:34)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$scala$tools$nsc$interpreter$ILoop$$instantiater$1$1.apply(ILoop.scala:858)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$scala$tools$nsc$interpreter$ILoop$$instantiater$1$1.apply(ILoop.scala:855)
    at scala.tools.nsc.interpreter.ILoop.scala$tools$nsc$interpreter$ILoop$$mkReader$1(ILoop.scala:862)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$21$$anonfun$apply$9.apply(ILoop.scala:873)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$21$$anonfun$apply$9.apply(ILoop.scala:873)
    at scala.util.Try$.apply(Try.scala:192)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$21.apply(ILoop.scala:873)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$21.apply(ILoop.scala:873)
    at scala.collection.immutable.Stream.map(Stream.scala:418)
    at scala.tools.nsc.interpreter.ILoop.chooseReader(ILoop.scala:873)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$apply$mcZ$sp$2.apply(ILoop.scala:914)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:914)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
    at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
    at org.apache.spark.repl.Main$.doMain(Main.scala:68)
    at org.apache.spark.repl.Main$.main(Main.scala:51)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

17/01/29 13:02:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/01/29 13:02:27 WARN spark.SparkConf: 
SPARK_CLASSPATH was detected (set to '/usr/local/spark/spark-2.1.0/jars/*').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath

17/01/29 13:02:27 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to '/usr/local/spark/spark-2.1.0/jars/*' as a work-around.
17/01/29 13:02:27 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to '/usr/local/spark/spark-2.1.0/jars/*' as a work-around.
17/01/29 13:02:36 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://10.0.0.1:4040
Spark context available as 'sc' (master = local[*], app id = local-1485720148707).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
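
A note on the stack trace above: "Found class jline.Terminal, but interface was expected" is the classic symptom of two jline generations on one classpath. jline.Terminal is a class in jline 0.9.x but an interface in jline 2.x, and the Scala 2.11 REPL behind spark-shell expects the 2.x interface. HBase 1.2.x bundles an old jline in its lib folder (jline-0.9.94.jar, pulled in for the ZooKeeper shell), and because its file name differs from Spark's own jline 2.x jar, the bulk copy described above adds it without any visible name clash. A minimal sketch of how to check for and sideline the stale jar, assuming the jline-0.9.94.jar file name from the HBase 1.2.4 distribution (verify the exact name in your copy):

hduser@master:~$ cd /usr/local/spark/spark-2.1.0/jars
hduser@master:/usr/local/spark/spark-2.1.0/jars$ ls jline*.jar    # both the copied 0.9.x jar and Spark's 2.x jar should show up here
hduser@master:/usr/local/spark/spark-2.1.0/jars$ mv jline-0.9.94.jar ~/jline-0.9.94.jar.bak    # keep only Spark's jline 2.x

With only Spark's jline 2.x left in the jars folder, spark-shell should reach the scala> prompt without falling back to the unsupported terminal.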

The second attempt was as follows (with HBase and Spark in their original state, no jars copied over):

hduser@master:~$ HBASE_PATH='/usr/local/hbase/hbase-1.2.4/bin/hbase classpath'
hduser@master:~$ cd /usr/local/spark/spark-2.1.0/bin
hduser@master:/usr/local/spark/spark-2.1.0/bin$ ./spark-shell --driver-class-path $HBASE_PATH
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/01/29 16:40:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/01/29 16:41:03 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://10.0.0.1:4040
Spark context available as 'sc' (master = local[*], app id = local-1485733255875).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.

scala> import org.apache.hadoop.hbase.mapreduce.TableInputFormat
<console>:23: error: object hbase is not a member of package org.apache.hadoop
       import org.apache.hadoop.hbase.mapreduce.TableInputFormat
                                ^

scala>

This time I got object hbase is not a member of package org.apache.hadoop .
Please help me integrate HBase 1.2.4 with Spark 2.1.0.
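
Two details in the second attempt are worth checking. First, the HBASE_PATH assignment above uses single quotes, so the variable holds the literal text /usr/local/hbase/hbase-1.2.4/bin/hbase classpath rather than the output of running that command; backticks or $(...) command substitution is needed for the hbase classpath command to actually execute, and without it no HBase jar ever reaches the driver classpath, which is exactly what the "object hbase is not a member of package org.apache.hadoop" error indicates. Second, --driver-class-path affects only the driver, so distributed jobs additionally need the jars shipped to the executors (for example via --jars or spark.executor.extraClassPath). A minimal sketch of the launch plus a smoke test, where test_table is a hypothetical table name:

hduser@master:~$ HBASE_PATH=$(/usr/local/hbase/hbase-1.2.4/bin/hbase classpath)
hduser@master:~$ cd /usr/local/spark/spark-2.1.0/bin
hduser@master:/usr/local/spark/spark-2.1.0/bin$ ./spark-shell --driver-class-path "$HBASE_PATH"

scala> import org.apache.hadoop.hbase.HBaseConfiguration
scala> import org.apache.hadoop.hbase.mapreduce.TableInputFormat
scala> val hconf = HBaseConfiguration.create()               // reads hbase-site.xml if it is on the classpath
scala> hconf.set(TableInputFormat.INPUT_TABLE, "test_table") // hypothetical table name
scala> val rdd = sc.newAPIHadoopRDD(hconf, classOf[TableInputFormat],
     |   classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],
     |   classOf[org.apache.hadoop.hbase.client.Result])
scala> rdd.count()                                           // row count of the table if the wiring works

If the very first import still fails, the HBase jars did not make it onto the driver classpath, and the launch command's quoting is the first thing to re-check.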
