I am trying to get a Spark/Shark cluster up but keep running into the same problem. I have followed the instructions at https://github.com/amplab/shark/wiki/running-shark-on-a-cluster and addressed Hive as stated there.
I think the Shark driver is picking up another version of the Hadoop jars, but I am unsure why.
Here are the details; any help would be great.
Spark/Shark 0.9.0
Apache Hadoop 2.3.0
AMPLab Hive 0.11
Scala 2.10.3
Java 7
I have everything installed, but when I start Shark I get some deprecation warnings followed by an exception:
14/03/14 11:24:47 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/03/14 11:24:47 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
The exception:
Exception in thread "main" org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1072)
at shark.memstore2.TableRecovery$.reloadRdds(TableRecovery.scala:49)
at shark.SharkCliDriver.<init>(SharkCliDriver.scala:275)
at shark.SharkCliDriver$.main(SharkCliDriver.scala:162)
at shark.SharkCliDriver.main(SharkCliDriver.scala)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1139)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:51)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:61)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2288)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2299)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1070)
... 4 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1137)
... 9 more
Caused by: java.lang.UnsupportedOperationException: Not implemented by the DistributedFileSystem FileSystem implementation
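The final `UnsupportedOperationException` typically indicates that a Hadoop 1.x `FileSystem` class was loaded while talking to a Hadoop 2.x cluster. A quick way to spot a stale jar is to check which hadoop-core jars Shark actually bundles; the helper below is a hypothetical sketch (not from the original post), and the path in the comment assumes the default Shark 0.9.0 source layout:

```shell
# Hypothetical helper: flag jars that follow the Hadoop 1.x core naming
# convention. A hadoop-core-1.x jar on the classpath, combined with a
# Hadoop 2.x cluster, would explain the exception above.
is_hadoop1_core() {
  case "$(basename "$1")" in
    hadoop-core-1.*.jar) return 0 ;;  # Hadoop 1.x core jar: likely stale
    *)                   return 1 ;;
  esac
}

# In a real checkout you would scan the managed jars, e.g.:
#   find shark/lib_managed/jars/org.apache.hadoop -name '*.jar' |
#     while read -r jar; do is_hadoop1_core "$jar" && echo "stale: $jar"; done
is_hadoop1_core hadoop-core-1.0.4.jar && echo "stale Hadoop 1.x jar"
```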
1 Answer
I ran into the same problem, and I think it is caused by incompatible versions of Hadoop/Hive and Spark/Shark.
You need to either:
remove hadoop-core-1.0.x.jar from
shark/lib_managed/jars/org.apache.hadoop/hadoop-core/
or explicitly set SHARK_HADOOP_VERSION when building Shark.
The second method also solved some other issues for me. You can also check this thread for more details: https://groups.google.com/forum/#!msg/shark-users/ltnpcxhjioq/eqzybyzqmj
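A sketch of the second option, assuming the standard Shark 0.9.0 sbt build (an `sbt/sbt` launcher in the checkout) and the Hadoop 2.3.0 from the question; the exact version string and commands are assumptions, not quoted from the original answer:

```shell
# Assumption: Shark 0.9.0 checkout in ./shark, sbt launcher at ./sbt/sbt,
# cluster running Hadoop 2.3.0 as reported in the question.
export SHARK_HADOOP_VERSION=2.3.0

# Inside the shark checkout you would then rebuild from scratch:
#   cd shark
#   ./sbt/sbt clean
#   ./sbt/sbt package
echo "SHARK_HADOOP_VERSION=$SHARK_HADOOP_VERSION"
```

Rebuilding after `clean` matters here: a `package` alone can leave the previously resolved hadoop-core-1.0.x jar in lib_managed.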