1.1.0) access issue

tzcvj98z  posted on 2021-06-26  in Hive

My Hive version is 1.1.0 and my Spark version is 1.6.0; the connection itself is not the problem. I am able to connect successfully.
After connecting, when I import data or create a data link using the Hive connection, I can see the database names and the tables under them, but I get an error (java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice) when retrieving data from a table. Here is my code:

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
// Forward the metastore and Kerberos settings loaded from a properties file
hiveContext.setConf("hive.metastore.uris", prop.getProperty("hive.metastore.uris"))
hiveContext.setConf("hive.metastore.sasl.enabled", prop.getProperty("hive.metastore.sasl.enabled"))
hiveContext.setConf("hive.security.authorization.enabled", prop.getProperty("hive.security.authorization.enabled"))
hiveContext.setConf("hive.metastore.kerberos.principal", prop.getProperty("hive.metastore.kerberos.principal"))
hiveContext.setConf("hive.metastore.execute.setugi", prop.getProperty("hive.metastore.execute.setugi"))
hiveContext.sql("use abc")
hiveContext.sql("show tables").show(4) // This is working
hiveContext.sql("select * from abc.tab1 limit 10").show(2) // This fails

The error is as follows:
java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:406)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:728)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:671)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:155)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2800)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2837)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2819)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:97)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
    at org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat.listStatus(AvroContainerInputFormat.java:42)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
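For what it's worth, `show tables` only talks to the Hive metastore, while the `select` forces the executors to open the table's files on HDFS. The table location apparently uses an HA logical nameservice, and if the cluster's hdfs-site.xml is not visible to Spark, the HDFS client cannot resolve that logical name and fails exactly like this. Below is a sketch of what setting the HA properties programmatically might look like; the aliases nn1/nn2 and the host:port values are placeholders, not taken from my cluster:

    // Sketch only: register the HA logical nameservice with Spark's Hadoop
    // configuration, mirroring what hdfs-site.xml would normally provide.
    // "nameservice", "nn1", "nn2" and the hostnames below are placeholders.
    val hadoopConf = sc.hadoopConfiguration
    hadoopConf.set("dfs.nameservices", "nameservice")
    hadoopConf.set("dfs.ha.namenodes.nameservice", "nn1,nn2")
    hadoopConf.set("dfs.namenode.rpc-address.nameservice.nn1", "namenode1.example.com:8020")
    hadoopConf.set("dfs.namenode.rpc-address.nameservice.nn2", "namenode2.example.com:8020")
    hadoopConf.set(
      "dfs.client.failover.proxy.provider.nameservice",
      "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")

The usual alternative is simply placing the cluster's hdfs-site.xml and core-site.xml on the Spark driver/executor classpath (e.g. in the Spark conf directory) so these properties are picked up automatically.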
