Cannot create a table locally: Hive support is required

Asked by 7eumitmz on 2021-06-24 in Hive

The error still occurs after setting this config:

config("spark.sql.catalogImplementation","hive")

override def beforeAll(): Unit = {
  super[SharedSparkContext].beforeAll()
  // Create a local session backed by the Hive catalog implementation
  SparkSessionProvider._sparkSession = SparkSession.builder()
    .master("local[*]")
    .config("spark.sql.catalogImplementation", "hive")
    .getOrCreate()
}

Edit:
This is how I set up my local database and tables for testing.

val stgDb = "test_stagingDB"
val stgTbl_exp = "test_stagingDB_expected"
val stgTbl_result = "test_stg_table_result"

val trgtDb = "test_activeDB"
val trgtTbl_exp = "test_activeDB_expected"
val trgtTbl_result = "test_activeDB_results"

def setUpDb(): Unit = {
  println("Set up DB started")
  val localPath = "file:/C:/Users/vmurthyms/Code-prdb/prdb/com.rxcorp.prdb"
  // Create the staging and target databases at a local filesystem location
  spark.sql(s"CREATE DATABASE IF NOT EXISTS $stgDb LOCATION '$localPath/${stgDb}.db'")
  spark.sql(s"CREATE DATABASE IF NOT EXISTS $trgtDb LOCATION '$localPath/${trgtDb}.db'")
  // This CREATE TABLE statement is the one that fails (see the error below)
  spark.sql(s"CREATE TABLE IF NOT EXISTS $trgtDb.${trgtTbl_exp}_ina (Id String, Name String)")
  println("Set up DB done")
}
setUpDb()
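
A quick sanity check after setUpDb (a minimal sketch, assuming the same spark test session as above) would be to list the tables that were created:

// Hypothetical verification: list the tables in the target database
spark.sql(s"SHOW TABLES IN $trgtDb").show()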

When I run the spark.sql("CREATE TABLE ...") command, I get the following error:

org.apache.spark.sql.AnalysisException: Hive support is required to CREATE Hive TABLE (AS SELECT);;
'CreateTable `test_activeDB`.`test_activeDB_expected_ina`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, Ignore

at org.apache.spark.sql.execution.datasources.HiveOnlyCheck$$anonfun$apply$12.apply(rules.scala:392)
at org.apache.spark.sql.execution.datasources.HiveOnlyCheck$$anonfun$apply$12.apply(rules.scala:390)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreach(TreeNode.scala:117)
at org.apache.spark.sql.execution.datasources.HiveOnlyCheck$.apply(rules.scala:390)
at org.apache.spark.sql.execution.datasources.HiveOnlyCheck$.apply(rules.scala:388)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$2.apply(CheckAnalysis.scala:349)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$2.apply(CheckAnalysis.scala:349)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:349)
at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:92)
at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:641)
at com.rxcorp.prdb.exe.SitecoreAPIExtractTest$$anonfun$2.setUpDb$1(SitecoreAPIExtractTest.scala:127)
at com.rxcorp.prdb.exe.SitecoreAPIExtractTest$$anonfun$2.apply$mcV$sp(SitecoreAPIExtractTest.scala:130)
Answer 1 (mum43rcc):

It looks like you're almost there (your error message also gives you the clue): you need to call enableHiveSupport() when creating the Spark session, e.g.:

SparkSession.builder()
  .master("local[*]")
  .config("spark.sql.catalogImplementation", "hive")
  .enableHiveSupport()
  .getOrCreate()

When you use enableHiveSupport(), setting config("spark.sql.catalogImplementation", "hive") looks redundant; I think you can safely drop that part. A combined version is sketched below.
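
Putting both pieces together, the beforeAll from the question would become something like this (a sketch reusing SparkSessionProvider and SharedSparkContext from the asker's test harness):

override def beforeAll(): Unit = {
  super[SharedSparkContext].beforeAll()
  SparkSessionProvider._sparkSession = SparkSession.builder()
    .master("local[*]")
    // enableHiveSupport() sets spark.sql.catalogImplementation=hive internally,
    // so the explicit .config(...) call can be dropped
    .enableHiveSupport()
    .getOrCreate()
}

Note that enableHiveSupport() throws "Unable to instantiate SparkSession with Hive support because Hive classes are not found" if the spark-hive dependency is missing from the test classpath.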
