Dynamically creating a Hive external table with sqlContext.sql(…)

xzlaal3s · posted 2021-06-24 · Hive

I have a pyspark script in a Zeppelin notebook that I point at a JSON file in blob storage, in order to infer the JSON schema and create an external table in Hive.
I can take the SQL command printed out by the script, run it in a separate paragraph, and the table is created fine. But when I try to create the table via the sqlContext.sql() method, I get the following error:

AnalysisException: u'org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: MetaException(message:java.lang.ClassNotFoundException Class org.openx.data.jsonserde.JsonSerDe not found);'

Googling this error only turns up pages saying to make sure the SerDe's jar file is on the server, and it clearly is, because I can create the table manually. Here is my script:

%spark2.pyspark

import os
import datetime as dt
import time
from datetime import date
from pyspark.sql.functions import monotonically_increasing_id, lit
from pyspark.sql.types import *
from pyspark.sql import *
from pyspark.sql.functions import split, lower, unix_timestamp, from_unixtime

hiveDbName = 'dev_phoenix'
hiveTableName = 'et_engagement_cac'
serdeName = 'org.openx.data.jsonserde.JsonSerDe'
jsonFileLocation = 'wasbs://blah-blah-blah@meh-meh-meh.blob.core.windows.net/dev/data/Engagement'

jsonDf = sqlContext.read.json("wasbs://blah-blah-blah@meh-meh-meh.blob.core.windows.net/dev/data/Engagement/Engagement.json")

# jsonDf.printSchema()

extTableDDL = "create external table " + hiveDbName + "." + hiveTableName + "(\n"

for col in jsonDf.dtypes:
    extTableDDL += '`' + col[0] + '` ' + col[1].replace('_id','`_id`') + ',\n'

extTableDDL = extTableDDL[:-2]
extTableDDL += ')\nrow format serde \'' + serdeName + '\'\n'
extTableDDL += 'location \'' + jsonFileLocation + '\'\n'
extTableDDL += 'tblproperties (\'serialization.null.format\'=\'\')'

print(extTableDDL)

sqlContext.sql(extTableDDL)

I deliberately obfuscated our WASB container names, hence the blah/meh.
I've found some posts that make me start to think there are limitations on the kinds of tables that can be created with sqlContext.sql, so maybe what I'm trying to do isn't possible?
When I take the SerDe declaration out, I'm able to create the table successfully, but Hive uses a default SerDe that doesn't work for the data in the underlying files.
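As an aside, the dtype-to-column-list step in the script can be sketched in isolation. This is a minimal sketch, assuming the (name, type) pairs look like what DataFrame.dtypes returns; `build_column_list` is a hypothetical helper, not part of the original script:

```python
# Minimal sketch: turn (name, type) pairs like those returned by
# DataFrame.dtypes into the column list of a Hive CREATE TABLE statement.
# build_column_list is a hypothetical helper, not part of the script above.
def build_column_list(dtypes):
    # Back-tick each column name and join with ",\n", which avoids the
    # trailing-comma trimming ([:-2]) done in the script.
    return ",\n".join("`%s` %s" % (name, typ) for name, typ in dtypes)

# Pairs mimicking a schema inferred from a JSON file:
sample = [("id", "bigint"), ("name", "string"), ("tags", "array<string>")]
print(build_column_list(sample))
```

Joining with a separator up front means there is no trailing comma to slice off afterwards.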


3pvhb19x · answer #1

OK, I think I figured out what was happening and how to fix it. I suspect the jar file for the SerDe I was trying to use sits in a directory on the server that is not on the classpath.
So I first make a spark.sql(…) call to add the jar, and now it works. See the updated script below:

%spark2.pyspark

import os
import datetime as dt
import time
from datetime import date
from pyspark.sql.functions import monotonically_increasing_id, lit
from pyspark.sql.types import *
from pyspark.sql import *
from pyspark.sql.functions import split, lower, unix_timestamp, from_unixtime

hiveDbName = 'dev_phoenix'
hiveTableName = 'et_engagement_cac'
serdeName = 'org.openx.data.jsonserde.JsonSerDe'
jsonFileLocation = 'wasbs://blah-blah-blah@meh-meh-meh.blob.core.windows.net/dev/data/Engagement'

jsonDf = spark.read.json("wasbs://blah-blah-blah@meh-meh-meh.blob.core.windows.net/dev/data/Engagement/Engagement.json")

# jsonDf.printSchema()

spark.sql('add jar /usr/hdp/current/hive-client/lib/json-serde-1.3.8-jar-with-dependencies.jar')

extTableDDL = "create external table " + hiveDbName + "." + hiveTableName + "(\n"

for col in jsonDf.dtypes:
    extTableDDL += '`' + col[0] + '` ' + col[1].replace('_id','`_id`').replace('_class','`_class`') + ',\n'

extTableDDL = extTableDDL[:-2]
extTableDDL += ')\nROW FORMAT SERDE\n'
extTableDDL += '   \'' + serdeName + '\'\n'
extTableDDL += 'STORED AS INPUTFORMAT\n'
extTableDDL += '   \'org.apache.hadoop.mapred.TextInputFormat\'\n'
extTableDDL += 'OUTPUTFORMAT\n'
extTableDDL += '   \'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat\'\n'

extTableDDL += 'location \'' + jsonFileLocation + '\'\n'
extTableDDL += 'tblproperties (\'serialization.null.format\'=\'\')'

print(extTableDDL)

spark.sql(extTableDDL)
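The chained .replace('_id', …) / .replace('_class', …) calls handle specific field names one at a time. A more general approach, sketched below under the assumption that only struct field names beginning with an underscore need quoting, is a small regex helper; `quote_underscore_fields` is hypothetical and not part of the original answer:

```python
import re

# Hedged sketch: back-tick every struct field name that begins with an
# underscore inside a Hive type string, generalizing the per-name
# .replace('_id', ...) calls above. quote_underscore_fields is a
# hypothetical helper, not part of the original answer.
def quote_underscore_fields(hive_type):
    # A field name appears before ':' in struct<...>; quote it when it
    # starts with '_' and is not already preceded by a back-tick or a
    # word character.
    return re.sub(r'(?<![`\w])(_\w+)(?=:)', r'`\1`', hive_type)

print(quote_underscore_fields('struct<_id:string,_class:string,name:string>'))
# -> struct<`_id`:string,`_class`:string,name:string>
```

Plain types like 'bigint' pass through unchanged, so the helper can be applied to every entry in DataFrame.dtypes.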
