pyspark hive sql: convert a column of array(map(varchar, varchar)) into strings, one per row

jum4pzuy · asked 2022-11-05 · Hive

I want to convert a column of type

array(map(varchar, varchar))

into strings, one per row, programmatically via PySpark Hive SQL from a Jupyter notebook (Python 3), for a table on prestodb.

Example data:

user_id     sport_ids
 'aca'       [ {'sport_id': '5818'}, {'sport_id': '6712'}, {'sport_id': '1065'} ]

Expected result:

user_id     sport_ids
  'aca'        '5818'
  'aca'        '6712'
  'aca'        '1065'
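
For reference, a minimal sketch of how a DataFrame with this schema could be built and registered as the tab view referenced by the queries below (the SparkSession setup is my assumption, not part of the original question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# One-row DataFrame matching the schema in the question:
# user_id string, sport_ids array<map<string,string>>
df = spark.createDataFrame(
    [("aca", [{"sport_id": "5818"}, {"sport_id": "6712"}, {"sport_id": "1065"}])],
    "user_id string, sport_ids array<map<string,string>>",
)
df.createOrReplaceTempView("tab")
df.printSchema()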

I tried:

sql_q = """
    select distinct, user_id, transform(sport_ids, x -> element_at(x, 'sport_id')
    from tab """

spark.sql(sql_q)

but got this error:

'->' cannot be resolved

I also tried:

sql_q = """
    select distinct, user_id, sport_ids
    from tab"""

spark.sql(sql_q)

but got this error:

org.apache.spark.sql.AnalysisException: Cannot have map type columns in DataFrame which calls set operations(intersect, except, etc.), but the type of column request_features[0] is map<string,string>;;

Am I missing something?
I also looked at "hive convert array<map<string, string>> to string" and "Extract map(varchar, array(varchar)) - Hive SQL", but they did not solve my problem.
Thanks


xtfmy6hx #1

Let's try using higher-order functions to look up the map values and explode them into separate rows:

from pyspark.sql.functions import explode, expr

df.withColumn('sport_ids', explode(expr("transform(sport_ids, x -> map_values(x)[0])"))).show()

+-------+---------+
|user_id|sport_ids|
+-------+---------+
|    aca|     5818|
|    aca|     6712|
|    aca|     1065|
+-------+---------+
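
Since the question asks to run this through spark.sql, the same idea can presumably be written directly in SQL (a sketch assuming Spark 2.4+, where higher-order functions and explode are available, and a table or view named tab as in the question):

# Hypothetical spark.sql form of the answer above; element_at(x, 'sport_id')
# pulls the value for the 'sport_id' key from each map in the array, and
# explode turns the resulting array into one row per element.
result = spark.sql("""
    SELECT user_id,
           explode(transform(sport_ids, x -> element_at(x, 'sport_id'))) AS sport_id
    FROM tab
""")
result.show()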

aurhwmvo #2

You can treat the data as JSON (json_parse, cast to array(json), and json_extract_scalar; for more JSON functions see here) and flatten it on the Presto side with unnest:

-- sample data
WITH dataset(user_id, sport_ids) AS (
    VALUES 
        ('aca', '[ {"sport_id": "5818"}, {"sport_id": "6712"}, {"sport_id": "1065"} ]')
) 

-- query
select user_id,
    json_extract_scalar(record, '$.sport_id') sport_id
from dataset,
    unnest(cast(json_parse(sport_ids) as array(json))) as t(record)

Output:
| user_id | sport_id |
| --- | --- |
| aca | 5818 |
| aca | 6712 |
| aca | 1065 |
