I want to aggregate a (JSON) column value in a Spark DataFrame / Hive table.
For example:
```
year, month, val (json)
2010 01 [{"a_id":"caes"},{"a_id":"rgvtsa"},{"a_id":"btbsdv"}]
2010 01 [{"a_id":"caes"},{"a_id":"uktf"},{"a_id":"ohcwa"}]
2008 10 [{"a_id":"rfve"},{"a_id":"yjndf"},{"a_id":"onbds"}]
2008 10 [{"a_id":"fvds"},{"a_id":"yjndf"},{"a_id":"yesva"}]
```
I need:
```
year, month, val (json), num (int)
2010 01 [{"a_id":"caes"},{"a_id":"rgvtsa"},{"a_id":"btbsdv"},{"a_id":"uktf"},{"a_id":"ohcwa"}] 5
2008 10 [{"a_id":"rfve"},{"a_id":"yjndf"},{"a_id":"onbds"},{"a_id":"fvds"},{"a_id":"yesva"}] 5
```
I need to remove the duplicates and get the size of the JSON array (the number of "a_id" entries).
The data is stored as a Hive table, so is it better to process it with PySpark SQL?
I would also like to know how this works if the data is stored as a Spark DataFrame instead.
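Either way, the logic is the same once the data is in a DataFrame. As a minimal sketch (assuming the table is registered in the Hive metastore under the hypothetical name `mydb.json_table`):

```python
from pyspark.sql import SparkSession

# enableHiveSupport() lets the session read tables from the Hive metastore.
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Hypothetical table name; both calls return an ordinary DataFrame.
df = spark.table("mydb.json_table")
df = spark.sql("SELECT year, month, val FROM mydb.json_table")
```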
What I have tried:
```python
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType

schema = StructType([
    StructField('a_id', StringType(), True)
])

df.withColumn("val", from_json("val", schema)) \
  .select(col('year'), col('month'), col('val.*')) \
  .show()
```
However, all the values in "val" come out null.
Thanks.
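A likely cause of the nulls: `val` holds a JSON *array* of objects, while the schema above describes a single struct, and `from_json` returns null when the schema does not match. A minimal sketch of the fix, assuming Spark 2.2+ (where `from_json` accepts an `ArrayType`):

```python
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import ArrayType, StructType, StructField, StringType

# Wrap the struct in ArrayType so it matches the [{"a_id": ...}, ...] shape.
schema = ArrayType(StructType([StructField('a_id', StringType(), True)]))

parsed = df.withColumn("val", from_json("val", schema))
parsed.select("year", "month", col("val.a_id")).show(truncate=False)
```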
Update: my Hive version:

```
%sh
ls /databricks/hive | grep "hive"

spark--maven-trees--spark_1.4_hive_0.13
```
My DDL:
```python
from functools import reduce  # reduce is not a builtin on Python 3

import pyspark.sql.functions as F
import pyspark.sql.types as T
from pyspark.sql.types import ArrayType, StringType

def concate_elements(val):
    # Concatenate the per-row arrays into a single array.
    return reduce(lambda x, y: x + y, val)

flatten_array = F.udf(concate_elements, T.ArrayType(T.StringType()))
remove_duplicates = F.udf(lambda row: list(set(row)), ArrayType(StringType()))

# final results
df.select("year", "month", flatten_array("val").alias("flattenvalues")) \
  .withColumn("uniquevalues", remove_duplicates("flattenvalues")) \
  .withColumn("size", F.size("uniquevalues")) \
  .show()
```
2 Answers
erhoui1w1#
Consider the input data in a JSON file, json-input.json:

```
{"year":"2010","month":"01","value":[{"a_id":"caes"},{"a_id":"uktf"},{"a_id":"ohcwa"}]}
{"year":"2011","month":"01","value":[{"a_id":"caes"},{"a_id":"uktf"},{"a_id":"uktf"},{"a_id":"sathya"}]}
```
```
ADD JAR /home/sathya/Downloads/json-serde-1.3.7-jar-with-dependencies.jar

CREATE EXTERNAL TABLE json_table (
    year string,
    month string,
    value array<struct<a_id:string>>)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe';

load data local inpath '/home/sathya/json-input.json' into table json_table;

select * from json_table;
OK
2010    01    [{"a_id":"caes"},{"a_id":"uktf"},{"a_id":"ohcwa"}]
2011    01    [{"a_id":"caes"},{"a_id":"uktf"},{"a_id":"uktf"},{"a_id":"sathya"}]
```
Launch PySpark with the SerDe jar on the classpath and read the table:

```
pyspark --jars /home/sathya/Downloads/json-serde-1.3.7-jar-with-dependencies.jar --driver-class-path /home/sathya/Downloads/json-serde-1.3.7-jar-with-dependencies.jar
```

```python
df = spark.sql("select * from default.json_table")
df.show(truncate=False)
```

```
+----+-----+----------------------------------+
|year|month|value                             |
+----+-----+----------------------------------+
|2010|01   |[[caes], [uktf], [ohcwa]]         |
|2011|01   |[[caes], [uktf], [uktf], [sathya]]|
+----+-----+----------------------------------+
```
UDFs for concatenating the array elements and removing duplicates within an array:

```python
from functools import reduce
import pyspark.sql.functions as F
import pyspark.sql.types as T

def concate_elements(val):
    # Concatenate the row's array-of-structs (Rows are tuples, so + concatenates).
    return reduce(lambda x, y: x + y, val)

flatten_array = F.udf(concate_elements, T.ArrayType(T.StringType()))
remove_duplicates = F.udf(lambda row: list(set(row)), T.ArrayType(T.StringType()))

# final results
df.select("year", "month", flatten_array("value").alias("flattenvalues")) \
  .withColumn("uniquevalues", remove_duplicates("flattenvalues")) \
  .withColumn("size", F.size("uniquevalues")) \
  .show()
```
```
+----+-----+--------------------------+--------------------+----+
|year|month|flattenvalues             |uniquevalues        |size|
+----+-----+--------------------------+--------------------+----+
|2010|01   |[caes, uktf, ohcwa]       |[caes, uktf, ohcwa] |3   |
|2011|01   |[caes, uktf, uktf, sathya]|[caes, sathya, uktf]|3   |
+----+-----+--------------------------+--------------------+----+
```
That is the full code for this scenario.
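One caveat: the question's sample data has two rows per (year, month) that need to be merged, and the per-row UDFs above do not group rows by themselves. A sketch of one way to cover that, using `explode` plus `collect_set` so the merge and the de-duplication happen in a single aggregation (assuming `df` is the parsed table from this answer, with `value` typed as `array<struct<a_id:string>>`):

```python
import pyspark.sql.functions as F

merged = (df
    .select("year", "month", F.explode("value").alias("item"))  # one row per {"a_id": ...}
    .groupBy("year", "month")
    .agg(F.collect_set("item.a_id").alias("uniquevalues"))      # collect_set drops duplicates
    .withColumn("size", F.size("uniquevalues")))
merged.show(truncate=False)
```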
zqdjd7g92#
Here is a solution that works on Databricks:
Input df:
Output df:
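The input and output DataFrames appear to have been shown as images in the original answer. As a stand-in, here is only a reconstruction sketched from the question's requirements, not the answer's original code: it parses the JSON with `from_json`, merges and de-duplicates the ids per (year, month), and rebuilds the JSON string with `to_json` (the `transform` higher-order function needs Spark 2.4+):

```python
import pyspark.sql.functions as F
from pyspark.sql.types import ArrayType, StructType, StructField, StringType

schema = ArrayType(StructType([StructField("a_id", StringType())]))

out = (df
    .withColumn("val", F.from_json("val", schema))
    .select("year", "month", F.explode("val.a_id").alias("a_id"))
    .groupBy("year", "month")
    .agg(F.collect_set("a_id").alias("ids"))
    .withColumn("num", F.size("ids"))
    # Rebuild the [{"a_id": ...}, ...] JSON string from the distinct ids.
    .withColumn("val", F.expr("to_json(transform(ids, x -> named_struct('a_id', x)))"))
    .select("year", "month", "val", "num"))
out.show(truncate=False)
```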