How to convert a PySpark RDD to a DataFrame

yftpprvb · posted 2021-05-27 in Spark

I have a DataFrame df as follows:
df =

+---+---+----+---+---+
|  a|  b|   c|  d|  e|
+---+---+----+---+---+
|  1|  a|foo1|  4|  5|
|  2|  b| bar|  4|  6|
|  3|  c| mnc|  4|  7|
|  4|  c| mnc|  4|  7|
+---+---+----+---+---+

I want to end up with something like df1 =

+---+---+-----------------------------------------------+
|  a|  b|   c                                           |
+---+---+-----------------------------------------------+
|  1|  a|{'a': 1, 'b': 'a', 'c': 'foo1', 'd': 4, 'e': 5}|                            
|  2|  b|{'a': 2, 'b': 'b', 'c': 'bar', 'd': 4, 'e': 6} |                                       
|  3|  c|{'a': 3, 'b': 'c', 'c': 'mnc', 'd': 4, 'e': 7} |                                       
|  4|  c|{'a': 4, 'b': 'c', 'c': 'mnc', 'd': 4, 'e': 7} |                                       
+---+---+-----------------------------------------------+

I really want to avoid a groupBy, so my idea was to first convert the DataFrame into an RDD and then convert that back into a DataFrame.
The code I wrote is:

df2 = df.rdd.flatMap(lambda x: (x.a, x.b, x.asDict()))

When I run a foreach over df2 I can see the records, but they are in RDD form, so I tried to create a DataFrame from it:

df3 = df2.toDF()                        # 1st way
df3 = sparkSession.createDataFrame(df2) # 2nd way

But I get errors with both. Can someone explain what I am doing wrong here, and how I can get to the result above?
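A note on why the RDD attempt fails: `flatMap` emits each element of the returned tuple as its own record, so `df2` becomes an RDD of mixed scalars and dicts that `toDF()` cannot infer a schema for. Below is a minimal sketch of the intended round trip (not from the original post): it uses `map` so each input row yields exactly one output tuple, and serializes the row dict with `json.dumps` so the third column has a single consistent type.

import json

# One output tuple per input row: (a, b, JSON string of the whole row).
df2 = df.rdd.map(lambda r: (r.a, r.b, json.dumps(r.asDict())))

# Supplying column names lets toDF() build the schema directly.
df3 = df2.toDF(["a", "b", "c"])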


pepwfjgg 1#

You can do this with Spark SQL or with the DataFrame API:
Spark SQL

# `data` refers to the question's DataFrame (df).
data.createOrReplaceTempView("data")
spark.sql("""
    select a, b, to_json(named_struct('a', a, 'b', b, 'c', c, 'd', d, 'e', e)) as c
    from data""").show(20, False)

Output

# +---+---+----------------------------------------+
# |a  |b  |c                                       |
# +---+---+----------------------------------------+
# |1  |a  |{"a":1,"b":"a","c":"foo1","d":"4","e":5}|
# |2  |b  |{"a":2,"b":"b","c":"bar","d":"4","e":6} |
# |3  |c  |{"a":3,"b":"c","c":"mnc","d":"4","e":7} |
# |4  |c  |{"a":4,"b":"c","c":"mnc","d":"4","e":7} |
# +---+---+----------------------------------------+
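If the goal is a genuinely nested column rather than a JSON string, a variant is to keep the struct and skip `to_json`. This is a sketch, not part of the original answer, and assumes the same `data` DataFrame:

from pyspark.sql import functions as F

# Keeps c as a StructType column instead of a serialized JSON string.
nested = data.select("a", "b", F.struct(*data.columns).alias("c"))
nested.printSchema()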

DataFrame API

from pyspark.sql.functions import struct, to_json

result = data \
    .withColumn('c', to_json(struct(data.a, data.b, data.c, data.d, data.e))) \
    .select("a", "b", "c")
result.show(20, False)

Output

# +---+---+----------------------------------------+
# |a  |b  |c                                       |
# +---+---+----------------------------------------+
# |1  |a  |{"a":1,"b":"a","c":"foo1","d":"4","e":5}|
# |2  |b  |{"a":2,"b":"b","c":"bar","d":"4","e":6} |
# |3  |c  |{"a":3,"b":"c","c":"mnc","d":"4","e":7} |
# |4  |c  |{"a":4,"b":"c","c":"mnc","d":"4","e":7} |
# +---+---+----------------------------------------+
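As an aside, when there are many columns, listing each one in `struct(...)` gets tedious. A sketch (not from the original answer) of the same transform built from `data.columns`:

from pyspark.sql.functions import struct, to_json

# struct(*data.columns) packs every column, so the column list
# never drifts out of sync with the schema.
result = data.withColumn("c", to_json(struct(*data.columns))).select("a", "b", "c")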

dgtucam1 2#

You can create the JSON column from a map-type column:

import pyspark.sql.functions as F

df = sqlContext.createDataFrame(
    [(0, 1, 23, 4, 8, 9, 5, "b1"), (1, 2, 43, 8, 10, 20, 43, "e1")],
    ("id", "a1", "b1", "c1", "d1", "e1", "f1", "ref")
)

# Build alternating (literal column name, column value) pairs for create_map.
tst = [[F.lit(c), F.col(c)] for c in df.columns]
tst_flat = [item for sublist in tst for item in sublist]

map_coln = F.create_map(*tst_flat)

df1 = df.withColumn("out", F.to_json(map_coln))

Result:

df1.show(truncate=False)
+---+---+---+---+---+---+---+---+-------------------------------------------------------------------------------+
|id |a1 |b1 |c1 |d1 |e1 |f1 |ref|out                                                                            |
+---+---+---+---+---+---+---+---+-------------------------------------------------------------------------------+
|0  |1  |23 |4  |8  |9  |5  |b1 |{"id":"0","a1":"1","b1":"23","c1":"4","d1":"8","e1":"9","f1":"5","ref":"b1"}   |
|1  |2  |43 |8  |10 |20 |43 |e1 |{"id":"1","a1":"2","b1":"43","c1":"8","d1":"10","e1":"20","f1":"43","ref":"e1"}|
+---+---+---+---+---+---+---+---+-------------------------------------------------------------------------------+
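Note that `create_map` needs all map values to share one type, which is why every value in `out` above is a quoted string, unlike the struct-based approach in the first answer, where per-column types survive. The two-step flattening can also be written more compactly with `itertools.chain`; a sketch, not part of the original answer:

from itertools import chain
import pyspark.sql.functions as F

# chain.from_iterable flattens the (name, value) pairs in one pass.
map_coln = F.create_map(*chain.from_iterable(
    (F.lit(c), F.col(c)) for c in df.columns
))
df1 = df.withColumn("out", F.to_json(map_coln))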
