Convert a string into separate lines, then into a PySpark DataFrame

fykwrbwg, posted 2021-05-27 in Spark

I have a string in which the lines are separated by \n: the column names come first, then one data row per line. I've tried several approaches but couldn't find one that works. For example:

"Name,ID,Number\n abc,1,123 \n xyz,2,456"

I want to convert it into a PySpark DataFrame like this:

Name     ID   Number
abc      1      123
xyz      2      456

xqk2d5yq (answer 1)

You can try this:

from pyspark.sql.functions import *
from pyspark.sql.types import *

data = spark.sql("""select 'Name,ID,Number\n abc,1,123 \n xyz,2,456' as col1""")

data.show(20,False)

# +-------------------------------------+
# |col1                                 |
# +-------------------------------------+
# |Name,ID,Number
#  abc,1,123 
#  xyz,2,456|
# +-------------------------------------+

data.createOrReplaceTempView("data")
data = spark.sql("""
select posexplode(split(col1,'\n'))
from data
""")
data.show(20,False)

# +---+--------------+
# |pos|col           |
# +---+--------------+
# |0  |Name,ID,Number|
# |1  | abc,1,123    |
# |2  | xyz,2,456    |
# +---+--------------+
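As a side note, `posexplode` pairs each array element with its position, much like Python's built-in `enumerate`. A minimal pure-Python sketch (not part of the original answer, runnable without Spark) of what the `split` + `posexplode` step produces:

```python
# Pure-Python sketch of split(col1, '\n') followed by posexplode:
# each line is paired with its index, like enumerate()
s = "Name,ID,Number\n abc,1,123 \n xyz,2,456"
pairs = list(enumerate(s.split("\n")))
print(pairs)
# [(0, 'Name,ID,Number'), (1, ' abc,1,123 '), (2, ' xyz,2,456')]
```

Row 0 carries the header, which is why the final query filters on `pos > 0`.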

columnList = data.select('col').first()[0].split(",")
data.createOrReplaceTempView("data")

query = ""
for i,e in enumerate(columnList):
  query += "trim(split(col , ',')[{1}]) as {0}".format(e,i) if i == 0 else ",trim(split(col , ',')[{1}]) as {0}".format(e,i)

finalData = spark.sql("""
SELECT {0}
FROM data
where pos > 0
""".format(query))
finalData.show()

# +----+---+------+
# |Name| ID|Number|
# +----+---+------+
# | abc|  1|   123|
# | xyz|  2|   456|
# +----+---+------+
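The string-building loop is the fiddly part of this answer. A standalone sketch of the SELECT list it generates (runnable without Spark), using a join instead of the if/else concatenation:

```python
# Rebuild the dynamic SELECT list from the loop above, without Spark
column_list = ["Name", "ID", "Number"]
query = ",".join(
    "trim(split(col , ',')[{1}]) as {0}".format(name, i)
    for i, name in enumerate(column_list)
)
print(query)
# trim(split(col , ',')[0]) as Name,trim(split(col , ',')[1]) as ID,trim(split(col , ',')[2]) as Number
```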

1sbrub3j (answer 2)

I assume you are trying to create the DataFrame from a text string. If so, PySpark offers several ways to build a DataFrame from a list, such as createDataFrame(), or an RDD built with parallelize(), and Python offers many ways to split a string into a list. Combining the two should give you the result you want; it's worth studying both. One possible approach:

tst_str = "Name,ID,Number\n abc,1,123 \n xyz,2,456"
# str.split() with no argument splits on any whitespace, which both breaks
# the string into lines and drops the stray spaces around the fields
tst_spl = [x.split(',') for x in tst_str.split()]

# First row is the header, the remaining rows are the data
tst_df = sqlContext.createDataFrame(tst_spl[1:], schema=tst_spl[0])

tst_df.show()
+----+---+------+
|Name| ID|Number|
+----+---+------+
| abc|  1|   123|
| xyz|  2|   456|
+----+---+------+
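One caveat: splitting on whitespace only works here because the fields contain no internal spaces. A sketch (my addition, not from the answer above) that splits explicitly on newlines and trims each field instead:

```python
tst_str = "Name,ID,Number\n abc,1,123 \n xyz,2,456"
# Split on newlines first, then on commas, trimming stray spaces per field
rows = [[f.strip() for f in line.split(",")] for line in tst_str.split("\n")]
header, data = rows[0], rows[1:]
print(header)  # ['Name', 'ID', 'Number']
print(data)    # [['abc', '1', '123'], ['xyz', '2', '456']]
```

`header` and `data` can then be passed to createDataFrame() exactly as in the answer above.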
