Modifying columns in a PySpark DataFrame

vuv7lop3 asked on 2021-06-25 in Hive

I have the following input DataFrame. The input columns are dynamic, meaning there can be any number of them, from input1 up to inputN:

+----+----+-------+------+------+
|dim1|dim2|  byvar|input1|input2|
+----+----+-------+------+------+
| 101| 102|MTD0001|     1|    10|
| 101| 102|MTD0002|     2|    12|
| 101| 102|MTD0003|     3|    13|
+----+----+-------+------+------+

I want to reshape it into the layout below. How can this be done?

+----+----+-------+----------+------+
|dim1|dim2|  byvar|TRAMS_NAME|values|
+----+----+-------+----------+------+
| 101| 102|MTD0001|    input1|     1|
| 101| 102|MTD0001|    input2|    10|
| 101| 102|MTD0002|    input1|     2|
| 101| 102|MTD0002|    input2|    12|
| 101| 102|MTD0003|    input1|     3|
| 101| 102|MTD0003|    input2|    13|
+----+----+-------+----------+------+

I used Spark's `create_map` method, but that is a hard-coded approach. Is there another way to achieve the same result?
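
For context, the hard-coded approach mentioned above presumably looked something like the sketch below (the original code was not shown; `explode` on the map produces the name and value columns):

from pyspark.sql import functions as F

# hard-coded: every input column has to be listed by hand
df.select(
    "dim1", "dim2", "byvar",
    F.explode(F.create_map(
        F.lit("input1"), F.col("input1"),
        F.lit("input2"), F.col("input2"),
    )).alias("TRAMS_NAME", "values")
).show()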


fkvaft9z #1

Here is another way to solve the problem, using the stack() function. It is arguably a bit simpler, but it has the limitation that the column names must be listed explicitly.
Hope this helps!


# set your dataframe
df = spark.createDataFrame(
    [(101, 102, 'MTD0001', 1, 10),
     (101, 102, 'MTD0002', 2, 12),
     (101, 102, 'MTD0003', 3, 13)],
    ['dim1', 'dim2', 'byvar', 'v1', 'v2']
)

df.show()
+----+----+-------+---+---+
|dim1|dim2|  byvar| v1| v2|
+----+----+-------+---+---+
| 101| 102|MTD0001|  1| 10|
| 101| 102|MTD0002|  2| 12|
| 101| 102|MTD0003|  3| 13|
+----+----+-------+---+---+

result = df.selectExpr('dim1', 
                       'dim2', 
                       'byvar', 
                       "stack(2, 'v1', v1, 'v2', v2) as (names, values)")
result.show()
+----+----+-------+-----+------+
|dim1|dim2|  byvar|names|values|
+----+----+-------+-----+------+
| 101| 102|MTD0001|   v1|     1|
| 101| 102|MTD0001|   v2|    10|
| 101| 102|MTD0002|   v1|     2|
| 101| 102|MTD0002|   v2|    12|
| 101| 102|MTD0003|   v1|     3|
| 101| 102|MTD0003|   v2|    13|
+----+----+-------+-----+------+
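
For reference (this illustration is not part of the original answer): `stack(n, expr1, ..., exprk)` distributes its k expressions over n rows, so `stack(2, 'v1', v1, 'v2', v2)` emits two rows per input row. A minimal standalone sketch, assuming an active SparkSession named `spark`:

spark.sql("SELECT stack(2, 'a', 1, 'b', 2) AS (name, value)").show()
# +----+-----+
# |name|value|
# +----+-----+
# |   a|    1|
# |   b|    2|
# +----+-----+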

If we want to choose the columns to stack dynamically, we only need to list the columns that stay unchanged (dim1, dim2, and byvar in your example) and build the stack expression from the remaining ones with a loop.


# set static columns
unaltered_cols = ['dim1', 'dim2', 'byvar']

# extract the columns to stack
change_cols = [n for n in df.schema.names if n not in unaltered_cols]
cols_exp = ",".join(["'" + n + "'," + n for n in change_cols])

# build the stack expression
stack_exp = "stack(" + str(len(change_cols)) + ',' + cols_exp + ") as (names, values)"

# print the final expression
print(stack_exp)
# --> stack(2,'v1',v1,'v2',v2) as (names, values)

# apply the transformation

result = df.selectExpr('dim1', 
                       'dim2', 
                       'byvar', 
                       stack_exp)
result.show()
+----+----+-------+-----+------+
|dim1|dim2|  byvar|names|values|
+----+----+-------+-----+------+
| 101| 102|MTD0001|   v1|     1|
| 101| 102|MTD0001|   v2|    10|
| 101| 102|MTD0002|   v1|     2|
| 101| 102|MTD0002|   v2|    12|
| 101| 102|MTD0003|   v1|     3|
| 101| 102|MTD0003|   v2|    13|
+----+----+-------+-----+------+
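
A small refinement, not in the original answer: the unaltered columns can be splatted into selectExpr from the same list, so no column name is hard-coded at all:

result = df.selectExpr(*unaltered_cols, stack_exp)
result.show()  # same output as above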

If we run the same code against a different DataFrame, you get the result you asked for.

df = spark.createDataFrame(
    [(101, 102, 'MTD0001', 1, 10, 4),
     (101, 102, 'MTD0002', 2, 12, 5),
     (101, 102, 'MTD0003', 3, 13, 5)],
    ['dim1', 'dim2', 'byvar', 'v1', 'v2', 'v3']
)

# Re-run the stack_exp construction above first; with three value columns it becomes:
# stack(3,'v1',v1,'v2',v2,'v3',v3) as (names, values)

result = df.selectExpr('dim1', 
                       'dim2', 
                       'byvar', 
                       stack_exp)
result.show()
+----+----+-------+-----+------+
|dim1|dim2|  byvar|names|values|
+----+----+-------+-----+------+
| 101| 102|MTD0001|   v1|     1|
| 101| 102|MTD0001|   v2|    10|
| 101| 102|MTD0001|   v3|     4|
| 101| 102|MTD0002|   v1|     2|
| 101| 102|MTD0002|   v2|    12|
| 101| 102|MTD0002|   v3|     5|
| 101| 102|MTD0003|   v1|     3|
| 101| 102|MTD0003|   v2|    13|
| 101| 102|MTD0003|   v3|     5|
+----+----+-------+-----+------+

z0qdvdin #2

Sample DataFrame:

df.show() # added more columns to show the code is dynamic
+----+----+-------+------+------+------+------+------+------+
|dim1|dim2|  byvar|input1|input2|input3|input4|input5|input6|
+----+----+-------+------+------+------+------+------+------+
| 101| 102|MTD0001|     1|    10|     3|     6|    10|    13|
| 101| 102|MTD0002|     2|    12|     4|     8|    11|    14|
| 101| 102|MTD0003|     3|    13|     5|     9|    12|    15|
+----+----+-------+------+------+------+------+------+------+

For `Spark 2.4+` you can use `explode`, `arrays_zip`, `array`, and `element_at` to get your two columns, as long as every input column's name starts with `'input'`:

from pyspark.sql import functions as F

# pair each input column's name with its value, zip the pairs into structs,
# and explode to get one row per (column, value) pair
df.withColumn("vals",
              F.explode(F.arrays_zip(F.array(
                  [F.array(F.lit(x), F.col(x)) for x in df.columns if x.startswith('input')]
              ))))\
  .select("dim1", "dim2", "byvar", "vals.*")\
  .withColumn("TRAMS_NAME", F.element_at("0", 1))\
  .withColumn("VALUES", F.element_at("0", 2))\
  .drop("0").show()

+----+----+-------+----------+------+
|dim1|dim2|  byvar|TRAMS_NAME|VALUES|
+----+----+-------+----------+------+
| 101| 102|MTD0001|    input1|     1|
| 101| 102|MTD0001|    input2|    10|
| 101| 102|MTD0001|    input3|     3|
| 101| 102|MTD0001|    input4|     6|
| 101| 102|MTD0001|    input5|    10|
| 101| 102|MTD0001|    input6|    13|
| 101| 102|MTD0002|    input1|     2|
| 101| 102|MTD0002|    input2|    12|
| 101| 102|MTD0002|    input3|     4|
| 101| 102|MTD0002|    input4|     8|
| 101| 102|MTD0002|    input5|    11|
| 101| 102|MTD0002|    input6|    14|
| 101| 102|MTD0003|    input1|     3|
| 101| 102|MTD0003|    input2|    13|
| 101| 102|MTD0003|    input3|     5|
| 101| 102|MTD0003|    input4|     9|
| 101| 102|MTD0003|    input5|    12|
| 101| 102|MTD0003|    input6|    15|
+----+----+-------+----------+------+
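
Side note, not from the original answers: on Spark 3.4+ the same reshaping is built in as DataFrame.unpivot (also aliased as melt), which avoids both the stack string and the array gymnastics:

# requires Spark 3.4+
value_cols = [c for c in df.columns if c.startswith('input')]
df.unpivot(['dim1', 'dim2', 'byvar'], value_cols, 'TRAMS_NAME', 'VALUES').show()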
