pyspark: groupby and aggregate avg and first on multiple columns

2o7dmzc5 · asked 2022-11-21 · in Spark

I have the following example PySpark DataFrame. After a groupby, I want to compute the mean of several columns and take the first value of several others. In the real case I have hundreds of columns, so I can't write out each one individually.

sp = spark.createDataFrame(
    [['a', 2, 4, 'cc', 'anc'], ['a', 4, 7, 'cd', 'abc'], ['b', 6, 0, 'as', 'asd'],
     ['b', 2, 4, 'ad', 'acb'], ['c', 4, 4, 'sd', 'acc']],
    ['id', 'col1', 'col2', 'col3', 'col4'])

+---+----+----+----+----+
| id|col1|col2|col3|col4|
+---+----+----+----+----+
|  a|   2|   4|  cc| anc|
|  a|   4|   7|  cd| abc|
|  b|   6|   0|  as| asd|
|  b|   2|   4|  ad| acb|
|  c|   4|   4|  sd| acc|
+---+----+----+----+----+

This is what I am trying:

from pyspark.sql import functions as f

mean_cols = ['col1', 'col2']
first_cols = ['col3', 'col4']
sp.groupby('id').agg(*[f.mean for col in mean_cols], *[f.first for col in first_cols])

But it doesn't work. How can I do this in PySpark?

j8ag8udp1#

The best way to apply multiple functions across multiple columns is the .agg(*expr) form.

import pyspark.sql.functions as F

# Test data
tst = spark.createDataFrame([(1, 2, 3, 4), (3, 4, 5, 1), (5, 6, 7, 8), (7, 8, 9, 2)],
                            schema=['col1', 'col2', 'col3', 'col4'])
# One aliased expression per (function, column) pair: min_col1, min_col2, ...
fn_l = [F.min, F.max, F.mean, F.first]
col_l = ['col1', 'col2', 'col3']
expr = [fn(coln).alias(str(fn.__name__) + '_' + str(coln)) for fn in fn_l for coln in col_l]
tst_r = tst.groupby('col4').agg(*expr)

The result will be:

tst_r.show()
+----+--------+--------+--------+--------+--------+--------+---------+---------+---------+----------+----------+----------+
|col4|min_col1|min_col2|min_col3|max_col1|max_col2|max_col3|mean_col1|mean_col2|mean_col3|first_col1|first_col2|first_col3|
+----+--------+--------+--------+--------+--------+--------+---------+---------+---------+----------+----------+----------+
|   5|       5|       6|       7|       7|       8|       9|      6.0|      7.0|      8.0|         5|         6|         7|
|   4|       1|       2|       3|       3|       4|       5|      2.0|      3.0|      4.0|         1|         2|         3|
+----+--------+--------+--------+--------+--------+--------+---------+---------+---------+----------+----------+----------+

To apply functions selectively to certain columns, you can build several expression lists and concatenate them in the aggregation.

# min/max over one set of columns, mean/first over another
fn_1 = [F.min, F.max]
fn_2 = [F.mean, F.first]
col_1 = ['col1', 'col2']
col_2 = ['col1', 'col3', 'col4']
expr1 = [fn(coln).alias(str(fn.__name__) + '_' + str(coln)) for fn in fn_1 for coln in col_1]
expr2 = [fn(coln).alias(str(fn.__name__) + '_' + str(coln)) for fn in fn_2 for coln in col_2]
tst_r = tst.groupby('col4').agg(*(expr1 + expr2))
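
Because expr1 and expr2 are ordinary Python lists of Column expressions, they can simply be concatenated with + before unpacking into agg. Given the aliases above, the result should carry one column per selected (function, column) pair (a sketch of the expected names):

tst_r.columns
# ['col4',
#  'min_col1', 'min_col2', 'max_col1', 'max_col2',
#  'mean_col1', 'mean_col3', 'mean_col4',
#  'first_col1', 'first_col3', 'first_col4']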

7cjasjjr2#

A simpler way of doing it:

import pyspark.sql.functions as F

mean_cols = ['col1', 'col2']
first_cols = ['col3', 'col4']
sp_r = ( sp.groupby('id')
            .agg(*[F.mean(col).alias(f"{col}_mean") for col in mean_cols],
                 *[F.first(col).alias(f"{col}_first") for col in first_cols]) )
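
Since the question mentions hundreds of columns, the two lists don't have to be typed out by hand. A minimal sketch, assuming the convention that numeric columns should be averaged and string columns should take their first value (the list names match the question's mean_cols/first_cols):

from pyspark.sql.types import NumericType, StringType

# Split the columns by type: numeric -> mean, string -> first (grouping key excluded)
mean_cols = [fld.name for fld in sp.schema.fields if isinstance(fld.dataType, NumericType)]
first_cols = [fld.name for fld in sp.schema.fields
              if isinstance(fld.dataType, StringType) and fld.name != 'id']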
