How to reduce and sum grids in a Scala Spark DataFrame

tp5buhyn  asked 2021-05-27  in Spark

Is it possible to reduce an n x n grid in a Scala Spark DataFrame to the sums of its sub-grids and create a new DataFrame from them? Existing DataFrame:

1 1 0 0 0 0 0 0
0 0 0 0 0 0 1 0
0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 1 0 0 0 0 1 1
0 1 0 0 0 0 1 0
0 0 0 0 1 0 0 0

If n = 4, can we take the 4x4 sub-grids of this DataFrame and sum each of them?

1 1 0 0 | 0 0 0 0
0 0 0 0 | 0 0 1 0
0 1 0 0 | 0 0 0 0
0 0 0 0 | 0 0 0 0
------------------
0 0 0 0 | 0 0 0 0
0 1 0 0 | 0 0 1 1
0 1 0 0 | 0 0 1 0
0 0 0 0 | 1 0 0 0

to get this output?

3 1
2 4
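
For reference, a minimal Scala sketch to build this example DataFrame in spark-shell (the column names a to h are assumptions that match the first answer below):

import spark.implicits._

val df = Seq(
  (1, 1, 0, 0, 0, 0, 0, 0),
  (0, 0, 0, 0, 0, 0, 1, 0),
  (0, 1, 0, 0, 0, 0, 0, 0),
  (0, 0, 0, 0, 0, 0, 0, 0),
  (0, 0, 0, 0, 0, 0, 0, 0),
  (0, 1, 0, 0, 0, 0, 1, 1),
  (0, 1, 0, 0, 0, 0, 1, 0),
  (0, 0, 0, 0, 1, 0, 0, 0)
).toDF("a", "b", "c", "d", "e", "f", "g", "h")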

vngu2lb8 1#

Check the following code.

scala> df.show(false)
+---+---+---+---+---+---+---+---+
|a  |b  |c  |d  |e  |f  |g  |h  |
+---+---+---+---+---+---+---+---+
|1  |1  |0  |0  |0  |0  |0  |0  |
|0  |0  |0  |0  |0  |0  |1  |0  |
|0  |1  |0  |0  |0  |0  |0  |0  |
|0  |0  |0  |0  |0  |0  |0  |0  |
|0  |0  |0  |0  |0  |0  |0  |0  |
|0  |1  |0  |0  |0  |0  |1  |1  |
|0  |1  |0  |0  |0  |0  |1  |0  |
|0  |0  |0  |0  |1  |0  |0  |0  |
+---+---+---+---+---+---+---+---+
scala> val n = 4

This splits the rows into n/2 = 2 groups of 4 rows each.

scala> val rowExpr = ntile(n/2)
.over(
    Window
    .orderBy(lit(1))
)
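
As a quick check of the grouping (a sketch; orderBy(lit(1)) gives no deterministic row order, so an explicit index column would be safer in practice):

scala> df.withColumn("row_id", rowExpr).select($"row_id").show(false)

Assuming the original row order is preserved, the first four rows get row_id = 1 and the last four get row_id = 2.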

Collect all values into an array of arrays.

scala> val aggExpr = df
.columns
.grouped(4)
.toList.map(c => collect_list(array(c.map(col):_*)).as(c.mkString))
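
Here c.mkString just concatenates the column names of each 4-column slice, so the aggregated columns are named abcd and efgh, and each holds one 4-element array per row of its group:

scala> df.columns.grouped(4).toList.map(_.mkString)   // List(abcd, efgh)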

Flatten the arrays, remove the 0 values, and take the size of the resulting array.

scala> val selectExpr = df
.columns
.grouped(4)
.toList
.map(c => size(array_remove(flatten(col(c.mkString)),0)).as(c.mkString))
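
Note that size(array_remove(...)) counts the non-zero entries, which equals the sum only because the grid contains nothing but 0s and 1s. For arbitrary values one could instead sum the flattened array, for example with the higher-order aggregate function (a sketch, requires Spark 3.0+):

scala> val sumExpr = df
.columns
.grouped(4)
.toList
.map(c => aggregate(flatten(col(c.mkString)), lit(0), (acc, x) => acc + x).as(c.mkString))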

Apply rowExpr, aggExpr & selectExpr.
scala> df
.withColumn("row_id", rowExpr)
.groupBy($"row_id")
.agg(aggExpr.head, aggExpr.tail: _*)
.select(selectExpr: _*)
.show(false)

Final output:

+----+----+
|abcd|efgh|
+----+----+
|3   |1   |
|2   |4   |
+----+----+
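
For reference, the whole pipeline could be wrapped in a helper parameterized by n (a sketch; it assumes the row count is divisible by n and that, without an explicit ordering column, the row order inside each group stays as shown, which Spark does not strictly guarantee):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

def gridSums(df: DataFrame, n: Int): DataFrame = {
  val numRows = df.count().toInt                          // total number of rows
  val rowExpr = ntile(numRows / n).over(Window.orderBy(lit(1)))
  val slices  = df.columns.grouped(n).toList              // n-column slices
  val aggExpr    = slices.map(c => collect_list(array(c.map(col): _*)).as(c.mkString))
  val selectExpr = slices.map(c => size(array_remove(flatten(col(c.mkString)), 0)).as(c.mkString))
  df.withColumn("row_id", rowExpr)
    .groupBy(col("row_id"))
    .agg(aggExpr.head, aggExpr.tail: _*)
    .select(selectExpr: _*)
}

// gridSums(df, 4).show(false) should reproduce the 2x2 result above.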


fdx2calv 2#

For the rows you have to aggregate, and for the columns you have to sum. Sample code for a 2x2 grid:

import pyspark.sql.functions as F
from pyspark.sql.types import *
from pyspark.sql.window import Window

# Create test data frame

tst= sqlContext.createDataFrame([(1,1,2,11),(1,3,4,12),(1,5,6,13),(1,7,8,14),(2,9,10,15),(2,11,12,16),(2,13,14,17),(2,13,14,17)],schema=['col1','col2','col3','col4'])
w=Window.orderBy(F.monotonically_increasing_id())
tst1= tst.withColumn("grp",F.ceil(F.row_number().over(w)/2)) # 2 is for this example - change to 4
tst_sum_row = tst1.groupby('grp').agg(*[F.sum(coln).alias(coln) for coln in tst1.columns if 'grp' not in coln])
expr = [sum([F.col(tst.columns[i]),F.col(tst.columns[i+1])]).alias('coln'+str(i)) for i in [x*2 for x in range(len(tst.columns)//2)]] # The sum used here is the Python built-in sum, not the pyspark sum function F.sum()
tst_sum_coln = tst_sum_row.select(*expr)

tst.show()
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
|   1|   1|   2|  11|
|   1|   3|   4|  12|
|   1|   5|   6|  13|
|   1|   7|   8|  14|
|   2|   9|  10|  15|
|   2|  11|  12|  16|
|   2|  13|  14|  17|
|   2|  13|  14|  17|
+----+----+----+----+

tst_sum_coln.show()
+-----+-----+
|coln0|coln2|
+-----+-----+
|    6|   29|
|   14|   41|
|   24|   53|
|   30|   62|
+-----+-----+
