Counting elements in a column of a Spark DataFrame

jv2fixgn · asked 2021-05-18 · Spark
Follow (0) | Answers (1) | Views (651)
val someDF = Seq(
  (4623874, "user1", "success"),
  (4623874, "user2","fail"),
  (4623874, "user3","success"),
  (1343244, "user4","fail"),
  (4235252, "user5", "fail")
).toDF("primaryid", "user","status")

This is the input DataFrame. Is it possible to get the count per primaryid and status without using groupBy?

someDF.groupBy("primaryid", "status").count.show

+---------+-------+-----+
|primaryid| status|count|
+---------+-------+-----+
|  4235252|   fail|    1|
|  1343244|   fail|    1|
|  4623874|   fail|    1|
|  4623874|success|    2|
+---------+-------+-----+

Is there any way other than groupBy to get the above result?

ruarlubt1#

Use the count window function. Check the following code.

scala> val someDF = Seq(
     |   (4623874, "user1", "success"),
     |   (4623874, "user2","fail"),
     |   (4623874, "user3","success"),
     |   (1343244, "user4","fail"),
     |   (4235252, "user5", "fail")
     | ).toDF("primaryid", "user","status")
scala> import org.apache.spark.sql.expressions._
import org.apache.spark.sql.expressions._

someDF
.withColumn("count",
    count($"status")
    .over(
        Window
        .partitionBy($"primaryid",$"status")
        .orderBy($"primaryid".asc)
    )
).show(false)
+---------+-----+-------+-----+
|primaryid|user |status |count|
+---------+-----+-------+-----+
|4235252  |user5|fail   |1    |
|1343244  |user4|fail   |1    |
|4623874  |user2|fail   |1    |
|4623874  |user1|success|2    |
|4623874  |user3|success|2    |
+---------+-----+-------+-----+
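
A small simplification worth noting: because the window is already partitioned by both `primaryid` and `status`, the `orderBy` clause is not needed here; with no ordering, the default frame covers the whole partition, so `count` sees every row in the group anyway. A minimal sketch (assuming a spark-shell session where `spark.implicits._` is in scope, as in the transcript above):

```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.count

// Same per-row counts as above: without orderBy, the window frame
// spans the entire (primaryid, status) partition.
someDF
  .withColumn("count",
    count($"status").over(Window.partitionBy($"primaryid", $"status")))
  .show(false)
```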
scala> :paste
// Entering paste mode (ctrl-D to finish)

someDF
.withColumn("count",
    count($"status")
    .over(
        Window
        .partitionBy($"primaryid",$"status")
        .orderBy($"primaryid".asc)
    )
)
.filter($"status" === "success")
.show(false)

// Exiting paste mode, now interpreting.

+---------+-----+-------+-----+
|primaryid|user |status |count|
+---------+-----+-------+-----+
|4623874  |user1|success|2    |
|4623874  |user3|success|2    |
+---------+-----+-------+-----+
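
If the goal is to reproduce the exact shape of the groupBy output (one row per group) without calling groupBy, one option is to drop the `user` column and deduplicate after the window count. A hedged sketch under the same spark-shell assumptions:

```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.count

// One row per (primaryid, status) group, as groupBy.count would produce,
// but computed via a window function plus dropDuplicates.
someDF
  .withColumn("count",
    count($"status").over(Window.partitionBy($"primaryid", $"status")))
  .select("primaryid", "status", "count")
  .dropDuplicates("primaryid", "status")
  .show(false)
```

Note that this is generally more expensive than a plain `groupBy`, since the window keeps every input row before deduplication.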
