I have a PySpark DataFrame consisting of three columns x, y, z.
A key in x can appear on multiple rows. How can I compute a percentile separately for each key in x? In the example below, the key is Role and I want the median (50th percentile) of Salary per Role:
+------+---------+------+
|  Name|     Role|Salary|
+------+---------+------+
|   bob|Developer|125000|
|  mark|Developer|108000|
|  carl|   Tester| 70000|
|  carl|Developer|185000|
|  carl|   Tester| 65000|
| roman|   Tester| 82000|
| simon|Developer| 98000|
|  eric|Developer|144000|
|carlos|   Tester| 75000|
| henry|Developer|110000|
+------+---------+------+
Desired output:
+------+---------+------+---------+
|  Name|     Role|Salary|      50%|
+------+---------+------+---------+
|   bob|Developer|125000| 117500.0|
|  mark|Developer|108000| 117500.0|
|  carl|   Tester| 70000|  72500.0|
|  carl|Developer|185000| 117500.0|
|  carl|   Tester| 65000|  72500.0|
| roman|   Tester| 82000|  72500.0|
| simon|Developer| 98000| 117500.0|
|  eric|Developer|144000| 117500.0|
|carlos|   Tester| 75000|  72500.0|
| henry|Developer|110000| 117500.0|
+------+---------+------+---------+
3 Answers

jobtbby31#
Try groupBy + F.expr to compute the percentile per group, then join the aggregated result (df1) back onto the original DataFrame.
enyaitl32#
array is actually not necessary: a window function alone gets the job done.
iqih9akk3#
You can try the approxQuantile function provided in Spark: https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.approxQuantile.html