pyspark: How do I add a column from one DataFrame to another DataFrame?

t9aqgxwy asked on 2021-05-27 in Spark

I have two DataFrames, each with 10 rows.

```
df1.show()
+-------------------+------------------+--------+-------+
|                lat|               lon|duration|stop_id|
+-------------------+------------------+--------+-------+
|  -6.23748779296875| 106.6937255859375|     247|      0|
|  -6.23748779296875| 106.6937255859375|    2206|      1|
|  -6.23748779296875| 106.6937255859375|     609|      2|
| 0.5733972787857056|101.45503234863281|   16879|      3|
| 0.5733972787857056|101.45503234863281|    4680|      4|
| -6.851855278015137|108.64261627197266|     164|      5|
| -6.851855278015137|108.64261627197266|     220|      6|
| -6.851855278015137|108.64261627197266|    1669|      7|
|-0.9033176600933075|100.41548919677734|   30811|      8|
|-0.9033176600933075|100.41548919677734|   23404|      9|
+-------------------+------------------+--------+-------+
```

I want to add the column bank_and_post from df2 to df1. df2 is produced by a function:

```
import numpy as np
from pyspark.sql.functions import pandas_udf, col, lit, monotonically_increasing_id
from pyspark.sql.types import DoubleType

def assignPtime(x, mu, std):
  mu = mu.values[0]
  std = std.values[0]
  # sample N(mu, std) and bin the samples into a density histogram
  x1 = np.random.normal(mu, std, 100000)
  a1, b1 = np.histogram(x1, density=True)
  val = x / 60
  # look up the histogram density for each duration (converted to minutes)
  for k, v in enumerate(val):
    prob = 0
    for i, j in enumerate(b1[:-1]):
      v1 = b1[i]
      v2 = b1[i+1]
      if (v >= v1) and (v < v2):
        prob = a1[i]
    x[k] = prob
  return x

ff = pandas_udf(assignPtime, returnType=DoubleType())
df2 = df1.select(ff(col("duration"), lit(15), lit(15)).alias("bank_and_post"))
df2.show()
+--------------------+
|       bank_and_post|
+--------------------+
|0.021806558032484918|
|0.014366417828826784|
|0.021806558032484918|
|                 0.0|
|                 0.0|
|0.021806558032484918|
|0.021806558032484918|
|0.014366417828826784|
|                 0.0|
|                 0.0|
+--------------------+
```

If I try

```
df2 = df2.withColumn("stop_id", monotonically_increasing_id())
```

I get the error:

```
ValueError: assignment destination is read-only
```
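
For context, this ValueError typically comes from the UDF itself rather than from monotonically_increasing_id(): with Arrow enabled, the Series passed into a pandas_udf can be backed by a read-only buffer, so the in-place write `x[k] = prob` fails once the plan is actually executed. A minimal sketch of a non-mutating variant (hypothetical name assignPtime_safe, same logic, results written into a fresh array):

```
import numpy as np
import pandas as pd

def assignPtime_safe(x, mu, std):
  mu = mu.values[0]
  std = std.values[0]
  x1 = np.random.normal(mu, std, 100000)
  a1, b1 = np.histogram(x1, density=True)
  out = np.zeros(len(x))            # write into a fresh array instead of x
  for k, v in enumerate(x / 60):
    for i in range(len(b1) - 1):
      if b1[i] <= v < b1[i + 1]:
        out[k] = a1[i]
  return pd.Series(out)
```

Registered the same way with pandas_udf(..., returnType=DoubleType()), this variant avoids mutating the Arrow-backed input.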

06odsfpq1#

Use the `row_number()` window function to add a new column to both the df1 and df2 DataFrames, then join the DataFrames on that row_number column.

Example:

1. Using the `row_number()` function:

```
df1=spark.createDataFrame([(0,),(1,),(2,),(3,),(4,),(5,),(6,),(7,),(8,),(9,)],["stop_id"])

df2=spark.createDataFrame([("0.021806558032484918",),("0.014366417828826784",),("0.021806558032484918",),(" 0.0",),(" 0.0",),("0.021806558032484918",),("0.021806558032484918",),("0.014366417828826784",),(" 0.0",),(" 0.0",)],["bank_and_post"])

from pyspark.sql import *
from pyspark.sql.functions import *

w=Window.orderBy(lit(1))

df4=df2.withColumn("rn",row_number().over(w)-1)
df3=df1.withColumn("rn",row_number().over(w)-1)

df3.join(df4,["rn"]).drop("rn").show()

+-------+--------------------+
|stop_id|       bank_and_post|
+-------+--------------------+
|      0|0.021806558032484918|
|      1|0.014366417828826784|
|      2|0.021806558032484918|
|      3|                 0.0|
|      4|                 0.0|
|      5|0.021806558032484918|
|      6|0.021806558032484918|
|      7|0.014366417828826784|
|      8|                 0.0|
|      9|                 0.0|
+-------+--------------------+
```
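Note that `Window.orderBy(lit(1))` has no `partitionBy`, so Spark will warn that all rows are moved to a single partition for the window; that is harmless for 10 rows but does not scale to large DataFrames.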

2. Using the `monotonically_increasing_id()` function:

```
df1.withColumn("mid",monotonically_increasing_id()).
join(df2.withColumn("mid",monotonically_increasing_id()),["mid"]).
drop("mid").
orderBy("stop_id").
show()

+-------+--------------------+
|stop_id|       bank_and_post|
+-------+--------------------+
|      0|0.021806558032484918|
|      1|0.014366417828826784|
|      2|0.021806558032484918|
|      3|                 0.0|
|      4|                 0.0|
|      5|0.021806558032484918|
|      6|0.021806558032484918|
|      7|0.014366417828826784|
|      8|                 0.0|
|      9|                 0.0|
+-------+--------------------+
```
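Keep in mind that `monotonically_increasing_id()` only guarantees unique, increasing ids, not consecutive ones, so joining two DataFrames on it is only reliable when both sides share the same row order and partitioning, as df2 here is derived directly from df1.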

3. Using `row_number()` on top of `monotonically_increasing_id()`:

```
w=Window.orderBy("mid")
df3=df1.withColumn("mid",monotonically_increasing_id()).withColumn("rn",row_number().over(w) - 1)
df4=df2.withColumn("mid",monotonically_increasing_id()).withColumn("rn",row_number().over(w) - 1)
df3.join(df4,["rn"]).drop("rn","mid").show()

+-------+--------------------+
|stop_id|       bank_and_post|
+-------+--------------------+
|      0|0.021806558032484918|
|      1|0.014366417828826784|
|      2|0.021806558032484918|
|      3|                 0.0|
|      4|                 0.0|
|      5|0.021806558032484918|
|      6|0.021806558032484918|
|      7|0.014366417828826784|
|      8|                 0.0|
|      9|                 0.0|
+-------+--------------------+
```
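This variant sidesteps the non-consecutive-id caveat: `monotonically_increasing_id()` is only used to define a stable ordering, and `row_number()` over it produces a dense 0..n-1 index on both sides before the join.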

4. Using `zipWithIndex`:

```
df3=df1.rdd.zipWithIndex().toDF().select("_1.*","_2")
df4=df2.rdd.zipWithIndex().toDF().select("_1.*","_2")
df3.join(df4,["_2"]).drop("_2").orderBy("stop_id").show()

+-------+--------------------+
|stop_id|       bank_and_post|
+-------+--------------------+
|      0|0.021806558032484918|
|      1|0.014366417828826784|
|      2|0.021806558032484918|
|      3|                 0.0|
|      4|                 0.0|
|      5|0.021806558032484918|
|      6|0.021806558032484918|
|      7|0.014366417828826784|
|      8|                 0.0|
|      9|                 0.0|
+-------+--------------------+
```
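
Mapping this back to the original question, the same idea can be used to attach the UDF output directly to df1. A minimal sketch, assuming ff is the pandas_udf defined in the question (with the read-only issue fixed first, otherwise the same ValueError will resurface when the join is executed):

```
from pyspark.sql.functions import col, lit, monotonically_increasing_id

# scores is derived from df1 without any shuffle, so the generated ids
# line up row for row here; this is not guaranteed once data is repartitioned.
scores = df1.select(ff(col("duration"), lit(15), lit(15)).alias("bank_and_post"))

(df1.withColumn("mid", monotonically_increasing_id())
    .join(scores.withColumn("mid", monotonically_increasing_id()), ["mid"])
    .drop("mid")
    .show())
```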
