Updating a DataFrame with nested fields - Spark

aelbi1ox · posted 2021-05-29 · Hadoop

This question already has an answer here:

Adding a nested column to a Spark DataFrame (1 answer)
Closed last year.
I have two DataFrames, as below.
df1

+----------------------+---------+
|products              |visitorId|
+----------------------+---------+
|[[i1,0.68], [i2,0.42]]|v1       |
|[[i1,0.78], [i3,0.11]]|v2       |
+----------------------+---------+

df2

+---+----------+
| id|      name|
+---+----------+
| i1|Nike Shoes|
| i2|  Umbrella|
| i3|     Jeans|
+---+----------+

Here is the schema of DataFrame df1:

root
 |-- products: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- id: string (nullable = true)
 |    |    |-- interest: double (nullable = true)
 |-- visitorId: string (nullable = true)
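
For anyone who wants to reproduce this locally, a DataFrame with essentially that schema can be built from a case class (a sketch; the case class name Product is my own, and import spark.implicits._ is assumed to be in scope):

import spark.implicits._

// the case class gives the struct fields their proper names (id, interest)
case class Product(id: String, interest: Double)

val df1 = Seq(
  (Seq(Product("i1", 0.68), Product("i2", 0.42)), "v1"),
  (Seq(Product("i1", 0.78), Product("i3", 0.11)), "v2")
).toDF("products", "visitorId")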

I want to join these two DataFrames so that the output looks like this:

+------------------------------------------+---------+
|products                                  |visitorId|
+------------------------------------------+---------+
|[[i1,0.68,Nike Shoes], [i2,0.42,Umbrella]]|v1       |
|[[i1,0.78,Nike Shoes], [i3,0.11,Jeans]]   |v2       |
+------------------------------------------+---------+

This is my expected output schema:

root
 |-- products: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- id: string (nullable = true)
 |    |    |-- interest: double (nullable = true)
 |    |    |-- name: string (nullable = true)
 |-- visitorId: string (nullable = true)

How can I do this in Scala? I am using Spark 2.2.0.
UPDATE
I did an explode and a join on the DataFrames above and got the output below.

+---------+---+--------+----------+
|visitorId| id|interest|      name|
+---------+---+--------+----------+
|       v1| i1|    0.68|Nike Shoes|
|       v1| i2|    0.42|  Umbrella|
|       v2| i1|    0.78|Nike Shoes|
|       v2| i3|    0.11|     Jeans|
+---------+---+--------+----------+
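
For reference, the explode-and-join step described above can be sketched roughly like this, assuming the schemas shown earlier and spark.implicits._ in scope (the intermediate column name p is my own):

import org.apache.spark.sql.functions._

// explode products so each (visitorId, product) pair becomes its own row,
// pull the struct fields up as columns, then join the names in from df2
val flattened = df1
  .withColumn("p", explode($"products"))
  .select($"visitorId", $"p.id".as("id"), $"p.interest".as("interest"))
  .join(df2, Seq("id"))

flattened.show()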

Now I just need the above DataFrame in the JSON format below.

{
    "visitorId": "v1",
    "products": [{
         "id": "i1",
         "name": "Nike Shoes",
         "interest": 0.68
    }, {
         "id": "i2",
         "name": "Umbrella",
         "interest": 0.42
    }]
},
{
    "visitorId": "v2",
    "products": [{
         "id": "i1",
         "name": "Nike Shoes",
         "interest": 0.78
    }, {
         "id": "i3",
         "name": "Jeans",
         "interest": 0.11
    }]
}

5m1hhzi4 1#

It depends on your specific case, but if your df2 lookup table is small enough, you could try collecting it as a Scala Map so it can be used in a UDF. Then it becomes as simple as:

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf
import spark.implicits._

// collect the small lookup table to the driver as a Map(id -> name)
val m = df2.as[(String, String)].collect.toMap

// for each product struct, append the looked-up name as a third field
val addName = udf( (arr: Seq[Row]) => {
    arr.map(i => (i.getAs[String](0), i.getAs[Double](1), m(i.getAs[String](0))))
})

df1.withColumn("products", addName('products)).show(false)

+------------------------------------------+---------+
|products                                  |visitorId|
+------------------------------------------+---------+
|[[i1,0.68,Nike Shoes], [i2,0.42,Umbrella]]|v1       |
|[[i1,0.78,Nike Shoes], [i3,0.11,Jeans]]   |v2       |
+------------------------------------------+---------+
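
If you also want the JSON output from the question with this approach, note that the UDF returns tuples, so the new struct fields come out named _1/_2/_3; casting the column back to named fields before calling toJSON is one way around that (a sketch; the result name and the cast are my own additions):

val result = df1
  .withColumn("products", addName('products))
  // rename the tuple fields (_1, _2, _3) back to id/interest/name
  .withColumn("products",
    $"products".cast("array<struct<id:string,interest:double,name:string>>"))

result.toJSON.show(false)
// {"products":[{"id":"i1","interest":0.68,"name":"Nike Shoes"},...],"visitorId":"v1"}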

3pvhb19x 2#

Try this:

scala> val df1 = Seq((Seq(("i1",0.68),("i2",0.42)), "v1"), (Seq(("i1",0.78),("i3",0.11)), "v2")).toDF("products", "visitorId" )
df1: org.apache.spark.sql.DataFrame = [products: array<struct<_1:string,_2:double>>, visitorId: string]

scala> df1.show(false)
+------------------------+---------+
|products                |visitorId|
+------------------------+---------+
|[[i1, 0.68], [i2, 0.42]]|v1       |
|[[i1, 0.78], [i3, 0.11]]|v2       |
+------------------------+---------+

scala> val df2 = Seq(("i1", "Nike Shoes"),("i2", "Umbrella"), ("i3", "Jeans")).toDF("id", "name")
df2: org.apache.spark.sql.DataFrame = [id: string, name: string]

scala> df2.show(false)
+---+----------+
|id |name      |
+---+----------+
|i1 |Nike Shoes|
|i2 |Umbrella  |
|i3 |Jeans     |
+---+----------+

scala> val withProductsDF = df1.withColumn("individualproducts", explode($"products")).select($"visitorId",$"products",$"individualproducts._1" as "id", $"individualproducts._2" as "interest")
withProductsDF: org.apache.spark.sql.DataFrame = [visitorId: string, products: array<struct<_1:string,_2:double>> ... 2 more fields]

scala> withProductsDF.show(false)
+---------+------------------------+---+--------+
|visitorId|products                |id |interest|
+---------+------------------------+---+--------+
|v1       |[[i1, 0.68], [i2, 0.42]]|i1 |0.68    |
|v1       |[[i1, 0.68], [i2, 0.42]]|i2 |0.42    |
|v2       |[[i1, 0.78], [i3, 0.11]]|i1 |0.78    |
|v2       |[[i1, 0.78], [i3, 0.11]]|i3 |0.11    |
+---------+------------------------+---+--------+

scala> val withProductNamesDF = withProductsDF.join(df2, "id")
withProductNamesDF: org.apache.spark.sql.DataFrame = [id: string, visitorId: string ... 3 more fields]

scala> withProductNamesDF.show(false)
+---+---------+------------------------+--------+----------+
|id |visitorId|products                |interest|name      |
+---+---------+------------------------+--------+----------+
|i1 |v2       |[[i1, 0.78], [i3, 0.11]]|0.78    |Nike Shoes|
|i1 |v1       |[[i1, 0.68], [i2, 0.42]]|0.68    |Nike Shoes|
|i2 |v1       |[[i1, 0.68], [i2, 0.42]]|0.42    |Umbrella  |
|i3 |v2       |[[i1, 0.78], [i3, 0.11]]|0.11    |Jeans     |
+---+---------+------------------------+--------+----------+

scala> val outputDF = withProductNamesDF.groupBy("visitorId").agg(collect_list(struct($"id", $"name", $"interest")) as  "products")
outputDF: org.apache.spark.sql.DataFrame = [visitorId: string, products: array<struct<id:string,name:string,interest:double>>]

scala> outputDF.toJSON.show(false)
+-----------------------------------------------------------------------------------------------------------------------------+
|value                                                                                                                        |
+-----------------------------------------------------------------------------------------------------------------------------+
|{"visitorId":"v2","products":[{"id":"i1","name":"Nike Shoes","interest":0.78},{"id":"i3","name":"Jeans","interest":0.11}]}   |
|{"visitorId":"v1","products":[{"id":"i1","name":"Nike Shoes","interest":0.68},{"id":"i2","name":"Umbrella","interest":0.42}]}|
+-----------------------------------------------------------------------------------------------------------------------------+
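
To persist that output rather than just display it, the grouped DataFrame can be written out as JSON line files (the output path here is only a placeholder):

// writes one JSON object per line, in the shape the question asks for
outputDF.write.mode("overwrite").json("/tmp/visitor_products")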
