Flattening a struct in Spark

42fyovps · published 2021-06-02 in Hadoop
Followers (0) | Answers (1) | Views (458)

I have a DataFrame with the following schema:

root
 |-- data: struct (nullable = true)
 |    |-- asin: string (nullable = true)
 |    |-- customerId: long (nullable = true)
 |    |-- eventTime: long (nullable = true)
 |    |-- marketplaceId: long (nullable = true)
 |    |-- rating: long (nullable = true)
 |    |-- region: string (nullable = true)
 |    |-- type: string (nullable = true)
 |-- uploadedDate: long (nullable = true)

I want to flatten the struct so that all of its elements (asin, customerId, eventTime, etc.) become top-level columns of the DataFrame. I tried the explode function, but it works on arrays, not on struct types. Is it possible to convert the above DataFrame into the following one:

root
 |-- asin: string (nullable = true)
 |-- customerId: long (nullable = true)
 |-- eventTime: long (nullable = true)
 |-- marketplaceId: long (nullable = true)
 |-- rating: long (nullable = true)
 |-- region: string (nullable = true)
 |-- type: string (nullable = true)
 |-- uploadedDate: long (nullable = true)

aoyhnmkz1#

It's simple:

val newDF = df.select("uploadedDate", "data.*")

This tells Spark to select uploadedDate and then all sub-fields of the data struct (the `data.*` syntax expands every field of the struct into its own column).
Example:

scala> case class A(a: Int, b: Double)
scala> val df = Seq((A(1, 1.0), "1"), (A(2, 2.0), "2")).toDF("data", "uploadedDate")
scala> val newDF = df.select("uploadedDate", "data.*")
scala> newDF.show()
+------------+---+---+
|uploadedDate|  a|  b|
+------------+---+---+
|           1|  1|1.0|
|           2|  2|2.0|
+------------+---+---+

scala> newDF.printSchema()
root
 |-- uploadedDate: string (nullable = true)
 |-- a: integer (nullable = true)
 |-- b: double (nullable = true)
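If you only need some of the nested fields, or want to control their names, you can also select them one by one with `col("data.<field>")` and an alias instead of `data.*`. A minimal sketch, assuming the schema from the question (the chosen subset and aliases here are illustrative):

```scala
import org.apache.spark.sql.functions.col

// Pull out specific sub-fields of the struct and alias them;
// any struct field not listed is simply dropped from the result.
val subsetDF = df.select(
  col("uploadedDate"),
  col("data.asin").as("asin"),
  col("data.customerId").as("customerId")
)
```

This is useful when the struct has many fields and flattening all of them with `data.*` would clutter the resulting DataFrame.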
