How do I insert a DataFrame with an array<array<double>> column into PostgreSQL?

oxalkeyp asked on 2021-05-27 in Spark

I am trying to store a DataFrame with a nested schema in PostgreSQL. Can someone explain how to store the coordinates and user_mentions columns in Postgres? I have read that Postgres can store array types, but I get an error when trying to write to the database. I am also not entirely sure whether my table is created correctly.
Error:

Exception in thread "main" java.lang.IllegalArgumentException: Can't get JDBC type for array<array<double>>

DataFrame schema:

root
 |-- created_at: string (nullable = true)
 |-- id: long (nullable = true)
 |-- text: string (nullable = true)
 |-- source: string (nullable = true)
 |-- user_id: long (nullable = true)
 |-- in_reply_to_status_id: string (nullable = true)
 |-- in_reply_to_user_id: long (nullable = true)
 |-- lang: string (nullable = true)
 |-- retweet_count: long (nullable = true)
 |-- reply_count: long (nullable = true)
 |-- coordinates: array (nullable = true)
 |    |-- element: array (containsNull = true)
 |    |    |-- element: double (containsNull = true)
 |-- hashtags: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- user_mentions: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- id: long (nullable = true)
 |    |    |-- id_str: string (nullable = true)
 |    |    |-- indices: array (nullable = true)
 |    |    |    |-- element: long (containsNull = true)
 |    |    |-- name: string (nullable = true)
 |    |    |-- screen_name: string (nullable = true)

Postgres table creation:

create table test-table (
  created_at varchar,
  id int,
  text text,
  source text,
  user_id int,
  in_reply_to_status_id varchar,
  in_reply_to_user_id int,
  lang varchar,
  retweet_count int,
  reply_count int,
  coordinates double precision[][],
  hashtags text[],
  user_mentions text[]
);

Spark Scala code:

val df_1 = df.select(
    col("created_at"), col("id"), col("text"), col("source"), col("user.id").as("user_id"),
    col("in_reply_to_status_id"), col("in_reply_to_user_id"),
    col("lang"), col("retweet_count"), col("reply_count"), col("place.bounding_box.coordinates"),
    col("entities.hashtags"), col("entities.user_mentions"))
  .withColumn("coordinates", explode(col("coordinates")))

df_1.show(truncate = false)
df_1.printSchema()

df_1.write
  .format("jdbc")
  .option("url", "postgres_url")
  .option("dbtable", "xxx.mytable")
  .option("user", "user")
  .option("password", "pass")
  .save()

Sample input:
coordinates column:

[[80.063341, 26.348309], [80.063341, 30.43339], [88.2027, 30.43339], [88.2027, 26.348309]]

user_mentions column:

[[123456789, 123456789, [0, 15], Name, ScreenName]]
Answer 1 (rjee0c15):

Spark only supports reading and writing one-dimensional arrays over JDBC. You can either explode the data into multiple rows (so that each row holds a double[]), or convert the double[][] column into a comma-separated string[] or into a plain string.
For example, [[1, 2], [3, 4]] can be converted to ["1,2", "3,4"], as sketched below.
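A minimal sketch of the second option, assuming Spark 2.4+ (where the transform and array_join SQL functions are available). The df_2 name and the to_json handling of user_mentions are illustrative additions, not part of the original answer:

import org.apache.spark.sql.functions.{col, expr, to_json}

// Convert array<array<double>> into array<string>, e.g.
// [[80.063341, 26.348309], ...] -> ["80.063341,26.348309", ...],
// so the JDBC writer can map it to a one-dimensional text[] column.
val df_2 = df_1
  .withColumn(
    "coordinates",
    expr("transform(coordinates, pt -> array_join(transform(pt, d -> cast(d as string)), ','))"))
  // user_mentions is an array<struct>; one option (an assumption, not from
  // the answer) is to serialize it to a single JSON string and store it in
  // a plain text column.
  .withColumn("user_mentions", to_json(col("user_mentions")))

With a conversion like this, the coordinates column in the Postgres table would likely need to be declared as text[] instead of double precision[][], and user_mentions as text instead of text[].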
