org.apache.spark.sql.AnalysisException when saving a Spark DataFrame

fnx2tebb · posted 2021-06-25 in Hive

I have two tables in my database, and I am trying to use insertInto to save the data from the first table into the second.

    CREATE TABLE IF NOT EXISTS dbname.tablename_csv (
        id STRING,
        location STRING,
        city STRING,
        country STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE;

    CREATE TABLE IF NOT EXISTS dbname.tablename_orc (
        id STRING,
        location STRING,
        country STRING)
    PARTITIONED BY (city STRING)
    CLUSTERED BY (country) INTO 4 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ("orc.compress"="SNAPPY");

    val query = spark.sql("SELECT id, location, city, country FROM dbname.tablename_csv")
    query.write.insertInto("dbname.tablename_orc")

But this raises an exception:

"org.apache.spark.sql.AnalysisException: `dbname`.`tablename_orc` requires that the data to be inserted have the same number of columns as the target table: target table has 3 column(s) but the inserted data has 4 column(s), including 0 partition column(s) having constant value(s).;"

Could someone please give me a hint as to what else needs to be added? I also tried adding the partition explicitly, but I got the same error, along with a message saying the partition is not needed:

query.write.partitionBy("city").insertInto("dbname.tablename_orc")
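
As a side note, Spark's insertInto resolves columns by position and refuses partitionBy on an already-partitioned Hive table, which is consistent with the errors above. A minimal sketch of the usual dynamic-partition pattern, assuming a Hive-enabled SparkSession and the tables defined above (the partition column, city, must come last in the select list):

    // Sketch only: enable Hive dynamic partitioning, then insert with the
    // partition column (city) selected last to match the target table layout
    // (data columns first, partition columns last).
    spark.sql("SET hive.exec.dynamic.partition=true")
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")

    val reordered = spark.sql(
      "SELECT id, location, country, city FROM dbname.tablename_csv")

    // No partitionBy here -- insertInto rejects it for partitioned tables.
    reordered.write.insertInto("dbname.tablename_orc")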

Answer #1 (by 20jt8wwn):

Use saveAsTable(…) with mode="append" instead.
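
A minimal sketch of that suggestion, reusing the query DataFrame from the question. Unlike insertInto, saveAsTable in append mode resolves columns by name; exact behavior when appending to a pre-existing Hive table can vary by Spark version:

    // Sketch of the answer: append to the target table by name rather than
    // by position. Assumes `query` is the DataFrame built in the question.
    query.write
        .mode("append")
        .saveAsTable("dbname.tablename_orc")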
