"Parquet record is malformed" while no column count is 0

mkshixfv posted 2021-06-25 in Hive

On an AWS EMR cluster, I'm trying to write query results to Parquet with PySpark, but I run into the following error:

Caused by: java.lang.RuntimeException: Parquet record is malformed: empty fields are illegal, the field should be ommited completely instead
    at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:64)
    at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
    at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
    at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
    at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
    at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
    at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:111)
    at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124)
    at org.apache.spark.sql.hive.execution.HiveOutputWriter.write(HiveFileFormat.scala:149)
    at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.write(FileFormatDataWriter.scala:137)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:245)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
    ... 10 more

I have read that this can happen when some columns contain only null values, but after checking the counts of every column that is not the case here: no column is entirely null. And when I write the same results to a text file instead of Parquet, everything works fine.
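For reference, the null check I ran was along these lines (a minimal sketch; df is the query result being written out):

from pyspark.sql import functions as F

# Count the non-null values in every column; a count of 0 would mean
# the column is entirely null.
row = df.select([F.count(F.col(c)).alias(c) for c in df.columns]).first()
empty_cols = [c for c, n in row.asDict().items() if n == 0]
print(empty_cols)  # empty list here: no column is completely null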
Any clue as to what can trigger this error? Below are all the data types used in this table; there are 51 columns in total.

'array<bigint>',
'array<char(50)>',
'array<smallint>',
'array<string>',
'array<varchar(100)>',
'array<varchar(50)>',
'bigint',
'char(16)',
'char(20)',
'char(4)',
'int',
'string',
'timestamp',
'varchar(255)',
'varchar(50)',
'varchar(87)'

wztqucjr1#

It looks like you are writing through one of Spark's Hive write paths (org.apache.hadoop.hive.ql.io.parquet.write), which is where this check lives. I was able to get around it by writing the Parquet files directly, which goes through Spark's native Parquet writer instead, and then adding the partition to whatever Hive table needs it:

# Write with Spark's native Parquet datasource, bypassing the Hive
# SerDe writer that raises this error.
df.write.parquet(your_path)

# Register the newly written files as a partition of the Hive table
# (partition_spec is a placeholder for your actual partition columns).
spark.sql(f"""
    ALTER TABLE {your_table}
    ADD PARTITION (partition_spec) LOCATION '{your_path}'
    """)

neskvpey2#

It turns out this Parquet writer does not support empty arrays. If the table contains one or more empty arrays (of any type), this error is triggered.
One workaround is to convert the empty arrays to nulls.
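In PySpark that conversion can be done per array column, along these lines (a minimal sketch; it assumes df is the DataFrame being written and derives the array columns from its schema):

from pyspark.sql import functions as F

# Find the array-typed columns from the DataFrame's schema.
array_cols = [f.name for f in df.schema.fields
              if f.dataType.typeName() == "array"]

for c in array_cols:
    # Keep non-empty arrays, turn empty ones into NULL. Existing NULLs
    # stay NULL because the WHEN condition is not satisfied for them.
    df = df.withColumn(c, F.when(F.size(F.col(c)) > 0, F.col(c)))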
