I have a tool that uses org.apache.parquet.hadoop.ParquetWriter to convert CSV data files into Parquet data files.
Writing the basic primitive types (INT32, DOUBLE, BINARY strings) works fine.
I need to write null values, but I don't know how. I tried simply passing null
to the ParquetWriter, and it throws an exception.
How do I write a null value with org.apache.parquet.hadoop.ParquetWriter? Is there a nullable type?
I believe the code is self-explanatory:
import java.util.ArrayList;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.ParquetProperties;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.GroupWriteSupport;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.PrimitiveType;
import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName;
import org.apache.parquet.schema.Type;

// all three columns are declared OPTIONAL, i.e. nullable
ArrayList<Type> fields = new ArrayList<>();
fields.add(new PrimitiveType(Type.Repetition.OPTIONAL, PrimitiveTypeName.INT32, "int32_col", null));
fields.add(new PrimitiveType(Type.Repetition.OPTIONAL, PrimitiveTypeName.DOUBLE, "double_col", null));
fields.add(new PrimitiveType(Type.Repetition.OPTIONAL, PrimitiveTypeName.BINARY, "string_col", null));
MessageType schema = new MessageType("input", fields);
Configuration configuration = new Configuration();
configuration.setQuietMode(true);
GroupWriteSupport.setSchema(schema, configuration);
SimpleGroupFactory f = new SimpleGroupFactory(schema);
ParquetWriter<Group> writer = new ParquetWriter<Group>(
        new Path("output.parquet"),
        new GroupWriteSupport(),
        CompressionCodecName.SNAPPY,
        ParquetWriter.DEFAULT_BLOCK_SIZE,
        ParquetWriter.DEFAULT_PAGE_SIZE,
        1048576,  // dictionary page size
        true,     // enable dictionary encoding
        false,    // disable write validation
        ParquetProperties.WriterVersion.PARQUET_1_0,
        configuration
);
// create row 1 with defined values
Group group1 = f.newGroup();
Integer int1 = 100;
Double double1 = 0.5;
String string1 = "string-value";
group1.add(0, int1);
group1.add(1, double1);
group1.add(2, string1);
writer.write(group1);
// create row 2 with NULL values -- does not work!
Group group2 = f.newGroup();
Integer int2 = null;
Double double2 = null;
String string2 = null;
group2.add(0, int2);    // <-- throws NullPointerException (null Integer is unboxed to int)
group2.add(1, double2); // <-- throws NullPointerException (null Double is unboxed to double)
group2.add(2, string2); // <-- throws NullPointerException
writer.write(group2);
writer.close();
1 Answer
The solution is very simple: for a null value, just don't write anything to that field at all.
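Below is a minimal sketch of how row 2 from the question changes under that approach, assuming the same schema, SimpleGroupFactory, and writer set up above (the null checks and variable names are illustrative; only the non-null values are added):

// create row 2 with NULL values -- skip the add() call for every null column
Group group2 = f.newGroup();
Integer int2 = null;
Double double2 = null;
String string2 = null;
if (int2 != null) {
    group2.add(0, int2);
}
if (double2 != null) {
    group2.add(1, double2);
}
if (string2 != null) {
    group2.add(2, string2);
}
writer.write(group2); // the omitted fields are stored as nulls
writer.close();

Because every column in the schema is declared with Type.Repetition.OPTIONAL, a field that is never added to a Group is written as null in the resulting Parquet file, so there is no need to pass a literal null to add().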