How do I save Parquet files to HDFS without Spark or another framework?

wgeznvg7 asked on 2021-06-01 in Hadoop

I want to write a Parquet file directly to HDFS from Java.
This is the code I use to generate Parquet files and store them locally, but now I want to store them in HDFS.

final String schemaLocation = "/home/javier/FlinkProjects/kafka-flink/src/main/java/com/grallandco/demos/avro.json";
final Schema avroSchema = new Schema.Parser().parse(new File(schemaLocation));
final MessageType parquetSchema = new AvroSchemaConverter().convert(avroSchema);
final WriteSupport writeSupport = new AvroWriteSupport(parquetSchema, avroSchema); // never used: AvroParquetWriter builds its own write support
final String parquetFile = "/home/javier/parquet-files/data" + postfijoFilename + ".parquet";
final Path path = new Path(parquetFile);
AvroParquetWriter parquetWriter = new AvroParquetWriter(path,
        avroSchema, CompressionCodecName.SNAPPY, ParquetWriter.DEFAULT_BLOCK_SIZE, ParquetWriter.DEFAULT_PAGE_SIZE);
final GenericRecord record = new GenericData.Record(avroSchema);
record.put(Constantes.CAMPO_ID, datos[0]);
record.put("movie", datos[1]);
record.put("date", datos[2]);
record.put("imdb", datos[3]);
parquetWriter.write(record);
parquetWriter.close(); // close() writes the Parquet footer; the file is unreadable without it

I want to replace this line

final String parquetFile = "/home/javier/parquet-files/data" + postfijoFilename + ".parquet";

with a Hadoop HDFS path. Any ideas?


4si2a6ki #1

You can do it as shown below (note that the target location must exist, and change the HDFS URL and username in the code to match your cluster; you may also need to put the schema file in HDFS):

import java.io.File;
import java.net.URI;
import java.security.PrivilegedExceptionAction;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

// ...

final String schemaLocation = "/home/javier/FlinkProjects/kafka-flink/src/main/java/com/grallandco/demos/avro.json";
final Schema avroSchema = new Schema.Parser().parse(new File(schemaLocation));

// Change the URL and username to match your cluster.
final Configuration configuration = new Configuration();
final String hdfsUrl = "hdfs://hadoopnamenode:9000";
final String username = "hduser";

// Fully qualify the path with the HDFS URI so the writer targets HDFS,
// not the local filesystem.
final Path path = new Path(hdfsUrl + "/user/hduser/parquet-files/data"
        + postfijoFilename + ".parquet");

// Run the write as the HDFS user so permissions are checked against it.
UserGroupInformation ugi = UserGroupInformation.createRemoteUser(username);
ugi.doAs(new PrivilegedExceptionAction<Void>() {
    public Void run() throws Exception {
        // The target directory must exist; create it if necessary.
        FileSystem fs = FileSystem.get(new URI(hdfsUrl), configuration);
        fs.mkdirs(path.getParent());

        ParquetWriter<GenericRecord> parquetWriter = new AvroParquetWriter<GenericRecord>(
                path,
                avroSchema,
                CompressionCodecName.SNAPPY,
                ParquetWriter.DEFAULT_BLOCK_SIZE,
                ParquetWriter.DEFAULT_PAGE_SIZE);
        final GenericRecord record = new GenericData.Record(avroSchema);
        record.put(Constantes.CAMPO_ID, datos[0]);
        record.put("movie", datos[1]);
        record.put("date", datos[2]);
        record.put("imdb", datos[3]);
        parquetWriter.write(record);
        parquetWriter.close(); // close() writes the footer; the file is unreadable without it
        return null;
    }
});
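As a side note, the AvroParquetWriter constructor used above is deprecated in recent parquet-avro releases in favor of a builder API. Here is a minimal sketch of the same write using the builder, assuming parquet-avro 1.8 or later; path, avroSchema, configuration, and record are the same variables as above:

// Equivalent write with the builder API (parquet-avro 1.8+).
ParquetWriter<GenericRecord> writer = AvroParquetWriter
        .<GenericRecord>builder(path)       // fully qualified hdfs:// path, as above
        .withSchema(avroSchema)             // Avro schema parsed from avro.json
        .withConf(configuration)            // Hadoop Configuration for the cluster
        .withCompressionCodec(CompressionCodecName.SNAPPY)
        .build();
try {
    writer.write(record);
} finally {
    writer.close(); // flush data and write the Parquet footer
}

The builder also lets you set row-group and page sizes explicitly if the defaults don't suit your data.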
