How to read Parquet files from HDFS with Apache Flink?

luaexgnf · posted 2021-06-21 · in Flink

I have only found TextInputFormat and CsvInputFormat. So how can I read Parquet files from HDFS with Apache Flink?

vnzz0bqm1#

OK, I have found a way to read Parquet files from HDFS with Apache Flink.
You should add the following dependencies to your pom.xml:

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-hadoop-compatibility_2.11</artifactId>
  <version>1.6.1</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-avro</artifactId>
  <version>1.6.1</version>
</dependency>
<dependency>
  <groupId>org.apache.parquet</groupId>
  <artifactId>parquet-avro</artifactId>
  <version>1.10.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <version>3.1.1</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>3.1.1</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.2.1</version>
</dependency>

Create an .avsc file to define the schema. For example:

{"namespace": "com.flinklearn.models",
     "type": "record",
     "name": "AvroTamAlert",
     "fields": [
        {"name": "raw_data", "type": ["string","null"]}
     ]
    }
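Before running code generation, you can sanity-check that the schema is well-formed Avro by parsing it with Avro's Schema.Parser (a quick sketch; the avro jar is pulled in transitively by the flink-avro dependency above):

```scala
import org.apache.avro.Schema

object SchemaCheck {
  // The same schema as alert.avsc, inlined as a string for the check
  val schemaJson: String =
    """{"namespace": "com.flinklearn.models",
      | "type": "record",
      | "name": "AvroTamAlert",
      | "fields": [
      |    {"name": "raw_data", "type": ["string","null"]}
      | ]
      |}""".stripMargin

  def main(args: Array[String]): Unit = {
    // Parser.parse throws SchemaParseException if the JSON is not valid Avro
    val schema = new Schema.Parser().parse(schemaJson)
    println(schema.getFullName) // com.flinklearn.models.AvroTamAlert
  }
}
```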

Run "java -jar d:\avro-tools-1.8.2.jar compile schema alert.avsc ." to generate the Java class, then copy AvroTamAlert.java into your project.
Use AvroParquetInputFormat to read the Parquet files from HDFS:

import org.apache.flink.api.scala.ExecutionEnvironment
import org.apache.flink.api.scala.hadoop.mapreduce.HadoopInputFormat
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
import org.apache.parquet.avro.AvroParquetInputFormat
import com.flinklearn.models.AvroTamAlert

class Main {
    def startApp(): Unit = {
        val env = ExecutionEnvironment.getExecutionEnvironment

        val job = Job.getInstance()

        // Wrap the Hadoop AvroParquetInputFormat so Flink can consume it
        val dIf = new HadoopInputFormat[Void, AvroTamAlert](
            new AvroParquetInputFormat[AvroTamAlert](), classOf[Void], classOf[AvroTamAlert], job)
        FileInputFormat.addInputPath(job, new Path("/user/hive/warehouse/testpath"))

        val dataset = env.createInput(dIf)

        // count() is an action and triggers execution itself,
        // so no separate env.execute() call is needed afterwards
        println(dataset.count())
    }
}

object Main {
    def main(args:Array[String]):Unit = {
        new Main().startApp()
    }
}
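As an optional refinement (not part of the original answer), parquet-avro also offers AvroParquetInputFormat.setRequestedProjection, which pushes a column projection down to the reader so that only the requested fields are deserialized. A hedged sketch, configured on the same Job instance before it is handed to HadoopInputFormat:

```scala
import org.apache.avro.Schema
import org.apache.hadoop.mapreduce.Job
import org.apache.parquet.avro.AvroParquetInputFormat

// Sketch: restrict the Parquet reader to the raw_data column only.
// Assumes this Job is the one later passed to HadoopInputFormat.
val job = Job.getInstance()
val projection = new Schema.Parser().parse(
  """{"namespace": "com.flinklearn.models",
    | "type": "record",
    | "name": "AvroTamAlert",
    | "fields": [{"name": "raw_data", "type": ["string","null"]}]
    |}""".stripMargin)
AvroParquetInputFormat.setRequestedProjection(job, projection)
```

With a single-field record like AvroTamAlert this changes little, but for wide Parquet tables projection pushdown can cut I/O substantially.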
