Converting a Kafka topic into a Spark DataFrame and writing it to HDFS

ehxuflar · posted 2021-06-04 in Kafka

I am trying to create a Kafka consumer in my Spark code, and I get an exception while doing so. My goal is to read from the topic and write the data to an HDFS path.

scala> df2.printSchema()
root
 |-- key: binary (nullable = true)
 |-- value: binary (nullable = true)
 |-- topic: string (nullable = true)
 |-- partition: integer (nullable = true)
 |-- offset: long (nullable = true)
 |-- timestamp: timestamp (nullable = true)
 |-- timestampType: integer (nullable = true)

scala> print(df1)
[key: binary, value: binary ... 5 more fields]

I have not produced any data to this topic, yet the DataFrame still reports these columns.
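These columns are the fixed schema that Spark's Kafka source always returns, regardless of what is in the topic; the message payload itself sits in the binary key and value columns. A minimal sketch of casting them to readable strings, using the df2 from the session above:

// Cast the binary Kafka key/value columns to strings so the payload is readable
val readable = df2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
readable.show(false)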

import org.apache.spark.sql.SparkSession

object Read {
  def main(args: Array[String]): Unit = {

    val spark = SparkSession.builder()
      .appName("spark Oracle Kafka")
      .master("local")
      .getOrCreate()

    // Batch-read the topic; this returns the Kafka source schema shown above
    val df2 = spark
      .read
      .format("kafka")
      .option("kafka.bootstrap.servers", "kafka server ip address i have given")
      .option("subscribe", "topic20190904")
      .load()

    df2.printSchema()                      // prints the schema without any problem
    df2.show()                             // this call throws an exception
    df2.write.parquet("/user/xrrn5/abcd")  // here I get java.lang.AbstractMethodError
  }
}
java.lang.AbstractMethodError  at org.apache.spark.internal.Logging$class.initializeLogIfNecessary(Logging.scala)
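For the stated goal of continuously moving the topic into HDFS from Spark, the streaming form of the same source is one option. A minimal sketch, assuming the same topic and broker as above; the output and checkpoint paths are placeholders, not taken from the question:

// Streaming read of the same topic, appended to HDFS as Parquet files
val stream = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "kafka server ip address i have given")
  .option("subscribe", "topic20190904")
  .load()

val query = stream
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .writeStream
  .format("parquet")
  .option("path", "/user/xrrn5/abcd")               // output directory on HDFS (placeholder)
  .option("checkpointLocation", "/user/xrrn5/_chk") // a checkpoint dir is required for file sinks
  .start()

query.awaitTermination()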

stszievb1#

To get data from Kafka into HDFS you don't actually need any code at all: just use Kafka Connect, which is part of Apache Kafka. Here is an example configuration:

{
  "name": "hdfs-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "test_hdfs",
    "hdfs.url": "hdfs://localhost:9000",
    "flush.size": "3",
    "name": "hdfs-sink"
  }
}

The documentation for the connector is here, and a general introduction and overview of using Kafka Connect is here.
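To apply it, this JSON is normally submitted to a running Kafka Connect worker over its REST API (by default the /connectors endpoint on port 8083), and the worker then runs the sink task with no Spark code involved. Note that the io.confluent.connect.hdfs.HdfsSinkConnector class comes from Confluent's HDFS connector plugin, so it has to be installed on the Connect worker's plugin path.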
