Null values produced by stream processing with Spark and Kafka

gcuhipw9 asked on 2021-06-07 in Kafka

I created a sparkConsumer so that CSV files sent to Kafka can be processed with Spark Structured Streaming. I start sparkConsumer and it waits for the producer; then I start the producer and the file is sent. The problem is that I get "null" values in the DataFrame instead of the file's content. My output looks like this:

-------------------------------------------
Batch: 1
-------------------------------------------
+---------+---------+-----------+--------+-----------------------+
|InvoiceNo|StockCode|Description|Quantity|timestamp              |
+---------+---------+-----------+--------+-----------------------+
|null     |null     |null       |null    |2019-01-08 15:46:29.156|
|null     |null     |null       |null    |2019-01-08 15:46:29.224|
|null     |null     |null       |null    |2019-01-08 15:46:29.224|
|null     |null     |null       |null    |2019-01-08 15:46:29.225|
|null     |null     |null       |null    |2019-01-08 15:46:29.225|
|null     |null     |null       |null    |2019-01-08 15:46:29.225|
|null     |null     |null       |null    |2019-01-08 15:46:29.225|
|null     |null     |null       |null    |2019-01-08 15:46:29.225|
|null     |null     |null       |null    |2019-01-08 15:46:29.225|
|null     |null     |null       |null    |2019-01-08 15:46:29.225|
|null     |null     |null       |null    |2019-01-08 15:46:29.225|
|null     |null     |null       |null    |2019-01-08 15:46:29.241|
|null     |null     |null       |null    |2019-01-08 15:46:29.241|
|null     |null     |null       |null    |2019-01-08 15:46:29.241|
|null     |null     |null       |null    |2019-01-08 15:46:29.241|
|null     |null     |null       |null    |2019-01-08 15:46:29.241|
|null     |null     |null       |null    |2019-01-08 15:46:29.241|
|null     |null     |null       |null    |2019-01-08 15:46:29.241|
|null     |null     |null       |null    |2019-01-08 15:46:29.241|
|null     |null     |null       |null    |2019-01-08 15:46:29.241|
+---------+---------+-----------+--------+-----------------------+

The code of sparkConsumer is:

import java.sql.Timestamp

import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types.{StringType, StructField, StructType}

object sparkConsumer extends App {

  val rootLogger = Logger.getRootLogger()
  rootLogger.setLevel(Level.ERROR)

  val spark = SparkSession
    .builder()
    .appName("Spark-Kafka-Integration")
    .master("local")
    .getOrCreate()

  val schema = StructType(Array(
    StructField("InvoiceNo", StringType, nullable = true),
    StructField("StockCode", StringType, nullable = true),
    StructField("Description", StringType, nullable = true),
    StructField("Quantity", StringType, nullable = true)
  ))

  import spark.implicits._
  val df = spark
    .readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "test")
    .option("delimiter", ";")
    .option("header","true")
    .option("inferSchema","true")
    .load()

  val df1 = df.selectExpr("CAST(value as STRING)", "CAST(timestamp AS TIMESTAMP)").as[(String, Timestamp)]
    .select(from_json($"value", schema).as("data"), $"timestamp")
    .select("data.*", "timestamp")

  df1.writeStream
    .format("console")
    .option("truncate","false")
    .start()
    .awaitTermination()

}

Producer.scala:

object Producer extends App {
  import java.util.Properties
  import org.apache.kafka.clients.producer._
  import scala.io.Source

  val  props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")                                             
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")        
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")      

  val producer = new KafkaProducer[String, String](props)                                             
  val TOPIC="test"
  val fileName = "path/to/test.csv"
  val lines = Source.fromFile(fileName).getLines()

  for(i <- lines){
    val record = new ProducerRecord(TOPIC, "key", s"$i")                    
    producer.send(record)
  }
  val record = new ProducerRecord(TOPIC, "key", "the end "+new java.util.Date)
  producer.send(record)
  producer.close()

}

Can anyone help me get the file's content into the DataFrame?

jyztefdp #1

I think the problem is related to serialization and deserialization. Your value is written to the topic in CSV format, e.g.:

111,someCode,someDescription,11

Your Spark consumer, however, expects the messages to be JSON (from_json with a schema). Parsing would work fine if a message looked like this:

{
    "InvoiceNo": "111",
    "StockCode": "someCode",
    "Description": "someDescription",
    "Quantity": "11"
}
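
As an illustration, here is a minimal sketch of the producer-side option: building such a JSON string from each CSV line before sending it. The loop below is not part of the original code; it assumes the columns come in the order InvoiceNo, StockCode, Description, Quantity and contain no embedded commas or quotes:

for (line <- lines) {
  // Split the CSV line; the -1 limit keeps trailing empty fields.
  // Assumption: exactly four fields per line, otherwise this match throws.
  val Array(invoiceNo, stockCode, description, quantity) = line.split(",", -1)
  // Assemble a JSON object matching the consumer's schema.
  val json =
    s"""{"InvoiceNo":"$invoiceNo","StockCode":"$stockCode","Description":"$description","Quantity":"$quantity"}"""
  producer.send(new ProducerRecord(TOPIC, "key", json))
}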

You have to change either the serialization or the deserialization so that the two sides match. One of the following options should work:
- the producer writes the messages to the topic in JSON format (a sketch is given above), or
- the Spark consumer splits the value field on the comma delimiter (see the sketch below this list).
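
Here is a minimal sketch of the consumer-side option, replacing the from_json call with a split on the value column. It assumes comma-separated values as in the example above; adjust the delimiter (e.g. ";") to whatever the file actually uses:

import org.apache.spark.sql.functions.{col, split}

// Sketch: parse the Kafka value as delimited text instead of JSON.
// Assumption: comma-separated fields in the order InvoiceNo,StockCode,Description,Quantity.
val df1 = df
  .selectExpr("CAST(value AS STRING) AS value", "CAST(timestamp AS TIMESTAMP) AS timestamp")
  .withColumn("fields", split(col("value"), ","))
  .select(
    col("fields").getItem(0).as("InvoiceNo"),
    col("fields").getItem(1).as("StockCode"),
    col("fields").getItem(2).as("Description"),
    col("fields").getItem(3).as("Quantity"),
    col("timestamp")
  )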
