Kafka + Spark Streaming: processing multiple topics in a single job

szqfcxe2 · asked 2021-05-29 in Hadoop · 1 answer · 387 views

There are 40 topics in Kafka, and I have written Spark Streaming jobs that each handle 5 tables. The only goal of each Spark Streaming job is to read its 5 Kafka topics and write them to the corresponding 5 HDFS paths. Most of the time this works fine, but sometimes it writes topic 1's data to one of the other HDFS paths.

The code below tries to get one Spark Streaming job to process 5 topics and write each of them to its corresponding HDFS path, but it sometimes writes topic1's data under the hdfs5 path instead of hdfs1.
Please share your suggestions:

import java.text.SimpleDateFormat
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.{ SparkConf, TaskContext }
import org.apache.spark.streaming.{ Seconds, StreamingContext }
import org.apache.spark.streaming.kafka010._

object SparkKafkaMultiConsumer {

  def main(args: Array[String]): Unit = {
    if (args.length < 1) {
      System.err.println(s"""
        |Usage: KafkaStreams auto.offset.reset latest/earliest table1,table2,etc 
        |
        """.stripMargin)
      System.exit(1)
    }

    val date_today = new SimpleDateFormat("yyyy_MM_dd");
    val date_today_hour = new SimpleDateFormat("yyyy_MM_dd_HH");
    val PATH_SEPERATOR = "/";

    import com.typesafe.config.ConfigFactory

    val conf = ConfigFactory.load("env.conf")
    val topicconf = ConfigFactory.load("topics.conf")

// Create context with custom second batch interval
val sparkConf = new SparkConf().setAppName("pt_streams")
val ssc = new StreamingContext(sparkConf, Seconds(conf.getString("kafka.duration").toLong))
var kafka_topics="kafka.topics"

// Create direct kafka stream with brokers and topics
var topicsSet = topicconf.getString(kafka_topics).split(",").toSet
if (args.length == 2) {
  print("This stream job will process table(s): " + args(1))
  topicsSet = args(1).split(",").toSet
}

val topicList = topicsSet.toList

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> conf.getString("kafka.brokers"),
  "zookeeper.connect" -> conf.getString("kafka.zookeeper"),
  "group.id" -> conf.getString("kafka.consumergroups"),
  "auto.offset.reset" -> args { 0 },
  "enable.auto.commit" -> (conf.getString("kafka.autoCommit").toBoolean: java.lang.Boolean),
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "security.protocol" -> "SASL_PLAINTEXT")

val messages = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](topicsSet, kafkaParams))

for (i <- 0 until topicList.length) {
  /**
   * set a timer to see how much time the filter operation takes for each topic
   */
  val topicStream = messages.filter(_.topic().equals(topicList(i)))

  val data = topicStream.map(_.value())
  data.foreachRDD((rdd, batchTime) => {
    //        val data = rdd.map(_.value())
    if (!rdd.isEmpty()) {
      rdd.coalesce(1).saveAsTextFile(conf.getString("hdfs.streamoutpath") + PATH_SEPERATOR + topicList(i) + PATH_SEPERATOR + date_today.format(System.currentTimeMillis())
        + PATH_SEPERATOR + date_today_hour.format(System.currentTimeMillis()) + PATH_SEPERATOR + System.currentTimeMillis())
    }
  })
}

 try{
     // After all successful processing, commit the offsets to kafka
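    // NOTE: foreachRDD only registers this output operation on the driver;
    // exceptions thrown while the batches actually run are not caught by this try/catch.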
    messages.foreachRDD { rdd =>
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      messages.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
    }
  } catch {
    case e: Exception => 
      e.printStackTrace()
      print("error while commiting the offset")

  }
// Start the computation
ssc.start()
ssc.awaitTermination()

  }

}
u5rb5r59 · Answer #1

You're better off using the HDFS connector for Kafka Connect. It is open source and can be used standalone or as part of the Confluent Platform. A simple configuration file streams data from Kafka topics to HDFS, and if you have a schema for your data it will create the Hive tables for you.
If you write this code yourself, you're reinventing the wheel; this is a solved problem :)
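
For illustration only, a minimal standalone sink configuration for the HDFS connector could look roughly like this; the connector name, topic list, URLs, paths and partitioner settings below are placeholders, and the exact property names and format class should be checked against the connector documentation for your version:

name=hdfs-sink-tables
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=5

# the five topics this connector instance should drain to HDFS
topics=topic1,topic2,topic3,topic4,topic5

hdfs.url=hdfs://namenode:8020
topics.dir=/user/streams/streamoutpath
flush.size=1000

# on-disk format; pick the format class that matches your data (Avro, JSON, ...)
format.class=io.confluent.connect.hdfs.avro.AvroFormat

# optional hourly time-based partitioning, similar in spirit to the yyyy_MM_dd_HH layout above
partitioner.class=io.confluent.connect.hdfs.partitioner.TimeBasedPartitioner
partition.duration.ms=3600000
path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH/
locale=en_US
timezone=UTC

# hive.integration=true plus hive.metastore.uris=... enables the automatic Hive table creation mentioned above

Such a file can be run with Connect in standalone mode (e.g. connect-standalone connect-worker.properties hdfs-sink.properties, where both file names are placeholders) or submitted as JSON to the REST API of a distributed Connect cluster, so no Spark code is needed for this pipeline.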
