Spark Streaming (Scala) performance drops sharply

zqdjd7g9 posted on 2021-06-07 in Kafka

I have the following code:

// imports assumed for the APIs used below: spark-streaming-kafka (0.8 direct stream),
// scala-redis, json4s, commons-codec and the spark-cassandra-connector
import java.text.SimpleDateFormat
import java.util.Calendar

import com.datastax.spark.connector._
import com.redis.RedisClientPool
import kafka.serializer.StringDecoder
import org.apache.commons.codec.digest.DigestUtils
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import org.json4s.DefaultFormats
import org.json4s.jackson.JsonMethods.parse

case class event(imei: String, date: String, gpsdt: String, dt: String, id: String)
case class historyevent(imei: String, date: String, gpsdt: String)

object kafkatesting {
def main(args: Array[String]) {

val clients = new RedisClientPool("192.168.0.40", 6379)
val conf = new SparkConf()
  .setAppName("KafkaReceiver")
  .set("spark.cassandra.connection.host", "192.168.0.40")
  .set("spark.cassandra.connection.keep_alive_ms", "20000")
  .set("spark.executor.memory", "3g")
  .set("spark.driver.memory", "4g")
  .set("spark.submit.deployMode", "cluster")
  .set("spark.executor.instances", "4")
  .set("spark.executor.cores", "3")
  .set("spark.streaming.backpressure.enabled", "true")
  .set("spark.streaming.backpressure.initialRate", "100")
  .set("spark.streaming.kafka.maxRatePerPartition", "7")

val sc = SparkContext.getOrCreate(conf)
val ssc = new StreamingContext(sc, Seconds(10))
val sqlContext = new SQLContext(sc)
val kafkaParams = Map[String, String](
  "bootstrap.servers" -> "192.168.0.113:9092",
  "group.id" -> "test-group-aditya",
  "auto.offset.reset" -> "largest")

val topics = Set("random")
val kafkaStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)

kafkaStream.foreachRDD { rdd =>

  val updatedRDD = rdd.map(a =>
    {
      implicit val formats = DefaultFormats
      val jValue = parse(a._2)
      val fleetrecord = jValue.extract[historyevent]
      val hash = fleetrecord.imei + fleetrecord.date + fleetrecord.gpsdt
      val md5Hash = DigestUtils.md5Hex(hash).toUpperCase()
      val now = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(Calendar.getInstance().getTime())

      event(fleetrecord.imei, fleetrecord.date, fleetrecord.gpsdt, now, md5Hash)
    })
    .collect()

  updatedRDD.foreach(f =>
    {
      clients.withClient {
        client =>
          {
            val value = f.imei + " , " + f.gpsdt
            val zscore = Calendar.getInstance().getTimeInMillis
            val key = new SimpleDateFormat("yyyy-MM-dd").format(Calendar.getInstance().getTime())
            val dt = new SimpleDateFormat("HH:mm:ss").format(Calendar.getInstance().getTime())
            val q1 = "00:00:00"
            val q2 = "06:00:00"
            val q3 = "12:00:00"
            val q4 = "18:00:00"
            val quater = if (dt > q1 && dt < q2) {
              System.out.println(dt + " lies in quarter 1");
              " -> 1"
            } else if (dt > q2 && dt < q3) {
              System.out.println(dt + " lies in quarter 2");
              " -> 2"
            } else if (dt > q3 && dt < q4) {
              System.out.println(dt + " lies in quarter 3");
              " -> 3"
            } else {
              System.out.println(dt + " lies in quarter 4");
              " -> 4"
            }
            client.zadd(key + quater, zscore, value)
            println(f.toString())
          }
      }
    })
  val collection = sc.parallelize(updatedRDD)
  collection.saveToCassandra("db", "table", SomeColumns("imei", "date", "gpsdt","dt","id"))
}

ssc.start()
ssc.awaitTermination()
}
}

I use this code to insert data from Kafka into Cassandra and Redis, but I am facing the following issues:
1) The application builds up a long queue of active batches while the current batch is still being processed. I want the next batch to start only after the previous batch has finished executing.
2) I have a four-node cluster processing each batch, yet it takes around 30-40 seconds to process just 700 records.
Is my code already optimized, or do I need to improve it to get better performance?


zzzyeukh 1#

Yes, you can do all of that inside mapPartitions. The DataStax connector also has an API that lets you save the DStream directly. Here is how to do it for C*:

val partitionedDstream = kafkaStream.repartition(5) //change this value as per your data and spark cluster

//Now instead of iterating each RDD work on each partition.
val eventsStream: DStream[event] = partitionedDstream.mapPartitions(x => {
  val lst = scala.collection.mutable.ListBuffer[event]()
  while (x.hasNext) {
    val a = x.next()
    implicit val formats = DefaultFormats
    val jValue = parse(a._2)
    val fleetrecord = jValue.extract[historyevent]
    val hash = fleetrecord.imei + fleetrecord.date + fleetrecord.gpsdt
    val md5Hash = DigestUtils.md5Hex(hash).toUpperCase()
    val now = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(Calendar.getInstance().getTime())
    lst += event(fleetrecord.imei, fleetrecord.date, fleetrecord.gpsdt, now, md5Hash)
  }
  lst.toList.iterator
})

eventsStream.cache() // because you are using the same DStream for C* and Redis

//instead of collecting each RDD save whole Dstream at once
import com.datastax.spark.connector.streaming._
eventsStream.saveToCassandra("db", "table", SomeColumns("imei", "date", "gpsdt", "dt", "id"))

Cassandra also accepts a timestamp as a Long value, so you can additionally change parts of your code as shown below:

val now = System.currentTimeMillis()

//also change your case class to take `Long` instead of `String`
case class event(imei: String, date: String, gpsdt: String, dt: Long, id: String)

You can do the same for Redis as well; a sketch follows below.
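
For example, here is a minimal sketch of the Redis side under the same pattern, assuming the same Redis host as in the question and the scala-redis client already used there. The quarter-bucketing of the original key is omitted for brevity, and the pool is created inside each partition because RedisClientPool is not serializable.

// A minimal sketch (not the original answer's code): write each partition of the
// stream to Redis directly instead of collecting to the driver.
eventsStream.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    // created per partition because RedisClientPool is not serializable;
    // host/port are assumed to match the question
    val pool = new RedisClientPool("192.168.0.40", 6379)
    partition.foreach { f =>
      val value = f.imei + " , " + f.gpsdt
      val zscore = Calendar.getInstance().getTimeInMillis.toDouble
      val key = new SimpleDateFormat("yyyy-MM-dd").format(Calendar.getInstance().getTime())
      // the quarter suffix from the original key is omitted here for brevity
      pool.withClient(client => client.zadd(key, zscore, value))
    }
    pool.close
  }
}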
