Implementing a Spark SQL UDAF for a use case

ruarlubt · posted 2021-06-02 in Hadoop

I am working on a custom aggregate function (UDAF) in Spark SQL.
The use case is as follows:
I have data like the following (sample):

+-----------+-------+-------------+--------+
|season_year|line_pn|stock_loc_num|quantity|
+-----------+-------+-------------+--------+
|Autumn-2012|ACD47PS|           22|       2|
|Autumn-2012|ACD47PS|            3|       1|
|Autumn-2012|ACD47PS|           52|       9|
|Autumn-2012|ACD47PS|            9|       1|
|Autumn-2012|ACD47PS|            1|       4|
|Autumn-2012|ACD47PS|            1|       1|
|Autumn-2012|ACD47PS|            1|       1|
|Autumn-2012|ACD47PS|           10|       2|
|Autumn-2012|ACD47PS|           12|       2|
|Autumn-2012|ACD47PS|           15|       2|
|Autumn-2012|ACD47PS|           15|       3|
|Autumn-2012|ACD47PS|           15|       3|
|Autumn-2012|ACD47PS|           16|       1|
|Autumn-2012|ACD47PS|           18|       1|
|Autumn-2012|ACD47PS|           18|       3|
|Autumn-2012|ACD47PS|            2|      49|
|Autumn-2012|ACD47PS|            2|       7|
|Autumn-2012|ACD47PS|           21|       5|
|Autumn-2012|ACD47PS|           22|       8|
|Autumn-2012|ACD47PS|           24|       3|
+-----------+-------+-------------+--------+

Note: there are about 250K line part numbers (line_pn), 70 stock locations, and seasons ranging from 2009 through Spring 2016.
I am trying to write a Spark SQL UDAF that, grouping by line_pn and stock_loc_num, takes the two attributes season_year and quantity into a single aggregate function.
Then, inside the custom aggregate function (for each line_pn & stock_loc_num group), I want to group by season_year and sum(quantity).

df.groupBy("line_pn", "stock_loc_num").agg(seasonality(df.col("quantity"), df.col("season_year")).as("seasonality"))

Then I create a time series from the aggregated data and fit an exponential smoothing state space model (which I have already implemented; it takes the time series as input and determines whether a line_pn at a stock_loc_num is seasonal or non-seasonal).
The output must be:
for each line_pn, stock_loc_num level grouping, the seasonality indicator, summarized as follows

+-------+-------------+--------+
|line_pn|stock_loc_num|seasonal|
+-------+-------------+--------+
|ACD47PS|           22|       N|
|ACD47PS|            3|       A|
|MOTFP70|           52|       N|
+-------+-------------+--------+
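For intuition, the season-level sums the UDAF needs to compute internally are exactly what a plain two-level aggregation produces. A quick sanity check with the standard DataFrame API (not the UDAF itself) could look like this:

// What the UDAF must reproduce internally for each (line_pn, stock_loc_num) group:
df.groupBy("line_pn", "stock_loc_num", "season_year")
  .sum("quantity")
  .orderBy("line_pn", "stock_loc_num", "season_year")
  .show()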

I have tried many things, but I cannot get the UDAF written. Please help.
Code:
Main code:

import org.apache.spark.sql._
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

object UDAF {

//Extend UserDefinedAggregateFunction to write a custom aggregate function.
//You can also specify any constructor arguments.

    val conf = new SparkConf().setAppName("HiveQL").setMaster("local[4]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

class Seasonality extends UserDefinedAggregateFunction {

  // Input Data Type Schema

  def inputSchema: StructType = StructType(Array(StructField("quantity", IntegerType), StructField("season", StringType)))

  // Intermediate Schema
  def bufferSchema: StructType = StructType(
    StructField("sumQty", IntegerType) ::
    StructField("season_year", StringType) :: Nil
  )  
  // Returned data type
  def dataType: DataType = StringType

  // Self-explaining
  def deterministic = true

  // This function is called whenever key changes
  def initialize(buffer: MutableAggregationBuffer) = {
    buffer(0) = 0  // set the quantity sum to 0
    buffer(1) = "" // set season_year to blank
  }

  // Iterate over each entry of a group
  def update(buffer: MutableAggregationBuffer, input: Row) = {   

    // Clueless what should be done here? I'm just summing the quantity attribute and pushing the new string into the buffer :\

    buffer(0) = buffer.getInt(0) + input.getInt(0)
    buffer(1) = input.getString(1)

  }

  // Merge two partial aggregates
  def merge(buffer1: MutableAggregationBuffer, buffer2: Row) = {
    buffer1(0) = buffer1.getInt(0) + buffer2.getInt(0)
    buffer1(1) = buffer2.getString(1)

    //println("Buffer1 Seq: "+buffer1.toSeq)
    //println("Buffer2 Seq: "+buffer2.toSeq)
  }

  // Called after all the entries are exhausted.
  def evaluate(buffer: Row) = {

    // I don't know; I'm just concatenating both attributes here :\

    // I want a time series of season_year and sum(quantity) here, so as to calculate seasonality as follows
    /*
     * Something Like this
     * 
     * season_year              sumQty
     * Spring-2012                  2
     * Winter-2012                  6
     * Summer-2012                  0
     * Autumn-2012                  3
     * Spring-2013                  1
     * Winter-2013                  0
     * Summer-2013                  3
     * Autumn-2013                  5
     * 
     * 
     * This will be a time series for 2 years, 4 seasons
     * 
     * say TimeSeries ts
     * 
     *           Spring  Winter  Summer  Autumn
     * 2012           2       6       0       3
     * 2013           1       0       3       5
     * 
     * 
     * 
     *  val etsForecast = SeasonalExponentialSmoothing.train(ts, 4) // ts: time series, 4 quarters/seasons per year
     *  
     *  Final value to be returned from the aggregate function is ets.getBestParams()[2]
     *  
     * */

    buffer.getString(1) + "--"+ buffer.getInt(0)
  }

}

  def main (args: Array[String]) {

    import sqlContext.implicits._

    val df = sqlContext.read
    .format("com.databricks.spark.csv")
    .option("header", "true") // Use first line of all files as header
    .option("inferSchema", "true") // Automatically infer data types
    .load("Sess.csv")

    val seasonality = new Seasonality()

    // Calculate seasonality value for each group
    df.groupBy("line_pn", "stock_loc_num").agg(seasonality(df.col("quantity"), df.col("season_year")).as("seasonality")).show()

  }

}
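One possible direction for the parts marked "clueless" above: keep a MapType buffer keyed by season_year, sum quantities into it in update/merge, and only in evaluate expand the map into a chronologically ordered series to feed the smoothing model. The sketch below is an assumption about how that could look, not a verified solution; the class name SeasonalityMap, the season ordering, and the gama threshold are all illustrative choices of mine. It reuses the SeasonalExponentialSmoothing object listed further down.

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

class SeasonalityMap extends UserDefinedAggregateFunction {

  def inputSchema: StructType = StructType(
    StructField("quantity", IntegerType) ::
    StructField("season", StringType) :: Nil)

  // Intermediate state: season_year -> running sum(quantity).
  def bufferSchema: StructType = StructType(
    StructField("sums", MapType(StringType, IntegerType)) :: Nil)

  def dataType: DataType = StringType
  def deterministic: Boolean = true

  def initialize(buffer: MutableAggregationBuffer): Unit =
    buffer(0) = Map.empty[String, Int]

  def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    val sums = buffer.getAs[Map[String, Int]](0)
    val season = input.getString(1)
    // Add this row's quantity to the running total of its season.
    buffer(0) = sums + (season -> (sums.getOrElse(season, 0) + input.getInt(0)))
  }

  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    val a = buffer1.getAs[Map[String, Int]](0)
    val b = buffer2.getAs[Map[String, Int]](0)
    // Merge the partial maps, summing totals for seasons present in both.
    buffer1(0) = b.foldLeft(a) { case (acc, (k, v)) =>
      acc + (k -> (acc.getOrElse(k, 0) + v))
    }
  }

  def evaluate(buffer: Row): String = {
    val sums = buffer.getAs[Map[String, Int]](0)
    if (sums.isEmpty) return "N"
    // Expand the map into a chronological series, filling missing seasons with 0.
    val seasonOrder = List("Spring", "Summer", "Autumn", "Winter") // assumed ordering
    val years = sums.keys.map(_.split("-")(1).toInt)
    val ts = (for {
      year <- years.min to years.max
      season <- seasonOrder
    } yield sums.getOrElse(s"$season-$year", 0).toDouble).toList
    val model = SeasonalExponentialSmoothing.train(ts, 4) // 4 seasons per year
    val (_, _, gama) = model.bestParams()
    if (gama > 0.5) "A" else "N" // placeholder threshold; the real rule is the asker's
  }
}

The driver call in main would stay the same, with new SeasonalityMap() in place of new Seasonality().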

Seasonal exponential smoothing model source

// Imports assumed by this snippet (Breeze aliased to BDV/BDM, plus scala.math):
import breeze.linalg.{argmax, DenseMatrix => BDM, DenseVector => BDV}
import scala.math.{pow, sqrt}

class SeasonalExponentialSmoothingModel (
    val number: Int, // The number of y values evaluated so far.
    val l_array: Array[Double],
    val b_array: Array[Double],
    val s_array: Array[Double],
    val best_index: Int,
    val MSE_vector: Array[Double],
    val m      : Int // The seasonal period; for monthly data, m = 12, for quarterly data, m = 4.
  ) {

  def this(    // Convenience constructor for the initial state: initial level l, initial trend b, seasonal components s, and period m.
    l: Double,
    b: Double,
    s: Array[Double],
    m: Int
    ){
    this(0,Array.fill(1331)(l),Array.fill(1331)(b),Array.fill(1331)(s).flatten,0,Array.fill(1331)(0.0),m) // 1331 = 11^3 = (c+1)^n grid points for gridGenerator(10, 3)
  }
  // Build the grid of candidate parameter values in [0, 1].
  // c: calibration (grid resolution), n: number of parameters.
  // Returns n sequences, each of length (c+1)^n, enumerating every combination.
  private def gridGenerator(c: Int, n: Int) = for( i <- 0 to n-1) yield List.tabulate(pow((c+1),n).toInt)(x => ((x%(pow((c+1),n-i).toInt))/(pow((c+1),n-i-1).toInt)).toDouble/c)

  val IndexedSeq(alpha,beta,gama) = gridGenerator(10,3)      // Grid search: candidate values for alpha, beta and gama.
  val brzAlpha = new BDV[Double](alpha.toArray)
  val brzBeta  = new BDV[Double](beta.toArray)
  val brzGama  = new BDV[Double](gama.toArray)

  private val numOfIndex = alpha.length

  val brzL = new BDV[Double](l_array)
  val brzB = new BDV[Double](b_array)
  val brzMSE = new BDV[Double](MSE_vector)
  val brzS = new BDM(m,numOfIndex,s_array)
  val MSE      = brzMSE(best_index)

  def bestParams() = (brzAlpha(best_index),brzBeta(best_index),brzGama(best_index)) // Get the best (alpha, beta, gama) values as a tuple.
  def predict(predictionLength: Int = 12) = List.tabulate(predictionLength)(x => brzB * x.toDouble + brzB + brzL + brzS(x%m,::).t)
  def bestPrediction(predictionLength: Int = 12) = predict(predictionLength).map(_(best_index))

  def evaluate(y: Double) = {
    val new_brzL = (brzAlpha :* (-brzS(0,::).t + y)) + ((-brzAlpha + 1.0):*(brzL + brzB))
    val new_brzS = new BDM[Double](m,numOfIndex)
    new_brzS(-1,::) := ((brzGama :* (-brzL - brzB + y)) + ((-brzGama + 1.0):*(brzS(0,::).t))).t
    new_brzS(0 to -2,::) := brzS(1 to -1,::) 
    val new_brzB = (brzBeta :* (new_brzL - brzL)) + ((-brzBeta + 1.0):*brzB)
    val y_predict_1 = predict(1)(0) // Step-one forecast. Type: breeze.linalg.DenseVector[Double]
    val error_1 =  y_predict_1 - y  // Step-one error. Type: breeze.linalg.DenseVector[Double]
    val new_brzMSE = (((brzMSE :* brzMSE * number.toDouble) + (error_1 :* error_1)) * (1./(number+1))  ).map(sqrt(_)) // updated running RMSE. Type: breeze.linalg.DenseVector[Double]
    val new_best_index = argmax(-new_brzMSE) // index of the smallest error
    new SeasonalExponentialSmoothingModel(number+1,new_brzL.toArray,new_brzB.toArray,new_brzS.data,new_best_index,new_brzMSE.toArray,m)
  }  
}

class SeasonalExponentialSmoothing {
  /**
   * Run the algorithm .
  */ 

  def initialize(y: List[Double], m: Int) = {
    val y_init = y.take(m)
    val l_0 = y_init.sum/y_init.length
    val b_0 = (y.take(2*m).drop(m).sum - y_init.sum)/m/m
    val s_0 = y_init.map(_ - l_0).toArray//(10.7,-9.5,-2.6,1.4)
    (l_0,b_0,s_0)
  }

  def run(y: List[Double],m:Int) = { 
    val number = y.length
    val (l,b,s) = initialize(y,m)
    val Model = new SeasonalExponentialSmoothingModel(l,b,s,m) //The initialization model...    
    y.foldLeft(Model)((b,a) => b.evaluate(a))
  }
}

object SeasonalExponentialSmoothing {

  def train(input: List[Double], m: Int): SeasonalExponentialSmoothingModel = { 
    new SeasonalExponentialSmoothing().run(input,m) // input is the data to be forecast; m is the seasonal period.
  }
}
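For reference, the smoothing model can be exercised on its own with the 2-year quarterly series from the comment block in evaluate above (series values from the question; the variable names here are illustrative):

// Quarterly series from the comment block: 2 years x 4 seasons.
val ts = List(2.0, 6.0, 0.0, 3.0, 1.0, 0.0, 3.0, 5.0)
val model = SeasonalExponentialSmoothing.train(ts, 4) // m = 4 seasons per year
val (alpha, beta, gama) = model.bestParams() // gama is the question's getBestParams()[2]
val nextYear = model.bestPrediction(4) // forecast the next 4 seasons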
