FP-Growth algorithm

ct2axkht · posted 2021-06-28 · in Hive

Below is my code that generates frequent itemsets from a Hive table:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.Row
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import org.apache.spark.mllib.fpm.FPGrowth

val sparkConf = new SparkConf().setAppName("Recommender").setMaster("local")
val sc = new SparkContext(sparkConf)
val hiveContext = new HiveContext(sc)
import hiveContext.implicits._
import hiveContext.sql

val schema = new StructType(Array(
  StructField("col1", StringType, false)
))

// Each row of col1 holds one transaction as a comma-separated list of items
val dataRow = hiveContext.sql("select col1 from hive_table limit 100000").cache()
val dataRDD = hiveContext.createDataFrame(dataRow.rdd, schema).cache()
dataRDD.show()

// Split every row into an Array[String] of items for FPGrowth
val transactions = dataRDD.map((row: Row) => {
  row.getAs[String](0).split(",")
})

val fpg = new FPGrowth().setMinSupport(0.1).setNumPartitions(10)
val model = fpg.run(transactions)
val size: Double = transactions.count()
println("MODEL FreqItemCount " + model.freqItemsets.count())
println("Transactions count : " + size)

But the FreqItemCount result is always 0.
The input query returns rows like these:

270035_1,249134_1,929747_1
259138_1,44072_1,326046_1
385448_1,747230_1,74440_1,68096_1,610434_1,215589_3,999507_1,74439_1,36260_1,925018_1,588394_1,986622_1,64585_1,942893_1,5421_1,37041_1,52500_1,4925_1,553613,415353_1,600036_1,75955_1
693780_1,31379_1
465624_1,28993_1,1899_2,823631_1
667863_1,95623_3,345830_8,168966_1
837337_1,95586_1,350341_1,67379_1,837347_1,20556_1,17567_1,77713_1,361216_1,39535_1,525748_1,646241_1,346425_1,219266_1,77717_1,179382_3,702935_1
249882_1,28977_1
78025_1,113415_1,136718_1,640967_1,787444_1
193307_1,266303_1,220199_2,459193_1,352411_1,371579_1,45906_1,505334_1,9816_1,12627_1,135294_1,28182_1,132470_1
526260_1,305646_1,65438_1

But when I run the code with the hard-coded input below, I get the correct frequent itemsets:

val transactions = sc.parallelize(Seq(
  Array("Tuna", "Banana", "Strawberry"),
  Array("Melon", "Milk", "Bread", "Strawberry"),
  Array("Melon", "Kiwi", "Bread"),
  Array("Bread", "Banana", "Strawberry"),
  Array("Milk", "Tuna", "Tomato"),
  Array("Pepper", "Melon", "Tomato"),
  Array("Milk", "Strawberry", "Kiwi"),
  Array("Kiwi", "Banana", "Tuna"),
  Array("Pepper", "Melon")
))

Can you tell me what I am doing wrong? I am using Spark 1.6.2 and Scala 2.10.

svujldwt · answer #1

The root cause appears to be that the minimum support threshold (0.1) is too high. With roughly 100,000 transactions, an itemset has to occur in at least 10,000 of them to be reported, which is very unlikely for real transaction data; in your hard-coded example there are only 9 transactions, so the same threshold only requires a single occurrence. Try lowering the threshold gradually until you start getting frequent itemsets.
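
If it helps, here is a minimal sketch of that idea (the sweepMinSupport helper and the list of threshold values are my own illustration, not part of the original answer). It re-runs FP-Growth at progressively lower values of minSupport and prints how many itemsets each one yields, assuming the same transactions: RDD[Array[String]] as in the question.

import org.apache.spark.mllib.fpm.FPGrowth
import org.apache.spark.rdd.RDD

// Sketch: sweep minSupport downwards and report how many frequent itemsets
// each threshold yields; the threshold values here are only examples.
def sweepMinSupport(transactions: RDD[Array[String]]): Unit = {
  transactions.cache()
  val total = transactions.count()
  Seq(0.1, 0.01, 0.001, 0.0001).foreach { minSupport =>
    val model = new FPGrowth()
      .setMinSupport(minSupport)
      .setNumPartitions(10)
      .run(transactions)
    // FP-Growth keeps an itemset only if it occurs in at least
    // ceil(minSupport * total) transactions
    val minCount = math.ceil(minSupport * total).toLong
    println(s"minSupport=$minSupport (>= $minCount of $total transactions): " +
      s"${model.freqItemsets.count()} frequent itemsets")
  }
}

Once a threshold gives a non-zero count, you can inspect a few itemsets with model.freqItemsets.take(10).foreach(f => println(f.items.mkString(",") + " -> " + f.freq)).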
