I am using Spark and Scala to work on a problem with the MovieLens dataset, which contains the files ratings.csv, movie.csv and tag.csv. I want to compute the cosine similarity between tags using a domain-based approach: I convert the two files into strings and then compute the similarity between them.
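Concretely, the similarity computed here is the standard cosine between the per-character count vectors of the two strings:

cos(v1, v2) = (v1 · v2) / (||v1|| * ||v2||)

where v1 and v2 hold, for every character that occurs in either string, the number of times it occurs in the first and in the second string respectively.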
The code:
// read the two tag files as plain strings
val lines = Source.fromURL(Source.getClass().getResource("file:///usr/loca/spark/dataset/algorithm3/comedy")).mkString("\n")
val lines2 = Source.fromURL(Source.getClass().getResource("file:///usr/local/spark/dataset/algorithm3/funny")).mkString("\n")
val result = textCosine(lines, lines2)
println("The cosine similarity score: " + result)
}

// Euclidean norm of a vector
def module(vec: Vector[Double]): Double = {
  math.sqrt(vec.map(math.pow(_, 2)).sum)
}

// dot product of two equal-length vectors
def innerProduct(v1: Vector[Double], v2: Vector[Double]): Double = {
  val listBuffer = ListBuffer[Double]()
  for (i <- 0 until v1.length; j <- 0 until v2.length; if i == j) {
    listBuffer.append(v1(i) * v2(j))
  }
  listBuffer.sum
}

// cosine of the angle between the two vectors, capped at 1.0
def cosvec(v1: Vector[Double], v2: Vector[Double]): Double = {
  val cos = innerProduct(v1, v2) / (module(v1) * module(v2))
  if (cos <= 1) cos else 1.0
}

// build per-character count vectors over the union of characters in both strings,
// then return their cosine similarity
def textCosine(lines: String, lines2: String): Double = {
  val set = mutable.Set[Char]()
  lines.foreach(set += _)
  lines2.foreach(set += _)
  println(set)
  val ints1: Vector[Double] = set.toList.sorted.map(ch => lines.count(s => s == ch).toDouble).toVector
  println("===ints1: " + ints1)
  val ints2: Vector[Double] = set.toList.sorted.map(ch => lines2.count(s => s == ch).toDouble).toVector
  println("===ints2: " + ints2)
  cosvec(ints1, ints2)
}
}
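For reference, on toy inputs (hypothetical strings, not taken from the dataset) textCosine should behave like this: for "aab" and "ab" the combined character set is {a, b}, the count vectors are (2.0, 1.0) and (1.0, 1.0), and the result is (2*1 + 1*1) / (sqrt(5) * sqrt(2)) ≈ 0.949.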
But it gives me this error:
Exception in thread "main" java.lang.NullPointerException
at scala.io.Source$.fromURL(Source.scala:141)
at com.algorithm.similarity$.main(similarity.scala:18)
at com.algorithm.similarity.main(similarity.scala)
What is going wrong here?