I am trying to compute row similarity between Wikipedia documents. I have tf-idf vectors in the format Key class: class org.apache.hadoop.io.Text Value Class: class org.apache.mahout.math.VectorWritable. They were produced by following this quick tour of text analysis with the Mahout command line: https://cwiki.apache.org/confluence/display/mahout/quick+tour+of+text+analysis+using+the+mahout+command+line
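For reference, the tf-idf vectors in that tour come out of seq2sparse; a minimal sketch of that step, assuming the Wikipedia chunks were already converted to sequence files under wikipedia-seqfiles (a hypothetical path, not from the original post):

# seq2sparse tokenizes the documents and writes tfidf-vectors with
# Text keys (document ids) and VectorWritable values.
mahout seq2sparse \
  -i wikipedia-seqfiles \
  -o wikipedia-vectors \
  -wt tfidf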
I created a Mahout matrix as follows:
mahout rowid \
  -i wikipedia-vectors/tfidf-vectors/part-r-00000 \
  -o wikipedia-matrix
The job reported the number of rows and columns it generated:
vectors.RowIdJob: Wrote out matrix with 4587604 rows and 14121544 columns to wikipedia-matrix/matrix
The matrix is in the format Key class: class org.apache.hadoop.io.IntWritable Value Class: class org.apache.mahout.math.VectorWritable
I also have a docIndex file in the following format: Key class: class org.apache.hadoop.io.IntWritable Value Class: class org.apache.hadoop.io.Text
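Both of these format strings are what seqdumper reports; to double-check each file, the sequence-file headers can be dumped directly (a quick sanity check, assuming the rowid output sits under wikipedia-matrix on HDFS):

# seqdumper prints "Key class: ... Value Class: ..." before the records,
# so this confirms what each file actually contains.
mahout seqdumper -i wikipedia-matrix/matrix | head -n 5
mahout seqdumper -i wikipedia-matrix/docIndex | head -n 5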
When I run the job:
mahout rowsimilarity \
  -i wikipedia-matrix/matrix \
  -o wikipedia-similarity \
  -r 4587604 \
  --similarityClassname SIMILARITY_COSINE \
  -m 50 \
  -ess
I get the following error:
13/08/25 15:18:18 INFO mapred.JobClient: Task Id : attempt_201308161435_0364_m_000001_1, Status : FAILED
java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.mahout.math.VectorWritable
at org.apache.mahout.math.hadoop.similarity.cooccurrence.RowSimilarityJob$VectorNormMapper.map(RowSimilarityJob.java:183)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:648)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:322)
at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
at org.apache.hadoop.mapred.Child.main(Child.java:260)
Can someone help me figure out this error? I don't see where the org.apache.hadoop.io.Text comes from, given that the input matrix is in the format Key class: class org.apache.hadoop.io.IntWritable Value Class: class org.apache.mahout.math.VectorWritable.
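The only Text values in this pipeline sit in the docIndex file (and as keys in the original tfidf-vectors), so one check worth running is whether the similarity job only ever reads the matrix file (a minimal check against the paths above, not a confirmed fix):

# If the -i path resolves to anything besides the matrix sequence file
# (e.g. the docIndex sitting next to it), VectorNormMapper will try to
# cast the Text values to VectorWritable and fail with this exact trace.
hadoop fs -ls wikipedia-matrix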
Thank you very much.
Best, Dragan
1 Answer
I solved the problem with the following commands:

I no longer get the error.