I ran into a problem while building a recommender with Mahout on Hadoop.
The error message is:
Error: java.io.IOException: wrong value class: org.apache.mahout.math.VarLongWritable is not class org.apache.mahout.math.VectorWritable
at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1378)
at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:83)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
at org.apache.hadoop.mapreduce.Reducer.reduce(Reducer.java:150)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
The main (driver) code is:
job.setInputFormatClass(TextInputFormat.class);
job.setMapperClass(FilesToItemPrefsMapper.class);
job.setMapOutputKeyClass(VarLongWritable.class);
job.setMapOutputValueClass(VarLongWritable.class);
job.setReducerClass(FileToUserVectorReducer.class);
job.setOutputKeyClass(VarLongWritable.class);
job.setOutputValueClass(VectorWritable.class);
job.setOutputFormatClass(SequenceFileOutputFormat.class);
SequenceFileOutputFormat.setOutputCompressionType(job, CompressionType.NONE);
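(For context, the snippet above is only the job configuration. A minimal sketch of the driver boilerplate that would surround it; the class name and the argument-based paths are assumptions, not from the question:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FilesToUserVectorsDriver { // hypothetical class name
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "files-to-user-vectors");
        job.setJarByClass(FilesToUserVectorsDriver.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // assumed input path
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // assumed output path
        // ...the setInputFormatClass/setMapperClass/... calls shown above go here...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
)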
The Mapper is:
public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    String line = value.toString();
    Matcher m = NUMBERS.matcher(line);
    // The first number on the line is the user ID.
    m.find();
    VarLongWritable userID = new VarLongWritable(Long.parseLong(m.group()));
    VarLongWritable itemID = new VarLongWritable();
    // Each remaining number is an item ID preferred by that user.
    while (m.find()) {
        itemID.set(Long.parseLong(m.group()));
        context.write(userID, itemID);
    }
}
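(The NUMBERS pattern is not shown in the question. This mapper appears to follow the example in "Mahout in Action", where it is a plain digit matcher; a sketch under that assumption:

// Assumption: NUMBERS simply matches runs of digits, as in the book's example.
private static final Pattern NUMBERS = Pattern.compile("(\\d+)");
// With that pattern, an input line such as "98955 590 22 16793" yields
// userID = 98955 and emits the pairs (98955,590), (98955,22), (98955,16793).
)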
The Reducer is:
public class FileToUserVectorReducer
        extends Reducer<VarLongWritable, VarLongWritable, VarLongWritable, VectorWritable> {
    public void reducer(VarLongWritable userID, Iterable<VarLongWritable> itemPrefs, Context context)
            throws IOException, InterruptedException {
        // Collect all item IDs for this user into one sparse vector.
        Vector userVector = new RandomAccessSparseVector(Integer.MAX_VALUE, 100);
        for (VarLongWritable itemPref : itemPrefs) {
            userVector.set((int) itemPref.get(), 1.0f);
        }
        context.write(userID, new VectorWritable(userVector));
    }
}
I thought the reducer's VectorWritable value type was set by job.setOutputValueClass(VectorWritable.class). If so, why is this error thrown?
1 Answer
The problem is in the reducer function: reducer(...) should be reduce(...). Because the method is named reducer, it never overrides Reducer.reduce, so Hadoop invokes the default identity implementation (visible in the stack trace as org.apache.hadoop.mapreduce.Reducer.reduce), which writes the VarLongWritable map-output values through unchanged, and they then fail the check against the declared output value class VectorWritable.
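A minimal sketch of the corrected reducer, with the same logic as the question's code but the method renamed to reduce and @Override added so the compiler verifies the signature:

public class FileToUserVectorReducer
        extends Reducer<VarLongWritable, VarLongWritable, VarLongWritable, VectorWritable> {
    @Override
    public void reduce(VarLongWritable userID, Iterable<VarLongWritable> itemPrefs, Context context)
            throws IOException, InterruptedException {
        Vector userVector = new RandomAccessSparseVector(Integer.MAX_VALUE, 100);
        for (VarLongWritable itemPref : itemPrefs) {
            userVector.set((int) itemPref.get(), 1.0f);
        }
        context.write(userID, new VectorWritable(userVector));
    }
}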
@Override is very useful here. If I had used @Override, the compiler would have reported the mistake at compile time, since reducer(...) does not override anything in Reducer. At first I felt it was unnecessary, but this experience proved its value.