I am new to Hadoop. My MapReduce code runs, but it does not produce any output. Here is the job information reported by MapReduce:
16/09/20 13:11:40 INFO mapred.JobClient: Job complete: job_201609081210_0078
16/09/20 13:11:40 INFO mapred.JobClient: Counters: 28
16/09/20 13:11:40 INFO mapred.JobClient: Map-Reduce Framework
16/09/20 13:11:40 INFO mapred.JobClient: Spilled Records=0
16/09/20 13:11:40 INFO mapred.JobClient: Map output materialized bytes=1362
16/09/20 13:11:40 INFO mapred.JobClient: Reduce input records=0
16/09/20 13:11:40 INFO mapred.JobClient: Virtual memory (bytes) snapshot=466248720384
16/09/20 13:11:40 INFO mapred.JobClient: Map input records=852032443
16/09/20 13:11:40 INFO mapred.JobClient: SPLIT_RAW_BYTES=29964
16/09/20 13:11:40 INFO mapred.JobClient: Map output bytes=0
16/09/20 13:11:40 INFO mapred.JobClient: Reduce shuffle bytes=1362
16/09/20 13:11:40 INFO mapred.JobClient: Physical memory (bytes) snapshot=57472311296
16/09/20 13:11:40 INFO mapred.JobClient: Reduce input groups=0
16/09/20 13:11:40 INFO mapred.JobClient: Combine output records=0
16/09/20 13:11:40 INFO mapred.JobClient: Reduce output records=0
16/09/20 13:11:40 INFO mapred.JobClient: Map output records=0
16/09/20 13:11:40 INFO mapred.JobClient: Combine input records=0
16/09/20 13:11:40 INFO mapred.JobClient: CPU time spent (ms)=2375210
16/09/20 13:11:40 INFO mapred.JobClient: Total committed heap usage (bytes)=47554494464
16/09/20 13:11:40 INFO mapred.JobClient: File Input Format Counters
16/09/20 13:11:40 INFO mapred.JobClient: Bytes Read=15163097088
16/09/20 13:11:40 INFO mapred.JobClient: FileSystemCounters
16/09/20 13:11:40 INFO mapred.JobClient: HDFS_BYTES_READ=15163127052
16/09/20 13:11:40 INFO mapred.JobClient: FILE_BYTES_WRITTEN=13170190
16/09/20 13:11:40 INFO mapred.JobClient: FILE_BYTES_READ=6
16/09/20 13:11:40 INFO mapred.JobClient: Job Counters
16/09/20 13:11:40 INFO mapred.JobClient: Launched map tasks=227
16/09/20 13:11:40 INFO mapred.JobClient: Launched reduce tasks=1
16/09/20 13:11:40 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=759045
16/09/20 13:11:40 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
16/09/20 13:11:40 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=1613259
16/09/20 13:11:40 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
16/09/20 13:11:40 INFO mapred.JobClient: Data-local map tasks=227
16/09/20 13:11:40 INFO mapred.JobClient: File Output Format Counters
16/09/20 13:11:40 INFO mapred.JobClient: Bytes Written=0
Here is the code that launches the MapReduce job:
import java.io.File;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class mp {
    public static void main(String[] args) throws Exception {
        Job job1 = new Job();
        job1.setJarByClass(mp.class);
        FileInputFormat.addInputPath(job1, new Path(args[0]));
        String oFolder = args[0] + "/output";
        FileOutputFormat.setOutputPath(job1, new Path(oFolder));
        job1.setMapperClass(TransMapper1.class);
        job1.setReducerClass(TransReducer1.class);
        job1.setMapOutputKeyClass(LongWritable.class);
        job1.setMapOutputValueClass(DnaWritable.class);
        job1.setOutputKeyClass(LongWritable.class);
        job1.setOutputValueClass(Text.class);
    }
}
Here is the mapper class (TransMapper1):
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class TransMapper1 extends Mapper<LongWritable, Text, LongWritable, DnaWritable> {

    @Override
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        LongWritable bamWindow = new LongWritable(Long.parseLong(tokenizer.nextToken()));
        LongWritable read = new LongWritable(Long.parseLong(tokenizer.nextToken()));
        LongWritable refWindow = new LongWritable(Long.parseLong(tokenizer.nextToken()));
        IntWritable chr = new IntWritable(Integer.parseInt(tokenizer.nextToken()));
        DoubleWritable dist = new DoubleWritable(Double.parseDouble(tokenizer.nextToken()));
        DnaWritable dnaW = new DnaWritable(bamWindow, read, refWindow, chr, dist);
        context.write(bamWindow, dnaW);
    }
}
And this is the reducer class (TransReducer1):
import java.io.IOException;
import java.util.ArrayList;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class TransReducer1 extends Reducer<LongWritable, DnaWritable, LongWritable, Text> {

    @Override
    public void reduce(LongWritable key, Iterable<DnaWritable> values, Context context) throws IOException, InterruptedException {
        ArrayList<DnaWritable> list = new ArrayList<DnaWritable>();
        double minDist = Double.MAX_VALUE;
        for (DnaWritable value : values) {
            long bamWindow = value.getBamWindow().get();
            long read = value.getRead().get();
            long refWindow = value.getRefWindow().get();
            int chr = value.getChr().get();
            double dist = value.getDist().get();
            if (dist > minDist)
                continue;
            else if (dist < minDist)
                list.clear();
            list.add(new DnaWritable(bamWindow, read, refWindow, chr, dist));
            minDist = Math.min(minDist, value.getDist().get());
        }
        for (int i = 0; i < list.size(); i++) {
            context.write(new LongWritable(list.get(i).getRead().get()),
                    new Text(new DnaWritable(list.get(i).getBamWindow(), list.get(i).getRead(),
                            list.get(i).getRefWindow(), list.get(i).getChr(), list.get(i).getDist()).toString()));
        }
    }
}
Here is the DnaWritable class (I have left out the import section to keep it a little shorter):
public class DnaWritable implements Writable {

    LongWritable bamWindow;
    LongWritable read;
    LongWritable refWindow;
    IntWritable chr;
    DoubleWritable dist;

    public DnaWritable(LongWritable bamWindow, LongWritable read, LongWritable refWindow, IntWritable chr, DoubleWritable dist) {
        this.bamWindow = bamWindow;
        this.read = read;
        this.refWindow = refWindow;
        this.chr = chr;
        this.dist = dist;
    }

    public DnaWritable(long bamWindow, long read, long refWindow, int chr, double dist) {
        this.bamWindow = new LongWritable(bamWindow);
        this.read = new LongWritable(read);
        this.refWindow = new LongWritable(refWindow);
        this.chr = new IntWritable(chr);
        this.dist = new DoubleWritable(dist);
    }

    @Override
    public void write(DataOutput dataOutput) throws IOException {
        bamWindow.write(dataOutput);
        read.write(dataOutput);
        refWindow.write(dataOutput);
        chr.write(dataOutput);
        dist.write(dataOutput);
    }

    @Override
    public void readFields(DataInput dataInput) throws IOException {
        bamWindow.readFields(dataInput);
        read.readFields(dataInput);
        refWindow.readFields(dataInput);
        chr.readFields(dataInput);
        dist.readFields(dataInput);
    }
}
Any help would be greatly appreciated. Thank you.
4 Answers
nwlqm0z11#
I don't think you have implemented the write(DataOutput out) and readFields(DataInput in) methods in your DnaWritable class correctly.
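The code sample that originally accompanied this answer is not shown here. A minimal sketch of the kind of fix it is pointing at, assuming DnaWritable keeps its wrapped-Writable fields: Hadoop deserializes values into a reflectively created, empty instance, so readFields() must not rely on fields that were only set by a parameterized constructor.

// Inside DnaWritable -- hypothetical hardened readFields():
@Override
public void readFields(DataInput dataInput) throws IOException {
    // Create the wrapped Writables before reading into them; on a
    // reflectively created instance they would otherwise still be null.
    bamWindow = new LongWritable();
    read = new LongWritable();
    refWindow = new LongWritable();
    chr = new IntWritable();
    dist = new DoubleWritable();
    bamWindow.readFields(dataInput);
    read.readFields(dataInput);
    refWindow.readFields(dataInput);
    chr.readFields(dataInput);
    dist.readFields(dataInput);
}

A no-args constructor is still needed so Hadoop can create that instance in the first place (see the next answer).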
ijnw1ujt2#
Also consider implementing WritableComparable as shown below, and add a no-args constructor.
If it still fails, I would modify log4j.properties (under src/main/resources) so that you can see any underlying errors you are not currently seeing.
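The class this answer refers to ("as shown below") is not included in this capture; the following is a minimal sketch under the assumption that it keeps the question's field names (the interface is called WritableComparable; there is no ComparableWritable in Hadoop). The question's existing parameterized constructors and getters would stay as they are and are omitted for brevity.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.WritableComparable;

public class DnaWritable implements WritableComparable<DnaWritable> {

    LongWritable bamWindow;
    LongWritable read;
    LongWritable refWindow;
    IntWritable chr;
    DoubleWritable dist;

    // No-args constructor: Hadoop instantiates value objects reflectively
    // during the shuffle, so this must exist and should leave no field null.
    public DnaWritable() {
        this.bamWindow = new LongWritable();
        this.read = new LongWritable();
        this.refWindow = new LongWritable();
        this.chr = new IntWritable();
        this.dist = new DoubleWritable();
    }

    @Override
    public void write(DataOutput dataOutput) throws IOException {
        bamWindow.write(dataOutput);
        read.write(dataOutput);
        refWindow.write(dataOutput);
        chr.write(dataOutput);
        dist.write(dataOutput);
    }

    @Override
    public void readFields(DataInput dataInput) throws IOException {
        bamWindow.readFields(dataInput);
        read.readFields(dataInput);
        refWindow.readFields(dataInput);
        chr.readFields(dataInput);
        dist.readFields(dataInput);
    }

    // Illustrative ordering only; any consistent total order over the fields will do.
    @Override
    public int compareTo(DnaWritable other) {
        return this.bamWindow.compareTo(other.bamWindow);
    }
}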
x6yk4ghg3#
I don't think you are submitting the job to the cluster at all; there is no job1.submit() or job1.waitForCompletion(true) in your main class.
Your main method also needs to be fixed.
Below is the correct way to create the Job object.
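The code block that accompanied this answer is missing from this capture. A sketch of the driver it is describing, keeping the question's configuration and adding the one call that actually submits the job:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class mp {
    public static void main(String[] args) throws Exception {
        Job job1 = new Job();
        job1.setJarByClass(mp.class);
        job1.setMapperClass(TransMapper1.class);
        job1.setReducerClass(TransReducer1.class);
        job1.setMapOutputKeyClass(LongWritable.class);
        job1.setMapOutputValueClass(DnaWritable.class);
        job1.setOutputKeyClass(LongWritable.class);
        job1.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job1, new Path(args[0]));
        FileOutputFormat.setOutputPath(job1, new Path(args[0] + "/output"));
        // Without waitForCompletion() (or submit()), main() only configures
        // the Job object and returns; nothing is submitted to the cluster.
        System.exit(job1.waitForCompletion(true) ? 0 : 1);
    }
}

On newer Hadoop versions, Job.getInstance(new Configuration()) is preferred over the deprecated new Job(), but either works for this sketch.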
c9qzyr3d4#
Could you change your DnaWritable class to the following and test it?
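The revised class this answer refers to was not captured. One plausible reading, in line with the other answers, is a DnaWritable that stores plain primitives (so there are no nested Writable objects to be left null during deserialization) plus a no-args constructor; a sketch under that assumption:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class DnaWritable implements Writable {

    private long bamWindow;
    private long read;
    private long refWindow;
    private int chr;
    private double dist;

    // Required so Hadoop can instantiate the class reflectively.
    public DnaWritable() {
    }

    public DnaWritable(long bamWindow, long read, long refWindow, int chr, double dist) {
        this.bamWindow = bamWindow;
        this.read = read;
        this.refWindow = refWindow;
        this.chr = chr;
        this.dist = dist;
    }

    public long getBamWindow() { return bamWindow; }
    public long getRead() { return read; }
    public long getRefWindow() { return refWindow; }
    public int getChr() { return chr; }
    public double getDist() { return dist; }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(bamWindow);
        out.writeLong(read);
        out.writeLong(refWindow);
        out.writeInt(chr);
        out.writeDouble(dist);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        bamWindow = in.readLong();
        read = in.readLong();
        refWindow = in.readLong();
        chr = in.readInt();
        dist = in.readDouble();
    }

    @Override
    public String toString() {
        return bamWindow + "\t" + read + "\t" + refWindow + "\t" + chr + "\t" + dist;
    }
}

If the class is changed this way, the mapper and reducer have to be adjusted as well: the getters now return primitives (so the trailing .get() calls go away), and the mapper would pass plain long/int/double values to the constructor instead of Writable wrappers.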