Why is my mapper input the same as my reducer output?

k7fdbhmy · asked 2021-05-29 · in Hadoop

I have run into a puzzling situation: my mapper's input comes out identical to my reducer's output (the reducer code seems to have no effect). This is my first MapReduce job, as I am a newcomer. Thanks in advance.
Problem statement: find the maximum temperature for each year.
Here is my dataset (the year and temp columns are tab-separated):

2001    32
2001    50
2001    18
2001    21
2002    30
2002    34
2002    12
2003    09
2003    12

Mapper code:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapperCode extends Mapper<LongWritable, Text, Text, IntWritable> {
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Each input line is "year<TAB>temperature"
        String line = value.toString();
        String[] keyvalpair = line.split("\t");
        context.write(new Text(keyvalpair[0].trim()),
                new IntWritable(Integer.parseInt(keyvalpair[1].trim())));
    }
}

Reducer code:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ReducerCode extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reducer(Text key, Iterable<IntWritable> value, Context context) throws IOException, InterruptedException {
        int max = 0;
        for (IntWritable values : value) {
            max = Math.max(max, values.get());
        }
        context.write(key, new IntWritable(max));
    }
}

Driver code:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class MaxTemp extends Configuration {
    public static void main(String[] args) throws IOException, InterruptedException, Exception {
        Job job = new Job();
        job.setJobName("MaxTemp");
        job.setJarByClass(MaxTemp.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(MapperCode.class);
        job.setReducerClass(ReducerCode.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.waitForCompletion(true);
    }
}

Please let me know where I went wrong, and why my output is identical to the input dataset.

cuxqih21 1#

Your Reducer implementation must override the reduce() method. The method you implemented is named reducer(), so it is never called; Hadoop falls back to the base Reducer's default reduce(), which is an identity function that writes every key/value pair through unchanged. That is why your output matches your input. Change it to:

public class ReducerCode extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override  // with this annotation, a misspelled method name fails to compile
    public void reduce(Text key, Iterable<IntWritable> value, Context context) throws IOException, InterruptedException {