Chaining of MapReduce jobs

mwkjh3gx  posted on 2021-06-02 in Hadoop

I came across the term "chaining of MapReduce jobs". As a newcomer to MapReduce, in what situations do we have to chain jobs (I assume chaining means running MapReduce jobs one after another in sequence)?
Are there any examples that would help?

sshcrbum (#1)

The classic example of jobs that have to be chained is a word count that outputs the words sorted by their frequency.
You need:
Job 1:
input source mapper (emits the word as the key and one as the value)
aggregating reducer (sums up the counts per word)
Job 2:
key/value swapping mapper (emits the frequency as the key and the word as the value)
implicit identity reducer (receives the words sorted by frequency; you do not have to implement it)
Here is an example of the above mappers/reducers:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class HadoopWordCount {

  // Job 1 mapper: tokenizes each input line and emits (word, 1) pairs.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, LongWritable> {

    private final Text word = new Text();
    private final static LongWritable one = new LongWritable(1);

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Job 2 mapper: swaps key and value so that the shuffle phase sorts by frequency.
  public static class KeyValueSwappingMapper extends Mapper<Text, LongWritable, LongWritable, Text> {

    @Override
    public void map(Text key, LongWritable value, Context context) throws IOException, InterruptedException {
      context.write(value, key);
    }
  }

  // Job 1 reducer (also used as a combiner): sums up all counts for a word.
  public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    private final LongWritable result = new LongWritable();

    @Override
    public void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException,
        InterruptedException {
      long sum = 0;
      for (LongWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }
  // The main() driver shown further below completes this class.
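
To make job 2's "implicit identity reducer" concrete: the driver below never calls setReducerClass on job 2, so Hadoop falls back to the base Reducer class, whose default behavior is equivalent to the pass-through sketched here (illustration only; you do not need to write or register it):

  // Equivalent of Hadoop's default (identity) Reducer for job 2 - illustration only.
  public static class IdentityReducerSketch extends Reducer<LongWritable, Text, LongWritable, Text> {
    @Override
    public void reduce(LongWritable key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      for (Text value : values) {
        context.write(key, value); // emit each (frequency, word) pair unchanged
      }
    }
  }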

Here is an example of the driver.
It takes two arguments:
an input text file whose words are to be counted.
an output directory (which must not exist beforehand) - look for the output in {output dir}/out2/part-r-00000.

  public static void main(String[] args) throws Exception {

    Configuration conf = new Configuration();
    Path out = new Path(args[1]);

    // Job 1: classic word count; writes (word, count) pairs as a sequence file
    // so that job 2 can read them back with the matching key/value types.
    Job job1 = Job.getInstance(conf, "word count");
    job1.setJarByClass(HadoopWordCount.class);
    job1.setMapperClass(TokenizerMapper.class);
    job1.setCombinerClass(SumReducer.class);
    job1.setReducerClass(SumReducer.class);
    job1.setOutputKeyClass(Text.class);
    job1.setOutputValueClass(LongWritable.class);
    job1.setOutputFormatClass(SequenceFileOutputFormat.class);
    FileInputFormat.addInputPath(job1, new Path(args[0]));
    FileOutputFormat.setOutputPath(job1, new Path(out, "out1"));
    // Chaining: job 2 must not start before job 1 has finished successfully.
    if (!job1.waitForCompletion(true)) {
      System.exit(1);
    }

    // Job 2: swaps (word, count) to (count, word) and lets the shuffle sort
    // by count; a single reduce task yields one globally sorted output file.
    Job job2 = Job.getInstance(conf, "sort by frequency");
    job2.setJarByClass(HadoopWordCount.class);
    job2.setMapperClass(KeyValueSwappingMapper.class);
    job2.setNumReduceTasks(1);
    // Sort keys in descending order so the most frequent words come first.
    job2.setSortComparatorClass(LongWritable.DecreasingComparator.class);
    job2.setOutputKeyClass(LongWritable.class);
    job2.setOutputValueClass(Text.class);
    job2.setInputFormatClass(SequenceFileInputFormat.class);
    FileInputFormat.addInputPath(job2, new Path(out, "out1"));
    FileOutputFormat.setOutputPath(job2, new Path(out, "out2"));
    if (!job2.waitForCompletion(true)) {
      System.exit(1);
    }
  }
}
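
Assuming the class is packaged into a jar (the jar name and HDFS paths below are placeholders, not from the original answer), a run could look like this; the frequency-sorted result then sits under the out2 subdirectory:

hadoop jar wordcount.jar HadoopWordCount /user/me/books.txt /user/me/wc-out
hdfs dfs -cat /user/me/wc-out/out2/part-r-00000 | head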
cvxl0en2 (#2)

Simply put, you have to chain multiple MapReduce jobs when your problem cannot fit into a single MapReduce job.
A good example is finding the top 10 purchased items, which can be achieved with two jobs:
a MapReduce job to find how many times each item was purchased;
a second job to sort the items by purchase count and take the top 10.
To complete the picture: chaining jobs produces intermediate files that are written to and read from disk, which degrades performance, so try to avoid chaining jobs where you can.
Here is how jobs can be chained (see the sketch below).
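
Besides calling waitForCompletion sequentially, as the driver in the other answer does, Hadoop ships a JobControl API that tracks dependencies between jobs. The following is a minimal sketch under the assumption that job1 and job2 are fully configured Job instances (for example, the two jobs from the other answer); the class and method in this sketch are illustrative, not from the original post:

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;

public class ChainWithJobControl {

  // job1 and job2 are assumed to be fully configured Job instances,
  // e.g. the "word count" and "sort by frequency" jobs from the other answer.
  public static void runChain(Job job1, Job job2) throws Exception {
    ControlledJob first = new ControlledJob(job1.getConfiguration());
    first.setJob(job1);

    ControlledJob second = new ControlledJob(job2.getConfiguration());
    second.setJob(job2);
    // Declare the dependency: job2 may only start once job1 has succeeded.
    second.addDependingJob(first);

    JobControl control = new JobControl("word-count-chain");
    control.addJob(first);
    control.addJob(second);

    // JobControl is a Runnable; run it in its own thread and poll until done.
    Thread t = new Thread(control);
    t.start();
    while (!control.allFinished()) {
      Thread.sleep(1000);
    }
    control.stop();
  }
}

JobControl pays off mainly when the job graph is more complex than a strict two-step sequence; for two jobs run back to back, the plain waitForCompletion pattern is simpler.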
