Java — extending a class that itself extends Hadoop's Mapper

vcirk6k6 posted 2021-05-29 in Hadoop

Below is an example map class for Hadoop [1]; it extends Hadoop's Mapper class, which is shown in [3].
I want to create MyExampleMapper, which extends ExampleMapper, which in turn extends Hadoop's Mapper [2]. The idea is to set common properties once in ExampleMapper, so that when I create MyExampleMapper (or any similar mapper) I don't have to set those properties again, because they are inherited from ExampleMapper. Is this possible?
[1] ExampleMapper

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ExampleMapper
     extends Mapper<Object, Text, Text, IntWritable> {

   private final static IntWritable one = new IntWritable(1);
   private Text word = new Text();

   @Override
   public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
     // Emit (token, 1) for every whitespace-separated token in the input line.
     StringTokenizer itr = new StringTokenizer(value.toString());
     while (itr.hasMoreTokens()) {
       word.set(itr.nextToken());
       context.write(word, one);
     }
   }
 }

[2] What I want

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class MyExampleMapper
     extends ExampleMapper<Object, Text, Text, IntWritable> {

   private final static IntWritable one = new IntWritable(1);
   private Text word = new Text();

   @Override
   public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
     StringTokenizer itr = new StringTokenizer(value.toString());

     String result = System.getProperty("job.examplemapper");

     // "true".equals(result) is null-safe if the property was never set.
     if ("true".equals(result)) {
       while (itr.hasMoreTokens()) {
         word.set(itr.nextToken());
         context.write(word, one);
       }
     }
   }
 }

public class ExampleMapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
     extends Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {

   // Intended behavior: set the property once here, so every subclass inherits it.
   static {
     System.setProperty("job.examplemapper", "true");
   }
 }

[3] Hadoop's Mapper class

public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {
    public Mapper() {
    }

    protected void setup(Mapper.Context context) throws IOException, InterruptedException {
    }

    protected void map(KEYIN key, VALUEIN value, Mapper.Context context) throws IOException, InterruptedException {
        context.write(key, value);
    }

    protected void cleanup(Mapper.Context context) throws IOException, InterruptedException {
    }

    // Template method: setup() once, map() for every input record, cleanup() at the end.
    public void run(Mapper.Context context) throws IOException, InterruptedException {
        this.setup(context);

        try {
            while(context.nextKeyValue()) {
                this.map(context.getCurrentKey(), context.getCurrentValue(), context);
            }
        } finally {
            this.cleanup(context);
        }

    }

    public class Context extends MapContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {
        public Context(Configuration conf, TaskAttemptID taskid, RecordReader<KEYIN, VALUEIN> reader, RecordWriter<KEYOUT, VALUEOUT> writer, OutputCommitter committer, StatusReporter reporter, InputSplit split) throws IOException, InterruptedException {
            super(conf, taskid, reader, writer, committer, reporter, split);
        }
    }
}
oxcyiej7 (answer 1):

import org.apache.hadoop.mapreduce.Mapper;

public class ExampleMapper<T, X, Y, Z> extends Mapper<T, X, Y, Z> {
    static {
        // Runs once, the first time ExampleMapper (or any subclass) is initialized.
        System.setProperty("job.examplemapper", "true");
    }
}
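
Why this works: a superclass static initializer runs the first time the class hierarchy is initialized in a JVM, so the property is already set before any subclass instance starts mapping. A minimal standalone sketch (plain Java, hypothetical class names, no Hadoop needed):

public class StaticInitDemo {

    static class Base {
        static {
            // Runs once, when Base is initialized (triggered below by using Derived).
            System.setProperty("job.examplemapper", "true");
        }
    }

    static class Derived extends Base {
    }

    public static void main(String[] args) {
        new Derived(); // initializing Derived initializes Base first, firing its static block
        System.out.println(System.getProperty("job.examplemapper")); // prints: true
    }
}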

Then extend it in your program:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class MyExampleMapper
     extends ExampleMapper<Object, Text, Text, IntWritable> {

   private final static IntWritable one = new IntWritable(1);
   private Text word = new Text();

   @Override
   public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
     StringTokenizer itr = new StringTokenizer(value.toString());

     String result = System.getProperty("job.examplemapper");

     // "true".equals(result) is null-safe if the property was never set.
     if ("true".equals(result)) {
       while (itr.hasMoreTokens()) {
         word.set(itr.nextToken());
         context.write(word, one);
       }
     }
   }
 }
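
For completeness, a sketch of how such a mapper could be wired into a job driver, assuming the standard org.apache.hadoop.mapreduce API; the driver class name and paths are illustrative, not from the original post. The static block in ExampleMapper fires in each task JVM when the mapper class is instantiated there, so the property check in map() sees the value:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyExampleDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "my example job");
        job.setJarByClass(MyExampleDriver.class);
        // MyExampleMapper extends ExampleMapper, so the inherited static
        // initializer runs when the task JVM loads and instantiates the mapper.
        job.setMapperClass(MyExampleMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}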
