Usage and code examples for the org.apache.hadoop.mapred.Mapper.configure() method

x33g5p2x, reposted on 2022-01-25 in the Other category

This article collects Java code examples for the org.apache.hadoop.mapred.Mapper.configure() method, showing how Mapper.configure() is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as a useful reference. Details of Mapper.configure() are as follows:
Package: org.apache.hadoop.mapred
Class: Mapper
Method: configure

About Mapper.configure

The old-API Mapper interface extends JobConfigurable, from which it inherits configure(JobConf job). The framework calls this method once, before any call to map(), so a mapper can read job settings from the JobConf and initialize per-task state up front.
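All of the examples below call mapper.configure(jobConf) exactly once before any records are processed. The lifecycle can be sketched in a small, self-contained way using simplified stand-ins for Hadoop's classes (SimpleJobConf and UppercaseMapper below are hypothetical illustrations, not the real org.apache.hadoop.mapred types):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for Hadoop's JobConf: a string key/value store
// with a default-value lookup, mirroring JobConf.get(key, defaultValue).
class SimpleJobConf {
    private final Map<String, String> props = new HashMap<>();
    public void set(String key, String value) { props.put(key, value); }
    public String get(String key, String defaultValue) {
        return props.getOrDefault(key, defaultValue);
    }
}

// Mirrors the old-API contract: the framework calls configure(conf)
// once, before any call to map(), so the mapper caches settings here
// instead of re-reading them for every record.
class UppercaseMapper {
    private boolean uppercase;

    public void configure(SimpleJobConf conf) {
        this.uppercase = Boolean.parseBoolean(conf.get("demo.uppercase", "false"));
    }

    public String map(String value) {
        return uppercase ? value.toUpperCase() : value;
    }
}

public class ConfigureLifecycleDemo {
    // Drives the same two steps the real framework performs:
    // configure(...) once, then map(...) per input record.
    public static String run(String input, boolean upper) {
        SimpleJobConf conf = new SimpleJobConf();
        conf.set("demo.uppercase", Boolean.toString(upper));
        UppercaseMapper mapper = new UppercaseMapper();
        mapper.configure(conf);   // framework step 1: one-time setup
        return mapper.map(input); // framework step 2: per-record work
    }

    public static void main(String[] args) {
        System.out.println(run("hello", true)); // prints "HELLO"
    }
}
```

The pattern to note is that configure() is the only hook the old mapred API gives a Mapper for reading job configuration, which is why wrappers like Flink's Hadoop compatibility layer (below) forward their own setup call into mapper.configure(jobConf).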

Code examples

Code example from: apache/flink

@Override
public void open(Configuration parameters) throws Exception {
  super.open(parameters);
  this.mapper.configure(jobConf);
  this.reporter = new HadoopDummyReporter();
  this.outputCollector = new HadoopOutputCollector<KEYOUT, VALUEOUT>();
}

Code example from: org.apache.crunch/crunch-core

@Override
public void initialize() {
 if (instance == null) {
  this.instance = ReflectionUtils.newInstance(mapperClass, getConfiguration());
 }
 instance.configure(new JobConf(getConfiguration()));
 outputCollector = new OutputCollectorImpl<K2, V2>();
}

The same open() snippet as in the apache/flink example above also appears verbatim in org.apache.flink/flink-hadoop-compatibility, org.apache.flink/flink-hadoop-compatibility_2.11, and com.alibaba.blink/flink-hadoop-compatibility.

Code example from: apache/apex-malhar

mapObject.configure(jobConf);

Code example from: apache/chukwa

public void testSetDefaultMapProcessor() throws IOException {
 Mapper<ChukwaArchiveKey, ChunkImpl, ChukwaRecordKey, ChukwaRecord> mapper =
     new Demux.MapClass();
 JobConf conf = new JobConf();
 conf.set("chukwa.demux.mapper.default.processor",
      "org.apache.hadoop.chukwa.extraction.demux.processor.mapper.MockMapProcessor,");
 mapper.configure(conf);
 ChunkBuilder cb = new ChunkBuilder();
 cb.addRecord(SAMPLE_RECORD_DATA.getBytes());
 ChunkImpl chunk = (ChunkImpl)cb.getChunk();
 ChukwaTestOutputCollector<ChukwaRecordKey, ChukwaRecord> output =
     new ChukwaTestOutputCollector<ChukwaRecordKey, ChukwaRecord>();
 mapper.map(new ChukwaArchiveKey(), chunk, output, Reporter.NULL);
 ChukwaRecordKey recordKey = new ChukwaRecordKey("someReduceType", SAMPLE_RECORD_DATA);
 assertEquals("MockMapProcessor never invoked - no records found", 1, output.data.size());
 assertNotNull("MockMapProcessor never invoked", output.data.get(recordKey));
}

Code example from: apache/chukwa

public void testSetCustomeMapProcessor() throws IOException {
 Mapper<ChukwaArchiveKey, ChunkImpl, ChukwaRecordKey, ChukwaRecord> mapper =
     new Demux.MapClass();
 String custom_DataType = "cus_dt";
 JobConf conf = new JobConf();
 conf.set(custom_DataType,
     "org.apache.hadoop.chukwa.extraction.demux.processor.mapper.MockMapProcessor,");
 mapper.configure(conf);
 ChunkBuilder cb = new ChunkBuilder();
 cb.addRecord(SAMPLE_RECORD_DATA.getBytes());
 ChunkImpl chunk = (ChunkImpl)cb.getChunk();
 chunk.setDataType(custom_DataType);
 ChukwaTestOutputCollector<ChukwaRecordKey, ChukwaRecord> output =
     new ChukwaTestOutputCollector<ChukwaRecordKey, ChukwaRecord>();
 mapper.map(new ChukwaArchiveKey(), chunk, output, Reporter.NULL);
 ChukwaRecordKey recordKey = new ChukwaRecordKey("someReduceType", SAMPLE_RECORD_DATA);
 assertEquals("MockMapProcessor never invoked - no records found", 1, output.data.size());
 assertNotNull("MockMapProcessor never invoked", output.data.get(recordKey));
}
