This article collects a number of Java code examples for the org.apache.hadoop.mapred.OutputFormat.checkOutputSpecs() method and shows how OutputFormat.checkOutputSpecs() is used in practice. The examples are extracted from selected open-source projects hosted on GitHub, Stack Overflow, Maven, and similar platforms, so they should serve as useful references. Details of the OutputFormat.checkOutputSpecs() method are as follows:
Fully qualified class: org.apache.hadoop.mapred.OutputFormat
Class name: OutputFormat
Method name: checkOutputSpecs
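Method signature (old mapred API): the OutputFormat<K, V> interface declares, in essence,

public interface OutputFormat<K, V> {
  RecordWriter<K, V> getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) throws IOException;
  void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException;
}

The FileSystem parameter is conventionally unused (hence its name, ignored), which is why several of the examples below simply pass null for it.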
Description: Check for validity of the output-specification for the job. This validates the job's output specification when the job is submitted. Typically the check verifies that the output does not already exist, throwing an exception if it does, so that existing output is not overwritten.
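To make that contract concrete, the following is a minimal sketch of a custom old-API OutputFormat whose checkOutputSpecs() refuses the job when its output location already exists. The class name, the example.output.dir property, and the NullWritable/Text value types are illustrative assumptions and do not come from any of the projects quoted below.

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileAlreadyExistsException;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.OutputFormat;
import org.apache.hadoop.mapred.RecordWriter;
import org.apache.hadoop.util.Progressable;

public class ExistenceCheckingOutputFormat implements OutputFormat<NullWritable, Text> {

  @Override
  public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
    // "example.output.dir" is a made-up property name used only for this sketch.
    String dir = job.get("example.output.dir");
    if (dir == null) {
      throw new IOException("Output directory not set in the job configuration");
    }
    Path out = new Path(dir);
    FileSystem fs = out.getFileSystem(job);
    if (fs.exists(out)) {
      // Fail at submission time so that existing output is never overwritten.
      throw new FileAlreadyExistsException("Output directory " + out + " already exists");
    }
  }

  @Override
  public RecordWriter<NullWritable, Text> getRecordWriter(FileSystem ignored, JobConf job,
      String name, Progressable progress) throws IOException {
    // Record writing is out of scope for this sketch.
    throw new UnsupportedOperationException("getRecordWriter is not implemented in this sketch");
  }
}

Hadoop's own FileOutputFormat follows the same pattern: its checkOutputSpecs() throws FileAlreadyExistsException when the configured output directory already exists. The examples below show how real projects either delegate the call to a wrapped OutputFormat or adapt it to the new mapreduce API.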
Code example source: apache/hive
@Override
public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
  actualOutputFormat.checkOutputSpecs(ignored, job);
}
Code example source: apache/drill
@Override
public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
  actualOutputFormat.checkOutputSpecs(ignored, job);
}
Code example source: elastic/elasticsearch-hadoop
@Override
public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
  List<org.apache.hadoop.mapred.OutputFormat> formats = getOldApiFormats(job);
  for (org.apache.hadoop.mapred.OutputFormat format : formats) {
    format.checkOutputSpecs(ignored, job);
  }
}
Code example source: apache/hive
/**
 * Check for validity of the output-specification for the job.
 * @param context information about the job
 * @throws IOException when output should not be attempted
 */
@Override
public void checkOutputSpecs(JobContext context) throws IOException, InterruptedException {
  org.apache.hadoop.mapred.OutputFormat<? super WritableComparable<?>, ? super Writable> outputFormat = getBaseOutputFormat();
  JobConf jobConf = new JobConf(context.getConfiguration());
  outputFormat.checkOutputSpecs(null, jobConf);
  HCatUtil.copyConf(jobConf, context.getConfiguration());
}
Code example source: Qihoo360/XLearning
// Instantiate the configured OutputFormat via reflection, then validate the output spec.
OutputFormat outputFormat = ReflectionUtils.newInstance(
    conf.getClass(XLearningConfiguration.XLEARNING_OUTPUTFORMAT_CLASS,
        XLearningConfiguration.DEFAULT_XLEARNING_OUTPUTF0RMAT_CLASS, OutputFormat.class),
    jobConf);
outputFormat.checkOutputSpecs(dfs, jobConf);
JobID jobID = new JobID(new SimpleDateFormat("yyyyMMddHHmm").format(new Date()), 0);
TaskAttemptID taId = new TaskAttemptID(new TaskID(jobID, true, 0), 0);
Code example source: apache/hive
@Override
public void checkOutputSpecs(JobContext context) throws IOException, InterruptedException {
  OutputJobInfo jobInfo = HCatOutputFormat.getJobInfo(context.getConfiguration());
  IMetaStoreClient client = null;
  try {
    HiveConf hiveConf = HCatUtil.getHiveConf(context.getConfiguration());
    client = HCatUtil.getHiveMetastoreClient(hiveConf);
    handleDuplicatePublish(context,
        jobInfo,
        client,
        new Table(jobInfo.getTableInfo().getTable()));
  } catch (MetaException e) {
    throw new IOException(e);
  } catch (TException e) {
    throw new IOException(e);
  } finally {
    HCatUtil.closeHiveClientQuietly(client);
  }
  if (!jobInfo.isDynamicPartitioningUsed()) {
    JobConf jobConf = new JobConf(context.getConfiguration());
    getBaseOutputFormat().checkOutputSpecs(null, jobConf);
    // checkOutputSpecs might have set some properties; copy them back so the context reflects that
    HCatUtil.copyConf(jobConf, context.getConfiguration());
  }
}
Code example source: apache/ignite
/** {@inheritDoc} */
@Override public void run(HadoopTaskContext taskCtx) throws IgniteCheckedException {
  HadoopV2TaskContext ctx = (HadoopV2TaskContext)taskCtx;

  try {
    ctx.jobConf().getOutputFormat().checkOutputSpecs(null, ctx.jobConf());

    OutputCommitter committer = ctx.jobConf().getOutputCommitter();

    if (committer != null)
      committer.setupJob(ctx.jobContext());
  }
  catch (IOException e) {
    throw new IgniteCheckedException(e);
  }
}
}
Code example source: com.facebook.presto.hive/hive-apache
@Override
public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
  actualOutputFormat.checkOutputSpecs(ignored, job);
}
Code example source: org.elasticsearch/elasticsearch-spark
@Override
public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
  List<org.apache.hadoop.mapred.OutputFormat> formats = getOldApiFormats(job);
  for (org.apache.hadoop.mapred.OutputFormat format : formats) {
    format.checkOutputSpecs(ignored, job);
  }
}
Code example source: org.elasticsearch/elasticsearch-spark-13
@Override
public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
  List<org.apache.hadoop.mapred.OutputFormat> formats = getOldApiFormats(job);
  for (org.apache.hadoop.mapred.OutputFormat format : formats) {
    format.checkOutputSpecs(ignored, job);
  }
}
Code example source: org.apache.hive.hcatalog/hive-hcatalog-hbase-storage-handler
@Override
public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
  OutputFormat<WritableComparable<?>, Object> outputFormat = getOutputFormat(job);
  outputFormat.checkOutputSpecs(ignored, job);
}
Code example source: org.elasticsearch/elasticsearch-hadoop-mr
@Override
public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
  List<org.apache.hadoop.mapred.OutputFormat> formats = getOldApiFormats(job);
  for (org.apache.hadoop.mapred.OutputFormat format : formats) {
    format.checkOutputSpecs(ignored, job);
  }
}
Code example source: org.elasticsearch/elasticsearch-hadoop
@Override
public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
  List<org.apache.hadoop.mapred.OutputFormat> formats = getOldApiFormats(job);
  for (org.apache.hadoop.mapred.OutputFormat format : formats) {
    format.checkOutputSpecs(ignored, job);
  }
}
Code example source: ch.cern.hadoop/hadoop-mapreduce-client-core
public void checkOutputSpecs(FileSystem ignored, JobConf job)
    throws IOException {
  getBaseOut().checkOutputSpecs(ignored, job);
}
Code example source: com.github.jiayuhan-it/hadoop-mapreduce-client-core
public void checkOutputSpecs(FileSystem ignored, JobConf job)
    throws IOException {
  getBaseOut().checkOutputSpecs(ignored, job);
}
Code example source: org.apache.hadoop/hadoop-mapred
public void checkOutputSpecs(FileSystem ignored, JobConf job)
    throws IOException {
  getBaseOut().checkOutputSpecs(ignored, job);
}
Code example source: org.spark-project.hive.hcatalog/hive-hcatalog-core
/**
 * Check for validity of the output-specification for the job.
 * @param context information about the job
 * @throws IOException when output should not be attempted
 */
@Override
public void checkOutputSpecs(JobContext context) throws IOException, InterruptedException {
  org.apache.hadoop.mapred.OutputFormat<? super WritableComparable<?>, ? super Writable> outputFormat = getBaseOutputFormat();
  JobConf jobConf = new JobConf(context.getConfiguration());
  outputFormat.checkOutputSpecs(null, jobConf);
  HCatUtil.copyConf(jobConf, context.getConfiguration());
}
Code example source: org.apache.hive.hcatalog/hive-hcatalog-core
/**
 * Check for validity of the output-specification for the job.
 * @param context information about the job
 * @throws IOException when output should not be attempted
 */
@Override
public void checkOutputSpecs(JobContext context) throws IOException, InterruptedException {
  org.apache.hadoop.mapred.OutputFormat<? super WritableComparable<?>, ? super Writable> outputFormat = getBaseOutputFormat();
  JobConf jobConf = new JobConf(context.getConfiguration());
  outputFormat.checkOutputSpecs(null, jobConf);
  HCatUtil.copyConf(jobConf, context.getConfiguration());
}
Code example source: com.facebook.presto.hive/hive-apache
/**
 * Check for validity of the output-specification for the job.
 * @param context information about the job
 * @throws IOException when output should not be attempted
 */
@Override
public void checkOutputSpecs(JobContext context) throws IOException, InterruptedException {
  org.apache.hadoop.mapred.OutputFormat<? super WritableComparable<?>, ? super Writable> outputFormat = getBaseOutputFormat();
  JobConf jobConf = new JobConf(context.getConfiguration());
  outputFormat.checkOutputSpecs(null, jobConf);
  HCatUtil.copyConf(jobConf, context.getConfiguration());
}
Code example source: com.github.hyukjinkwon.hcatalog/hive-hcatalog-core
/**
 * Check for validity of the output-specification for the job.
 * @param context information about the job
 * @throws IOException when output should not be attempted
 */
@Override
public void checkOutputSpecs(JobContext context) throws IOException, InterruptedException {
  org.apache.hadoop.mapred.OutputFormat<? super WritableComparable<?>, ? super Writable> outputFormat = getBaseOutputFormat();
  JobConf jobConf = new JobConf(context.getConfiguration());
  outputFormat.checkOutputSpecs(null, jobConf);
  HCatUtil.copyConf(jobConf, context.getConfiguration());
}