Usage of the org.apache.hadoop.hive.ql.exec.Utilities.copyTablePropertiesToConf() method, with code examples

x33g5p2x · reposted 2022-02-01

This article collects code examples of the Java method org.apache.hadoop.hive.ql.exec.Utilities.copyTablePropertiesToConf() and shows how it is used in practice. The examples are extracted from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and are intended as practical references. Details of the method:

Package: org.apache.hadoop.hive.ql.exec
Class: Utilities
Method: copyTablePropertiesToConf

About Utilities.copyTablePropertiesToConf

From the Javadoc: Copies the storage handler properties configured for a table descriptor to a runtime job configuration. This differs from #copyTablePropertiesToConf(org.apache.hadoop.hive.ql.plan.TableDesc, org.apache.hadoop.mapred.JobConf) in that it does not allow parameters already set in the job to override the values from the table. This is important for setting the config up for reading, as the job may already have values in it from another table.
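To make the "table values win" semantics concrete, here is a minimal, hypothetical sketch using plain java.util.Properties in place of the real TableDesc and JobConf types. The method name is reused purely for illustration; this is not Hive code:

```java
import java.util.Properties;

// Hypothetical illustration of the semantics described above: every
// property from the table descriptor is written into the job conf,
// overwriting any stale value the job may already hold.
public class CopySemanticsDemo {

    // Table values always win over values already set in the conf.
    static void copyTablePropertiesToConf(Properties table, Properties conf) {
        for (String name : table.stringPropertyNames()) {
            conf.setProperty(name, table.getProperty(name));
        }
    }

    public static void main(String[] args) {
        Properties table = new Properties();
        table.setProperty("hbase.table.name", "orders");

        Properties conf = new Properties();
        // Left over from reading another table earlier in the same job.
        conf.setProperty("hbase.table.name", "stale_value");

        copyTablePropertiesToConf(table, conf);
        System.out.println(conf.getProperty("hbase.table.name")); // prints "orders"
    }
}
```

This is exactly why the Javadoc stresses the read path: when one job scans several tables in sequence, the conf must reflect the table currently being read, not whichever table touched it last.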

Code examples

Example source: apache/drill

private void addSplitsForGroup(List<Path> dirs, TableScanOperator tableScan, JobConf conf,
  InputFormat inputFormat, Class<? extends InputFormat> inputFormatClass, int splits,
  TableDesc table, List<InputSplit> result) throws IOException {
 Utilities.copyTablePropertiesToConf(table, conf);
 if (tableScan != null) {
  pushFilters(conf, tableScan);
 }
 FileInputFormat.setInputPaths(conf, dirs.toArray(new Path[dirs.size()]));
 conf.setInputFormat(inputFormat.getClass());
 int headerCount = 0;
 int footerCount = 0;
 if (table != null) {
  headerCount = Utilities.getHeaderCount(table);
  footerCount = Utilities.getFooterCount(table, conf);
  if (headerCount != 0 || footerCount != 0) {
   // Input file has a header or footer, so it cannot be split.
   HiveConf.setLongVar(conf, ConfVars.MAPREDMINSPLITSIZE, Long.MAX_VALUE);
  }
 }
 InputSplit[] iss = inputFormat.getSplits(conf, splits);
 for (InputSplit is : iss) {
  result.add(new HiveInputSplit(is, inputFormatClass.getName()));
 }
}
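The header/footer guard in the example above can be sketched in isolation: a file with a header or footer must be read as a single split, so the effective minimum split size is pushed to Long.MAX_VALUE. The helper below is a hypothetical standalone rendering of that decision, not Hive code:

```java
// Hypothetical illustration of the guard in addSplitsForGroup above:
// when the table declares header or footer lines, force the minimum
// split size to Long.MAX_VALUE so each input file becomes one split.
public class SplitSizeGuard {

    static long minSplitSize(int headerCount, int footerCount, long configured) {
        if (headerCount != 0 || footerCount != 0) {
            return Long.MAX_VALUE; // effectively one split per file
        }
        return configured; // normal case: keep the configured minimum
    }

    public static void main(String[] args) {
        System.out.println(minSplitSize(1, 0, 128L)); // prints Long.MAX_VALUE
        System.out.println(minSplitSize(0, 0, 128L)); // prints 128
    }
}
```

The skip.header.line.count and skip.footer.line.count table properties only make sense relative to a whole file, which is why splitting must be disabled whenever either count is non-zero.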

Example source: apache/hive (snippet truncated in the original)

Utilities.copyTablePropertiesToConf(table, conf);
if (tableScan != null) {
 AcidUtils.setAcidOperationalProperties(conf, tableScan.getConf().isTranscationalTable(),

Example source: com.facebook.presto.hive/hive-apache

private void addSplitsForGroup(List<Path> dirs, TableScanOperator tableScan, JobConf conf,
  InputFormat inputFormat, Class<? extends InputFormat> inputFormatClass, int splits,
  TableDesc table, List<InputSplit> result) throws IOException {
 Utilities.copyTablePropertiesToConf(table, conf);
 if (tableScan != null) {
  pushFilters(conf, tableScan);
 }
 FileInputFormat.setInputPaths(conf, dirs.toArray(new Path[dirs.size()]));
 conf.setInputFormat(inputFormat.getClass());
 int headerCount = 0;
 int footerCount = 0;
 if (table != null) {
  headerCount = Utilities.getHeaderCount(table);
  footerCount = Utilities.getFooterCount(table, conf);
  if (headerCount != 0 || footerCount != 0) {
   // Input file has a header or footer, so it cannot be split.
   conf.setLong(
     ShimLoader.getHadoopShims().getHadoopConfNames().get("MAPREDMINSPLITSIZE"),
     Long.MAX_VALUE);
  }
 }
 InputSplit[] iss = inputFormat.getSplits(conf, splits);
 for (InputSplit is : iss) {
  result.add(new HiveInputSplit(is, inputFormatClass.getName()));
 }
}
