Usage of the org.apache.hadoop.hive.ql.exec.Utilities.getPartitionDescFromTableDesc() method, with code examples

This article collects code examples of the Java method org.apache.hadoop.hive.ql.exec.Utilities.getPartitionDescFromTableDesc() and shows how Utilities.getPartitionDescFromTableDesc() is used in practice. The examples are drawn mainly from platforms such as GitHub, Stack Overflow, and Maven, extracted from selected projects, and should serve as a useful reference. Details of the Utilities.getPartitionDescFromTableDesc() method:
Package: org.apache.hadoop.hive.ql.exec
Class: Utilities
Method: getPartitionDescFromTableDesc

About Utilities.getPartitionDescFromTableDesc

No description is available.
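Since the original page gives no description, here is a minimal sketch of how the method appears to be used, inferred from the call sites quoted below: it derives a PartitionDesc for a single Partition from the table-level TableDesc. The helper class and method names in this sketch, and the interpretation of the boolean flag, are illustrative assumptions rather than documented Hive API behavior.

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hive.ql.exec.Utilities;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.metadata.Partition;
import org.apache.hadoop.hive.ql.metadata.Table;
import org.apache.hadoop.hive.ql.plan.PartitionDesc;
import org.apache.hadoop.hive.ql.plan.TableDesc;

// Hypothetical helper class; only the Utilities calls are taken from the examples below.
public class PartitionDescExample {

 // Builds one PartitionDesc per partition of a partitioned Hive table.
 static List<PartitionDesc> describePartitions(Table table, List<Partition> partitions)
   throws HiveException {
  // Table-level descriptor, obtained the same way as in the examples below.
  TableDesc tableDesc = Utilities.getTableDesc(table);
  List<PartitionDesc> descs = new ArrayList<PartitionDesc>();
  for (Partition partition : partitions) {
   // The examples below pass the third argument as either true or false;
   // its exact semantics are not documented on this page.
   descs.add(Utilities.getPartitionDescFromTableDesc(tableDesc, partition, true));
  }
  return descs;
 }
}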

Code examples

Code example source: origin: apache/hive

private FetchWork convertToWork() throws HiveException {
 inputs.clear();
 Utilities.addSchemaEvolutionToTableScanOperator(table, scanOp);
 TableDesc tableDesc = Utilities.getTableDesc(table);
 if (!table.isPartitioned()) {
  inputs.add(new ReadEntity(table, parent, !table.isView() && parent == null));
  FetchWork work = new FetchWork(table.getPath(), tableDesc);
  PlanUtils.configureInputJobPropertiesForStorageHandler(work.getTblDesc());
  work.setSplitSample(splitSample);
  return work;
 }
 List<Path> listP = new ArrayList<Path>();
 List<PartitionDesc> partP = new ArrayList<PartitionDesc>();
 for (Partition partition : partsList.getNotDeniedPartns()) {
  inputs.add(new ReadEntity(partition, parent, parent == null));
  listP.add(partition.getDataLocation());
  partP.add(Utilities.getPartitionDescFromTableDesc(tableDesc, partition, true));
 }
 Table sourceTable = partsList.getSourceTable();
 inputs.add(new ReadEntity(sourceTable, parent, parent == null));
 TableDesc table = Utilities.getTableDesc(sourceTable);
 FetchWork work = new FetchWork(listP, partP, table);
 if (!work.getPartDesc().isEmpty()) {
  PartitionDesc part0 = work.getPartDesc().get(0);
  PlanUtils.configureInputJobPropertiesForStorageHandler(part0.getTableDesc());
  work.setSplitSample(splitSample);
 }
 return work;
}
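In this snippet (and in the identical apache/drill copy that follows), getPartitionDescFromTableDesc() is only reached on the partitioned branch: an unpartitioned table gets its FetchWork built directly from the TableDesc, while a partitioned table gets one PartitionDesc per non-denied partition, collected together with that partition's data location, before the FetchWork is assembled. Note also that the local variable TableDesc table declared near the end shadows the enclosing class's table field, which is legal Java; this is how the code reads in the source, not a transcription error.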

Code example source: origin: apache/drill

private FetchWork convertToWork() throws HiveException {
 inputs.clear();
 Utilities.addSchemaEvolutionToTableScanOperator(table, scanOp);
 TableDesc tableDesc = Utilities.getTableDesc(table);
 if (!table.isPartitioned()) {
  inputs.add(new ReadEntity(table, parent, !table.isView() && parent == null));
  FetchWork work = new FetchWork(table.getPath(), tableDesc);
  PlanUtils.configureInputJobPropertiesForStorageHandler(work.getTblDesc());
  work.setSplitSample(splitSample);
  return work;
 }
 List<Path> listP = new ArrayList<Path>();
 List<PartitionDesc> partP = new ArrayList<PartitionDesc>();
 for (Partition partition : partsList.getNotDeniedPartns()) {
  inputs.add(new ReadEntity(partition, parent, parent == null));
  listP.add(partition.getDataLocation());
  partP.add(Utilities.getPartitionDescFromTableDesc(tableDesc, partition, true));
 }
 Table sourceTable = partsList.getSourceTable();
 inputs.add(new ReadEntity(sourceTable, parent, parent == null));
 TableDesc table = Utilities.getTableDesc(sourceTable);
 FetchWork work = new FetchWork(listP, partP, table);
 if (!work.getPartDesc().isEmpty()) {
  PartitionDesc part0 = work.getPartDesc().get(0);
  PlanUtils.configureInputJobPropertiesForStorageHandler(part0.getTableDesc());
  work.setSplitSample(splitSample);
 }
 return work;
}

Code example source: origin: apache/hive

partDesc.add(Utilities.getPartitionDescFromTableDesc(tblDesc, part, false));

Code example source: origin: apache/drill

partDesc.add(Utilities.getPartitionDescFromTableDesc(tblDesc, part, false));

Code example source: origin: apache/hive

for (Partition part : partitions) {
 partLocs.add(part.getDataLocation());
 partDesc.add(Utilities.getPartitionDescFromTableDesc(tableDesc, part, true));
}

Code example source: origin: com.facebook.presto.hive/hive-apache

partDesc.add(Utilities.getPartitionDescFromTableDesc(tblDesc, part));
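This last snippet calls a two-argument form, getPartitionDescFromTableDesc(tblDesc, part), without the boolean flag used in the other examples, which suggests the Hive sources bundled in com.facebook.presto.hive/hive-apache are based on an older version of the method's signature.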
