Usage of the org.apache.hadoop.hive.ql.metadata.Hive.convertAddSpecToMetaPartition() method, with code examples

x33g5p2x · 2022-01-20 · reposted in: Other

This article collects Java code examples of the org.apache.hadoop.hive.ql.metadata.Hive.convertAddSpecToMetaPartition() method and shows how it is used in practice. The examples come from selected open-source projects found on GitHub/Stack Overflow/Maven, so they should serve as useful references. Details of the method:
Package: org.apache.hadoop.hive.ql.metadata
Class: Hive
Method: convertAddSpecToMetaPartition

Introduction to Hive.convertAddSpecToMetaPartition

No official description is available. Judging from the examples below, the method takes a ql.metadata.Table, a single partition entry of an AddPartitionDesc (obtained via addPartitionDesc.getPartition(i)), and, in newer Hive versions, a HiveConf; it returns an org.apache.hadoop.hive.metastore.api.Partition that can then be registered with the metastore, e.g. via IMetaStoreClient.add_partition().
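The recurring pattern in the examples below can be condensed into a single hedged sketch. It assumes Hive is on the classpath and that a connected IMetaStoreClient and a HiveConf are already available; the database/table names and the partition value are placeholders, not taken from any of the quoted projects.

```java
import java.util.Collections;
import java.util.Map;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.Warehouse;
import org.apache.hadoop.hive.metastore.api.Partition;
import org.apache.hadoop.hive.ql.metadata.Hive;
import org.apache.hadoop.hive.ql.metadata.Table;
import org.apache.hadoop.hive.ql.plan.AddPartitionDesc;

public class AddPartitionSketch {

  // Sketch only: requires a running metastore. "mydb", "mytable" and the
  // partition value "2022-01-20" are placeholders for illustration.
  static void addOnePartition(IMetaStoreClient msClient, HiveConf conf) throws Exception {
    Table tableObject = new Table(msClient.getTable("mydb", "mytable"));

    // Build the partition spec and derive its location under the table directory.
    Map<String, String> partSpec = Warehouse.makeSpecFromValues(
        tableObject.getPartitionKeys(), Collections.singletonList("2022-01-20"));
    String partLocation =
        new Path(tableObject.getDataLocation(), Warehouse.makePartPath(partSpec)).toString();

    AddPartitionDesc addPartitionDesc = new AddPartitionDesc("mydb", "mytable", true);
    addPartitionDesc.addPartition(partSpec, partLocation);

    // The conversion this article documents: descriptor entry -> metastore-API Partition.
    Partition partition = Hive.convertAddSpecToMetaPartition(
        tableObject, addPartitionDesc.getPartition(0), conf);
    msClient.add_partition(partition);
  }
}
```

Note that callers typically wrap add_partition() in a catch for AlreadyExistsException, since several clients may race to create the same partition; the examples below show that pattern in full.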

Code examples

Code example source: apache/hive

private static void createPartitionIfNotExists(HiveEndPoint ep,
  IMetaStoreClient msClient, HiveConf conf) throws PartitionCreationFailed {
 if (ep.partitionVals.isEmpty()) {
  return;
 }
 try {
  org.apache.hadoop.hive.ql.metadata.Table tableObject =
    new org.apache.hadoop.hive.ql.metadata.Table(msClient.getTable(ep.database, ep.table));
  Map<String, String> partSpec =
    Warehouse.makeSpecFromValues(tableObject.getPartitionKeys(), ep.partitionVals);
  AddPartitionDesc addPartitionDesc = new AddPartitionDesc(ep.database, ep.table, true);
  String partLocation = new Path(tableObject.getDataLocation(),
    Warehouse.makePartPath(partSpec)).toString();
  addPartitionDesc.addPartition(partSpec, partLocation);
  Partition partition = Hive.convertAddSpecToMetaPartition(tableObject,
    addPartitionDesc.getPartition(0), conf);
  msClient.add_partition(partition);
 }
 catch (AlreadyExistsException e) {
  //ignore this - multiple clients may be trying to create the same partition
  //AddPartitionDesc has an ifExists flag but it's not propagated to
  // HMSHandler.add_partitions_core() and so it throws...
 }
 catch (HiveException | TException e) {
  LOG.error("Failed to create partition : " + ep, e);
  throw new PartitionCreationFailed(ep, e);
 }
}

Code example source: apache/hive

@Override
public PartitionInfo createPartitionIfNotExists(final List<String> partitionValues) throws StreamingException {
 String partLocation = null;
 String partName = null;
 boolean exists = false;
 try {
  Map<String, String> partSpec = Warehouse.makeSpecFromValues(tableObject.getPartitionKeys(), partitionValues);
  AddPartitionDesc addPartitionDesc = new AddPartitionDesc(database, table, true);
  partName = Warehouse.makePartName(tableObject.getPartitionKeys(), partitionValues);
  partLocation = new Path(tableObject.getDataLocation(), Warehouse.makePartPath(partSpec)).toString();
  addPartitionDesc.addPartition(partSpec, partLocation);
  Partition partition = Hive.convertAddSpecToMetaPartition(tableObject, addPartitionDesc.getPartition(0), conf);
  if (getMSC() == null) {
   // We assume it doesn't exist if we can't check it
   // so the driver will decide
   return new PartitionInfo(partName, partLocation, false);
  }
  getMSC().add_partition(partition);
  if (LOG.isDebugEnabled()) {
   LOG.debug("Created partition {} for table {}", partName,
     tableObject.getFullyQualifiedName());
  }
 } catch (AlreadyExistsException e) {
  exists = true;
 } catch (HiveException | TException e) {
  throw new StreamingException("Unable to create partition for values: " + partitionValues + " connection: " +
   toConnectionInfoString(), e);
 }
 return new PartitionInfo(partName, partLocation, exists);
}

Code example source: apache/drill

// Fragment; the declaration of `in` and the closing brace are restored for readability.
List<org.apache.hadoop.hive.metastore.api.Partition> in =
  new ArrayList<org.apache.hadoop.hive.metastore.api.Partition>(size);
for (int i = 0; i < size; ++i) {
 in.add(convertAddSpecToMetaPartition(tbl, addPartitionDesc.getPartition(i)));
}

Code example source: apache/hive

// Fragment; the assignment to `tmpPart` and the closing brace are restored for readability.
Partition tmpPart =
  convertAddSpecToMetaPartition(tbl, addPartitionDesc.getPartition(i), conf);
if (tmpPart != null && tableSnapshot != null && tableSnapshot.getWriteId() > 0) {
 tmpPart.setWriteId(tableSnapshot.getWriteId());
}

Code example source: org.apache.hive/hive-streaming

@Override
public PartitionInfo createPartitionIfNotExists(final List<String> partitionValues) throws StreamingException {
 String partLocation = null;
 String partName = null;
 boolean exists = false;
 try {
  Map<String, String> partSpec = Warehouse.makeSpecFromValues(tableObject.getPartitionKeys(), partitionValues);
  AddPartitionDesc addPartitionDesc = new AddPartitionDesc(database, table, true);
  partName = Warehouse.makePartName(tableObject.getPartitionKeys(), partitionValues);
  partLocation = new Path(tableObject.getDataLocation(), Warehouse.makePartPath(partSpec)).toString();
  addPartitionDesc.addPartition(partSpec, partLocation);
  Partition partition = Hive.convertAddSpecToMetaPartition(tableObject, addPartitionDesc.getPartition(0), conf);
  getMSC().add_partition(partition);
 } catch (AlreadyExistsException e) {
  exists = true;
 } catch (HiveException | TException e) {
  throw new StreamingException("Unable to create partition for values: " + partitionValues + " connection: " +
   toConnectionInfoString(), e);
 }
 return new PartitionInfo(partName, partLocation, exists);
}

Code example source: com.facebook.presto.hive/hive-apache

// Fragment; the declaration of `in` and the closing brace are restored for readability.
List<org.apache.hadoop.hive.metastore.api.Partition> in =
  new ArrayList<org.apache.hadoop.hive.metastore.api.Partition>(size);
for (int i = 0; i < size; ++i) {
 in.add(convertAddSpecToMetaPartition(tbl, addPartitionDesc.getPartition(i)));
}
