Usage of org.apache.hadoop.hbase.regionserver.HStore.writeCompactionWalRecord() with code examples


This article collects Java code examples for the org.apache.hadoop.hbase.regionserver.HStore.writeCompactionWalRecord() method and shows how it is used. The examples are extracted from selected open-source projects hosted on platforms such as GitHub, Stack Overflow and Maven, so they should serve as useful references. Details of HStore.writeCompactionWalRecord() are as follows:

Package: org.apache.hadoop.hbase.regionserver
Class: HStore
Method: writeCompactionWalRecord

About HStore.writeCompactionWalRecord

Writes the compaction WAL record.
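
For context on what the record contains: in the HBase 2.x source, the method turns the input and output store file paths into a protobuf CompactionDescriptor and appends it to the region's WAL as a compaction marker, so that WAL replay after a crash can tell which store files a compaction replaced. The following is a paraphrased sketch of that flow, not the exact source; names such as ProtobufUtil.toCompactionDescriptor and WALUtil.writeCompactionMarker are real HBase utilities, but exact signatures vary across versions:

// Paraphrased sketch of HStore.writeCompactionWalRecord (HBase 2.x line).
// Types come from org.apache.hadoop.hbase.*; details are approximate.
private void writeCompactionWalRecord(Collection<HStoreFile> filesCompacted,
    Collection<HStoreFile> newFiles) throws IOException {
  if (region.getWAL() == null) {
    return; // nothing to record when the region has no WAL
  }
  // Collect the paths of the files the compaction consumed and produced.
  List<Path> inputPaths =
      filesCompacted.stream().map(HStoreFile::getPath).collect(Collectors.toList());
  List<Path> outputPaths =
      newFiles.stream().map(HStoreFile::getPath).collect(Collectors.toList());
  // Describe the compaction (family, inputs, outputs, store dir) as protobuf ...
  CompactionDescriptor compactionDescriptor = ProtobufUtil.toCompactionDescriptor(
      region.getRegionInfo(), getColumnFamilyDescriptor().getName(), inputPaths,
      outputPaths, fs.getStoreDir(getColumnFamilyDescriptor().getNameAsString()));
  // ... and append it to the WAL as a compaction marker.
  WALUtil.writeCompactionMarker(region.getWAL(), region.getReplicationScope(),
      region.getRegionInfo(), compactionDescriptor, region.getMVCC());
}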

Code examples

Example source: apache/hbase

writeCompactionWalRecord(delSfs, newFiles);
replaceStoreFiles(delSfs, newFiles);
completeCompaction(delSfs);
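
This fragment is the tail of removeUnneededFiles() (shown in full in the harbby/presto-connectors example below). Here newFiles is an empty list, so the WAL record and the subsequent file swap simply drop the expired store files without producing any replacements.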

Example source: apache/hbase

@VisibleForTesting
protected List<HStoreFile> doCompaction(CompactionRequestImpl cr,
    Collection<HStoreFile> filesToCompact, User user, long compactionStartTime,
    List<Path> newFiles) throws IOException {
  // Do the steps necessary to complete the compaction.
  List<HStoreFile> sfs = moveCompactedFilesIntoPlace(cr, newFiles, user);
  writeCompactionWalRecord(filesToCompact, sfs);
  replaceStoreFiles(filesToCompact, sfs);
  if (cr.isMajor()) {
    majorCompactedCellsCount.addAndGet(getCompactionProgress().getTotalCompactingKVs());
    majorCompactedCellsSize.addAndGet(getCompactionProgress().totalCompactedSize);
  } else {
    compactedCellsCount.addAndGet(getCompactionProgress().getTotalCompactingKVs());
    compactedCellsSize.addAndGet(getCompactionProgress().totalCompactedSize);
  }
  long outputBytes = getTotalSize(sfs);
  // At this point the store will use new files for all new scanners.
  completeCompaction(filesToCompact); // update store size.
  long now = EnvironmentEdgeManager.currentTime();
  if (region.getRegionServerServices() != null
      && region.getRegionServerServices().getMetrics() != null) {
    region.getRegionServerServices().getMetrics().updateCompaction(
        region.getTableDescriptor().getTableName().getNameAsString(),
        cr.isMajor(), now - compactionStartTime, cr.getFiles().size(),
        newFiles.size(), cr.getSize(), outputBytes);
  }
  logCompactionEndMessage(cr, sfs, now, compactionStartTime);
  return sfs;
}
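
Note the ordering this method enforces: the compaction marker is written to the WAL before replaceStoreFiles() swaps the store's live file set, so a crash between the two steps still leaves a durable record of the intended swap for WAL replay. completeCompaction() then archives the replaced files only after, as the inline comment notes, all new scanners are already reading the new files.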

Example source: harbby/presto-connectors

private void removeUnneededFiles() throws IOException {
  if (!conf.getBoolean("hbase.store.delete.expired.storefile", true)) return;
  if (getFamily().getMinVersions() > 0) {
    LOG.debug("Skipping expired store file removal due to min version being " +
        getFamily().getMinVersions());
    return;
  }
  this.lock.readLock().lock();
  Collection<StoreFile> delSfs = null;
  try {
    synchronized (filesCompacting) {
      long cfTtl = getStoreFileTtl();
      if (cfTtl != Long.MAX_VALUE) {
        delSfs = storeEngine.getStoreFileManager().getUnneededFiles(
            EnvironmentEdgeManager.currentTime() - cfTtl, filesCompacting);
        addToCompactingFiles(delSfs);
      }
    }
  } finally {
    this.lock.readLock().unlock();
  }
  if (delSfs == null || delSfs.isEmpty()) return;
  Collection<StoreFile> newFiles = new ArrayList<StoreFile>(); // No new files.
  writeCompactionWalRecord(delSfs, newFiles);
  replaceStoreFiles(delSfs, newFiles);
  completeCompaction(delSfs);
  LOG.info("Completed removal of " + delSfs.size() + " unnecessary (expired) file(s) in "
      + this + " of " + this.getRegionInfo().getRegionNameAsString()
      + "; total size for store is " + TraditionalBinaryPrefix.long2String(storeSize, "", 1));
}
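
Both expired-file examples hinge on the "hbase.store.delete.expired.storefile" flag checked at the top of removeUnneededFiles(); it defaults to true. If you need to keep expired store files around (for example while investigating TTL behaviour), the removal can be switched off via configuration, normally in the region server's hbase-site.xml. A minimal programmatic sketch of toggling the flag, using only the standard Configuration API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class DisableExpiredStoreFileRemoval {
  public static void main(String[] args) {
    // The key below is the one read at the top of removeUnneededFiles();
    // it defaults to true, enabling removal of expired store files.
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean("hbase.store.delete.expired.storefile", false);
    System.out.println("expired store file removal enabled: "
        + conf.getBoolean("hbase.store.delete.expired.storefile", true));
  }
}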

