Usage of the org.apache.hadoop.hdfs.server.datanode.DataNode.scheduleAllBlockReport() method, with code examples


This article collects Java code examples of the org.apache.hadoop.hdfs.server.datanode.DataNode.scheduleAllBlockReport() method and shows how DataNode.scheduleAllBlockReport() is used in practice. The examples are extracted from selected open-source projects on platforms such as GitHub, Stack Overflow and Maven, and should serve as a useful reference. Details of DataNode.scheduleAllBlockReport():

Package: org.apache.hadoop.hdfs.server.datanode
Class: DataNode
Method: scheduleAllBlockReport

Introduction to DataNode.scheduleAllBlockReport

This method arranges for the data node to send a block report at the next heartbeat.
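
Before the collected examples, here is a minimal self-contained sketch of how the method is typically driven. It is not taken from the sources below; it assumes the hadoop-hdfs test artifact (which provides MiniDFSCluster) is on the classpath. The delay argument is the upper bound, in milliseconds, of a randomized offset used to spread reports out; 0 means the full report goes out with the very next heartbeat.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.server.datanode.DataNode;

public class ScheduleAllBlockReportSketch {
 public static void main(String[] args) throws Exception {
  Configuration conf = new HdfsConfiguration();
  MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(1).build();
  try {
   cluster.waitActive();
   DataNode dn = cluster.getDataNodes().get(0);
   // 0 => no randomized spreading; a full block report is queued for the
   // next heartbeat to every NameNode this DataNode serves.
   dn.scheduleAllBlockReport(0);
  } finally {
   cluster.shutdown();
  }
 }
}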

Code examples

Example source: org.apache.hadoop/hadoop-hdfs

private void handleDiskError(String failedVolumes) {
 final boolean hasEnoughResources = data.hasEnoughResource();
 LOG.warn("DataNode.handleDiskError on: " +
   "[{}] Keep Running: {}", failedVolumes, hasEnoughResources);
 
 // If we have enough active valid volumes then we do not want to 
 // shutdown the DN completely.
 int dpError = hasEnoughResources ? DatanodeProtocol.DISK_ERROR  
                  : DatanodeProtocol.FATAL_DISK_ERROR;  
 metrics.incrVolumeFailures();
 //inform NameNodes
 for(BPOfferService bpos: blockPoolManager.getAllNamenodeThreads()) {
  bpos.trySendErrorReport(dpError, failedVolumes);
 }
 
 if(hasEnoughResources) {
  scheduleAllBlockReport(0);
  return; // do not shutdown
 }
 
 LOG.warn("DataNode is shutting down due to failed volumes: ["
   + failedVolumes + "]");
 shouldRun = false;
}

Example source: ch.cern.hadoop/hadoop-hdfs

/** Force the DataNode to report missing blocks immediately. */
private static void triggerDeleteReport(DataNode datanode)
  throws IOException {
 datanode.scheduleAllBlockReport(0);
 DataNodeTestUtils.triggerDeletionReport(datanode);
}
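
Note the pairing: scheduleAllBlockReport(0) queues a full block report for the next heartbeat, while DataNodeTestUtils.triggerDeletionReport (a test helper from the hadoop-hdfs test jar, as used above) flushes the pending incremental report of deleted blocks, so the NameNode learns about missing replicas without waiting for the regular block report interval.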

Example source: ch.cern.hadoop/hadoop-hdfs (the identical method also appears in io.prestosql.hadoop/hadoop-apache)

private void handleDiskError(String errMsgr) {
 final boolean hasEnoughResources = data.hasEnoughResource();
 LOG.warn("DataNode.handleDiskError: Keep Running: " + hasEnoughResources);
 
 // If we have enough active valid volumes then we do not want to 
 // shutdown the DN completely.
 int dpError = hasEnoughResources ? DatanodeProtocol.DISK_ERROR  
                  : DatanodeProtocol.FATAL_DISK_ERROR;  
 metrics.incrVolumeFailures();
 //inform NameNodes
 for(BPOfferService bpos: blockPoolManager.getAllNamenodeThreads()) {
  bpos.trySendErrorReport(dpError, errMsgr);
 }
 
 if(hasEnoughResources) {
  scheduleAllBlockReport(0);
  return; // do not shutdown
 }
 
 LOG.warn("DataNode is shutting down: " + errMsgr);
 shouldRun = false;
}

Example source: ch.cern.hadoop/hadoop-hdfs

/**
 * Multiple-NameNode version of injectBlocks.
 */
public void injectBlocks(int nameNodeIndex, int dataNodeIndex,
  Iterable<Block> blocksToInject) throws IOException {
 if (dataNodeIndex < 0 || dataNodeIndex >= dataNodes.size()) {
  throw new IndexOutOfBoundsException();
 }
 final DataNode dn = dataNodes.get(dataNodeIndex).datanode;
 final FsDatasetSpi<?> dataSet = DataNodeTestUtils.getFSDataset(dn);
 if (!(dataSet instanceof SimulatedFSDataset)) {
  throw new IOException("injectBlocks is valid only for SimulatedFSDataset");
 }
 String bpid = getNamesystem(nameNodeIndex).getBlockPoolId();
 SimulatedFSDataset sdataset = (SimulatedFSDataset) dataSet;
 sdataset.injectBlocks(bpid, blocksToInject);
 dataNodes.get(dataNodeIndex).datanode.scheduleAllBlockReport(0);
}

Example source: ch.cern.hadoop/hadoop-hdfs

/**
 * This method is valid only if the data nodes have simulated data.
 * @param dataNodeIndex - index of the data node to inject into; the index
 *                        is the same as for getDataNodes()
 * @param blocksToInject - the blocks to inject
 * @param bpid - (optional) the block pool id to use for injecting blocks.
 *               If not supplied then it is queried from the in-process
 *               NameNode.
 * @throws IOException if the dataset is not a SimulatedFSDataset, or if
 *                     any of the blocks already exist in the data node
 */
public void injectBlocks(int dataNodeIndex,
  Iterable<Block> blocksToInject, String bpid) throws IOException {
 if (dataNodeIndex < 0 || dataNodeIndex >= dataNodes.size()) {
  throw new IndexOutOfBoundsException();
 }
 final DataNode dn = dataNodes.get(dataNodeIndex).datanode;
 final FsDatasetSpi<?> dataSet = DataNodeTestUtils.getFSDataset(dn);
 if (!(dataSet instanceof SimulatedFSDataset)) {
  throw new IOException("injectBlocks is valid only for SimulatedFSDataset");
 }
 if (bpid == null) {
  bpid = getNamesystem().getBlockPoolId();
 }
 SimulatedFSDataset sdataset = (SimulatedFSDataset) dataSet;
 sdataset.injectBlocks(bpid, blocksToInject);
 dataNodes.get(dataNodeIndex).datanode.scheduleAllBlockReport(0);
}
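
To show how the injectBlocks overloads above are meant to be called, here is a hedged usage sketch. It is not from the excerpted projects: the block id, length and generation stamp are made up, and it relies on SimulatedFSDataset.setFactory(conf) being called before the cluster is built so the DataNode uses simulated storage.

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset;

public class InjectBlocksSketch {
 public static void main(String[] args) throws Exception {
  Configuration conf = new HdfsConfiguration();
  SimulatedFSDataset.setFactory(conf);  // DataNodes must use simulated data
  MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(1).build();
  try {
   cluster.waitActive();
   List<Block> blocks = new ArrayList<>();
   blocks.add(new Block(1001L, 4096L, 1L));  // id, numBytes, genStamp (made up)
   // null bpid => MiniDFSCluster queries the in-process NameNode for it.
   cluster.injectBlocks(0, blocks, null);
   // injectBlocks ends with scheduleAllBlockReport(0), so the injected
   // blocks reach the NameNode on the next heartbeat.
  } finally {
   cluster.shutdown();
  }
 }
}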


Example source: ch.cern.hadoop/hadoop-hdfs

// (excerpt) after asserting that the file locks on the removed storage
// directories were released, force a full block report:
assertFileLocksReleased(
 new ArrayList<String>(oldDirs).subList(1, oldDirs.size()));
dn.scheduleAllBlockReport(0);

Example source: ch.cern.hadoop/hadoop-hdfs

// (excerpt) tail of a Mockito stubbing of DatanodeProtocol#blockReport;
// the test schedules a full report, then blocks until the stubbed RPC fires:
Mockito.<StorageBlockReport[]>anyObject(),
  Mockito.<BlockReportContext>anyObject());
dn.scheduleAllBlockReport(0);
delayer.waitForCall();

Example source: ch.cern.hadoop/hadoop-hdfs

// (excerpt) force a full report, then wait until the mocked NameNode
// actually receives the blockReport call:
dn.scheduleAllBlockReport(0);
delayer.waitForCall();
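
The two fragments above are truncated in the source. They follow a common test pattern: a GenericTestUtils.DelayAnswer is installed on a Mockito spy of the DataNode's NameNode proxy so the test can pause and observe the blockReport RPC. A hedged reconstruction follows; the spy setup (nnSpy) is elided and version-specific, and the variable names are illustrative:

// Illustrative reconstruction, not verbatim from the excerpted tests.
GenericTestUtils.DelayAnswer delayer =
  new GenericTestUtils.DelayAnswer(DataNode.LOG);
// 'nnSpy' stands in for a Mockito spy on the DataNode-to-NameNode protocol,
// obtained with version-specific test helpers (elided here).
Mockito.doAnswer(delayer).when(nnSpy).blockReport(
  Mockito.<DatanodeRegistration>anyObject(),
  Mockito.anyString(),
  Mockito.<StorageBlockReport[]>anyObject(),
  Mockito.<BlockReportContext>anyObject());

dn.scheduleAllBlockReport(0);  // queue a full report for the next heartbeat
delayer.waitForCall();         // block until the report RPC is actually made
delayer.proceed();             // then let the report through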
