Usage of org.apache.hadoop.hdfs.server.datanode.DataNode.getXmitsInProgress() with code examples

x33g5p2x · Reposted 2022-01-18 under Other

This article collects Java code examples for the org.apache.hadoop.hdfs.server.datanode.DataNode.getXmitsInProgress() method, showing how it is used in practice. The examples come from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of DataNode.getXmitsInProgress() are as follows:
Package path: org.apache.hadoop.hdfs.server.datanode.DataNode
Class name: DataNode
Method name: getXmitsInProgress

About DataNode.getXmitsInProgress

No description is provided upstream. In the Hadoop source, getXmitsInProgress() returns an estimate of the number of block transfers (replication/reconstruction copies) the DataNode currently has in flight; as the examples below show, the value is reported to the NameNode in heartbeat and lifeline messages.
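To illustrate the underlying pattern, here is a minimal, self-contained sketch (not Hadoop code; the XmitsTracker name is invented for this example) of how a DataNode-style component might track in-flight transfers with an AtomicInteger and expose the count the way getXmitsInProgress() does:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical illustration: a thread-safe counter of in-flight block
// transfers, incremented when a transfer starts and decremented when it ends.
class XmitsTracker {
  private final AtomicInteger xmitsInProgress = new AtomicInteger(0);

  void transferStarted() {
    xmitsInProgress.incrementAndGet();
  }

  void transferFinished() {
    xmitsInProgress.decrementAndGet();
  }

  // Analogous to DataNode.getXmitsInProgress(): a point-in-time estimate
  // of how many transfers this node is currently performing.
  int getXmitsInProgress() {
    return xmitsInProgress.get();
  }
}
```

Because transfers start and finish on worker threads, an atomic counter keeps the reported value consistent without locking.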

Code Examples

Code example source — origin: org.apache.hadoop/hadoop-hdfs

private void sendLifeline() throws IOException {
  StorageReport[] reports =
      dn.getFSDataset().getStorageReports(bpos.getBlockPoolId());
  if (LOG.isDebugEnabled()) {
    LOG.debug("Sending lifeline with " + reports.length +
        " storage reports from service actor: " + BPServiceActor.this);
  }
  VolumeFailureSummary volumeFailureSummary = dn.getFSDataset()
      .getVolumeFailureSummary();
  int numFailedVolumes = volumeFailureSummary != null ?
      volumeFailureSummary.getFailedStorageLocations().length : 0;
  lifelineNamenode.sendLifeline(bpRegistration,
      reports,
      dn.getFSDataset().getCacheCapacity(),
      dn.getFSDataset().getCacheUsed(),
      dn.getXmitsInProgress(),
      dn.getXceiverCount(),
      numFailedVolumes,
      volumeFailureSummary);
}

Code example source — origin: org.apache.hadoop/hadoop-hdfs

// Excerpt: trailing arguments of a heartbeat/lifeline call, including the
// current transfer count from getXmitsInProgress().
dn.getFSDataset().getCacheCapacity(),
dn.getFSDataset().getCacheUsed(),
dn.getXmitsInProgress(),
dn.getXceiverCount(),
numFailedVolumes,

Code example source — origin: ch.cern.hadoop/hadoop-hdfs

HeartbeatResponse sendHeartBeat() throws IOException {
 scheduler.scheduleNextHeartbeat();
 StorageReport[] reports =
   dn.getFSDataset().getStorageReports(bpos.getBlockPoolId());
 if (LOG.isDebugEnabled()) {
  LOG.debug("Sending heartbeat with " + reports.length +
       " storage reports from service actor: " + this);
 }
 
 VolumeFailureSummary volumeFailureSummary = dn.getFSDataset()
   .getVolumeFailureSummary();
 int numFailedVolumes = volumeFailureSummary != null ?
   volumeFailureSummary.getFailedStorageLocations().length : 0;
 return bpNamenode.sendHeartbeat(bpRegistration,
   reports,
   dn.getFSDataset().getCacheCapacity(),
   dn.getFSDataset().getCacheUsed(),
   dn.getXmitsInProgress(),
   dn.getXceiverCount(),
   numFailedVolumes,
   volumeFailureSummary);
}

Code example source — origin: io.prestosql.hadoop/hadoop-apache

HeartbeatResponse sendHeartBeat() throws IOException {
 scheduler.scheduleNextHeartbeat();
 StorageReport[] reports =
   dn.getFSDataset().getStorageReports(bpos.getBlockPoolId());
 if (LOG.isDebugEnabled()) {
  LOG.debug("Sending heartbeat with " + reports.length +
       " storage reports from service actor: " + this);
 }
 
 VolumeFailureSummary volumeFailureSummary = dn.getFSDataset()
   .getVolumeFailureSummary();
 int numFailedVolumes = volumeFailureSummary != null ?
   volumeFailureSummary.getFailedStorageLocations().length : 0;
 return bpNamenode.sendHeartbeat(bpRegistration,
   reports,
   dn.getFSDataset().getCacheCapacity(),
   dn.getFSDataset().getCacheUsed(),
   dn.getXmitsInProgress(),
   dn.getXceiverCount(),
   numFailedVolumes,
   volumeFailureSummary);
}
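For context, the values sent in these heartbeats feed the NameNode's scheduling decisions. The following is a simplified, hypothetical sketch (the TransferCapacity name and method are invented for this example, not the actual Hadoop implementation) of how a reported xmitsInProgress value could be used to cap how many new transfer commands a node is handed back:

```java
// Hypothetical helper: given a node's configured maximum number of
// concurrent replication streams and its reported in-flight transfer
// count, compute how many additional transfers it can safely accept.
class TransferCapacity {
  static int remaining(int maxReplicationStreams, int xmitsInProgress) {
    // Clamp at zero so an overloaded node is never assigned negative work.
    return Math.max(0, maxReplicationStreams - xmitsInProgress);
  }
}
```

A node already at or above its stream limit would simply receive no new replication work until some transfers complete.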
