

Java StorageReport.getBlockPoolUsed Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hdfs.server.protocol.StorageReport.getBlockPoolUsed. If you are unsure what StorageReport.getBlockPoolUsed does, how to call it, or where to find real-world usages, the curated snippets below should help. You can also explore further examples of the enclosing class org.apache.hadoop.hdfs.server.protocol.StorageReport.


Five code examples of StorageReport.getBlockPoolUsed are shown below, ordered roughly by popularity.
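Before the collected examples, here is a minimal, self-contained sketch of the aggregation pattern most of them follow: summing the value returned by getBlockPoolUsed across a datanode's storages. It assumes a DatanodeStorageReport has already been obtained elsewhere; the BlockPoolUsageExample class and blockPoolUsagePercent method are hypothetical names introduced here for illustration and do not appear in any of the projects quoted below.

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport;
import org.apache.hadoop.hdfs.server.protocol.StorageReport;

// Hypothetical helper for illustration only: sums the per-storage block pool
// usage reported by getBlockPoolUsed and expresses it as a percentage of capacity.
public final class BlockPoolUsageExample {

  private BlockPoolUsageExample() {}

  /** Returns block-pool usage as a percentage of capacity, or -1 if capacity is unknown. */
  public static double blockPoolUsagePercent(DatanodeStorageReport report) {
    long capacity = 0L;
    long blockPoolUsed = 0L;
    for (StorageReport s : report.getStorageReports()) {
      capacity += s.getCapacity();            // raw capacity of this storage volume
      blockPoolUsed += s.getBlockPoolUsed();  // bytes used by the current block pool
    }
    return capacity == 0L ? -1.0 : blockPoolUsed * 100.0 / capacity;
  }
}

Examples 1 and 2 below use the same aggregation inside Hadoop's balancer code, optionally filtering by storage type.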

Example 1: getUtilization

import org.apache.hadoop.hdfs.server.protocol.StorageReport; // import the class required by this method
@Override
Double getUtilization(DatanodeStorageReport r, final StorageType t) {
  long capacity = 0L;
  long blockPoolUsed = 0L;
  for(StorageReport s : r.getStorageReports()) {
    if (s.getStorage().getStorageType() == t) {
      capacity += s.getCapacity();
      blockPoolUsed += s.getBlockPoolUsed();
    }
  }
  return capacity == 0L? null: blockPoolUsed*100.0/capacity;
}
 
Developer: naver, Project: hadoop, Lines of code: 13, Source: BalancingPolicy.java

Example 2: getTotalPoolUsage

import org.apache.hadoop.hdfs.server.protocol.StorageReport; // import the class required by this method
private static long getTotalPoolUsage(DatanodeStorageReport report) {
  long usage = 0L;
  for (StorageReport sr : report.getStorageReports()) {
    usage += sr.getBlockPoolUsed();
  }
  return usage;
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 8, Source: TestBalancerWithMultipleNameNodes.java

Example 3: updateState

import org.apache.hadoop.hdfs.server.protocol.StorageReport; // import the class required by this method
void updateState(StorageReport r) {
  capacity = r.getCapacity();
  dfsUsed = r.getDfsUsed();
  remaining = r.getRemaining();
  blockPoolUsed = r.getBlockPoolUsed();
}
 
Developer: naver, Project: hadoop, Lines of code: 7, Source: DatanodeStorageInfo.java

Example 4: updateHeartbeatState

import org.apache.hadoop.hdfs.server.protocol.StorageReport; // import the class required by this method
/**
 * process datanode heartbeat or stats initialization.
 */
public void updateHeartbeatState(StorageReport[] reports, long cacheCapacity,
    long cacheUsed, int xceiverCount, int volFailures,
    VolumeFailureSummary volumeFailureSummary) {
  long totalCapacity = 0;
  long totalRemaining = 0;
  long totalBlockPoolUsed = 0;
  long totalDfsUsed = 0;
  Set<DatanodeStorageInfo> failedStorageInfos = null;

  // Decide if we should check for any missing StorageReport and mark it as
  // failed. There are different scenarios.
  // 1. When DN is running, a storage failed. Given the current DN
  //    implementation doesn't add recovered storage back to its storage list
  //    until DN restart, we can assume volFailures won't decrease
  //    during the current DN registration session.
  //    When volumeFailures == this.volumeFailures, it implies there is no
  //    state change. No need to check for failed storage. This is an
  //    optimization.  Recent versions of the DataNode report a
  //    VolumeFailureSummary containing the date/time of the last volume
  //    failure.  If that's available, then we check that instead for greater
  //    accuracy.
  // 2. After DN restarts, volFailures might not increase and it is possible
  //    we still have new failed storage. For example, admins reduce
  //    available storages in configuration. Another corner case
  //    is the failed volumes might change after restart; a) there
  //    is one good storage A, one restored good storage B, so there is
  //    one element in storageReports and that is A. b) A failed. c) Before
  //    DN sends HB to NN to indicate A has failed, DN restarts. d) After DN
  //    restarts, storageReports has one element which is B.
  final boolean checkFailedStorages;
  if (volumeFailureSummary != null && this.volumeFailureSummary != null) {
    checkFailedStorages = volumeFailureSummary.getLastVolumeFailureDate() >
        this.volumeFailureSummary.getLastVolumeFailureDate();
  } else {
    checkFailedStorages = (volFailures > this.volumeFailures) ||
        !heartbeatedSinceRegistration;
  }

  if (checkFailedStorages) {
    LOG.info("Number of failed storage changes from "
        + this.volumeFailures + " to " + volFailures);
    failedStorageInfos = new HashSet<DatanodeStorageInfo>(
        storageMap.values());
  }

  setCacheCapacity(cacheCapacity);
  setCacheUsed(cacheUsed);
  setXceiverCount(xceiverCount);
  setLastUpdate(Time.now());    
  this.volumeFailures = volFailures;
  this.volumeFailureSummary = volumeFailureSummary;
  for (StorageReport report : reports) {
    DatanodeStorageInfo storage = updateStorage(report.getStorage());
    if (checkFailedStorages) {
      failedStorageInfos.remove(storage);
    }

    storage.receivedHeartbeat(report);
    totalCapacity += report.getCapacity();
    totalRemaining += report.getRemaining();
    totalBlockPoolUsed += report.getBlockPoolUsed();
    totalDfsUsed += report.getDfsUsed();
  }
  rollBlocksScheduled(getLastUpdate());

  // Update total metrics for the node.
  setCapacity(totalCapacity);
  setRemaining(totalRemaining);
  setBlockPoolUsed(totalBlockPoolUsed);
  setDfsUsed(totalDfsUsed);
  if (checkFailedStorages) {
    updateFailedStorage(failedStorageInfos);
  }

  if (storageMap.size() != reports.length) {
    pruneStorageMap(reports);
  }
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 82, Source: DatanodeDescriptor.java

Example 5: updateHeartbeatState

import org.apache.hadoop.hdfs.server.protocol.StorageReport; // import the class required by this method
/**
 * process datanode heartbeat or stats initialization.
 */
public void updateHeartbeatState(StorageReport[] reports, long cacheCapacity,
    long cacheUsed, int xceiverCount, int volFailures) {
  long totalCapacity = 0;
  long totalRemaining = 0;
  long totalBlockPoolUsed = 0;
  long totalDfsUsed = 0;
  Set<DatanodeStorageInfo> failedStorageInfos = null;

  // Decide if we should check for any missing StorageReport and mark it as
  // failed. There are different scenarios.
  // 1. When DN is running, a storage failed. Given the current DN
  //    implementation doesn't add recovered storage back to its storage list
  //    until DN restart, we can assume volFailures won't decrease
  //    during the current DN registration session.
  //    When volumeFailures == this.volumeFailures, it implies there is no
  //    state change. No need to check for failed storage. This is an
  //    optimization.
  // 2. After DN restarts, volFailures might not increase and it is possible
  //    we still have new failed storage. For example, admins reduce
  //    available storages in configuration. Another corner case
  //    is the failed volumes might change after restart; a) there
  //    is one good storage A, one restored good storage B, so there is
  //    one element in storageReports and that is A. b) A failed. c) Before
  //    DN sends HB to NN to indicate A has failed, DN restarts. d) After DN
  //    restarts, storageReports has one element which is B.
  boolean checkFailedStorages = (volFailures > this.volumeFailures) ||
      !heartbeatedSinceRegistration;

  if (checkFailedStorages) {
    LOG.info("Number of failed storage changes from "
        + this.volumeFailures + " to " + volFailures);
    failedStorageInfos = new HashSet<DatanodeStorageInfo>(
        storageMap.values());
  }

  setCacheCapacity(cacheCapacity);
  setCacheUsed(cacheUsed);
  setXceiverCount(xceiverCount);
  setLastUpdate(Time.now());    
  this.volumeFailures = volFailures;
  for (StorageReport report : reports) {
    DatanodeStorageInfo storage = updateStorage(report.getStorage());
    if (checkFailedStorages) {
      // this storage appeared in the heartbeat, so it is not a failed-storage candidate
      failedStorageInfos.remove(storage);
    }

    storage.receivedHeartbeat(report);
    totalCapacity += report.getCapacity();
    totalRemaining += report.getRemaining();
    totalBlockPoolUsed += report.getBlockPoolUsed();
    totalDfsUsed += report.getDfsUsed();
  }
  rollBlocksScheduled(getLastUpdate());

  // Update total metrics for the node.
  setCapacity(totalCapacity);
  setRemaining(totalRemaining);
  setBlockPoolUsed(totalBlockPoolUsed);
  setDfsUsed(totalDfsUsed);
  if (checkFailedStorages) {
    updateFailedStorage(failedStorageInfos);
  }
}
 
Developer: yncxcw, Project: FlexMap, Lines of code: 69, Source: DatanodeDescriptor.java


Note: the org.apache.hadoop.hdfs.server.protocol.StorageReport.getBlockPoolUsed examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from community-contributed open-source projects; copyright of the source code remains with the original authors, and redistribution or reuse should follow each project's license. Please do not republish this article without permission.