

Java HDFSBlocksDistribution.getTopHostsWithWeights Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.HDFSBlocksDistribution.getTopHostsWithWeights. If you are wondering what HDFSBlocksDistribution.getTopHostsWithWeights does, or how to use it in practice, the curated code examples below should help. You can also explore other usage examples of the enclosing class, org.apache.hadoop.hbase.HDFSBlocksDistribution.


The following shows 3 code examples of the HDFSBlocksDistribution.getTopHostsWithWeights method, ordered by popularity.
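Before the examples, here is a minimal, self-contained sketch of what getTopHostsWithWeights returns. The host names and block sizes are invented for illustration; the HDFSBlocksDistribution calls (addHostsAndBlockWeight, getTopHostsWithWeights, HostAndWeight.getHost/getWeight) are the HBase API used in the examples below.

import org.apache.hadoop.hbase.HDFSBlocksDistribution;
import org.apache.hadoop.hbase.HDFSBlocksDistribution.HostAndWeight;

public class TopHostsDemo {
  public static void main(String[] args) {
    // Each call registers one block's size against the hosts that hold a replica of it.
    HDFSBlocksDistribution distribution = new HDFSBlocksDistribution();
    distribution.addHostsAndBlockWeight(new String[] {"host-a", "host-b"}, 128L);
    distribution.addHostsAndBlockWeight(new String[] {"host-a", "host-c"}, 64L);

    // getTopHostsWithWeights() returns hosts sorted by accumulated weight, highest first:
    // host-a (192), host-b (128), host-c (64).
    for (HostAndWeight hostAndWeight : distribution.getTopHostsWithWeights()) {
      System.out.println(hostAndWeight.getHost() + " -> " + hostAndWeight.getWeight());
    }
  }
}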

Example 1: getBestLocations

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HDFSBlocksDistribution; //import the classes this method depends on
import org.apache.hadoop.hbase.HDFSBlocksDistribution.HostAndWeight;
/**
 * This computes the locations to be passed from the InputSplit. MR/YARN schedulers do not take
 * weights into account and will therefore treat every location passed from the input split as
 * equal. We do not want to blindly pass all the locations, since we are creating one split per
 * region, and the region's blocks are distributed throughout the cluster unless favored node
 * assignment is used. In the expected stable case, only one location will contain most of the
 * blocks as local.
 * With favored node assignment, on the other hand, three nodes will contain highly local blocks.
 * Here we apply a simple heuristic: we pass all hosts that have at least 80%
 * (hbase.tablesnapshotinputformat.locality.cutoff.multiplier) of the block weight of the host
 * with the best locality.
 */
public static List<String> getBestLocations(
    Configuration conf, HDFSBlocksDistribution blockDistribution) {
  List<String> locations = new ArrayList<String>(3);

  HostAndWeight[] hostAndWeights = blockDistribution.getTopHostsWithWeights();

  if (hostAndWeights.length == 0) {
    return locations;
  }

  HostAndWeight topHost = hostAndWeights[0];
  locations.add(topHost.getHost());

  // Heuristic: keep every host whose block weight is at least cutoffMultiplier times the top host's
  double cutoffMultiplier
    = conf.getFloat(LOCALITY_CUTOFF_MULTIPLIER, DEFAULT_LOCALITY_CUTOFF_MULTIPLIER);

  double filterWeight = topHost.getWeight() * cutoffMultiplier;

  for (int i = 1; i < hostAndWeights.length; i++) {
    if (hostAndWeights[i].getWeight() >= filterWeight) {
      locations.add(hostAndWeights[i].getHost());
    } else {
      break;
    }
  }

  return locations;
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 42, Source file: TableSnapshotInputFormatImpl.java
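To make the cutoff heuristic in Example 1 concrete, here is a hedged usage sketch. The host names and weights are made up, the configuration key is the one named in the javadoc above, and the call goes through TableSnapshotInputFormatImpl, the class these examples come from; treat it as an illustration rather than a verbatim test from any of these projects.

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HDFSBlocksDistribution;
import org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatImpl;

public class BestLocationsDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Optional: set the cutoff explicitly; 0.8 is also the documented default.
    conf.setFloat("hbase.tablesnapshotinputformat.locality.cutoff.multiplier", 0.8f);

    HDFSBlocksDistribution distribution = new HDFSBlocksDistribution();
    distribution.addHostsAndBlockWeight(new String[] {"host-a"}, 100L);
    distribution.addHostsAndBlockWeight(new String[] {"host-b"}, 90L);
    distribution.addHostsAndBlockWeight(new String[] {"host-c"}, 50L);

    // host-a (100) is the top host. host-b passes the cutoff (90 >= 100 * 0.8),
    // host-c does not (50 < 80), so the expected output is [host-a, host-b].
    List<String> locations = TableSnapshotInputFormatImpl.getBestLocations(conf, distribution);
    System.out.println(locations);
  }
}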

Example 2: getBestLocations

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HDFSBlocksDistribution; //import the classes this method depends on
import org.apache.hadoop.hbase.HDFSBlocksDistribution.HostAndWeight;
/**
 * This computes the locations to be passed from the InputSplit. MR/YARN schedulers do not take
 * weights into account and will therefore treat every location passed from the input split as
 * equal. We do not want to blindly pass all the locations, since we are creating one split per
 * region, and the region's blocks are distributed throughout the cluster unless favored node
 * assignment is used. In the expected stable case, only one location will contain most of the
 * blocks as local.
 * With favored node assignment, on the other hand, three nodes will contain highly local blocks.
 * Here we apply a simple heuristic: we pass all hosts that have at least 80%
 * (hbase.tablesnapshotinputformat.locality.cutoff.multiplier) of the block weight of the host
 * with the best locality.
 */
public static List<String> getBestLocations(
    Configuration conf, HDFSBlocksDistribution blockDistribution) {
  List<String> locations = new ArrayList<String>(3);

  HostAndWeight[] hostAndWeights = blockDistribution.getTopHostsWithWeights();

  if (hostAndWeights.length == 0) {
    return locations;
  }

  HostAndWeight topHost = hostAndWeights[0];
  locations.add(topHost.getHost());

  // Heuristic: keep every host whose block weight is at least cutoffMultiplier times the top host's
  double cutoffMultiplier
    = conf.getFloat(LOCALITY_CUTOFF_MULTIPLIER, DEFAULT_LOCALITY_CUTOFF_MULTIPLIER);

  double filterWeight = topHost.getWeight() * cutoffMultiplier;

  for (int i = 1; i < hostAndWeights.length; i++) {
    if (hostAndWeights[i].getWeight() >= filterWeight) {
      locations.add(hostAndWeights[i].getHost());
    } else {
      break;
    }
  }

  return locations;
}
 
Developer ID: tenggyut, Project: HIndex, Lines of code: 41, Source file: TableSnapshotInputFormatImpl.java

Example 3: getBestLocations

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HDFSBlocksDistribution; //import the classes this method depends on
import org.apache.hadoop.hbase.HDFSBlocksDistribution.HostAndWeight;
/**
 * This computes the locations to be passed from the InputSplit. MR/YARN schedulers do not take
 * weights into account and will therefore treat every location passed from the input split as
 * equal. We do not want to blindly pass all the locations, since we are creating one split per
 * region, and the region's blocks are distributed throughout the cluster unless favored node
 * assignment is used. In the expected stable case, only one location will contain most of the
 * blocks as local.
 * With favored node assignment, on the other hand, three nodes will contain highly local blocks.
 * Here we apply a simple heuristic: we pass all hosts that have at least 80%
 * (hbase.tablesnapshotinputformat.locality.cutoff.multiplier) of the block weight of the host
 * with the best locality.
 * Return at most numTopsAtMost locations if there are more than that.
 */
private static List<String> getBestLocations(Configuration conf,
    HDFSBlocksDistribution blockDistribution, int numTopsAtMost) {
  HostAndWeight[] hostAndWeights = blockDistribution.getTopHostsWithWeights();

  if (hostAndWeights.length == 0) { // no matter what numTopsAtMost is
    return null;
  }

  if (numTopsAtMost < 1) { // invalid if numTopsAtMost < 1, correct it to be 1
    numTopsAtMost = 1;
  }
  int top = Math.min(numTopsAtMost, hostAndWeights.length);
  List<String> locations = new ArrayList<>(top);
  HostAndWeight topHost = hostAndWeights[0];
  locations.add(topHost.getHost());

  if (top == 1) { // only care about the top host
    return locations;
  }

  // When top >= 2,
  // apply the heuristic: keep every host whose block weight is at least cutoffMultiplier times the top host's
  double cutoffMultiplier
          = conf.getFloat(LOCALITY_CUTOFF_MULTIPLIER, DEFAULT_LOCALITY_CUTOFF_MULTIPLIER);

  double filterWeight = topHost.getWeight() * cutoffMultiplier;

  for (int i = 1; i <= top - 1; i++) {
    if (hostAndWeights[i].getWeight() >= filterWeight) {
      locations.add(hostAndWeights[i].getHost());
    } else {
      // As hostAndWeights is in descending order,
      // we could break the loop as long as we meet a weight which is less than filterWeight.
      break;
    }
  }

  return locations;
}
 
Developer ID: apache, Project: hbase, Lines of code: 53, Source file: TableSnapshotInputFormatImpl.java
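Example 3 (from the upstream apache/hbase project) differs from the first two in two ways: it returns null rather than an empty list when the distribution has no hosts, and it caps the result at numTopsAtMost entries. Because this variant is private, callers would reach it through a public overload; the sketch below shows one plausible wrapper. The default cap of 3 is an assumption here, motivated by the favored-node remark in the javadoc, not something confirmed by this article.

// Hypothetical public wrapper delegating to the private three-argument variant above.
public static List<String> getBestLocations(Configuration conf,
    HDFSBlocksDistribution blockDistribution) {
  // With favored node assignment at most 3 nodes hold highly local blocks,
  // so capping at 3 (an assumed default) keeps the split's location list small.
  return getBestLocations(conf, blockDistribution, 3);
}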


Note: The org.apache.hadoop.hbase.HDFSBlocksDistribution.getTopHostsWithWeights examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are selected from open-source projects contributed by their respective authors; copyright in the source code remains with the original authors, and distribution and use are subject to each project's License. Please do not reproduce this article without permission.