

Java StoreFile.getMinimumTimestamp Method Code Example

This article compiles typical usage examples of the Java method org.apache.hadoop.hbase.regionserver.StoreFile.getMinimumTimestamp, drawn from open-source projects. If you are wondering how StoreFile.getMinimumTimestamp is used in practice, the curated example below should help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.hbase.regionserver.StoreFile.


Below is 1 code example of the StoreFile.getMinimumTimestamp method, sorted by popularity by default. You can upvote examples you find useful; your feedback helps the system recommend better Java code examples.

Example 1: isMajorCompaction

import java.io.IOException;
import java.util.Collection;

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.regionserver.RSRpcServices;
import org.apache.hadoop.hbase.regionserver.StoreFile; // class whose method is demonstrated
import org.apache.hadoop.hbase.regionserver.StoreUtils;
@Override
public boolean isMajorCompaction(final Collection<StoreFile> filesToCompact)
    throws IOException {
  boolean result = false;
  long mcTime = getNextMajorCompactTime(filesToCompact);
  if (filesToCompact == null || filesToCompact.isEmpty() || mcTime == 0) {
    return result;
  }
  // TODO: Use better method for determining stamp of last major (HBASE-2990)
  long lowTimestamp = StoreUtils.getLowestTimestamp(filesToCompact);
  long now = System.currentTimeMillis();
  if (lowTimestamp > 0L && lowTimestamp < (now - mcTime)) {
    // Major compaction time has elapsed.
    long cfTtl = this.storeConfigInfo.getStoreFileTtl();
    if (filesToCompact.size() == 1) {
      // Single file
      StoreFile sf = filesToCompact.iterator().next();
      Long minTimestamp = sf.getMinimumTimestamp();
      long oldest = (minTimestamp == null)
          ? Long.MIN_VALUE
          : now - minTimestamp.longValue();
      if (sf.isMajorCompaction() &&
          (cfTtl == HConstants.FOREVER || oldest < cfTtl)) {
        float blockLocalityIndex = sf.getHDFSBlockDistribution().getBlockLocalityIndex(
            RSRpcServices.getHostname(comConf.conf, false)
        );
        if (blockLocalityIndex < comConf.getMinLocalityToForceCompact()) {
          if (LOG.isDebugEnabled()) {
            LOG.debug("Major compaction triggered on only store " + this +
                "; to make hdfs blocks local, current blockLocalityIndex is " +
                blockLocalityIndex + " (min " + comConf.getMinLocalityToForceCompact() +
                ")");
          }
          result = true;
        } else {
          if (LOG.isDebugEnabled()) {
            LOG.debug("Skipping major compaction of " + this +
                " because one (major) compacted file only, oldestTime " +
                oldest + "ms is < ttl=" + cfTtl + " and blockLocalityIndex is " +
                blockLocalityIndex + " (min " + comConf.getMinLocalityToForceCompact() +
                ")");
          }
        }
      } else if (cfTtl != HConstants.FOREVER && oldest > cfTtl) {
        LOG.debug("Major compaction triggered on store " + this +
          ", because keyvalues outdated; time since last major compaction " +
          (now - lowTimestamp) + "ms");
        result = true;
      }
    } else {
      if (LOG.isDebugEnabled()) {
        LOG.debug("Major compaction triggered on store " + this +
            "; time since last major compaction " + (now - lowTimestamp) + "ms");
      }
      result = true;
    }
  }
  return result;
}
 
Developer: fengchen8086 | Project: ditb | Lines: 60 | Source file: RatioBasedCompactionPolicy.java
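The TTL branch in the example hinges on the fact that getMinimumTimestamp returns a boxed Long that may be null (for instance, when a store file carries no timestamp metadata), so the code substitutes Long.MIN_VALUE as the file's "age" before comparing against the column-family TTL. Below is a minimal, self-contained sketch of that null-safe age computation and TTL check. The class, helper names, and the FOREVER constant are illustrative stand-ins for this sketch, not the HBase API:

```java
public class MinTimestampDemo {
    // Stand-in for HConstants.FOREVER, which marks a column family with no TTL.
    static final long FOREVER = Long.MAX_VALUE;

    // Null-safe age computation, mirroring the pattern in the example:
    // a missing minimum timestamp yields Long.MIN_VALUE, which can never
    // exceed a real TTL, so such files are never treated as expired.
    static long ageOf(Long minTimestamp, long now) {
        return (minTimestamp == null) ? Long.MIN_VALUE : now - minTimestamp.longValue();
    }

    // A file is considered expired only when a TTL is configured
    // and the oldest cell's age exceeds it.
    static boolean expiredByTtl(long oldest, long cfTtl) {
        return cfTtl != FOREVER && oldest > cfTtl;
    }

    public static void main(String[] args) {
        long now = 1_000_000L;
        long cfTtl = 500_000L;

        long age = ageOf(400_000L, now);            // oldest cell written 600,000 ms ago
        System.out.println(age);                    // prints 600000
        System.out.println(expiredByTtl(age, cfTtl));    // prints true: 600000 > 500000
        System.out.println(expiredByTtl(age, FOREVER));  // prints false: no TTL configured
        System.out.println(expiredByTtl(ageOf(null, now), cfTtl)); // prints false: no timestamp
    }
}
```

Using Long.MIN_VALUE as the sentinel (rather than unboxing a possibly-null Long directly) avoids a NullPointerException and errs on the side of not compacting when the timestamp is unknown.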


Note: The org.apache.hadoop.hbase.regionserver.StoreFile.getMinimumTimestamp examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are selected from open-source projects contributed by various developers; copyright remains with the original authors. Please observe each project's License when redistributing or using the code, and do not reproduce this article without permission.