

Java ExploringCompactionPolicy Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy. If you are wondering what the ExploringCompactionPolicy class does, how to use it, or what real-world code using it looks like, the curated examples below may help.


The ExploringCompactionPolicy class belongs to the org.apache.hadoop.hbase.regionserver.compactions package. Two code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code examples.

Example 1: checkCompactionPolicy

import org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy; // import the required package/class
private void checkCompactionPolicy(Configuration conf, HTableDescriptor htd)
    throws IOException {
  // FIFO compaction has some requirements
  // Actually FCP ignores periodic major compactions
  String className =
      htd.getConfigurationValue(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY);
  if (className == null) {
    className =
        conf.get(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY,
          ExploringCompactionPolicy.class.getName());
  }

  int blockingFileCount = HStore.DEFAULT_BLOCKING_STOREFILE_COUNT;
  String sv = htd.getConfigurationValue(HStore.BLOCKING_STOREFILES_KEY);
  if (sv != null) {
    blockingFileCount = Integer.parseInt(sv);
  } else {
    blockingFileCount = conf.getInt(HStore.BLOCKING_STOREFILES_KEY, blockingFileCount);
  }

  for (HColumnDescriptor hcd : htd.getColumnFamilies()) {
    String compactionPolicy =
        hcd.getConfigurationValue(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY);
    if (compactionPolicy == null) {
      compactionPolicy = className;
    }
    if (!compactionPolicy.equals(FIFOCompactionPolicy.class.getName())) {
      continue;
    }
    // FIFOCompaction
    String message = null;

    // 1. Check TTL
    if (hcd.getTimeToLive() == HColumnDescriptor.DEFAULT_TTL) {
      message = "Default TTL is not supported for FIFO compaction";
      throw new IOException(message);
    }

    // 2. Check min versions
    if (hcd.getMinVersions() > 0) {
      message = "MIN_VERSION > 0 is not supported for FIFO compaction";
      throw new IOException(message);
    }

    // 3. blocking file count
    String sbfc = htd.getConfigurationValue(HStore.BLOCKING_STOREFILES_KEY);
    if (sbfc != null) {
      blockingFileCount = Integer.parseInt(sbfc);
    }
    if (blockingFileCount < 1000) {
      message =
          "blocking file count '" + HStore.BLOCKING_STOREFILES_KEY + "' " + blockingFileCount
              + " is below recommended minimum of 1000";
      throw new IOException(message);
    }
  }
}
 
Developer: fengchen8086, Project: ditb, Lines of code: 58, Source file: HMaster.java
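The example above resolves the compaction policy through an override chain: a per-column-family setting wins over the table-level setting, which in turn falls back to the cluster configuration default. That lookup pattern can be sketched without any HBase dependencies as follows (the `PolicyResolver` class and the use of plain maps are hypothetical illustrations, not part of the HBase API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the three-level override chain used in
// checkCompactionPolicy: column family -> table -> cluster default.
class PolicyResolver {
    static String resolve(Map<String, String> familyConf,
                          Map<String, String> tableConf,
                          Map<String, String> clusterConf,
                          String key, String defaultValue) {
        // Most specific scope first, then widen until a value is found.
        String v = familyConf.get(key);
        if (v == null) {
            v = tableConf.get(key);
        }
        if (v == null) {
            v = clusterConf.getOrDefault(key, defaultValue);
        }
        return v;
    }

    public static void main(String[] args) {
        Map<String, String> fam = new HashMap<>();
        Map<String, String> tbl = new HashMap<>();
        Map<String, String> cfg = new HashMap<>();
        // Nothing set anywhere: the default (ExploringCompactionPolicy
        // in the real code) applies.
        System.out.println(resolve(fam, tbl, cfg, "policy", "Exploring"));
        // A table-level setting overrides the default; a family-level
        // setting overrides the table.
        tbl.put("policy", "FIFO");
        System.out.println(resolve(fam, tbl, cfg, "policy", "Exploring"));
    }
}
```

This mirrors why the HBase code reads `hcd.getConfigurationValue(...)` first and only falls back to the table-level `className` when the column family has no explicit policy.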

Example 2: checkCompactionPolicy

import org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy; // import the required package/class
private void checkCompactionPolicy(Configuration conf, TableDescriptor htd)
    throws IOException {
  // FIFO compaction has some requirements
  // Actually FCP ignores periodic major compactions
  String className = htd.getValue(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY);
  if (className == null) {
    className =
        conf.get(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY,
          ExploringCompactionPolicy.class.getName());
  }

  int blockingFileCount = HStore.DEFAULT_BLOCKING_STOREFILE_COUNT;
  String sv = htd.getValue(HStore.BLOCKING_STOREFILES_KEY);
  if (sv != null) {
    blockingFileCount = Integer.parseInt(sv);
  } else {
    blockingFileCount = conf.getInt(HStore.BLOCKING_STOREFILES_KEY, blockingFileCount);
  }

  for (ColumnFamilyDescriptor hcd : htd.getColumnFamilies()) {
    String compactionPolicy =
        hcd.getConfigurationValue(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY);
    if (compactionPolicy == null) {
      compactionPolicy = className;
    }
    if (!compactionPolicy.equals(FIFOCompactionPolicy.class.getName())) {
      continue;
    }
    // FIFOCompaction
    String message = null;

    // 1. Check TTL
    if (hcd.getTimeToLive() == ColumnFamilyDescriptorBuilder.DEFAULT_TTL) {
      message = "Default TTL is not supported for FIFO compaction";
      throw new IOException(message);
    }

    // 2. Check min versions
    if (hcd.getMinVersions() > 0) {
      message = "MIN_VERSION > 0 is not supported for FIFO compaction";
      throw new IOException(message);
    }

    // 3. blocking file count
    sv = hcd.getConfigurationValue(HStore.BLOCKING_STOREFILES_KEY);
    if (sv != null) {
      blockingFileCount = Integer.parseInt(sv);
    }
    if (blockingFileCount < 1000) {
      message =
          "Blocking file count '" + HStore.BLOCKING_STOREFILES_KEY + "' " + blockingFileCount
              + " is below recommended minimum of 1000 for column family "+ hcd.getNameAsString();
      throw new IOException(message);
    }
  }
}
 
Developer: apache, Project: hbase, Lines of code: 57, Source file: HMaster.java
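Both examples enforce the same three FIFO-compaction preconditions: a non-default TTL must be set, MIN_VERSIONS must be 0, and the blocking store-file count must be at least 1000. A minimal, dependency-free sketch of those checks (the `FifoChecker` class name and parameter shape are hypothetical; the real code reads these values from descriptors) might look like:

```java
import java.io.IOException;

// Hypothetical sketch of the FIFO-compaction preconditions enforced
// by checkCompactionPolicy in the examples above.
class FifoChecker {
    static void check(int ttlSeconds, int defaultTtl,
                      int minVersions, int blockingFileCount)
            throws IOException {
        // 1. FIFO compaction expires data by TTL, so a TTL must be set.
        if (ttlSeconds == defaultTtl) {
            throw new IOException("Default TTL is not supported for FIFO compaction");
        }
        // 2. MIN_VERSIONS > 0 would force keeping expired cells,
        //    contradicting expiry-by-dropping-whole-files.
        if (minVersions > 0) {
            throw new IOException("MIN_VERSION > 0 is not supported for FIFO compaction");
        }
        // 3. FIFO never rewrites files, so many store files accumulate;
        //    a low blocking count would stall writes.
        if (blockingFileCount < 1000) {
            throw new IOException("blocking file count " + blockingFileCount
                + " is below recommended minimum of 1000");
        }
    }

    public static void main(String[] args) throws IOException {
        // Valid settings: TTL of one hour, no min versions, high blocking count.
        check(3600, 0, 0, 2000);
        System.out.println("FIFO preconditions satisfied");
    }
}
```

The third check explains why the examples raise the blocking file count: since FIFO compaction only deletes whole expired files and never merges them, store-file counts grow far beyond what the default blocking threshold tolerates.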


Note: The org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective authors, and copyright remains with the original authors. Consult each project's license before distributing or using the code; do not reproduce this article without permission.