

Java HFile.MIN_FORMAT_VERSION Field Code Examples

This article collects typical usage examples of the Java field org.apache.hadoop.hbase.io.hfile.HFile.MIN_FORMAT_VERSION. If you are wondering what HFile.MIN_FORMAT_VERSION is for or how to use it, the selected examples below may help. You can also explore further usage examples of the containing class, org.apache.hadoop.hbase.io.hfile.HFile.


Below are 2 code examples of the HFile.MIN_FORMAT_VERSION field, ordered by popularity by default.

Example 1: createDeleteBloomAtWrite

/**
 * Creates a new Delete Family Bloom filter at the time of
 * {@link org.apache.hadoop.hbase.regionserver.StoreFile} writing.
 * @param conf the current configuration
 * @param cacheConf the cache configuration for the store file
 * @param maxKeys an estimate of the number of keys we expect to insert.
 *        Irrelevant if compound Bloom filters are enabled.
 * @param writer the HFile writer
 * @return the new Bloom filter, or null if Bloom filters are disabled
 *         or one could not be created
 */
public static BloomFilterWriter createDeleteBloomAtWrite(Configuration conf,
    CacheConfig cacheConf, int maxKeys, HFile.Writer writer) {
  if (!isDeleteFamilyBloomEnabled(conf)) {
    LOG.info("Delete Bloom filters are disabled by configuration for "
        + writer.getPath()
        + (conf == null ? " (configuration is null)" : ""));
    return null;
  }

  float err = getErrorRate(conf);

  if (HFile.getFormatVersion(conf) > HFile.MIN_FORMAT_VERSION) {
    int maxFold = getMaxFold(conf);
    // In case of compound Bloom filters we ignore the maxKeys hint.
    CompoundBloomFilterWriter bloomWriter = new CompoundBloomFilterWriter(
        getBloomBlockSize(conf), err, Hash.getHashType(conf),
        maxFold,
        cacheConf.shouldCacheBloomsOnWrite(), Bytes.BYTES_RAWCOMPARATOR);
    writer.addInlineBlockWriter(bloomWriter);
    return bloomWriter;
  } else {
    LOG.info("Delete Family Bloom filter is not supported in HFile V1");
    return null;
  }
}
 
Author: fengchen8086, Project: LCIndex-HBase-0.94.16, Lines: 36, Source: BloomFilterFactory.java
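The version check above (`HFile.getFormatVersion(conf) > HFile.MIN_FORMAT_VERSION`) is the gate that decides between a compound, inline-block Bloom filter and the legacy single-block one. The following standalone sketch (not HBase code) mimics that gating pattern; the constant value 1 is an assumption matching `HFile.MIN_FORMAT_VERSION` in the 0.94 codebase, where only format versions 1 and 2 exist:

```java
// Simplified sketch of the format-version gate used in createDeleteBloomAtWrite.
// Assumption: MIN_FORMAT_VERSION is 1, as in HBase 0.94, so compound Bloom
// filters are only written for HFile format version 2 and above.
public class FormatVersionGate {
    static final int MIN_FORMAT_VERSION = 1; // hypothetical stand-in for HFile.MIN_FORMAT_VERSION

    // Mirrors the condition guarding CompoundBloomFilterWriter creation.
    static boolean supportsCompoundBloom(int formatVersion) {
        return formatVersion > MIN_FORMAT_VERSION;
    }

    public static void main(String[] args) {
        System.out.println(supportsCompoundBloom(1)); // HFile v1: falls through to the "not supported" branch
        System.out.println(supportsCompoundBloom(2)); // HFile v2: compound Bloom filter is written
    }
}
```

In the real method, the v1 branch simply logs that Delete Family Bloom filters are unsupported and returns null, so the gate doubles as a feature switch.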

Example 2: createGeneralBloomAtWrite

/**
 * Creates a new general (Row or RowCol) Bloom filter at the time of
 * {@link org.apache.hadoop.hbase.regionserver.StoreFile} writing.
 *
 * @param conf the current configuration
 * @param cacheConf the cache configuration for the store file
 * @param bloomType the type of Bloom filter to create (ROW or ROWCOL)
 * @param maxKeys an estimate of the number of keys we expect to insert.
 *        Irrelevant if compound Bloom filters are enabled.
 * @param writer the HFile writer
 * @return the new Bloom filter, or null if Bloom filters are disabled
 *         or one could not be created
 */
public static BloomFilterWriter createGeneralBloomAtWrite(Configuration conf,
    CacheConfig cacheConf, BloomType bloomType, int maxKeys,
    HFile.Writer writer) {
  if (!isGeneralBloomEnabled(conf)) {
    LOG.trace("Bloom filters are disabled by configuration for "
        + writer.getPath()
        + (conf == null ? " (configuration is null)" : ""));
    return null;
  } else if (bloomType == BloomType.NONE) {
    LOG.trace("Bloom filter is turned off for the column family");
    return null;
  }

  float err = getErrorRate(conf);

  // In case of row/column Bloom filter lookups, each lookup is an OR of two
  // separate lookups. Therefore, if each lookup's false positive rate is p,
  // the resulting false positive rate is err = 1 - (1 - p)^2, and
  // p = 1 - sqrt(1 - err).
  if (bloomType == BloomType.ROWCOL) {
    err = (float) (1 - Math.sqrt(1 - err));
  }

  int maxFold = conf.getInt(IO_STOREFILE_BLOOM_MAX_FOLD,
      MAX_ALLOWED_FOLD_FACTOR);

  // Do we support compound bloom filters?
  if (HFile.getFormatVersion(conf) > HFile.MIN_FORMAT_VERSION) {
    // In case of compound Bloom filters we ignore the maxKeys hint.
    CompoundBloomFilterWriter bloomWriter = new CompoundBloomFilterWriter(
        getBloomBlockSize(conf), err, Hash.getHashType(conf), maxFold,
        cacheConf.shouldCacheBloomsOnWrite(), bloomType == BloomType.ROWCOL
            ? KeyValue.KEY_COMPARATOR : Bytes.BYTES_RAWCOMPARATOR);
    writer.addInlineBlockWriter(bloomWriter);
    return bloomWriter;
  } else {
    // A single-block Bloom filter. Only used when testing HFile format
    // version 1.
    int tooBig = conf.getInt(IO_STOREFILE_BLOOM_MAX_KEYS,
        128 * 1000 * 1000);

    if (maxKeys <= 0) {
      LOG.warn("Invalid maximum number of keys specified: " + maxKeys
          + ", not using Bloom filter");
      return null;
    } else if (maxKeys < tooBig) {
      BloomFilterWriter bloom = new ByteBloomFilter((int) maxKeys, err,
          Hash.getHashType(conf), maxFold);
      bloom.allocBloom();
      return bloom;
    } else {
      LOG.debug("Skipping Bloom filter because the maximum number of keys is"
          + " too large: " + maxKeys);
    }
  }
  return null;
}
 
Author: fengchen8086, Project: LCIndex-HBase-0.94.16, Lines: 70, Source: BloomFilterFactory.java
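The ROWCOL branch in Example 2 adjusts the error rate because a row+column query performs two Bloom lookups whose results are ORed: with a per-lookup false-positive rate p, the combined rate is 1 - (1 - p)^2, and solving for p given a target err gives p = 1 - sqrt(1 - err). This standalone check (not HBase code) verifies that the adjustment round-trips:

```java
// Verifies the ROWCOL false-positive adjustment from createGeneralBloomAtWrite:
// applying p = 1 - sqrt(1 - err) per lookup yields a combined rate of err.
public class RowColErrorRate {
    // Per-lookup rate that makes two ORed lookups hit the target combined rate.
    static double adjustForRowCol(double err) {
        return 1.0 - Math.sqrt(1.0 - err);
    }

    public static void main(String[] args) {
        double err = 0.01;                              // target combined false-positive rate
        double p = adjustForRowCol(err);                // rate configured per lookup
        double combined = 1.0 - Math.pow(1.0 - p, 2);   // rate seen by a row+column query
        System.out.printf("p = %.6f, combined = %.6f%n", p, combined);
    }
}
```

The adjustment makes ROWCOL filters slightly larger than ROW filters for the same configured error rate, since each individual lookup must be stricter.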


Note: The org.apache.hadoop.hbase.io.hfile.HFile.MIN_FORMAT_VERSION examples in this article were compiled from open-source code and documentation platforms such as GitHub/MSDocs. The snippets were selected from open-source projects contributed by their respective authors, who retain copyright; consult each project's license before distributing or using the code. Do not reproduce without permission.