

Java HColumnDescriptor.getMinVersions Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.HColumnDescriptor.getMinVersions. If you are wondering how to use HColumnDescriptor.getMinVersions, what it is for, or where to find examples of it, the curated method examples below may help. You can also browse further usage examples of org.apache.hadoop.hbase.HColumnDescriptor itself.


The following presents 5 code examples of the HColumnDescriptor.getMinVersions method, sorted by popularity by default.
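
Before the collected examples, here is a minimal standalone sketch of what getMinVersions reports: the MIN_VERSIONS attribute of a column family, i.e. how many cell versions HBase retains even after their TTL has expired (0 by default). The family name is illustrative.

import org.apache.hadoop.hbase.HColumnDescriptor;

public class MinVersionsDemo {
  public static void main(String[] args) {
    HColumnDescriptor family = new HColumnDescriptor("cf");
    family.setTimeToLive(86400); // TTL in seconds (one day)
    family.setMinVersions(1);    // keep at least one version past TTL expiry
    // getMinVersions reads the MIN_VERSIONS attribute back; the default is 0.
    System.out.println("min versions: " + family.getMinVersions());
  }
}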

Example 1: preFlushScannerOpen

import org.apache.hadoop.hbase.HColumnDescriptor; // import the package/class the method depends on
@Override
public InternalScanner preFlushScannerOpen(
    final ObserverContext<RegionCoprocessorEnvironment> c,
    Store store, KeyValueScanner memstoreScanner, InternalScanner s) throws IOException {
  // Look up any per-table TTL / max-versions overrides installed by the test.
  Long newTtl = ttls.get(store.getTableName());
  if (newTtl != null) {
    System.out.println("PreFlush:" + newTtl);
  }
  Integer newVersions = versions.get(store.getTableName());
  ScanInfo oldSI = store.getScanInfo();
  HColumnDescriptor family = store.getFamily();
  // Rebuild the ScanInfo with the overrides applied; the family's
  // getMinVersions() value is carried over unchanged.
  ScanInfo scanInfo = new ScanInfo(TEST_UTIL.getConfiguration(),
      family.getName(), family.getMinVersions(),
      newVersions == null ? family.getMaxVersions() : newVersions,
      newTtl == null ? oldSI.getTtl() : newTtl, family.getKeepDeletedCells(),
      oldSI.getTimeToPurgeDeletes(), oldSI.getComparator());
  Scan scan = new Scan();
  scan.setMaxVersions(newVersions == null ? oldSI.getMaxVersions() : newVersions);
  return new StoreScanner(store, scanInfo, scan, Collections.singletonList(memstoreScanner),
      ScanType.COMPACT_RETAIN_DELETES, store.getSmallestReadPoint(),
      HConstants.OLDEST_TIMESTAMP);
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 23, Source: TestCoprocessorScanPolicy.java
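
This snippet, together with Examples 2 and 3 below, relies on fields defined elsewhere in TestCoprocessorScanPolicy (ttls, versions, TEST_UTIL). The following is a hypothetical reconstruction of that surrounding context, with types inferred from how the snippets use the fields; it is not copied from the project:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;

public class ScanObserver extends BaseRegionObserver {
  // Test harness whose Configuration the hooks pass into ScanInfo.
  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
  // Per-table TTL and max-versions overrides consulted by each hook.
  final Map<TableName, Long> ttls = new ConcurrentHashMap<>();
  final Map<TableName, Integer> versions = new ConcurrentHashMap<>();
}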

Example 2: preCompactScannerOpen

import org.apache.hadoop.hbase.HColumnDescriptor; // import the package/class the method depends on
@Override
public InternalScanner preCompactScannerOpen(
    final ObserverContext<RegionCoprocessorEnvironment> c,
    Store store, List<? extends KeyValueScanner> scanners, ScanType scanType,
    long earliestPutTs, InternalScanner s) throws IOException {
  Long newTtl = ttls.get(store.getTableName());
  Integer newVersions = versions.get(store.getTableName());
  ScanInfo oldSI = store.getScanInfo();
  HColumnDescriptor family = store.getFamily();
  ScanInfo scanInfo = new ScanInfo(TEST_UTIL.getConfiguration(),
      family.getName(), family.getMinVersions(),
      newVersions == null ? family.getMaxVersions() : newVersions,
      newTtl == null ? oldSI.getTtl() : newTtl, family.getKeepDeletedCells(),
      oldSI.getTimeToPurgeDeletes(), oldSI.getComparator());
  Scan scan = new Scan();
  scan.setMaxVersions(newVersions == null ? oldSI.getMaxVersions() : newVersions);
  return new StoreScanner(store, scanInfo, scan, scanners, scanType,
      store.getSmallestReadPoint(), earliestPutTs);
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 20, Source: TestCoprocessorScanPolicy.java

Example 3: preStoreScannerOpen

import org.apache.hadoop.hbase.HColumnDescriptor; // import the package/class the method depends on
@Override
public KeyValueScanner preStoreScannerOpen(
    final ObserverContext<RegionCoprocessorEnvironment> c, Store store, final Scan scan,
    final NavigableSet<byte[]> targetCols, KeyValueScanner s) throws IOException {
  TableName tn = store.getTableName();
  // Leave system tables alone; only user tables get the scan-policy overrides.
  if (!tn.isSystemTable()) {
    Long newTtl = ttls.get(store.getTableName());
    Integer newVersions = versions.get(store.getTableName());
    ScanInfo oldSI = store.getScanInfo();
    HColumnDescriptor family = store.getFamily();
    ScanInfo scanInfo = new ScanInfo(TEST_UTIL.getConfiguration(),
        family.getName(), family.getMinVersions(),
        newVersions == null ? family.getMaxVersions() : newVersions,
        newTtl == null ? oldSI.getTtl() : newTtl, family.getKeepDeletedCells(),
        oldSI.getTimeToPurgeDeletes(), oldSI.getComparator());
    return new StoreScanner(store, scanInfo, scan, targetCols,
        ((HStore) store).getHRegion().getReadpoint(IsolationLevel.READ_COMMITTED));
  } else {
    return s;
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 22, Source: TestCoprocessorScanPolicy.java
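
For context, a RegionObserver like the one in these three examples is typically wired to a table through its descriptor. A minimal sketch, assuming an org.apache.hadoop.hbase.client.Admin handle and the hypothetical ScanObserver class from the context sketch above:

import java.io.IOException;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

void createObservedTable(Admin admin) throws IOException {
  HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("demo_table"));
  desc.addFamily(new HColumnDescriptor("cf"));
  // Register the observer so its pre*ScannerOpen hooks fire for this table.
  desc.addCoprocessor(ScanObserver.class.getName());
  admin.createTable(desc);
}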

Example 4: checkCompactionPolicy

import org.apache.hadoop.hbase.HColumnDescriptor; // import the package/class the method depends on
private void checkCompactionPolicy(Configuration conf, HTableDescriptor htd)
    throws IOException {
  // FIFO compaction imposes some requirements on the table.
  // Note that FIFOCompactionPolicy itself ignores periodic major compactions.
  String className =
      htd.getConfigurationValue(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY);
  if (className == null) {
    className =
        conf.get(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY,
          ExploringCompactionPolicy.class.getName());
  }

  int blockingFileCount = HStore.DEFAULT_BLOCKING_STOREFILE_COUNT;
  String sv = htd.getConfigurationValue(HStore.BLOCKING_STOREFILES_KEY);
  if (sv != null) {
    blockingFileCount = Integer.parseInt(sv);
  } else {
    blockingFileCount = conf.getInt(HStore.BLOCKING_STOREFILES_KEY, blockingFileCount);
  }

  for (HColumnDescriptor hcd : htd.getColumnFamilies()) {
    String compactionPolicy =
        hcd.getConfigurationValue(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY);
    if (compactionPolicy == null) {
      compactionPolicy = className;
    }
    if (!compactionPolicy.equals(FIFOCompactionPolicy.class.getName())) {
      continue;
    }
    // FIFOCompaction
    String message = null;

    // 1. Check TTL
    if (hcd.getTimeToLive() == HColumnDescriptor.DEFAULT_TTL) {
      message = "Default TTL is not supported for FIFO compaction";
      throw new IOException(message);
    }

    // 2. Check min versions
    if (hcd.getMinVersions() > 0) {
      message = "MIN_VERSION > 0 is not supported for FIFO compaction";
      throw new IOException(message);
    }

    // 3. blocking file count
    String sbfc = htd.getConfigurationValue(HStore.BLOCKING_STOREFILES_KEY);
    if (sbfc != null) {
      blockingFileCount = Integer.parseInt(sbfc);
    }
    if (blockingFileCount < 1000) {
      message =
          "blocking file count '" + HStore.BLOCKING_STOREFILES_KEY + "' " + blockingFileCount
              + " is below recommended minimum of 1000";
      throw new IOException(message);
    }
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 58, Source: HMaster.java
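
As a usage note, a table that passes these checks needs a non-default TTL, MIN_VERSIONS of 0, and a blocking store-file count of at least 1000. A minimal sketch using only the configuration keys that already appear in the example above; the table and family names are illustrative:

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.regionserver.DefaultStoreEngine;
import org.apache.hadoop.hbase.regionserver.HStore;
import org.apache.hadoop.hbase.regionserver.compactions.FIFOCompactionPolicy;

static HTableDescriptor fifoCompactedTable() {
  HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("metrics"));
  HColumnDescriptor hcd = new HColumnDescriptor("cf");
  hcd.setTimeToLive(7 * 24 * 3600); // check 1: a non-default TTL is mandatory
  hcd.setMinVersions(0);            // check 2: getMinVersions() must stay 0
  htd.addFamily(hcd);
  htd.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY,
      FIFOCompactionPolicy.class.getName());
  htd.setConfiguration(HStore.BLOCKING_STOREFILES_KEY, "1000"); // check 3: >= 1000
  return htd;
}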

Example 5: ScanInfo

import org.apache.hadoop.hbase.HColumnDescriptor; // import the package/class the method depends on
/**
 * Convenience constructor that copies the family name, min/max versions and
 * keep-deleted-cells setting from the column family descriptor.
 * @param conf               the {@link Configuration} to use
 * @param family             {@link HColumnDescriptor} describing the column family
 * @param ttl                the store's TTL (in ms)
 * @param timeToPurgeDeletes duration in ms after which a delete marker can
 *                           be purged during a major compaction
 * @param comparator         the store's comparator
 */
public ScanInfo(final Configuration conf, final HColumnDescriptor family, final long ttl,
    final long timeToPurgeDeletes, final KVComparator comparator) {
  this(conf, family.getName(), family.getMinVersions(), family.getMaxVersions(), ttl,
      family.getKeepDeletedCells(), timeToPurgeDeletes, comparator);
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 14, Source: ScanInfo.java
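
A brief usage sketch of this constructor, converting the family's TTL from the seconds HColumnDescriptor stores to the milliseconds the constructor expects. KeyValue.COMPARATOR and the zero purge delay are illustrative choices, not taken from the project above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.regionserver.ScanInfo;

static ScanInfo scanInfoFor(HColumnDescriptor family) {
  Configuration conf = HBaseConfiguration.create();
  // HColumnDescriptor keeps TTL in seconds; this constructor expects ms.
  long ttlMs = family.getTimeToLive() * 1000L;
  // 0L: allow delete markers to be purged at the next major compaction.
  return new ScanInfo(conf, family, ttlMs, 0L, KeyValue.COMPARATOR);
}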


Note: the org.apache.hadoop.hbase.HColumnDescriptor.getMinVersions method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by various developers; copyright in the source code remains with its original authors, and distribution and use should follow the corresponding project's License. Do not reproduce without permission.