

Java Scan.getStopRow Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.client.Scan.getStopRow. If you are unsure what Scan.getStopRow does or how to use it, the selected code examples below may help. You can also explore further usage examples of org.apache.hadoop.hbase.client.Scan, the class this method belongs to.
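Before the full examples, here is a minimal sketch of how getStopRow pairs with setStopRow on a Scan object; the row-key values and the class name GetStopRowDemo are hypothetical, chosen only for illustration:

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class GetStopRowDemo {
  public static void main(String[] args) {
    // Scan over the half-open row range [row-0001, row-0100)
    Scan scan = new Scan();
    scan.setStartRow(Bytes.toBytes("row-0001"));
    scan.setStopRow(Bytes.toBytes("row-0100"));

    // getStopRow() returns the exclusive upper bound that was set,
    // or an empty byte array when no stop row was configured.
    byte[] stopRow = scan.getStopRow();
    boolean unbounded = Bytes.compareTo(stopRow, HConstants.EMPTY_BYTE_ARRAY) == 0;
    System.out.println("stopRow = " + Bytes.toString(stopRow) + ", unbounded = " + unbounded);
  }
}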


The following presents 2 code examples of the Scan.getStopRow method, sorted by popularity by default.

Example 1: StoreIndexScanner

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
public StoreIndexScanner(Store store, List<KeyValueScanner> scanners, KVComparator comparator,
    IndexKVComparator indexComparator, Range range, Scan scan, Set<ByteArray> joinSet,
    boolean isAND) throws IOException {
  // winter scanner is always 1? in my test it is 1 indeed
  this.store = store;
  this.joinSet = joinSet;
  this.isAND = isAND;
  this.memstoreScanner = scanners;
  this.comparator = comparator;
  this.indexComparator = indexComparator;
  this.range = range;
  this.isGet = scan.isGetScan();
  this.cacheBlocks = scan.getCacheBlocks();
  if (isAND) {
    this.isEmptySet = this.joinSet.isEmpty();
    this.indexSet = new HashSet<ByteArray>(10000);
  }
  this.startRow = scan.getStartRow();
  this.startKV = KeyValue.createFirstOnRow(startRow);
  this.stopRow =
      Bytes.compareTo(scan.getStopRow(), HConstants.EMPTY_BYTE_ARRAY) == 0 ? null : scan
          .getStopRow();
  this.stopKV =
      Bytes.compareTo(scan.getStopRow(), HConstants.EMPTY_BYTE_ARRAY) == 0 ? null : KeyValue
          .createLastOnRow(scan.getStopRow());
  this.stopRowCmpValue = scan.isGetScan() ? -1 : 0;

  if (range.getStartValue() != null) {
    switch (range.getStartType()) {
    case EQUAL:
      startIKV =
          IndexKeyValue.createFirstOnQualifier(range.getQualifier(), range.getStartValue());
      stopIKV = startIKV;
      stopIKVCmpValue = -1;
      break;
    case GREATER_OR_EQUAL:
      startIKV =
          IndexKeyValue.createFirstOnQualifier(range.getQualifier(), range.getStartValue());
      stopIKV = null;
      stopIKVCmpValue = 0;
      break;
    case GREATER:
      startIKV = IndexKeyValue.createLastOnQualifier(range.getQualifier(), range.getStartValue());
      stopIKV = null;
      stopIKVCmpValue = 0;
      break;
    default:
      throw new IOException("Invalid Range:" + range);
    }
  } else {
    startIKV = IndexKeyValue.createFirstOnQualifier(range.getQualifier());
    stopIKV = null;
  }

  if (range.getStopValue() != null) {
    switch (range.getStopType()) {
    case LESS:
      stopIKV = IndexKeyValue.createFirstOnQualifier(range.getQualifier(), range.getStopValue());
      stopIKVCmpValue = 0;
      break;
    case LESS_OR_EQUAL:
      stopIKV = IndexKeyValue.createFirstOnQualifier(range.getQualifier(), range.getStopValue());
      stopIKVCmpValue = -1;
      break;
    default:
      throw new IOException("Invalid Range:" + range);
    }
  }
  this.needToRefresh = false;
  getScanners();
}
 
Developer: fengchen8086, Project: ditb, Lines of code: 72, Source file: StoreIndexScanner.java
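The repeated ternary in example 1 checks whether getStopRow returned the empty byte array (meaning the scan has no upper bound) before building the stop KeyValue. A small helper, shown here only as an illustrative sketch (the method name nullIfEmptyStopRow is hypothetical, not part of the project), makes that intent explicit:

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical helper: returns null when the scan is unbounded on the right,
// otherwise the exclusive stop row as set on the Scan.
static byte[] nullIfEmptyStopRow(Scan scan) {
  byte[] stopRow = scan.getStopRow();
  return Bytes.compareTo(stopRow, HConstants.EMPTY_BYTE_ARRAY) == 0 ? null : stopRow;
}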

Example 2: ScanQueryMatcher

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
/**
 * Construct a QueryMatcher for a scan
 * @param scan the scan specification
 * @param scanInfo The store's immutable scan info
 * @param columns the set of columns to include, or null/empty for all columns in the family
 * @param scanType Type of the scan
 * @param readPointToUse the read point (MVCC) up to which versions are tracked
 * @param earliestPutTs Earliest put seen in any of the store files.
 * @param oldestUnexpiredTS the oldest timestamp we are interested in, based on TTL
 * @param now the current time, used for TTL and delete-purging decisions
 * @param regionCoprocessorHost the region's coprocessor host
 * @throws IOException
 */
public ScanQueryMatcher(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns,
    ScanType scanType, long readPointToUse, long earliestPutTs, long oldestUnexpiredTS,
    long now, RegionCoprocessorHost regionCoprocessorHost) throws IOException {
  TimeRange timeRange = scan.getColumnFamilyTimeRange().get(scanInfo.getFamily());
  if (timeRange == null) {
    this.tr = scan.getTimeRange();
  } else {
    this.tr = timeRange;
  }
  this.rowComparator = scanInfo.getComparator();
  this.regionCoprocessorHost = regionCoprocessorHost;
  this.deletes =  instantiateDeleteTracker();
  this.stopRow = scan.getStopRow();
  this.startKey = KeyValueUtil.createFirstDeleteFamilyOnRow(scan.getStartRow(),
      scanInfo.getFamily());
  this.filter = scan.getFilter();
  this.earliestPutTs = earliestPutTs;
  this.oldestUnexpiredTS = oldestUnexpiredTS;
  this.now = now;

  this.maxReadPointToTrackVersions = readPointToUse;
  this.timeToPurgeDeletes = scanInfo.getTimeToPurgeDeletes();
  this.ttl = oldestUnexpiredTS;

  /* how to deal with deletes */
  this.isUserScan = scanType == ScanType.USER_SCAN;
  // keep deleted cells: if compaction or raw scan
  this.keepDeletedCells = scan.isRaw() ? KeepDeletedCells.TRUE :
    isUserScan ? KeepDeletedCells.FALSE : scanInfo.getKeepDeletedCells();
  // retain deletes: if minor compaction or raw scan
  this.retainDeletesInOutput = scanType == ScanType.COMPACT_RETAIN_DELETES || scan.isRaw();
  // seePastDeleteMarker: user initiated scans
  this.seePastDeleteMarkers =
      scanInfo.getKeepDeletedCells() != KeepDeletedCells.FALSE && isUserScan;

  int maxVersions =
      scan.isRaw() ? scan.getMaxVersions() : Math.min(scan.getMaxVersions(),
        scanInfo.getMaxVersions());

  // Single branch to deal with two types of reads (columns vs all in family)
  if (columns == null || columns.size() == 0) {
    // there is always a null column in the wildcard column query.
    hasNullColumn = true;

    // use a specialized scan for wildcard column tracker.
    this.columns = new ScanWildcardColumnTracker(
        scanInfo.getMinVersions(), maxVersions, oldestUnexpiredTS);
  } else {
    // whether there is null column in the explicit column query
    hasNullColumn = (columns.first().length == 0);

    // We can share the ExplicitColumnTracker, diff is we reset
    // between rows, not between storefiles.
    this.columns = new ExplicitColumnTracker(columns, scanInfo.getMinVersions(), maxVersions,
        oldestUnexpiredTS);
  }
  this.isReversed = scan.isReversed();
}
 
Developer: fengchen8086, Project: ditb, Lines of code: 71, Source file: ScanQueryMatcher.java
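In example 2 the matcher stores scan.getStopRow() so that, while scanning, the current row can be compared against the upper bound of the scan. A minimal sketch of that boundary check follows; the method name moreRowsMayExist and its signature are assumptions for illustration, not the actual ScanQueryMatcher API:

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical boundary check: a row is still in range when no stop row is set
// (empty byte array) or when the row sorts strictly before the exclusive stop row.
static boolean moreRowsMayExist(byte[] currentRow, byte[] stopRow) {
  if (Bytes.compareTo(stopRow, HConstants.EMPTY_BYTE_ARRAY) == 0) {
    return true; // unbounded scan
  }
  return Bytes.compareTo(currentRow, stopRow) < 0;
}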


Note: The org.apache.hadoop.hbase.client.Scan.getStopRow method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are selected from open-source projects contributed by their respective developers; copyright of the source code remains with the original authors, and distribution or use should follow each project's License. Do not reproduce without permission.