

Java Scan.isRaw Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.client.Scan.isRaw. If you are unsure what Scan.isRaw does, how to call it, or what it looks like in real code, the curated examples below should help. You can also explore other usages of the enclosing class, org.apache.hadoop.hbase.client.Scan.


Two code examples of the Scan.isRaw method are shown below, ordered by popularity.
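Before the server-side examples, here is a minimal client-side sketch of how the raw flag read by Scan.isRaw is set and queried. It uses the standard HBase 1.x client API; the table name "demo_table" is a hypothetical placeholder.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class RawScanDemo {
  public static void main(String[] args) throws Exception {
    Scan scan = new Scan();
    scan.setRaw(true);      // raw scans return delete markers and not-yet-compacted deleted cells
    scan.setMaxVersions();  // raw scans are typically paired with all versions
    System.out.println(scan.isRaw()); // -> true

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("demo_table")); // hypothetical table
         ResultScanner rs = table.getScanner(scan)) {
      for (Result r : rs) {
        System.out.println(r);
      }
    }
  }
}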

Example 1: StoreScanner

import org.apache.hadoop.hbase.client.Scan; // required import for this example
/**
 * Opens a scanner across memstore, snapshot, and all StoreFiles. Assumes we
 * are not in a compaction.
 *
 * @param store    the store to scan
 * @param scanInfo the store's immutable scan metadata
 * @param scan     the scan specification
 * @param columns  which columns we are scanning
 * @param readPt   the MVCC read point to use for this scan
 * @throws IOException
 */
public StoreScanner(Store store, ScanInfo scanInfo, Scan scan, final NavigableSet<byte[]> columns,
    long readPt) throws IOException {
  this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
  if (columns != null && scan.isRaw()) {
    throw new DoNotRetryIOException("Cannot specify any column for a raw scan");
  }
  matcher = new ScanQueryMatcher(scan, scanInfo, columns, ScanType.USER_SCAN, Long.MAX_VALUE,
      HConstants.LATEST_TIMESTAMP, oldestUnexpiredTS, now, store.getCoprocessorHost());
  this.store.addChangedReaderObserver(this);
  // Pass columns to try to filter out unnecessary StoreFiles.
  List<KeyValueScanner> scanners = getScannersNoCompaction();
  // Seek all scanners to the start of the Row (or if the exact matching row
  // key does not exist, then to the start of the next matching Row).
  // Always check bloom filter to optimize the top row seek for delete
  // family marker.
  seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery && lazySeekEnabledGlobally,
      parallelSeekEnabled);
  // set storeLimit
  this.storeLimit = scan.getMaxResultsPerColumnFamily();
  // set rowOffset
  this.storeOffset = scan.getRowOffsetPerColumnFamily();
  // Combine all seeked scanners with a heap
  resetKVHeap(scanners, store.getComparator());
}
 
Developer: fengchen8086 | Project: ditb | Lines: 34 | Source: StoreScanner.java
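The constructor above rejects raw scans that name explicit columns. From the client side, that means a raw scan may select whole column families but not individual qualifiers. A sketch of both cases follows (the family and qualifier names are hypothetical):

import org.apache.hadoop.hbase.util.Bytes; // required import

Scan ok = new Scan();
ok.setRaw(true);
ok.addFamily(Bytes.toBytes("cf"));   // fine: the per-store column set stays null

Scan rejected = new Scan();
rejected.setRaw(true);
rejected.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q")); // explicit column
// opening a scanner on this fails server-side with
// DoNotRetryIOException("Cannot specify any column for a raw scan")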

Example 2: ScanQueryMatcher

import org.apache.hadoop.hbase.client.Scan; // required import for this example
/**
 * Construct a QueryMatcher for a scan
 * @param scan the scan specification
 * @param scanInfo The store's immutable scan info
 * @param columns the explicit columns requested, or null for a wildcard scan
 * @param scanType Type of the scan
 * @param readPointToUse the MVCC read point up to which versions are tracked
 * @param earliestPutTs Earliest put seen in any of the store files.
 * @param oldestUnexpiredTS the oldest timestamp we are interested in,
 *  based on TTL
 * @param now the current time, used for TTL checks
 * @param regionCoprocessorHost the region's coprocessor host, may be null
 * @throws IOException if the delete tracker cannot be instantiated
 */
public ScanQueryMatcher(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns,
    ScanType scanType, long readPointToUse, long earliestPutTs, long oldestUnexpiredTS,
    long now, RegionCoprocessorHost regionCoprocessorHost) throws IOException {
  TimeRange timeRange = scan.getColumnFamilyTimeRange().get(scanInfo.getFamily());
  if (timeRange == null) {
    this.tr = scan.getTimeRange();
  } else {
    this.tr = timeRange;
  }
  this.rowComparator = scanInfo.getComparator();
  this.regionCoprocessorHost = regionCoprocessorHost;
this.deletes = instantiateDeleteTracker();
  this.stopRow = scan.getStopRow();
  this.startKey = KeyValueUtil.createFirstDeleteFamilyOnRow(scan.getStartRow(),
      scanInfo.getFamily());
  this.filter = scan.getFilter();
  this.earliestPutTs = earliestPutTs;
  this.oldestUnexpiredTS = oldestUnexpiredTS;
  this.now = now;

  this.maxReadPointToTrackVersions = readPointToUse;
  this.timeToPurgeDeletes = scanInfo.getTimeToPurgeDeletes();
  this.ttl = oldestUnexpiredTS;

  /* how to deal with deletes */
  this.isUserScan = scanType == ScanType.USER_SCAN;
  // keep deleted cells: if compaction or raw scan
  this.keepDeletedCells = scan.isRaw() ? KeepDeletedCells.TRUE :
    isUserScan ? KeepDeletedCells.FALSE : scanInfo.getKeepDeletedCells();
  // retain deletes: if minor compaction or raw scan
  this.retainDeletesInOutput = scanType == ScanType.COMPACT_RETAIN_DELETES || scan.isRaw();
  // seePastDeleteMarker: user initiated scans
  this.seePastDeleteMarkers =
      scanInfo.getKeepDeletedCells() != KeepDeletedCells.FALSE && isUserScan;

  int maxVersions =
      scan.isRaw() ? scan.getMaxVersions() : Math.min(scan.getMaxVersions(),
        scanInfo.getMaxVersions());

  // Single branch to deal with two types of reads (columns vs all in family)
  if (columns == null || columns.size() == 0) {
    // there is always a null column in the wildcard column query.
    hasNullColumn = true;

    // use a specialized scan for wildcard column tracker.
    this.columns = new ScanWildcardColumnTracker(
        scanInfo.getMinVersions(), maxVersions, oldestUnexpiredTS);
  } else {
    // whether there is null column in the explicit column query
    hasNullColumn = (columns.first().length == 0);

    // We can share the ExplicitColumnTracker, diff is we reset
    // between rows, not between storefiles.
    this.columns = new ExplicitColumnTracker(columns, scanInfo.getMinVersions(), maxVersions,
        oldestUnexpiredTS);
  }
  this.isReversed = scan.isReversed();
}
 
Developer: fengchen8086 | Project: ditb | Lines: 71 | Source: ScanQueryMatcher.java
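To isolate the effect of Scan.isRaw in the constructor above, here is a standalone restatement of the two decisions it drives. These helper methods are illustrative only, not HBase API; they simply pull the inline expressions out into named functions.

// Illustrative helpers (not HBase API) restating the matcher's isRaw-dependent logic.
static KeepDeletedCells keepDeletedCells(Scan scan, boolean isUserScan, ScanInfo scanInfo) {
  if (scan.isRaw()) {
    return KeepDeletedCells.TRUE;            // raw scans always see deleted cells
  }
  return isUserScan ? KeepDeletedCells.FALSE // normal user scans never do
                    : scanInfo.getKeepDeletedCells(); // compactions follow the CF setting
}

static int effectiveMaxVersions(Scan scan, ScanInfo scanInfo) {
  // a raw scan ignores the column family's VERSIONS limit
  return scan.isRaw() ? scan.getMaxVersions()
                      : Math.min(scan.getMaxVersions(), scanInfo.getMaxVersions());
}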


Note: the org.apache.hadoop.hbase.client.Scan.isRaw examples above were compiled from open-source projects hosted on platforms such as GitHub and MSDocs. The code remains the property of its original authors; consult each project's license before reusing or redistributing it.