

Java Scan.setCaching Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.client.Scan.setCaching. If you are unsure what Scan.setCaching does, how to call it, or what it looks like in practice, the curated examples below should help. You can also explore the broader usage of the org.apache.hadoop.hbase.client.Scan class.


The following presents 15 code examples of the Scan.setCaching method, drawn from open-source projects and ordered roughly by popularity.
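Before the examples, a minimal, self-contained sketch of the method in a plain table scan may help. It is not taken from any of the projects below; the table name is a placeholder, and a reachable HBase cluster with default client configuration is assumed. setCaching controls how many rows the client fetches per RPC: larger values mean fewer round trips at the cost of more client-side memory.

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class ScanCachingSketch {
    public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("demo_table"))) { // placeholder table name
            Scan scan = new Scan();
            // Fetch up to 500 rows per RPC instead of the configured default;
            // larger values mean fewer round trips but more client-side memory.
            scan.setCaching(500);
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    System.out.println(r);
                }
            }
        }
    }
}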

Example 1: rowFilter

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
/**
 * Use a row filter to select rows greater than the given row key.
 *
 * @param tableName table name
 * @param rowKey    row key
 * @param count     number of rows to fetch
 */
public void rowFilter(String tableName, String rowKey, int count) {
    HBaseConfiguration hBaseConfiguration = new HBaseConfiguration();
    Table table = hBaseConfiguration.table(tableName);
    Scan scan = new Scan();
    // Row filter: select rows greater than the given row key
    //scan.setFilter(new RowFilter(CompareFilter.CompareOp.GREATER, new BinaryComparator(Bytes.toBytes(rowKey))));// raw row key comparison
    //scan.setFilter(new RowFilter(CompareFilter.CompareOp.GREATER_OR_EQUAL, new RegexStringComparator("row.*")));// regular expression
    //scan.setFilter(new RowFilter(CompareFilter.CompareOp.GREATER_OR_EQUAL, new SubstringComparator("row")));// substring match
    scan.setFilter(new RowFilter(CompareFilter.CompareOp.GREATER_OR_EQUAL, new BinaryPrefixComparator("row".getBytes())));// binary prefix
    scan.setCaching(10);
    scan.setBatch(10);
    try {
        ResultScanner scanner = table.getScanner(scan);
        Result[] results = scanner.next(count);
        HBaseResultUtil.print(results);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
 
Developer ID: mumuhadoop, Project: mumu-hbase, Lines: 27, Source: HBaseFilterOperation.java
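A note on the two knobs used together above: setCaching(10) controls how many rows are transferred from the server per RPC, while setBatch(10) caps how many cells a single Result may contain, splitting wide rows across several Results. The same pairing recurs in the filter examples that follow.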

Example 2: familyFilter

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
/**
 * Use a family filter to select column families.
 *
 * @param tableName table name
 * @param rowFamily column family
 * @param count     number of rows to fetch
 */
public void familyFilter(String tableName, String rowFamily, int count) {
    HBaseConfiguration hBaseConfiguration = new HBaseConfiguration();
    Table table = hBaseConfiguration.table(tableName);
    Scan scan = new Scan();
    // Use a column family filter
    //scan.setFilter(new FamilyFilter(CompareFilter.CompareOp.GREATER, new BinaryComparator(Bytes.toBytes(rowFamily))));// binary comparison
    //scan.setFilter(new FamilyFilter(CompareFilter.CompareOp.GREATER_OR_EQUAL, new RegexStringComparator("row.*")));// regular expression
    //scan.setFilter(new FamilyFilter(CompareFilter.CompareOp.GREATER_OR_EQUAL, new SubstringComparator("row")));// substring match
    scan.setFilter(new FamilyFilter(CompareFilter.CompareOp.GREATER_OR_EQUAL, new BinaryPrefixComparator("mm".getBytes())));// binary prefix
    scan.setCaching(10);
    scan.setBatch(10);
    try {
        ResultScanner scanner = table.getScanner(scan);
        Result[] results = scanner.next(count);
        HBaseResultUtil.print(results);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
 
Developer ID: mumuhadoop, Project: mumu-hbase, Lines: 27, Source: HBaseFilterOperation.java

Example 3: qualifierFilter

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
/**
 * Use a qualifier filter to select columns.
 *
 * @param tableName  table name
 * @param columnName column qualifier
 * @param count      number of rows to fetch
 */
public void qualifierFilter(String tableName, String columnName, int count) {
    HBaseConfiguration hBaseConfiguration = new HBaseConfiguration();
    Table table = hBaseConfiguration.table(tableName);
    Scan scan = new Scan();
    // Use a column qualifier filter
    scan.setFilter(new QualifierFilter(CompareFilter.CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes(columnName))));// binary comparison
    //scan.setFilter(new QualifierFilter(CompareFilter.CompareOp.EQUAL, new RegexStringComparator("row.*")));// regular expression
    //scan.setFilter(new QualifierFilter(CompareFilter.CompareOp.EQUAL, new SubstringComparator("row")));// substring match
    //scan.setFilter(new QualifierFilter(CompareFilter.CompareOp.EQUAL, new BinaryPrefixComparator("m".getBytes())));// binary prefix
    scan.setCaching(10);
    scan.setBatch(10);
    try {
        ResultScanner scanner = table.getScanner(scan);
        Result[] results = scanner.next(count);
        HBaseResultUtil.print(results);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
 
Developer ID: mumuhadoop, Project: mumu-hbase, Lines: 27, Source: HBaseFilterOperation.java

Example 4: dependentColumnFilter

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
/**
 * Dependent column filter (returns only cells whose timestamp matches the reference column's).
 *
 * @param tableName    table name
 * @param columnFamily column family of the reference column
 * @param qualifier    qualifier of the reference column
 * @param columnValue  expected value of the reference column
 * @param count        number of rows to fetch
 */
public void dependentColumnFilter(String tableName, String columnFamily, String qualifier, String columnValue, int count) {
    HBaseConfiguration hBaseConfiguration = new HBaseConfiguration();
    Table table = hBaseConfiguration.table(tableName);
    Scan scan = new Scan();
    // Keep only cells that share the reference column's timestamp
    scan.setFilter(new DependentColumnFilter(Bytes.toBytes(columnFamily), Bytes.toBytes(qualifier),
            false, CompareFilter.CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes(columnValue))));
    scan.setCaching(10);
    scan.setBatch(10);
    try {
        ResultScanner scanner = table.getScanner(scan);
        Result[] results = scanner.next(count);
        HBaseResultUtil.print(results);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
 
Developer ID: mumuhadoop, Project: mumu-hbase, Lines: 25, Source: HBaseFilterOperation.java

Example 5: SingleColumnValueExcludeFilter

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
/**
 * Single column value exclude filter (the reference column itself is omitted from the returned cells).
 *
 * @param tableName    table name
 * @param columnFamily column family of the reference column
 * @param qualifier    qualifier of the reference column
 * @param columnValue  value to match
 * @param count        number of rows to fetch
 */
public void SingleColumnValueExcludeFilter(String tableName, String columnFamily, String qualifier, String columnValue, int count) {
    HBaseConfiguration hBaseConfiguration = new HBaseConfiguration();
    Table table = hBaseConfiguration.table(tableName);
    Scan scan = new Scan();
    SingleColumnValueExcludeFilter singleColumnValueFilter = new SingleColumnValueExcludeFilter(Bytes.toBytes(columnFamily), Bytes.toBytes(qualifier), CompareFilter.CompareOp.EQUAL, Bytes.toBytes(columnValue));
    //singleColumnValueFilter.setFilterIfMissing(true);// rows lacking the reference column pass through by default; uncomment to drop them
    singleColumnValueFilter.setLatestVersionOnly(true);// compare only the latest version of the column
    scan.setFilter(singleColumnValueFilter);
    scan.setCaching(10);
    //scan.setBatch(10);// batching cannot be combined with a filter that filters whole rows (would throw IncompatibleFilterException)
    try {
        ResultScanner scanner = table.getScanner(scan);
        Result[] results = scanner.next(count);
        HBaseResultUtil.print(results);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
 
Developer ID: mumuhadoop, Project: mumu-hbase, Lines: 28, Source: HBaseFilterOperation.java

Example 6: getRowOrBefore

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
public Result getRowOrBefore(Table table, byte[] row, byte[] family) throws IOException {
  long start = System.currentTimeMillis();
  Scan scan = new Scan();
  scan.addFamily(family);
  scan.setReversed(true); // scan backwards so the first result is 'row' itself or the closest row before it
  scan.setStartRow(row);
  scan.setCacheBlocks(false);
  scan.setCaching(1); // only a single row is wanted, so fetch one row per RPC
  scan.setSmall(true); // hint that this short scan can be served in a single RPC
  ResultScanner scanner = table.getScanner(scan);
  Result ret = scanner.next();
  scanner.close();
  prevRowTotalTime += System.currentTimeMillis() - start;
  prevRowTotalCount++;
  return ret;
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 17, Source: MDIndex.java
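This helper appears to emulate the old HTable.getRowOrBefore API using a reversed scan: scanning backwards from row returns that row or the closest one before it. setCaching(1) together with setSmall(true) keeps the lookup to a single row served in one lightweight RPC, while the prevRowTotal* fields accumulate simple latency statistics.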

Example 7: testHeartbeatBetweenRows

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
/**
 * Test the case that the time limit for the scan is reached after each full row of cells is
 * fetched.
 * @throws Exception
 */
public Callable<Void> testHeartbeatBetweenRows() throws Exception {
  return new Callable<Void>() {

    @Override
    public Void call() throws Exception {
      // Configure the scan so that it can read the entire table in a single RPC. We want to test
      // the case where a scan stops on the server side due to a time limit
      Scan scan = new Scan();
      scan.setMaxResultSize(Long.MAX_VALUE);
      scan.setCaching(Integer.MAX_VALUE);

      testEquivalenceOfScanWithHeartbeats(scan, DEFAULT_ROW_SLEEP_TIME, -1, false);
      return null;
    }
  };
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 22, Source: TestScannerHeartbeatMessages.java
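Raising both the max result size and the caching to their maximum removes the size and row-count limits on each RPC, so the only remaining way for the server to end a response early is its time limit, which is precisely the condition this heartbeat test needs to provoke.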

Example 8: getNextScanner

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
private ResultScanner getNextScanner() throws IOException {
  if (INIT_REGION_SIZE != getRegionNumber()) {
    throw new IOException(
        "region number changed from " + INIT_REGION_SIZE + " to " + getRegionNumber());
  }
  if (regionLocationQueue.isEmpty()) return null;
  HRegionLocation regionLocation = regionLocationQueue.poll();

  Scan newScan = new Scan(rawScan);
  byte[] key = regionLocation.getRegionInfo().getStartKey();
  if (key != null && key.length > 0) newScan.setStartRow(key);
  key = regionLocation.getRegionInfo().getEndKey();
  if (key != null && key.length > 0) newScan.setStopRow(key);
  newScan.setAttribute(IndexConstants.SCAN_WITH_INDEX, Bytes.toBytes("Hi"));
  newScan.setId(rawScan.getId());
  newScan.setCacheBlocks(rawScan.getCacheBlocks());
  newScan.setCaching(rawScan.getCaching());
  return table.getScanner(newScan);
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 20, Source: LocalScanner.java

Example 9: constructScan

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
protected Scan constructScan(byte[] valuePrefix) throws IOException {
  FilterList list = new FilterList();
  Filter filter = new SingleColumnValueFilter(
      FAMILY_NAME, COLUMN_ZERO, CompareFilter.CompareOp.EQUAL,
      new BinaryComparator(valuePrefix)
  );
  list.addFilter(filter);
  if(opts.filterAll) {
    list.addFilter(new FilterAllFilter());
  }
  Scan scan = new Scan();
  scan.setCaching(opts.caching);
  if (opts.addColumns) {
    scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
  } else {
    scan.addFamily(FAMILY_NAME);
  }
  scan.setFilter(list);
  return scan;
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 21, Source: PerformanceEvaluation.java

Example 10: doVerify

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
private int doVerify(Path outputDir, int numReducers) throws IOException, InterruptedException,
    ClassNotFoundException {
  job = new Job(getConf());

  job.setJobName("Link Verifier");
  job.setNumReduceTasks(numReducers);
  job.setJarByClass(getClass());

  setJobScannerConf(job);

  Scan scan = new Scan();
  scan.addColumn(FAMILY_NAME, COLUMN_PREV);
  scan.setCaching(10000);
  scan.setCacheBlocks(false);
  String[] split = labels.split(COMMA);

  scan.setAuthorizations(new Authorizations(split[this.labelIndex * 2],
      split[(this.labelIndex * 2) + 1]));

  TableMapReduceUtil.initTableMapperJob(tableName.getName(), scan, VerifyMapper.class,
      BytesWritable.class, BytesWritable.class, job);
  TableMapReduceUtil.addDependencyJars(job.getConfiguration(), AbstractHBaseTool.class);

  job.getConfiguration().setBoolean("mapreduce.map.speculative", false);

  job.setReducerClass(VerifyReducer.class);
  job.setOutputFormatClass(TextOutputFormat.class);
  TextOutputFormat.setOutputPath(job, outputDir);
  boolean success = job.waitForCompletion(true);

  return success ? 0 : 1;
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 33, Source: IntegrationTestBigLinkedListWithVisibility.java

Example 11: Iter

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
Iter() {
  try {
    Scan scan = new Scan(tableNameStartKey, tableNameStopKey);
    scan.addColumn(FAMILY, QUALIFIER);
    scan.setCaching(config.getMaxIteratorSize() > 100 ? 100 : config.getMaxIteratorSize());
    scanner = table.getScanner(scan);
  } catch (IOException e) {
    throw new DrillRuntimeException("Caught error while creating HBase scanner for table:" + Bytes.toString(table.getTableName()), e);
  }
}
 
Developer ID: skhalifa, Project: QDrill, Lines: 11, Source: HBasePStore.java

Example 12: testBatchingResultWhenRegionMove

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
@Test
public void testBatchingResultWhenRegionMove() throws IOException {
  Table table =
      createTestTable(TableName.valueOf("testBatchingResultWhenRegionMove"), ROWS, FAMILIES,
          QUALIFIERS, VALUE);

  moveRegion(table, 1);

  Scan scan = new Scan();
  scan.setCaching(1);
  scan.setBatch(1);

  ResultScanner scanner = table.getScanner(scan);
  for (int i = 0; i < NUM_FAMILIES * NUM_QUALIFIERS - 1; i++) {
    scanner.next();
  }
  Result result1 = scanner.next();
  assertEquals(1, result1.rawCells().length);
  Cell c1 = result1.rawCells()[0];
  assertCell(c1, ROWS[0], FAMILIES[NUM_FAMILIES - 1], QUALIFIERS[NUM_QUALIFIERS - 1]);

  moveRegion(table, 2);

  Result result2 = scanner.next();
  assertEquals(1, result2.rawCells().length);
  Cell c2 = result2.rawCells()[0];
  assertCell(c2, ROWS[1], FAMILIES[0], QUALIFIERS[0]);

  moveRegion(table, 3);

  Result result3 = scanner.next();
  assertEquals(1, result3.rawCells().length);
  Cell c3 = result3.rawCells()[0];
  assertCell(c3, ROWS[1], FAMILIES[0], QUALIFIERS[1]);
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 36, Source: TestPartialResultsFromClientSide.java

Example 13: createGCScanner

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
private GCScanner createGCScanner(ScanRange selectedRange) throws IOException {
  List<ScanRange> list = new ArrayList<>(rangeList.getRanges());
  list.remove(selectedRange);
  Scan scan = new Scan();
  scan.setStartRow(selectedRange.getStart());
  scan.setStopRow(selectedRange.getStop());
  scan.setCaching(rawScan.getCaching());
  scan.setCacheBlocks(rawScan.getCacheBlocks());
  scan.setFilter(new ScanRange.ScanRangeList(list).toFilterList());
  Table table = conn.getTable(
      relation.getIndexTableName(selectedRange.getFamily(), selectedRange.getQualifier()));
  ResultScanner scanner = table.getScanner(scan);
  return new GCScanner(this, scanner, selectedRange.getFamily(), selectedRange.getQualifier());
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 15, Source: UDGScanner.java

Example 14: testPartialResultsAndCaching

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
/**
 * @param resultSizeRowLimit The row limit that will be enforced through maxResultSize
 * @param cachingRowLimit The row limit that will be enforced through caching
 * @throws Exception
 */
public void testPartialResultsAndCaching(int resultSizeRowLimit, int cachingRowLimit)
    throws Exception {
  Scan scan = new Scan();
  scan.setAllowPartialResults(true);

  // The number of cells specified in the call to getResultSizeForNumberOfCells is offset to
  // ensure that the result size we specify is not an exact multiple of the number of cells
  // in a row. This ensures that partial results will be returned when the result size limit
  // is reached before the caching limit.
  int cellOffset = NUM_COLS / 3;
  long maxResultSize = getResultSizeForNumberOfCells(resultSizeRowLimit * NUM_COLS + cellOffset);
  scan.setMaxResultSize(maxResultSize);
  scan.setCaching(cachingRowLimit);

  ResultScanner scanner = TABLE.getScanner(scan);
  ClientScanner clientScanner = (ClientScanner) scanner;
  Result r = null;

  // Approximate the number of rows we expect will fit into the specified max result size. If this
  // approximation is less than caching, then we expect that the max result size limit will be
  // hit before the caching limit and thus partial results may be seen
  boolean expectToSeePartialResults = resultSizeRowLimit < cachingRowLimit;
  while ((r = clientScanner.next()) != null) {
    assertTrue(!r.isPartial() || expectToSeePartialResults);
  }

  scanner.close();
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 34, Source: TestPartialResultsFromClientSide.java

Example 15: executeScan

import org.apache.hadoop.hbase.client.Scan; // import the package/class this method depends on
private OpResult executeScan() throws IOException, ParseException {
  if (!hasScan()) {
    return new OpResult("scan not supported", 1, 1);
  }
  Table table = conn.getTable(opTblName);
  BufferedReader br = new BufferedReader(new FileReader(scanFilePath));
  String line;
  long totalTime = 0;
  int counter = 0;
  Result[] results;
  while ((line = br.readLine()) != null) {
    Scan scan = new Scan(getIndexTableScanStartKey(line));
    scan.setCaching(workload.getScanCacheSize());
    scan.setCacheBlocks(false);
    long startTime = System.currentTimeMillis();
    ResultScanner scanner = table.getScanner(scan);
    int wantedRecords = sizeScanCovering;
    while (true) {
      results = scanner.next(Math.min(wantedRecords, workload.getScanCacheSize()));
      if (results == null || results.length == 0) break;
      for (Result result : results) {
        int k = recordsInOneResult(result);
        wantedRecords -= k;
        counter += k;
      }
      if (wantedRecords <= 0) break;
    }
    scanner.close();
    totalTime += System.currentTimeMillis() - startTime;
  }
  OpResult ret = new OpResult("scan", counter, totalTime);
  br.close();
  table.close();
  return ret;
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 36, Source: PerfScanBase.java
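To close, a rough way to observe the method's effect is to time a full table scan at different caching levels. The sketch below is not from any of the projects above; the helper name is made up, and an open Table handle is assumed, as in the examples.

private static long timeScan(Table table, int caching) throws IOException {
    Scan scan = new Scan();
    scan.setCaching(caching);
    scan.setCacheBlocks(false); // keep the measurement from warming the block cache
    long start = System.currentTimeMillis();
    try (ResultScanner scanner = table.getScanner(scan)) {
        while (scanner.next() != null) {
            // drain the scanner; the cost being measured is in the RPC round trips
        }
    }
    return System.currentTimeMillis() - start;
}

Comparing timeScan(table, 1) against timeScan(table, 1000) on any reasonably large table makes the per-RPC overhead directly visible.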


Note: The org.apache.hadoop.hbase.client.Scan.setCaching examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright in the source code remains with the original authors, and distribution and use should follow the corresponding project's license. Do not republish without permission.