

Java Scan.setLoadColumnFamiliesOnDemand Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.client.Scan.setLoadColumnFamiliesOnDemand. If you are wondering what Scan.setLoadColumnFamiliesOnDemand does, when to use it, or how to call it, the curated code examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.hbase.client.Scan.


The following presents 2 code examples of Scan.setLoadColumnFamiliesOnDemand, sorted by popularity.
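Before the full examples, it may help to see what the flag buys you. When a scan's filter only examines an "essential" column family, enabling on-demand loading lets the region server defer reading the other ("joined") families until a row actually passes the filter. The sketch below is a plain-Java simulation of that semantics, not HBase code; the class, method, and family names (`LazyCfLoadingSketch`, `scan`, `essential`, `joined`) are illustrative assumptions, and a row is modeled as a simple family-to-value map.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative simulation of lazy column-family loading; not HBase API code.
public class LazyCfLoadingSketch {

    // Scans the rows, counting how many column-family "loads" occur.
    // onDemand == true mimics scan.setLoadColumnFamiliesOnDemand(true):
    // the joined family is read only for rows that pass the essential filter.
    static int scan(List<Map<String, byte[]>> rows, boolean onDemand,
                    String essentialCf, String joinedCf, byte[] flagYes,
                    List<Map<String, byte[]>> out) {
        int familiesLoaded = 0;
        for (Map<String, byte[]> row : rows) {
            // The essential family is always read: the filter needs it.
            byte[] essential = row.get(essentialCf);
            familiesLoaded++;
            boolean matches = essential != null && Arrays.equals(essential, flagYes);
            if (!onDemand) {
                // Eager mode: the joined family is loaded for every row,
                // even rows the filter will discard.
                familiesLoaded++;
                if (matches) out.add(row);
            } else if (matches) {
                // Lazy mode: the joined family is loaded only on a match.
                familiesLoaded++;
                out.add(row);
            }
        }
        return familiesLoaded;
    }

    public static void main(String[] args) {
        byte[] yes = {1};
        List<Map<String, byte[]>> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            Map<String, byte[]> r = new HashMap<>();
            r.put("essential", i % 5 == 0 ? yes : new byte[]{0}); // 2 of 10 match
            r.put("joined", new byte[]{(byte) i});
            rows.add(r);
        }
        List<Map<String, byte[]>> a = new ArrayList<>(), b = new ArrayList<>();
        int eager = scan(rows, false, "essential", "joined", yes, a);
        int lazy = scan(rows, true, "essential", "joined", yes, b);
        // Same result rows either way; lazy mode loads far fewer families
        // when the filter is selective (here: 20 loads vs. 12).
        System.out.println("eager loads=" + eager + ", lazy loads=" + lazy
            + ", rows=" + a.size() + "/" + b.size());
    }
}
```

The benefit grows with filter selectivity and with the size of the joined families; when most rows match, eager and lazy loading cost about the same.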

Example 1: runScanner

import org.apache.hadoop.hbase.client.Scan; // import the package/class the method depends on
private void runScanner(Table table, boolean slow) throws Exception {
  long time = System.nanoTime();
  Scan scan = new Scan();
  scan.addColumn(cf_essential, col_name);
  scan.addColumn(cf_joined, col_name);

  // Only rows whose essential column equals flag_yes pass the filter.
  SingleColumnValueFilter filter = new SingleColumnValueFilter(
      cf_essential, col_name, CompareFilter.CompareOp.EQUAL, flag_yes);
  filter.setFilterIfMissing(true);
  scan.setFilter(filter);
  // When slow == false, cf_joined is loaded on demand, i.e. only for
  // rows that pass the filter on cf_essential.
  scan.setLoadColumnFamiliesOnDemand(!slow);

  ResultScanner result_scanner = table.getScanner(scan);
  Result res;
  long rows_count = 0;
  while ((res = result_scanner.next()) != null) {
    rows_count++;
  }

  double timeSec = (System.nanoTime() - time) / 1000000000.0;
  result_scanner.close();
  LOG.info((slow ? "Slow" : "Joined") + " scanner finished in " + Double.toString(timeSec)
    + " seconds, got " + Long.toString(rows_count/2) + " rows");
}
 
Author: fengchen8086; Project: ditb; Lines: 25; Source: TestJoinedScanners.java

Example 2: testReadersAndWriters

import org.apache.hadoop.hbase.client.Scan; // import the package/class the method depends on
@Test
public void testReadersAndWriters() throws Exception {
  Configuration conf = util.getConfiguration();
  String timeoutKey = String.format(TIMEOUT_KEY, this.getClass().getSimpleName());
  long maxRuntime = conf.getLong(timeoutKey, DEFAULT_TIMEOUT_MINUTES);
  long serverCount = util.getHBaseClusterInterface().getClusterStatus().getServersSize();
  long keysToWrite = serverCount * KEYS_TO_WRITE_PER_SERVER;
  Table table = new HTable(conf, TABLE_NAME);

  // Create multi-threaded writer and start it. We write multiple columns/CFs and verify
  // their integrity, therefore multi-put is necessary.
  MultiThreadedWriter writer =
    new MultiThreadedWriter(dataGen, conf, TABLE_NAME);
  writer.setMultiPut(true);

  LOG.info("Starting writer; the number of keys to write is " + keysToWrite);
  // TODO : Need to see if tag support has to be given here in the integration test suite
  writer.start(1, keysToWrite, WRITER_THREADS);

  // Now, do scans.
  long now = EnvironmentEdgeManager.currentTime();
  long timeLimit = now + (maxRuntime * 60000);
  boolean isWriterDone = false;
  while (now < timeLimit && !isWriterDone) {
    LOG.info("Starting the scan; wrote approximately "
      + dataGen.getTotalNumberOfKeys() + " keys");
    isWriterDone = writer.isDone();
    if (isWriterDone) {
      LOG.info("Scanning full result, writer is done");
    }
    Scan scan = new Scan();
    for (byte[] cf : dataGen.getColumnFamilies()) {
      scan.addFamily(cf);
    }
    scan.setFilter(dataGen.getScanFilter());
    scan.setLoadColumnFamiliesOnDemand(true);
    // The number of keys we can expect from scan - lower bound (before scan).
    // Not a strict lower bound - writer knows nothing about filters, so we report
    // this from generator. Writer might have generated the value but not put it yet.
    long onesGennedBeforeScan = dataGen.getExpectedNumberOfKeys();
    long startTs = EnvironmentEdgeManager.currentTime();
    ResultScanner results = table.getScanner(scan);
    long resultCount = 0;
    Result result = null;
    // Verify and count the results.
    while ((result = results.next()) != null) {
      boolean isOk = writer.verifyResultAgainstDataGenerator(result, true, true);
      Assert.assertTrue("Failed to verify [" + Bytes.toString(result.getRow())+ "]", isOk);
      ++resultCount;
    }
    long timeTaken = EnvironmentEdgeManager.currentTime() - startTs;
    // Verify the result count.
    long onesGennedAfterScan = dataGen.getExpectedNumberOfKeys();
    Assert.assertTrue("Read " + resultCount + " keys when at most " + onesGennedAfterScan
      + " were generated ", onesGennedAfterScan >= resultCount);
    if (isWriterDone) {
      Assert.assertTrue("Read " + resultCount + " keys; the writer is done and "
        + onesGennedAfterScan + " keys were generated", onesGennedAfterScan == resultCount);
    } else if (onesGennedBeforeScan * 0.9 > resultCount) {
      LOG.warn("Read way too few keys (" + resultCount + "/" + onesGennedBeforeScan
        + ") - there might be a problem, or the writer might just be slow");
    }
    LOG.info("Scan took " + timeTaken + "ms");
    if (!isWriterDone) {
      Thread.sleep(WAIT_BETWEEN_SCANS_MS);
      now = EnvironmentEdgeManager.currentTime();
    }
  }
  Assert.assertEquals("There are write failures", 0, writer.getNumWriteFailures());
  Assert.assertTrue("Writer is not done", isWriterDone);
}
 
Author: fengchen8086; Project: ditb; Lines: 73; Source: IntegrationTestLazyCfLoading.java


Note: The org.apache.hadoop.hbase.client.Scan.setLoadColumnFamiliesOnDemand examples in this article were compiled from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective authors; copyright of the source code remains with the original authors, and redistribution and use should follow the license of the corresponding project. Please do not republish without permission.