

Java Scan.setLoadColumnFamiliesOnDemand Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.client.Scan.setLoadColumnFamiliesOnDemand. If you are wondering what Scan.setLoadColumnFamiliesOnDemand does, or how to use it in practice, the curated code examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.hbase.client.Scan.


Two code examples of Scan.setLoadColumnFamiliesOnDemand are shown below, sorted by popularity by default.
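Before the full test examples, here is a minimal sketch of the typical call pattern: enable lazy column-family loading on a Scan that filters on a small "essential" family, so a larger family is only fetched for rows the filter accepts. All table, family, and qualifier names below (`demo_table`, `essential`, `payload`, `flag`) are hypothetical placeholders, and running the sketch assumes a reachable HBase cluster on the classpath configuration.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class LazyCfScanSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("demo_table"))) { // hypothetical table
      Scan scan = new Scan();
      scan.addFamily(Bytes.toBytes("essential")); // small family the filter reads
      scan.addFamily(Bytes.toBytes("payload"));   // large family, loaded lazily
      scan.setFilter(new SingleColumnValueFilter(
          Bytes.toBytes("essential"), Bytes.toBytes("flag"),
          CompareFilter.CompareOp.EQUAL, Bytes.toBytes("yes")));
      // With on-demand loading, "payload" is only read for rows that
      // pass the filter on "essential", instead of for every row.
      scan.setLoadColumnFamiliesOnDemand(true);
      try (ResultScanner rs = table.getScanner(scan)) {
        for (Result r : rs) {
          System.out.println(Bytes.toString(r.getRow()));
        }
      }
    }
  }
}
```

Note that this sketch uses the older `CompareFilter.CompareOp` enum to stay consistent with the examples below; newer HBase releases prefer `CompareOperator`.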

Example 1: runScanner

import org.apache.hadoop.hbase.client.Scan; // import the package/class the method depends on
private void runScanner(Table table, boolean slow) throws Exception {
  long time = System.nanoTime();
  Scan scan = new Scan();
  scan.addColumn(cf_essential, col_name);
  scan.addColumn(cf_joined, col_name);

  // Filter on the essential family only; rows missing the flag column are dropped.
  SingleColumnValueFilter filter = new SingleColumnValueFilter(
      cf_essential, col_name, CompareFilter.CompareOp.EQUAL, flag_yes);
  filter.setFilterIfMissing(true);
  scan.setFilter(filter);
  // In the fast ("joined") path, the non-essential family is only loaded
  // for rows that pass the filter; in the slow path it is always loaded.
  scan.setLoadColumnFamiliesOnDemand(!slow);

  ResultScanner result_scanner = table.getScanner(scan);
  Result res;
  long rows_count = 0;
  while ((res = result_scanner.next()) != null) {
    rows_count++;
  }

  double timeSec = (System.nanoTime() - time) / 1000000000.0;
  result_scanner.close();
  LOG.info((slow ? "Slow" : "Joined") + " scanner finished in " + Double.toString(timeSec)
    + " seconds, got " + Long.toString(rows_count / 2) + " rows");
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 25, Source: TestJoinedScanners.java

Example 2: testReadersAndWriters

import org.apache.hadoop.hbase.client.Scan; // import the package/class the method depends on
@Test
public void testReadersAndWriters() throws Exception {
  Configuration conf = util.getConfiguration();
  String timeoutKey = String.format(TIMEOUT_KEY, this.getClass().getSimpleName());
  long maxRuntime = conf.getLong(timeoutKey, DEFAULT_TIMEOUT_MINUTES);
  long serverCount = util.getHBaseClusterInterface().getClusterStatus().getServersSize();
  long keysToWrite = serverCount * KEYS_TO_WRITE_PER_SERVER;
  Table table = new HTable(conf, TABLE_NAME);

  // Create multi-threaded writer and start it. We write multiple columns/CFs and verify
  // their integrity, therefore multi-put is necessary.
  MultiThreadedWriter writer =
    new MultiThreadedWriter(dataGen, conf, TABLE_NAME);
  writer.setMultiPut(true);

  LOG.info("Starting writer; the number of keys to write is " + keysToWrite);
  // TODO : Need to see if tag support has to be given here in the integration test suite
  writer.start(1, keysToWrite, WRITER_THREADS);

  // Now, do scans.
  long now = EnvironmentEdgeManager.currentTime();
  long timeLimit = now + (maxRuntime * 60000);
  boolean isWriterDone = false;
  while (now < timeLimit && !isWriterDone) {
    LOG.info("Starting the scan; wrote approximately "
      + dataGen.getTotalNumberOfKeys() + " keys");
    isWriterDone = writer.isDone();
    if (isWriterDone) {
      LOG.info("Scanning full result, writer is done");
    }
    Scan scan = new Scan();
    for (byte[] cf : dataGen.getColumnFamilies()) {
      scan.addFamily(cf);
    }
    scan.setFilter(dataGen.getScanFilter());
    scan.setLoadColumnFamiliesOnDemand(true);
    // The number of keys we can expect from scan - lower bound (before scan).
    // Not a strict lower bound - writer knows nothing about filters, so we report
    // this from generator. Writer might have generated the value but not put it yet.
    long onesGennedBeforeScan = dataGen.getExpectedNumberOfKeys();
    long startTs = EnvironmentEdgeManager.currentTime();
    ResultScanner results = table.getScanner(scan);
    long resultCount = 0;
    Result result = null;
    // Verify and count the results.
    while ((result = results.next()) != null) {
      boolean isOk = writer.verifyResultAgainstDataGenerator(result, true, true);
      Assert.assertTrue("Failed to verify [" + Bytes.toString(result.getRow())+ "]", isOk);
      ++resultCount;
    }
    long timeTaken = EnvironmentEdgeManager.currentTime() - startTs;
    // Verify the result count.
    long onesGennedAfterScan = dataGen.getExpectedNumberOfKeys();
    Assert.assertTrue("Read " + resultCount + " keys when at most " + onesGennedAfterScan
      + " were generated ", onesGennedAfterScan >= resultCount);
    if (isWriterDone) {
      Assert.assertTrue("Read " + resultCount + " keys; the writer is done and "
        + onesGennedAfterScan + " keys were generated", onesGennedAfterScan == resultCount);
    } else if (onesGennedBeforeScan * 0.9 > resultCount) {
      LOG.warn("Read way too few keys (" + resultCount + "/" + onesGennedBeforeScan
        + ") - there might be a problem, or the writer might just be slow");
    }
    LOG.info("Scan took " + timeTaken + "ms");
    if (!isWriterDone) {
      Thread.sleep(WAIT_BETWEEN_SCANS_MS);
      now = EnvironmentEdgeManager.currentTime();
    }
  }
  Assert.assertEquals("There are write failures", 0, writer.getNumWriteFailures());
  Assert.assertTrue("Writer is not done", isWriterDone);
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 73, Source: IntegrationTestLazyCfLoading.java


Note: The org.apache.hadoop.hbase.client.Scan.setLoadColumnFamiliesOnDemand examples in this article were compiled by 纯净天空 from GitHub/MSDocs and other open-source code and documentation platforms. The code snippets are drawn from open-source projects contributed by the community; copyright remains with the original authors, and distribution and use are subject to each project's license. Do not reproduce without permission.