

Java HTableDescriptor.getMaxFileSize Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.HTableDescriptor.getMaxFileSize. If you are wondering what HTableDescriptor.getMaxFileSize does, or how and where to use it, the curated code examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.hbase.HTableDescriptor.


Three code examples of HTableDescriptor.getMaxFileSize are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code examples.

Example 1: configureForRegion

import org.apache.hadoop.hbase.HTableDescriptor; // import the class this method belongs to
@Override
protected void configureForRegion(HRegion region) {
  super.configureForRegion(region);
  Configuration conf = getConf();
  HTableDescriptor desc = region.getTableDesc();
  if (desc != null) {
    this.desiredMaxFileSize = desc.getMaxFileSize();
  }
  if (this.desiredMaxFileSize <= 0) {
    this.desiredMaxFileSize = conf.getLong(HConstants.HREGION_MAX_FILESIZE,
      HConstants.DEFAULT_MAX_FILE_SIZE);
  }
  double jitter = conf.getDouble("hbase.hregion.max.filesize.jitter", 0.25D);
  this.desiredMaxFileSize += (long)(desiredMaxFileSize * (RANDOM.nextFloat() - 0.5D) * jitter);
}
 
Author: fengchen8086 · Project: ditb · Lines: 16 · Source: ConstantSizeRegionSplitPolicy.java
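The jitter arithmetic in Example 1 can be isolated from HBase entirely: the desired max file size is shifted by up to ±(jitter/2) of its own value so that regions across a cluster do not all hit the split threshold at the same instant. The sketch below is a hypothetical, dependency-free restatement of that one line of math (the class and method names are illustrative, not HBase API):

```java
public class JitterSketch {
    // Mirrors the jitter line in Example 1: perturb the desired max file
    // size by (rand - 0.5) * jitter of its value, where rand is a uniform
    // float in [0, 1). With the default jitter of 0.25, the result lands
    // within +/- 12.5% of the base value.
    static long applyJitter(long desiredMaxFileSize, double jitter, float rand) {
        return desiredMaxFileSize + (long) (desiredMaxFileSize * (rand - 0.5D) * jitter);
    }

    public static void main(String[] args) {
        long base = 10L * 1024 * 1024 * 1024; // 10 GB
        // rand = 0.5 cancels the jitter term; rand = 1.0 gives the maximum
        // upward shift of base * 0.5 * jitter.
        System.out.println(applyJitter(base, 0.25D, 0.5f)); // unchanged
        System.out.println(applyJitter(base, 0.25D, 1.0f)); // +12.5%
    }
}
```

Passing the random draw in as a parameter (instead of calling `RANDOM.nextFloat()` inside, as the original does) makes the arithmetic deterministic and easy to verify.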

Example 2: testModifyTable

import org.apache.hadoop.hbase.HTableDescriptor; // import the class this method belongs to
@Test(timeout=60000)
public void testModifyTable() throws Exception {
  final TableName tableName = TableName.valueOf("testModifyTable");
  final ProcedureExecutor<MasterProcedureEnv> procExec = getMasterProcedureExecutor();

  MasterProcedureTestingUtility.createTable(procExec, tableName, null, "cf");
  UTIL.getHBaseAdmin().disableTable(tableName);

  // Modify the table descriptor
  HTableDescriptor htd = new HTableDescriptor(UTIL.getHBaseAdmin().getTableDescriptor(tableName));

  // Test 1: Modify 1 property
  long newMaxFileSize = htd.getMaxFileSize() * 2;
  htd.setMaxFileSize(newMaxFileSize);
  htd.setRegionReplication(3);

  long procId1 = ProcedureTestingUtility.submitAndWait(
      procExec, new ModifyTableProcedure(procExec.getEnvironment(), htd));
  ProcedureTestingUtility.assertProcNotFailed(procExec.getResult(procId1));

  HTableDescriptor currentHtd = UTIL.getHBaseAdmin().getTableDescriptor(tableName);
  assertEquals(newMaxFileSize, currentHtd.getMaxFileSize());

  // Test 2: Modify multiple properties
  boolean newReadOnlyOption = !htd.isReadOnly();
  long newMemStoreFlushSize = htd.getMemStoreFlushSize() * 2;
  htd.setReadOnly(newReadOnlyOption);
  htd.setMemStoreFlushSize(newMemStoreFlushSize);

  long procId2 = ProcedureTestingUtility.submitAndWait(
      procExec, new ModifyTableProcedure(procExec.getEnvironment(), htd));
  ProcedureTestingUtility.assertProcNotFailed(procExec.getResult(procId2));

  currentHtd = UTIL.getHBaseAdmin().getTableDescriptor(tableName);
  assertEquals(newReadOnlyOption, currentHtd.isReadOnly());
  assertEquals(newMemStoreFlushSize, currentHtd.getMemStoreFlushSize());
}
 
Author: fengchen8086 · Project: ditb · Lines: 38 · Source: TestModifyTableProcedure.java

Example 3: perform

import org.apache.hadoop.hbase.HTableDescriptor; // import the class this method belongs to
@Override
public void perform() throws Exception {
  HBaseTestingUtility util = context.getHBaseIntegrationTestingUtility();
  Admin admin = util.getHBaseAdmin();
  HTableDescriptor htd = admin.getTableDescriptor(tableName);

  // Try and get the current value.
  long currentValue = htd.getMaxFileSize();

  // If the current value is not set use the default for the cluster.
  // If configs are really weird this might not work.
  // That's ok. We're trying to cause chaos.
  if (currentValue <= 0) {
    currentValue =
        context.getHBaseCluster().getConf().getLong(HConstants.HREGION_MAX_FILESIZE,
            HConstants.DEFAULT_MAX_FILE_SIZE);
  }

  // Decrease by 10% at a time.
  long newValue = (long) (currentValue * 0.9);

  // Don't drop too far below the floor (about 1 GB in this action);
  // clamp to minFileSize, then add +/- 512 bytes of noise.
  newValue = Math.max(minFileSize, newValue) - (512 - random.nextInt(1024));

  // Change the table descriptor.
  htd.setMaxFileSize(newValue);

  // Don't try the modify if we're stopping
  if (context.isStopping()) {
    return;
  }

  // modify the table.
  admin.modifyTable(tableName, htd);

  // Sleep some time.
  if (sleepTime > 0) {
    Thread.sleep(sleepTime);
  }
}
 
Author: fengchen8086 · Project: ditb · Lines: 42 · Source: DecreaseMaxHFileSizeAction.java
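The new-value computation in Example 3 (shrink by 10%, clamp to a floor, add ±512 bytes of noise) is also plain arithmetic, so it can be sketched without an HBase cluster. The class and method names below are hypothetical, and the random noise is passed in as a parameter for determinism:

```java
public class DecreaseSizeSketch {
    // Mirrors the math in Example 3: shrink the current max file size by
    // 10%, clamp the result at minFileSize, then subtract (512 - noise),
    // where noise is expected in [0, 1024) so the shift lies in (-512, +512].
    static long nextMaxFileSize(long currentValue, long minFileSize, int noise) {
        long newValue = (long) (currentValue * 0.9);
        return Math.max(minFileSize, newValue) - (512 - noise);
    }

    public static void main(String[] args) {
        long oneGb = 1024L * 1024 * 1024;
        // 10 GB shrinks to 9 GB, well above the floor; noise of 512 is a no-op.
        System.out.println(nextMaxFileSize(10L * oneGb, oneGb, 512));
        // Once 90% of the current value falls below the floor, the result is
        // pinned at minFileSize (plus noise), so repeated runs cannot shrink
        // the table's max file size indefinitely.
        System.out.println(nextMaxFileSize(oneGb / 2, oneGb, 512));
    }
}
```

Note that the clamp happens before the noise is applied, so the final value can still dip slightly below minFileSize; the original chaos action tolerates this by design.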


Note: the org.apache.hadoop.hbase.HTableDescriptor.getMaxFileSize examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors, and distribution or use should follow each project's License. Do not reproduce without permission.