

Java HTableDescriptor.getMaxFileSize Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.HTableDescriptor.getMaxFileSize. If you are unsure what HTableDescriptor.getMaxFileSize does or how to call it, the curated examples below should help. You can also explore further usage examples of org.apache.hadoop.hbase.HTableDescriptor.


Three code examples of the HTableDescriptor.getMaxFileSize method are shown below, ordered by popularity.
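
Before the examples, here is a minimal sketch of reading the effective value. HTableDescriptor.getMaxFileSize() returns the per-table MAX_FILESIZE when one is set and a non-positive value otherwise, in which case callers typically fall back to the cluster-wide hbase.hregion.max.filesize setting. The table name "my_table" is a placeholder, and an HBase 1.x client API is assumed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MaxFileSizeLookup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // "my_table" is a placeholder name for this sketch.
      HTableDescriptor htd = admin.getTableDescriptor(TableName.valueOf("my_table"));

      // Use the per-table limit if set; otherwise fall back to the cluster
      // default, mirroring the pattern used in the examples below.
      long maxFileSize = htd.getMaxFileSize();
      if (maxFileSize <= 0) {
        maxFileSize = conf.getLong(HConstants.HREGION_MAX_FILESIZE,
            HConstants.DEFAULT_MAX_FILE_SIZE);
      }
      System.out.println("Effective max file size: " + maxFileSize + " bytes");
    }
  }
}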

Example 1: configureForRegion

import org.apache.hadoop.hbase.HTableDescriptor; // import the class this method depends on
@Override
protected void configureForRegion(HRegion region) {
  super.configureForRegion(region);
  Configuration conf = getConf();
  HTableDescriptor desc = region.getTableDesc();
  if (desc != null) {
    this.desiredMaxFileSize = desc.getMaxFileSize();
  }
  if (this.desiredMaxFileSize <= 0) {
    this.desiredMaxFileSize = conf.getLong(HConstants.HREGION_MAX_FILESIZE,
      HConstants.DEFAULT_MAX_FILE_SIZE);
  }
  double jitter = conf.getDouble("hbase.hregion.max.filesize.jitter", 0.25D);
  this.desiredMaxFileSize += (long)(desiredMaxFileSize * (RANDOM.nextFloat() - 0.5D) * jitter);
}
 
Developer: fengchen8086, Project: ditb, Lines: 16, Source: ConstantSizeRegionSplitPolicy.java
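
ConstantSizeRegionSplitPolicy only honors a per-table threshold when the descriptor actually carries one. As a hedged illustration (not taken from the ditb project), the sketch below creates a table whose descriptor sets MAX_FILESIZE and pins the split policy; the table name, column family, and 20 GB threshold are arbitrary choices for this example.

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class CreateTableWithSplitThreshold {
  // Create a table whose descriptor carries its own max file size, so
  // configureForRegion() above uses it instead of the cluster default.
  static void createTable(Admin admin) throws java.io.IOException {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("my_table")); // placeholder name
    htd.addFamily(new HColumnDescriptor("cf"));
    htd.setMaxFileSize(20L * 1024 * 1024 * 1024); // split regions at roughly 20 GB
    htd.setValue(HTableDescriptor.SPLIT_POLICY,
        "org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy");
    admin.createTable(htd);
  }
}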

Example 2: testModifyTable

import org.apache.hadoop.hbase.HTableDescriptor; // import the class this method depends on
@Test(timeout=60000)
public void testModifyTable() throws Exception {
  final TableName tableName = TableName.valueOf("testModifyTable");
  final ProcedureExecutor<MasterProcedureEnv> procExec = getMasterProcedureExecutor();

  MasterProcedureTestingUtility.createTable(procExec, tableName, null, "cf");
  UTIL.getHBaseAdmin().disableTable(tableName);

  // Modify the table descriptor
  HTableDescriptor htd = new HTableDescriptor(UTIL.getHBaseAdmin().getTableDescriptor(tableName));

  // Test 1: Modify 1 property
  long newMaxFileSize = htd.getMaxFileSize() * 2;
  htd.setMaxFileSize(newMaxFileSize);
  htd.setRegionReplication(3);

  long procId1 = ProcedureTestingUtility.submitAndWait(
      procExec, new ModifyTableProcedure(procExec.getEnvironment(), htd));
  ProcedureTestingUtility.assertProcNotFailed(procExec.getResult(procId1));

  HTableDescriptor currentHtd = UTIL.getHBaseAdmin().getTableDescriptor(tableName);
  assertEquals(newMaxFileSize, currentHtd.getMaxFileSize());

  // Test 2: Modify multiple properties
  boolean newReadOnlyOption = !htd.isReadOnly();
  long newMemStoreFlushSize = htd.getMemStoreFlushSize() * 2;
  htd.setReadOnly(newReadOnlyOption);
  htd.setMemStoreFlushSize(newMemStoreFlushSize);

  long procId2 = ProcedureTestingUtility.submitAndWait(
      procExec, new ModifyTableProcedure(procExec.getEnvironment(), htd));
  ProcedureTestingUtility.assertProcNotFailed(procExec.getResult(procId2));

  currentHtd = UTIL.getHBaseAdmin().getTableDescriptor(tableName);
  assertEquals(newReadOnlyOption, currentHtd.isReadOnly());
  assertEquals(newMemStoreFlushSize, currentHtd.getMemStoreFlushSize());
}
 
Developer: fengchen8086, Project: ditb, Lines: 38, Source: TestModifyTableProcedure.java
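
The test drives the change through ModifyTableProcedure; outside the procedure framework, the same modification is usually made through the Admin API, as sketched below against the HBase 1.x client. Skipping tables that have no explicit per-table value is a choice made for this illustration.

import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DoubleMaxFileSize {
  // Read the current descriptor, double its max file size, and push it back.
  static void doubleMaxFileSize(Admin admin, TableName tableName) throws java.io.IOException {
    HTableDescriptor htd = new HTableDescriptor(admin.getTableDescriptor(tableName));
    long current = htd.getMaxFileSize();
    if (current <= 0) {
      return; // no explicit per-table value; leave the cluster default in place
    }
    htd.setMaxFileSize(current * 2);
    admin.modifyTable(tableName, htd);
  }
}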

Example 3: perform

import org.apache.hadoop.hbase.HTableDescriptor; // import the class this method depends on
@Override
public void perform() throws Exception {
  HBaseTestingUtility util = context.getHBaseIntegrationTestingUtility();
  Admin admin = util.getHBaseAdmin();
  HTableDescriptor htd = admin.getTableDescriptor(tableName);

  // Try and get the current value.
  long currentValue = htd.getMaxFileSize();

  // If the current value is not set use the default for the cluster.
  // If configs are really weird this might not work.
  // That's ok. We're trying to cause chaos.
  if (currentValue <= 0) {
    currentValue =
        context.getHBaseCluster().getConf().getLong(HConstants.HREGION_MAX_FILESIZE,
            HConstants.DEFAULT_MAX_FILE_SIZE);
  }

  // Decrease by 10% at a time.
  long newValue = (long) (currentValue * 0.9);

  // We don't want to go too far below 1gb.
  // So go to about 1gb +/- 512 on each side.
  newValue = Math.max(minFileSize, newValue) - (512 - random.nextInt(1024));

  // Change the table descriptor.
  htd.setMaxFileSize(newValue);

  // Don't try the modify if we're stopping
  if (context.isStopping()) {
    return;
  }

  // modify the table.
  admin.modifyTable(tableName, htd);

  // Sleep some time.
  if (sleepTime > 0) {
    Thread.sleep(sleepTime);
  }
}
 
Developer: fengchen8086, Project: ditb, Lines: 42, Source: DecreaseMaxHFileSizeAction.java
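
A rough way to confirm that an action like this took effect is to read the descriptor back and compare getMaxFileSize() with the value that was written. The sketch below assumes the same HBase 1.x Admin API as above; since modifyTable() is asynchronous, a real check would typically retry until the updated descriptor becomes visible.

import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class VerifyMaxFileSizeChange {
  // Compare the live descriptor's max file size against an expected value.
  static boolean tookEffect(Admin admin, TableName tableName, long expected)
      throws java.io.IOException {
    HTableDescriptor current = admin.getTableDescriptor(tableName);
    return current.getMaxFileSize() == expected;
  }
}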


Note: The org.apache.hadoop.hbase.HTableDescriptor.getMaxFileSize examples above were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from community open-source projects, and copyright remains with their original authors; consult each project's license before redistributing or reusing the code.