

Java HColumnDescriptor.setTimeToLive Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.HColumnDescriptor.setTimeToLive. If you are wondering how HColumnDescriptor.setTimeToLive is used in practice, what it is for, or are looking for working examples, the selected code samples below may help. You can also explore other usage examples of the containing class, org.apache.hadoop.hbase.HColumnDescriptor.


Four code examples of the HColumnDescriptor.setTimeToLive method are shown below, sorted by popularity by default.
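Before the full examples, here is a minimal sketch of the basic call pattern (the table name "example_table" and family name "d" are hypothetical, chosen only for illustration): the TTL is given in seconds and is set on the column family descriptor before the table is created, after which HBase expires cells older than the TTL during compactions.

import java.io.IOException;

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class TtlExample {
  // Creates a table whose "d" family keeps cells for 7 days (TTL is specified in seconds).
  static void createTableWithTtl(Admin admin) throws IOException {
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("example_table"));
    HColumnDescriptor family = new HColumnDescriptor("d");
    family.setTimeToLive(7 * 24 * 60 * 60); // 604800 seconds = 7 days
    desc.addFamily(family);
    admin.createTable(desc);
  }
}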

Example 1: createWriteTable

import org.apache.hadoop.hbase.HColumnDescriptor; // import the package/class this method depends on
private void createWriteTable(int numberOfServers) throws IOException {
  int numberOfRegions = (int)(numberOfServers * regionsLowerLimit);
  LOG.info("Number of live regionservers: " + numberOfServers + ", "
      + "pre-splitting the canary table into " + numberOfRegions + " regions "
      + "(current  lower limi of regions per server is " + regionsLowerLimit
      + " and you can change it by config: "
      + HConstants.HBASE_CANARY_WRITE_PERSERVER_REGIONS_LOWERLIMIT_KEY + " )");
  HTableDescriptor desc = new HTableDescriptor(writeTableName);
  HColumnDescriptor family = new HColumnDescriptor(CANARY_TABLE_FAMILY_NAME);
  family.setMaxVersions(1);
  family.setTimeToLive(writeDataTTL);

  desc.addFamily(family);
  byte[][] splits = new RegionSplitter.HexStringSplit().split(numberOfRegions);
  admin.createTable(desc, splits);
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 17, Source: Canary.java

Example 2: prepareData

import org.apache.hadoop.hbase.HColumnDescriptor; // import the package/class this method depends on
private Store prepareData() throws IOException {
  HBaseAdmin admin = TEST_UTIL.getHBaseAdmin();
  if (admin.tableExists(tableName)) {
    admin.disableTable(tableName);
    admin.deleteTable(tableName);
  }
  HTableDescriptor desc = new HTableDescriptor(tableName);
  desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
    FIFOCompactionPolicy.class.getName());
  desc.setConfiguration(HConstants.HBASE_REGION_SPLIT_POLICY_KEY, 
    DisabledRegionSplitPolicy.class.getName());
  HColumnDescriptor colDesc = new HColumnDescriptor(family);
  colDesc.setTimeToLive(1); // 1 sec
  desc.addFamily(colDesc);

  admin.createTable(desc);
  Table table = TEST_UTIL.getConnection().getTable(tableName);
  Random rand = new Random();
  TimeOffsetEnvironmentEdge edge =
      (TimeOffsetEnvironmentEdge) EnvironmentEdgeManager.getDelegate();
  for (int i = 0; i < 10; i++) {
    for (int j = 0; j < 10; j++) {
      byte[] value = new byte[128 * 1024];
      rand.nextBytes(value);
      table.put(new Put(Bytes.toBytes(i * 10 + j)).addColumn(family, qualifier, value));
    }
    admin.flush(tableName);
    edge.increment(1001);
  }
  return getStoreWithName(tableName);
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 32, Source: TestFIFOCompactionPolicy.java

Example 3: testSanityCheckMinVersion

import org.apache.hadoop.hbase.HColumnDescriptor; // import the package/class this method depends on
@Test  
public void testSanityCheckMinVersion() throws Exception
{
  Configuration conf = TEST_UTIL.getConfiguration();
  conf.setInt(HStore.BLOCKING_STOREFILES_KEY, 10000);
  TEST_UTIL.startMiniCluster(1);

  HBaseAdmin admin = TEST_UTIL.getHBaseAdmin();
  String tableName = this.tableName.getNameAsString()+"-MinVersion";
  if (admin.tableExists(tableName)) {
    admin.disableTable(tableName);
    admin.deleteTable(tableName);
  }
  HTableDescriptor desc = new HTableDescriptor(tableName);
  desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
    FIFOCompactionPolicy.class.getName());
  desc.setConfiguration(HConstants.HBASE_REGION_SPLIT_POLICY_KEY, 
    DisabledRegionSplitPolicy.class.getName());
  HColumnDescriptor colDesc = new HColumnDescriptor(family);
  colDesc.setTimeToLive(1); // 1 sec
  colDesc.setMinVersions(1);
  desc.addFamily(colDesc);
  try {
    admin.createTable(desc);
    Assert.fail();
  } catch (Exception e) {
    // expected: FIFO compaction requires MIN_VERSIONS == 0, so table creation fails the sanity check
  } finally {
    TEST_UTIL.shutdownMiniCluster();
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 31, Source: TestFIFOCompactionPolicy.java

Example 4: testSanityCheckBlockingStoreFiles

import org.apache.hadoop.hbase.HColumnDescriptor; // import the package/class this method depends on
@Test  
public void testSanityCheckBlockingStoreFiles() throws Exception
{
  Configuration conf = TEST_UTIL.getConfiguration();
  conf.setInt(HStore.BLOCKING_STOREFILES_KEY, 10);
  TEST_UTIL.startMiniCluster(1);

  HBaseAdmin admin = TEST_UTIL.getHBaseAdmin();
  String tableName = this.tableName.getNameAsString()+"-MinVersion";
  if (admin.tableExists(tableName)) {
    admin.disableTable(tableName);
    admin.deleteTable(tableName);
  }
  HTableDescriptor desc = new HTableDescriptor(tableName);
  desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
    FIFOCompactionPolicy.class.getName());
  desc.setConfiguration(HConstants.HBASE_REGION_SPLIT_POLICY_KEY, 
    DisabledRegionSplitPolicy.class.getName());
  HColumnDescriptor colDesc = new HColumnDescriptor(family);
  colDesc.setTimeToLive(1); // 1 sec
  desc.addFamily(colDesc);
  try {
    admin.createTable(desc);
    Assert.fail();
  } catch (Exception e) {
    // expected: a blocking store file limit of 10 is too low for FIFO compaction, so the sanity check fails
  } finally {
    TEST_UTIL.shutdownMiniCluster();
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 30, Source: TestFIFOCompactionPolicy.java


Note: The org.apache.hadoop.hbase.HColumnDescriptor.setTimeToLive method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code belongs to the original authors. Please refer to each project's license before distributing or using the code, and do not reproduce this article without permission.