

Java Table.getTableDescriptor Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.client.Table.getTableDescriptor. If you are wondering what Table.getTableDescriptor does, or how it is used in practice, the curated code examples below should help. (Note that in HBase 2.0 and later this method is deprecated in favor of Table.getDescriptor.) You can also explore further usage examples of the org.apache.hadoop.hbase.client.Table class.


Five code examples of the Table.getTableDescriptor method are shown below, ordered by popularity by default.

Example 1: configureIncrementalLoadMap

import org.apache.hadoop.hbase.client.Table; // import the class this method belongs to
public static void configureIncrementalLoadMap(Job job, Table table) throws IOException {
  Configuration conf = job.getConfiguration();

  job.setOutputKeyClass(ImmutableBytesWritable.class);
  job.setOutputValueClass(KeyValue.class);
  job.setOutputFormatClass(HFileOutputFormat2.class);

  // Fetch the table descriptor once and reuse it to configure the
  // per-column-family compression, bloom filter, block size, and
  // data block encoding settings.
  HTableDescriptor tableDescriptor = table.getTableDescriptor();
  configureCompression(conf, tableDescriptor);
  configureBloomType(tableDescriptor, conf);
  configureBlockSize(tableDescriptor, conf);
  configureDataBlockEncoding(tableDescriptor, conf);

  TableMapReduceUtil.addDependencyJars(job);
  TableMapReduceUtil.initCredentials(job);
  LOG.info("Incremental table " + table.getName() + " output configured.");
}
 
Developer: fengchen8086, Project: ditb, Source: HFileOutputFormat2.java

Example 2: createRegion

import org.apache.hadoop.hbase.client.Table; // import the class this method belongs to
protected HRegionInfo createRegion(Configuration conf, final Table htbl,
    byte[] startKey, byte[] endKey) throws IOException {
  Table meta = new HTable(conf, TableName.META_TABLE_NAME);
  // Fetch the table descriptor (the Table.getTableDescriptor call this
  // example demonstrates) and define a region spanning [startKey, endKey).
  HTableDescriptor htd = htbl.getTableDescriptor();
  HRegionInfo hri = new HRegionInfo(htbl.getName(), startKey, endKey);

  LOG.info("manually adding regioninfo and hdfs data: " + hri.toString());
  Path rootDir = FSUtils.getRootDir(conf);
  FileSystem fs = rootDir.getFileSystem(conf);
  // Create the region directory on HDFS and write its .regioninfo file.
  Path p = new Path(FSUtils.getTableDir(rootDir, htbl.getName()),
      hri.getEncodedName());
  fs.mkdirs(p);
  Path riPath = new Path(p, HRegionFileSystem.REGION_INFO_FILE);
  FSDataOutputStream out = fs.create(riPath);
  out.write(hri.toDelimitedByteArray());
  out.close();

  // Register the new region in the hbase:meta table.
  MetaTableAccessor.addRegionToMeta(meta, hri);
  meta.close();
  return hri;
}
 
Developer: fengchen8086, Project: ditb, Source: OfflineMetaRebuildTestCore.java

Example 3: testSerializeDeserializeFamilyDataBlockEncodingMap

import org.apache.hadoop.hbase.client.Table; // import the class this method belongs to
/**
 * Test for {@link HFileOutputFormat2#configureDataBlockEncoding(HTableDescriptor, Configuration)}
 * and {@link HFileOutputFormat2#createFamilyDataBlockEncodingMap(Configuration)}.
 * Tests that the data block encoding map is correctly serialized into
 * and deserialized from the configuration.
 *
 * @throws IOException
 */
@Ignore("Goes zombie too frequently; needs work. See HBASE-14563") @Test
public void testSerializeDeserializeFamilyDataBlockEncodingMap() throws IOException {
  for (int numCfs = 0; numCfs <= 3; numCfs++) {
    Configuration conf = new Configuration(this.util.getConfiguration());
    Map<String, DataBlockEncoding> familyToDataBlockEncoding =
        getMockColumnFamiliesForDataBlockEncoding(numCfs);
    Table table = Mockito.mock(HTable.class);
    setupMockColumnFamiliesForDataBlockEncoding(table,
        familyToDataBlockEncoding);
    HTableDescriptor tableDescriptor = table.getTableDescriptor();
    HFileOutputFormat2.configureDataBlockEncoding(tableDescriptor, conf);

    // read back family specific data block encoding settings from the
    // configuration
    Map<byte[], DataBlockEncoding> retrievedFamilyToDataBlockEncodingMap =
        HFileOutputFormat2
        .createFamilyDataBlockEncodingMap(conf);

    // test that we have a value for all column families that matches with the
    // used mock values
    for (Entry<String, DataBlockEncoding> entry : familyToDataBlockEncoding.entrySet()) {
      assertEquals("DataBlockEncoding configuration incorrect for column family:"
          + entry.getKey(), entry.getValue(),
          retrievedFamilyToDataBlockEncodingMap.get(entry.getKey().getBytes()));
    }
  }
}
 
Developer: fengchen8086, Project: ditb, Source: TestHFileOutputFormat2.java

Example 4: getTableSchema

import org.apache.hadoop.hbase.client.Table; // import the class this method belongs to
private HTableDescriptor getTableSchema() throws IOException,
    TableNotFoundException {
  Table table = servlet.getTable(tableResource.getName());
  try {
    // Return the schema; close the table even if the lookup fails.
    return table.getTableDescriptor();
  } finally {
    table.close();
  }
}
 
Developer: fengchen8086, Project: ditb, Source: SchemaResource.java

Example 5: configureDataBlockEncoding

import org.apache.hadoop.hbase.client.Table; // import the class this method belongs to
/**
 * Serializes the column-family-to-data-block-encoding map into the
 * configuration. Invoked while configuring the MR job for incremental load.
 *
 * @param table the table to read the column family descriptors from
 * @param conf the configuration to persist the serialized values into
 * @throws IOException
 *           on failure to read column family descriptors
 */
@VisibleForTesting
static void configureDataBlockEncoding(Table table,
    Configuration conf) throws IOException {
  HTableDescriptor tableDescriptor = table.getTableDescriptor();
  HFileOutputFormat2.configureDataBlockEncoding(tableDescriptor, conf);
}
 
Developer: fengchen8086, Project: ditb, Source: HFileOutputFormat.java


Note: The org.apache.hadoop.hbase.client.Table.getTableDescriptor examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Please consult each project's license before redistributing or using the code, and do not republish without permission.