

Java Table.getTableDescriptor Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.client.Table.getTableDescriptor. If you are wondering what Table.getTableDescriptor does, how to call it, or what real-world uses look like, the curated code examples below may help. You can also explore further usage examples of org.apache.hadoop.hbase.client.Table itself.


The following presents 5 code examples of the Table.getTableDescriptor method, ordered by popularity.

Example 1: configureIncrementalLoadMap

import org.apache.hadoop.hbase.client.Table; // import the package/class the method depends on
public static void configureIncrementalLoadMap(Job job, Table table) throws IOException {
  Configuration conf = job.getConfiguration();

  job.setOutputKeyClass(ImmutableBytesWritable.class);
  job.setOutputValueClass(KeyValue.class);
  job.setOutputFormatClass(HFileOutputFormat2.class);

  // Fetch the table descriptor once and reuse it for all per-family
  // settings; each getTableDescriptor() call is a round trip to the cluster.
  HTableDescriptor tableDescriptor = table.getTableDescriptor();
  configureCompression(conf, tableDescriptor);
  configureBloomType(tableDescriptor, conf);
  configureBlockSize(tableDescriptor, conf);
  configureDataBlockEncoding(tableDescriptor, conf);

  TableMapReduceUtil.addDependencyJars(job);
  TableMapReduceUtil.initCredentials(job);
  LOG.info("Incremental table " + table.getName() + " output configured.");
}
 
Author: fengchen8086 | Project: ditb | Lines: 19 | Source: HFileOutputFormat2.java
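Helpers like configureCompression flatten the per-family settings read from the descriptor into a single Configuration string so the map job can carry them. A minimal self-contained sketch of that serialization pattern is below; the '&'-joined, URL-encoded "family=value" layout and the class name are illustrative assumptions, not the exact HBase internals.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class FamilyConfSerializer {
    // Join "family=value" pairs with '&', URL-encoding both sides so that
    // arbitrary family names and values survive the trip through one flat
    // configuration string.
    static String serialize(Map<String, String> familyToValue)
            throws UnsupportedEncodingException {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : familyToValue.entrySet()) {
            if (sb.length() > 0) {
                sb.append('&');
            }
            sb.append(URLEncoder.encode(e.getKey(), "UTF-8"));
            sb.append('=');
            sb.append(URLEncoder.encode(e.getValue(), "UTF-8"));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("cf1", "GZ");
        m.put("cf2", "SNAPPY");
        System.out.println(serialize(m)); // cf1=GZ&cf2=SNAPPY
    }
}
```

Keeping everything in one string means the job configuration needs only one key per setting family (compression, bloom type, block size, encoding), which is why Example 1 can configure them all from a single cached descriptor.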

Example 2: createRegion

import org.apache.hadoop.hbase.client.Table; // import the package/class the method depends on
protected HRegionInfo createRegion(Configuration conf, final Table htbl,
    byte[] startKey, byte[] endKey) throws IOException {
  Table meta = new HTable(conf, TableName.META_TABLE_NAME);
  HTableDescriptor htd = htbl.getTableDescriptor();
  HRegionInfo hri = new HRegionInfo(htbl.getName(), startKey, endKey);

  LOG.info("manually adding regioninfo and hdfs data: " + hri.toString());
  Path rootDir = FSUtils.getRootDir(conf);
  FileSystem fs = rootDir.getFileSystem(conf);
  Path p = new Path(FSUtils.getTableDir(rootDir, htbl.getName()),
      hri.getEncodedName());
  fs.mkdirs(p);
  Path riPath = new Path(p, HRegionFileSystem.REGION_INFO_FILE);
  FSDataOutputStream out = fs.create(riPath);
  out.write(hri.toDelimitedByteArray());
  out.close();

  // add to meta.
  MetaTableAccessor.addRegionToMeta(meta, hri);
  meta.close();
  return hri;
}
 
Author: fengchen8086 | Project: ditb | Lines: 23 | Source: OfflineMetaRebuildTestCore.java
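The core file-system step in Example 2 is writing the region's serialized bytes into a .regioninfo file inside the region directory, so that tooling can later rebuild meta from what is on disk. A self-contained sketch of that write-then-verify pattern using java.nio is below; the class name, directory layout, and payload are illustrative, not the actual HDFS code path.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class RegionInfoFileSketch {
    // Create the region directory, write the serialized bytes into a
    // ".regioninfo" file, then read them back -- confirming the on-disk
    // copy matches what was handed in, the invariant a meta rebuild relies on.
    static byte[] writeAndReadBack(Path regionDir, byte[] serialized) throws IOException {
        Files.createDirectories(regionDir);
        Path riPath = regionDir.resolve(".regioninfo");
        Files.write(riPath, serialized);
        return Files.readAllBytes(riPath);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("region-");
        byte[] payload = "hri-bytes".getBytes(StandardCharsets.UTF_8);
        byte[] roundTrip = writeAndReadBack(tmp.resolve("encoded-name"), payload);
        System.out.println(Arrays.equals(payload, roundTrip)); // true
    }
}
```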

Example 3: testSerializeDeserializeFamilyDataBlockEncodingMap

import org.apache.hadoop.hbase.client.Table; // import the package/class the method depends on
/**
 * Test for {@link HFileOutputFormat2#configureDataBlockEncoding(HTableDescriptor, Configuration)}
 * and {@link HFileOutputFormat2#createFamilyDataBlockEncodingMap(Configuration)}.
 * Tests that the data block encoding map is correctly serialized into
 * and deserialized from the configuration.
 *
 * @throws IOException
 */
@Ignore("Goes zombie too frequently; needs work. See HBASE-14563") @Test
public void testSerializeDeserializeFamilyDataBlockEncodingMap() throws IOException {
  for (int numCfs = 0; numCfs <= 3; numCfs++) {
    Configuration conf = new Configuration(this.util.getConfiguration());
    Map<String, DataBlockEncoding> familyToDataBlockEncoding =
        getMockColumnFamiliesForDataBlockEncoding(numCfs);
    Table table = Mockito.mock(HTable.class);
    setupMockColumnFamiliesForDataBlockEncoding(table,
        familyToDataBlockEncoding);
    HTableDescriptor tableDescriptor = table.getTableDescriptor();
    HFileOutputFormat2.configureDataBlockEncoding(tableDescriptor, conf);

    // read back family specific data block encoding settings from the
    // configuration
    Map<byte[], DataBlockEncoding> retrievedFamilyToDataBlockEncodingMap =
        HFileOutputFormat2
        .createFamilyDataBlockEncodingMap(conf);

    // test that we have a value for all column families that matches with the
    // used mock values
    for (Entry<String, DataBlockEncoding> entry : familyToDataBlockEncoding.entrySet()) {
      assertEquals("DataBlockEncoding configuration incorrect for column family:"
          + entry.getKey(), entry.getValue(),
          retrievedFamilyToDataBlockEncodingMap.get(entry.getKey().getBytes()));
    }
  }
}
 
Author: fengchen8086 | Project: ditb | Lines: 36 | Source: TestHFileOutputFormat2.java
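The deserialization half that createFamilyDataBlockEncodingMap performs is parsing the flat configuration string back into a per-family map. A hedged self-contained sketch of that parse is below, assuming an '&'-joined, URL-encoded "family=value" layout for illustration (the class name and format are assumptions, not the exact HBase wire format).

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class FamilyConfParser {
    // Split the flat "family=value&family=value" string back into a map,
    // URL-decoding each side -- the reverse of the serialization step the
    // test round-trips through the Configuration.
    static Map<String, String> deserialize(String conf)
            throws UnsupportedEncodingException {
        Map<String, String> out = new LinkedHashMap<>();
        if (conf == null || conf.isEmpty()) {
            return out;
        }
        for (String pair : conf.split("&")) {
            int eq = pair.indexOf('=');
            String family = URLDecoder.decode(pair.substring(0, eq), "UTF-8");
            String value = URLDecoder.decode(pair.substring(eq + 1), "UTF-8");
            out.put(family, value);
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> m = deserialize("cf1=PREFIX&cf2=DIFF");
        System.out.println(m); // {cf1=PREFIX, cf2=DIFF}
    }
}
```

The test above asserts exactly this property: every family written by the serializer comes back with the same value after a parse, including the zero-family edge case (numCfs == 0).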

Example 4: getTableSchema

import org.apache.hadoop.hbase.client.Table; // import the package/class the method depends on
private HTableDescriptor getTableSchema() throws IOException,
    TableNotFoundException {
  Table table = servlet.getTable(tableResource.getName());
  try {
    return table.getTableDescriptor();
  } finally {
    table.close();
  }
}
 
Author: fengchen8086 | Project: ditb | Lines: 10 | Source: SchemaResource.java
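Example 4 closes the Table in a finally block so the underlying resources are released even when getTableDescriptor throws. Since Table is AutoCloseable in this API, the same guarantee can be written with try-with-resources. A self-contained sketch with a stand-in resource (the Resource class is a stub for illustration, not the HBase Table):

```java
public class CloseOnException {
    // Stand-in for a Table: records whether close() ran.
    static class Resource implements AutoCloseable {
        boolean closed = false;
        String describe() { return "descriptor"; }
        @Override public void close() { closed = true; }
    }

    // try-with-resources closes the resource on both the normal and the
    // exceptional path, matching the try/finally in the original example
    // with less ceremony.
    static String getSchema(Resource r) {
        try (Resource held = r) {
            return held.describe();
        }
    }

    public static void main(String[] args) {
        Resource r = new Resource();
        System.out.println(getSchema(r)); // descriptor
        System.out.println(r.closed);     // true
    }
}
```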

Example 5: configureDataBlockEncoding

import org.apache.hadoop.hbase.client.Table; // import the package/class the method depends on
/**
 * Serialize the column-family-to-data-block-encoding map into the
 * configuration. Invoked while configuring the MR job for incremental load.
 *
 * @param table the table to read the properties from
 * @param conf the configuration to persist the serialized values into
 * @throws IOException on failure to read the column family descriptors
 */
@VisibleForTesting
static void configureDataBlockEncoding(Table table,
    Configuration conf) throws IOException {
  HTableDescriptor tableDescriptor = table.getTableDescriptor();
  HFileOutputFormat2.configureDataBlockEncoding(tableDescriptor, conf);
}
 
Author: fengchen8086 | Project: ditb | Lines: 16 | Source: HFileOutputFormat.java


Note: the org.apache.hadoop.hbase.client.Table.getTableDescriptor examples above were collected from open-source projects hosted on GitHub and similar platforms; copyright in each snippet remains with its original authors. Consult the corresponding project's License before reusing or redistributing the code.