

Java MetaReader.getRegionCount Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.catalog.MetaReader.getRegionCount. If you are wondering what MetaReader.getRegionCount does, how to use it, or where to find examples, the selected code samples below may help. You can also explore further usage examples of org.apache.hadoop.hbase.catalog.MetaReader, the class this method belongs to.


The sections below present 6 code examples of MetaReader.getRegionCount, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
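Before the examples, here is a minimal, self-contained sketch of calling the method directly. It assumes only that hbase-site.xml is on the classpath; the table name "my_table" is a placeholder.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.catalog.MetaReader;

public class RegionCountExample {
  public static void main(String[] args) throws IOException {
    // Pick up hbase-site.xml / hbase-default.xml from the classpath.
    Configuration conf = HBaseConfiguration.create();
    // Ask the catalog (META) how many regions the table currently has.
    int regions = MetaReader.getRegionCount(conf, "my_table");
    System.out.println("my_table has " + regions + " regions");
  }
}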

Example 1: initTableReduceJob

import org.apache.hadoop.hbase.catalog.MetaReader; // import the package/class the method depends on
/**
 * Use this before submitting a TableReduce job. It will
 * appropriately set up the JobConf.
 *
 * @param table  The output table.
 * @param reducer  The reducer class to use.
 * @param job  The current job configuration to adjust.
 * @param partitioner  Partitioner to use. Pass <code>null</code> to use
 * default partitioner.
 * @param addDependencyJars upload HBase jars and jars for any of the configured
 *           job classes via the distributed cache (tmpjars).
 * @throws IOException When determining the region count fails.
 */
public static void initTableReduceJob(String table,
  Class<? extends TableReduce> reducer, JobConf job, Class partitioner,
  boolean addDependencyJars) throws IOException {
  job.setOutputFormat(TableOutputFormat.class);
  job.setReducerClass(reducer);
  job.set(TableOutputFormat.OUTPUT_TABLE, table);
  job.setOutputKeyClass(ImmutableBytesWritable.class);
  job.setOutputValueClass(Put.class);
  job.setStrings("io.serializations", job.get("io.serializations"),
      MutationSerialization.class.getName(), ResultSerialization.class.getName());
  if (partitioner == HRegionPartitioner.class) {
    job.setPartitionerClass(HRegionPartitioner.class);
    int regions = MetaReader.getRegionCount(HBaseConfiguration.create(job), table);
    if (job.getNumReduceTasks() > regions) {
      job.setNumReduceTasks(regions);
    }
  } else if (partitioner != null) {
    job.setPartitionerClass(partitioner);
  }
  if (addDependencyJars) {
    addDependencyJars(job);
  }
  initCredentials(job);
}
 
Developer ID: tenggyut, Project: HIndex, Lines of code: 38, Source file: TableMapReduceUtil.java
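Example 1 is the utility itself; a driver would typically call it as sketched below. This is only a sketch against the old org.apache.hadoop.hbase.mapred API that this overload belongs to, and MyTableReducer and the table name are hypothetical placeholders.

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapred.HRegionPartitioner;
import org.apache.hadoop.hbase.mapred.TableMapReduceUtil;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class ReduceJobDriver {
  public static void main(String[] args) throws IOException {
    JobConf job = new JobConf(HBaseConfiguration.create(), ReduceJobDriver.class);
    job.setJobName("example-table-reduce");
    // ... configure the input format and mapper side of the job here ...
    // MyTableReducer is a hypothetical reducer implementing
    // org.apache.hadoop.hbase.mapred.TableReduce. Write its output to
    // "output_table", partition by region, and ship dependency jars (tmpjars).
    TableMapReduceUtil.initTableReduceJob("output_table", MyTableReducer.class,
        job, HRegionPartitioner.class, true);
    JobClient.runJob(job);
  }
}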

Example 2: createPresplitTable

import org.apache.hadoop.hbase.catalog.MetaReader; // import the package/class the method depends on
static void createPresplitTable(String tableName, SplitAlgorithm splitAlgo,
        String[] columnFamilies, Configuration conf) throws IOException,
        InterruptedException {
  final int splitCount = conf.getInt("split.count", 0);
  Preconditions.checkArgument(splitCount > 1, "Split count must be > 1");

  Preconditions.checkArgument(columnFamilies.length > 0,
      "Must specify at least one column family. ");
  LOG.debug("Creating table " + tableName + " with " + columnFamilies.length
      + " column families.  Presplitting to " + splitCount + " regions");

  HTableDescriptor desc = new HTableDescriptor(TableName.valueOf(tableName));
  for (String cf : columnFamilies) {
    desc.addFamily(new HColumnDescriptor(Bytes.toBytes(cf)));
  }
  HBaseAdmin admin = new HBaseAdmin(conf);
  Preconditions.checkArgument(!admin.tableExists(tableName),
      "Table already exists: " + tableName);
  admin.createTable(desc, splitAlgo.split(splitCount));
  admin.close();
  LOG.debug("Table created!  Waiting for regions to show online in META...");
  if (!conf.getBoolean("split.verify", true)) {
    // NOTE: createTable is synchronous on the table, but not on the regions
    int onlineRegions = 0;
    while (onlineRegions < splitCount) {
      onlineRegions = MetaReader.getRegionCount(conf, tableName);
      LOG.debug(onlineRegions + " of " + splitCount + " regions online...");
      if (onlineRegions < splitCount) {
        Thread.sleep(10 * 1000); // sleep
      }
    }
  }

  LOG.debug("Finished creating table with " + splitCount + " regions");
}
 
Developer ID: tenggyut, Project: HIndex, Lines of code: 36, Source file: RegionSplitter.java
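createPresplitTable is package-private, so a direct caller must live in org.apache.hadoop.hbase.util (normal usage goes through the RegionSplitter command-line tool). The sketch below is hypothetical and mainly illustrates the configuration keys the method reads; the table name and column family are placeholders.

package org.apache.hadoop.hbase.util;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class PresplitDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Number of presplit regions; createPresplitTable requires a value > 1.
    conf.setInt("split.count", 10);
    // "split.verify" (default true) gates the META polling loop shown in Example 2.
    RegionSplitter.SplitAlgorithm algo = new RegionSplitter.HexStringSplit();
    // Placeholder table and column-family names.
    RegionSplitter.createPresplitTable("demo_table", algo, new String[] { "cf" }, conf);
  }
}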

Example 3: initTableReducerJob

import org.apache.hadoop.hbase.catalog.MetaReader; // import the package/class the method depends on
/**
 * Use this before submitting a TableReduce job. It will
 * appropriately set up the JobConf.
 *
 * @param table  The output table.
 * @param reducer  The reducer class to use.
 * @param job  The current job to adjust.  Make sure the passed job is
 * carrying all necessary HBase configuration.
 * @param partitioner  Partitioner to use. Pass <code>null</code> to use
 * default partitioner.
 * @param quorumAddress Distant cluster to write to; default is null for
 * output to the cluster that is designated in <code>hbase-site.xml</code>.
 * Set this String to the ZooKeeper ensemble of an alternate remote cluster
 * when you would have the reduce write to a cluster other than the
 * default; e.g. when copying tables between clusters, the source would be
 * designated by <code>hbase-site.xml</code> and this param would have the
 * ensemble address of the remote cluster.  The format to pass is particular.
 * Pass <code>&lt;hbase.zookeeper.quorum&gt;:&lt;hbase.zookeeper.client.port&gt;:&lt;zookeeper.znode.parent&gt;</code>
 * such as <code>server,server2,server3:2181:/hbase</code>.
 * @param serverClass redefined hbase.regionserver.class
 * @param serverImpl redefined hbase.regionserver.impl
 * @param addDependencyJars upload HBase jars and jars for any of the configured
 *           job classes via the distributed cache (tmpjars).
 * @throws IOException When determining the region count fails.
 */
public static void initTableReducerJob(String table,
  Class<? extends TableReducer> reducer, Job job,
  Class partitioner, String quorumAddress, String serverClass,
  String serverImpl, boolean addDependencyJars) throws IOException {

  Configuration conf = job.getConfiguration();
  HBaseConfiguration.merge(conf, HBaseConfiguration.create(conf));
  job.setOutputFormatClass(TableOutputFormat.class);
  if (reducer != null) job.setReducerClass(reducer);
  conf.set(TableOutputFormat.OUTPUT_TABLE, table);
  conf.setStrings("io.serializations", conf.get("io.serializations"),
      MutationSerialization.class.getName(), ResultSerialization.class.getName());
  // If passed a quorum/ensemble address, pass it on to TableOutputFormat.
  if (quorumAddress != null) {
    // Calling this will validate the format
    ZKUtil.transformClusterKey(quorumAddress);
    conf.set(TableOutputFormat.QUORUM_ADDRESS,quorumAddress);
  }
  if (serverClass != null && serverImpl != null) {
    conf.set(TableOutputFormat.REGION_SERVER_CLASS, serverClass);
    conf.set(TableOutputFormat.REGION_SERVER_IMPL, serverImpl);
  }
  job.setOutputKeyClass(ImmutableBytesWritable.class);
  job.setOutputValueClass(Writable.class);
  if (partitioner == HRegionPartitioner.class) {
    job.setPartitionerClass(HRegionPartitioner.class);
    int regions = MetaReader.getRegionCount(conf, table);
    if (job.getNumReduceTasks() > regions) {
      job.setNumReduceTasks(regions);
    }
  } else if (partitioner != null) {
    job.setPartitionerClass(partitioner);
  }

  if (addDependencyJars) {
    addDependencyJars(job);
  }

  initCredentials(job);
}
 
Developer ID: tenggyut, Project: HIndex, Lines of code: 66, Source file: TableMapReduceUtil.java
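As with Example 1, a hedged sketch of a driver calling Example 3's overload, this time writing to a remote cluster. MyTableReducer, the table name, and the ZooKeeper ensemble address are all placeholders; pass null for quorumAddress to write to the local cluster instead.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.HRegionPartitioner;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class ReducerJobDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "example-table-reducer");
    // ... configure the input/mapper side of the job here ...
    // MyTableReducer is a hypothetical org.apache.hadoop.hbase.mapreduce.TableReducer.
    TableMapReduceUtil.initTableReducerJob("output_table", MyTableReducer.class, job,
        HRegionPartitioner.class,
        "zk1,zk2,zk3:2181:/hbase", // quorumAddress in the cluster-key format above
        null, null,                // keep the default region server class/impl
        true);                     // ship dependency jars via tmpjars
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}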

Example 4: limitNumReduceTasks

import org.apache.hadoop.hbase.catalog.MetaReader; // import the package/class the method depends on
/**
 * Ensures that the given number of reduce tasks for the given job
 * configuration does not exceed the number of regions for the given table.
 *
 * @param table  The table to get the region count for.
 * @param job  The current job configuration to adjust.
 * @throws IOException When retrieving the table details fails.
 */
public static void limitNumReduceTasks(String table, JobConf job)
throws IOException {
  int regions = MetaReader.getRegionCount(HBaseConfiguration.create(job), table);
  if (job.getNumReduceTasks() > regions)
    job.setNumReduceTasks(regions);
}
 
Developer ID: tenggyut, Project: HIndex, Lines of code: 15, Source file: TableMapReduceUtil.java

Example 5: limitNumMapTasks

import org.apache.hadoop.hbase.catalog.MetaReader; // import the package/class the method depends on
/**
 * Ensures that the given number of map tasks for the given job
 * configuration does not exceed the number of regions for the given table.
 *
 * @param table  The table to get the region count for.
 * @param job  The current job configuration to adjust.
 * @throws IOException When retrieving the table details fails.
 */
public static void limitNumMapTasks(String table, JobConf job)
throws IOException {
  int regions = MetaReader.getRegionCount(HBaseConfiguration.create(job), table);
  if (job.getNumMapTasks() > regions)
    job.setNumMapTasks(regions);
}
 
Developer ID: tenggyut, Project: HIndex, Lines of code: 15, Source file: TableMapReduceUtil.java

Example 6: limitNumReduceTasks

import org.apache.hadoop.hbase.catalog.MetaReader; // import the package/class the method depends on
/**
 * Ensures that the given number of reduce tasks for the given job
 * configuration does not exceed the number of regions for the given table.
 *
 * @param table  The table to get the region count for.
 * @param job  The current job to adjust.
 * @throws IOException When retrieving the table details fails.
 */
public static void limitNumReduceTasks(String table, Job job)
throws IOException {
  int regions = MetaReader.getRegionCount(job.getConfiguration(), table);
  if (job.getNumReduceTasks() > regions)
    job.setNumReduceTasks(regions);
}
 
Developer ID: tenggyut, Project: HIndex, Lines of code: 15, Source file: TableMapReduceUtil.java
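The limit helpers in Examples 4 through 6 are typically called right after setting the desired task counts. Below is a minimal sketch using the new-API Job overload from Example 6; the table name is a placeholder.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class CappedJobDriver {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(HBaseConfiguration.create(), "capped-job");
    // Ask for 32 reducers, then let the helper trim the count if the
    // (placeholder) table has fewer regions than that.
    job.setNumReduceTasks(32);
    TableMapReduceUtil.limitNumReduceTasks("output_table", job);
    // Examples 4 and 5 do the same for the old JobConf-based API, capping
    // reduce and map task counts respectively.
  }
}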


Note: the org.apache.hadoop.hbase.catalog.MetaReader.getRegionCount examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright of the source code belongs to the original authors, and distribution and use should follow each project's License. Do not reproduce without permission.