

Java TableReducer Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.hbase.mapreduce.TableReducer. If you are wondering what the TableReducer class does, or how to use it in practice, the curated class examples below may help.


The TableReducer class belongs to the org.apache.hadoop.hbase.mapreduce package. Five code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
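Before the examples, here is a minimal sketch of what a concrete TableReducer subclass typically looks like: it receives the mapper's key/value pairs and emits HBase `Put` mutations. The class name, column family, and qualifier below are illustrative assumptions, not taken from the examples that follow (note the older `Put.add` API is used, matching the HBase version of these snippets; newer versions use `Put.addColumn`):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;

// Illustrative only: sums the IntWritable values for each key and
// writes the total into an HBase row keyed by the reducer key.
public class SumTableReducer
        extends TableReducer<ImmutableBytesWritable, IntWritable, ImmutableBytesWritable> {

    private static final byte[] CF = Bytes.toBytes("cf");       // hypothetical column family
    private static final byte[] COUNT = Bytes.toBytes("count"); // hypothetical qualifier

    @Override
    protected void reduce(ImmutableBytesWritable key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        Put put = new Put(key.get());
        put.add(CF, COUNT, Bytes.toBytes(sum)); // add() in older HBase; addColumn() in newer
        context.write(key, put);
    }
}
```

A class like this is what you would pass as the `reducer` argument to the `prepareTableReducer` / `initTableReducerJob` helpers shown below.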

Example 1: prepareTableReducer

import org.apache.hadoop.hbase.mapreduce.TableReducer; // import the required package/class
/**
 * @param inputPath input path
 * @param outputTable output table
 * @param inputFormat input format
 * @param mapper mapper class
 * @param mapperKey mapper class key
 * @param mapperValue mapper class value
 * @param reducer table reducer
 * @return a job that is ready to execute, unless you still need to apply mapper/reducer-specific settings
 * @throws IOException if job setup fails
 */
@SuppressWarnings("rawtypes")
public Job prepareTableReducer(Path inputPath, String outputTable, Class<? extends InputFormat> inputFormat,
		Class<? extends Mapper> mapper, Class<? extends Writable> mapperKey, Class<? extends Writable> mapperValue,
		Class<? extends TableReducer> reducer) throws IOException
{
	setOutputTable(outputTable);

	Configuration conf = getConf();
	conf.set("mapred.input.dir", inputPath.toString());

	Job job = new Job(conf);
	job.setJobName(getCustomJobName(job, mapper, reducer));

	job.setInputFormatClass(inputFormat);
	job.setMapperClass(mapper);
	job.setMapOutputKeyClass(mapperKey);
	job.setMapOutputValueClass(mapperValue);

	TableMapReduceUtil.initTableReducerJob(getOutputTable(), reducer, job);

	return job;
}
 
Developer: beeldengeluid, project: zieook, lines: 34, source: ZieOokRunnerTool.java

Example 2: initTableReducerJob

import org.apache.hadoop.hbase.mapreduce.TableReducer; // import the required package/class
public static void initTableReducerJob(String table, Class<? extends TableReducer> reducer,
    Job job, Class partitioner, String quorumAddress, String serverClass, String serverImpl,
    boolean addDependencyJars, Class<? extends OutputFormat> outputFormatClass) throws IOException {

  Configuration conf = job.getConfiguration();
  HBaseConfiguration.merge(conf, HBaseConfiguration.create(conf));
  job.setOutputFormatClass(outputFormatClass);
  if (reducer != null) job.setReducerClass(reducer);
  conf.set(TableOutputFormat.OUTPUT_TABLE, table);
  // If passed a quorum/ensemble address, pass it on to TableOutputFormat.
  if (quorumAddress != null) {
    // Calling this will validate the format
    ZKUtil.transformClusterKey(quorumAddress);
    conf.set(TableOutputFormat.QUORUM_ADDRESS, quorumAddress);
  }
  if (serverClass != null && serverImpl != null) {
    conf.set(TableOutputFormat.REGION_SERVER_CLASS, serverClass);
    conf.set(TableOutputFormat.REGION_SERVER_IMPL, serverImpl);
  }
  job.setOutputKeyClass(ImmutableBytesWritable.class);
  job.setOutputValueClass(Writable.class);
  if (partitioner == HRegionPartitioner.class) {
    job.setPartitionerClass(HRegionPartitioner.class);
    HTable outputTable = new HTable(conf, table);
    int regions = outputTable.getRegionsInfo().size();
    if (job.getNumReduceTasks() > regions) {
      job.setNumReduceTasks(outputTable.getRegionsInfo().size());
    }
  } else if (partitioner != null) {
    job.setPartitionerClass(partitioner);
  }

  if (addDependencyJars) {
    addDependencyJars(job);
  }

  TableMapReduceUtil.initCredentials(job);
}
 
Developer: XiaoMi, project: themis, lines: 39, source: ThemisTableMapReduceUtil.java

Example 3: initTableReducerJob

import org.apache.hadoop.hbase.mapreduce.TableReducer; // import the required package/class
/**
 * Use this before submitting a TableReduce job. It will
 * appropriately set up the JobConf.
 *
 * @param table  The output table.
 * @param reducer  The reducer class to use.
 * @param job  The current job to adjust.  Make sure the passed job is
 * carrying all necessary HBase configuration.
 * @param partitioner  Partitioner to use. Pass <code>null</code> to use
 * default partitioner.
 * @param quorumAddress Remote cluster to write to; defaults to null, meaning
 * output goes to the cluster designated in <code>hbase-site.xml</code>.
 * Set this String to the ZooKeeper ensemble of an alternate remote cluster
 * when you want the reduce to write to a cluster other than the default,
 * e.g. when copying tables between clusters: the source is designated by
 * <code>hbase-site.xml</code> and this parameter carries the ensemble
 * address of the remote cluster. The format is strict:
 * <code>&lt;hbase.zookeeper.quorum&gt;:&lt;hbase.zookeeper.client.port&gt;:&lt;zookeeper.znode.parent&gt;</code>,
 * such as <code>server,server2,server3:2181:/hbase</code>.
 * @param serverClass redefined hbase.regionserver.class
 * @param serverImpl redefined hbase.regionserver.impl
 * @param addDependencyJars upload HBase jars and jars for any of the configured
 *           job classes via the distributed cache (tmpjars).
 * @throws IOException When determining the region count fails.
 */
public static void initTableReducerJob(String table,
  Class<? extends TableReducer> reducer, Job job,
  Class partitioner, String quorumAddress, String serverClass,
  String serverImpl, boolean addDependencyJars) throws IOException {

  Configuration conf = job.getConfiguration();    
  HBaseConfiguration.merge(conf, HBaseConfiguration.create(conf));
  job.setOutputFormatClass(TableOutputFormat.class);
  if (reducer != null) job.setReducerClass(reducer);
  conf.set(TableOutputFormat.OUTPUT_TABLE, table);
  // If passed a quorum/ensemble address, pass it on to TableOutputFormat.
  if (quorumAddress != null) {
    // Calling this will validate the format
    ZKUtil.transformClusterKey(quorumAddress);
    conf.set(TableOutputFormat.QUORUM_ADDRESS, quorumAddress);
  }
  if (serverClass != null && serverImpl != null) {
    conf.set(TableOutputFormat.REGION_SERVER_CLASS, serverClass);
    conf.set(TableOutputFormat.REGION_SERVER_IMPL, serverImpl);
  }
  job.setOutputKeyClass(ImmutableBytesWritable.class);
  job.setOutputValueClass(Writable.class);
  if (partitioner == HRegionPartitioner.class) {
    job.setPartitionerClass(HRegionPartitioner.class);
    HTable outputTable = new HTable(conf, table);
    int regions = outputTable.getRegionsInfo().size();
    if (job.getNumReduceTasks() > regions) {
      job.setNumReduceTasks(outputTable.getRegionsInfo().size());
    }
  } else if (partitioner != null) {
    job.setPartitionerClass(partitioner);
  }

  if (addDependencyJars) {
    addDependencyJars(job);
  }

  initCredentials(job);
}
 
Developer: lifeng5042, project: RStore, lines: 65, source: TableMapReduceUtil.java
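The cluster-key format that `quorumAddress` must follow (and that `ZKUtil.transformClusterKey` validates above) is easy to get wrong. The sketch below is not HBase's actual implementation, just a stdlib-only illustration of how the three parts of a key like `server,server2,server3:2181:/hbase` decompose:

```java
// Illustrative sketch of the quorumAddress cluster-key format:
//   <hbase.zookeeper.quorum>:<hbase.zookeeper.client.port>:<zookeeper.znode.parent>
// e.g. "server,server2,server3:2181:/hbase"
class ClusterKeySketch {

    // Returns { quorum, clientPort, znodeParent } or throws on a malformed key.
    static String[] parseClusterKey(String key) {
        // Split from the right: the quorum is a comma-separated host list
        // that must not itself contain ':' in this simplified sketch.
        int lastColon = key.lastIndexOf(':');
        int secondColon = lastColon > 0 ? key.lastIndexOf(':', lastColon - 1) : -1;
        if (lastColon < 0 || secondColon < 0) {
            throw new IllegalArgumentException("Invalid cluster key: " + key);
        }
        String quorum = key.substring(0, secondColon);
        String clientPort = key.substring(secondColon + 1, lastColon);
        String znodeParent = key.substring(lastColon + 1);
        return new String[] { quorum, clientPort, znodeParent };
    }
}
```

Passing a malformed key to `initTableReducerJob` fails early precisely because the format is validated before being written into `TableOutputFormat.QUORUM_ADDRESS`.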

Example 4: createReduceDriver

import org.apache.hadoop.hbase.mapreduce.TableReducer; // import the required package/class
public TableReduceDriver<ImmutableBytesWritable, IntWritable, ImmutableBytesWritable> createReduceDriver(TableReducer<ImmutableBytesWritable, IntWritable, ImmutableBytesWritable> tableReducer) {
    TableReduceDriver<ImmutableBytesWritable, IntWritable, ImmutableBytesWritable> reduceDriver = TableReduceDriver.newTableReduceDriver(tableReducer);
    configure(reduceDriver.getConfiguration());
    return reduceDriver;
}
 
Developer: flipkart-incubator, project: hbase-object-mapper, lines: 6, source: AbstractMRTest.java

Example 5: initMultiTableReducerJob

import org.apache.hadoop.hbase.mapreduce.TableReducer; // import the required package/class
public static void initMultiTableReducerJob(Class<? extends TableReducer> reducer,
    Job job) throws IOException {
  initTableReducerJob("", reducer, job, null, null, null, null, true,
    MultiThemisTableOutputFormat.class);
}
 
Developer: XiaoMi, project: themis, lines: 6, source: ThemisTableMapReduceUtil.java


Note: the org.apache.hadoop.hbase.mapreduce.TableReducer examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by their respective authors; copyright remains with the original authors, and any use or redistribution should follow each project's License. Do not reproduce without permission.