

Java ChainReducer.addMapper Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.mapreduce.lib.chain.ChainReducer.addMapper. If you are wondering what ChainReducer.addMapper does, how to call it, or what real-world uses of it look like, the selected code examples below may help. You can also browse further usage examples of its enclosing class, org.apache.hadoop.mapreduce.lib.chain.ChainReducer.


Two code examples of the ChainReducer.addMapper method are shown below, sorted by popularity by default.
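
Before the two examples, here is a minimal, self-contained sketch of how ChainReducer.addMapper fits into a chained job together with ChainMapper.addMapper and ChainReducer.setReducer, i.e. the [MAP+ / REDUCE MAP*] composition that a single MapReduce job can run. The TokenizerMapper, SumReducer and UppercaseMapper classes and the job name are hypothetical placeholders written for this illustration only; the ChainMapper/ChainReducer calls themselves follow the standard Hadoop API.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class ChainJobSketch {

    // Hypothetical pre-reduce mapper: splits each line into (word, 1) pairs.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Hypothetical reducer: sums the counts per word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    // Hypothetical post-reduce mapper: upper-cases the word, keeping the count.
    public static class UppercaseMapper extends Mapper<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void map(Text key, IntWritable value, Context context)
                throws IOException, InterruptedException {
            context.write(new Text(key.toString().toUpperCase()), value);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "chain example");
        job.setJarByClass(ChainJobSketch.class);

        // One or more mappers run before the reducer (ChainMapper.addMapper).
        ChainMapper.addMapper(job, TokenizerMapper.class,
                LongWritable.class, Text.class, Text.class, IntWritable.class,
                new Configuration(false));

        // Exactly one reducer is set via ChainReducer.setReducer.
        ChainReducer.setReducer(job, SumReducer.class,
                Text.class, IntWritable.class, Text.class, IntWritable.class,
                new Configuration(false));

        // Zero or more mappers run after the reducer (ChainReducer.addMapper);
        // each consumes the output of the previous element in the chain.
        ChainReducer.addMapper(job, UppercaseMapper.class,
                Text.class, IntWritable.class, Text.class, IntWritable.class,
                new Configuration(false));

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        TextInputFormat.addInputPath(job, new Path(args[0]));
        TextOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}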

Example 1: run

import org.apache.hadoop.mapreduce.lib.chain.ChainReducer; // import the package/class the method depends on
@Override
public int run(String[] args) throws Exception {
    Configuration conf = this.getConf();
    Configuration reduceConf = new Configuration(false);
    Configuration mapConf = new Configuration(false);
    Job job = Job.getInstance(conf, "correlate logs");
    job.setJarByClass(CorrelateLogs.class);

    // Scan only the "struct" column family of the HBase input table
    Scan scan = new Scan();
    scan.setCaching(500);
    scan.setCacheBlocks(false);
    scan.addFamily(Bytes.toBytes("struct"));
    TableMapReduceUtil.initTableMapperJob(args[0], scan, HBaseMapper.class, Text.class, LongWritable.class, job);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(LongWritable.class);

    job.setNumReduceTasks(1);

    // Chain: HBaseReducer runs first, then AggregateMapper consumes its (Text, LongPairWritable) output
    ChainReducer.setReducer(job, HBaseReducer.class, Text.class, LongWritable.class,
            Text.class, LongPairWritable.class, reduceConf);
    ChainReducer.addMapper(job, AggregateMapper.class, Text.class, LongPairWritable.class, Text.class, DoubleWritable.class, mapConf);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(DoubleWritable.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    TextInputFormat.addInputPath(job, new Path(args[0]));
    TextOutputFormat.setOutputPath(job, new Path(args[1]));
 
    return job.waitForCompletion(true) ? 0 : 1;
}
 
Developer ID: hanhanwu, Project: Hanhan-HBase-MapReduce-in-Java, Lines of code: 32, Source file: CorrelateLogs.java
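
A short note on what this example does: HBaseMapper reads the HBase table and emits (Text, LongWritable) pairs, HBaseReducer turns them into (Text, LongPairWritable), and ChainReducer.addMapper appends AggregateMapper so that it runs on the reducer's output inside the same reduce task. The input classes passed to addMapper (Text, LongPairWritable) line up with the output classes declared in setReducer, and the chained mapper's output classes (Text, DoubleWritable) become the job's final output, which is why setOutputValueClass is DoubleWritable rather than LongPairWritable.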

Example 2: addMap

import org.apache.hadoop.mapreduce.lib.chain.ChainReducer; // import the package/class the method depends on
@Override
public void addMap(final Class<? extends Mapper> mapper,
                   final Class<? extends WritableComparable> mapOutputKey,
                   final Class<? extends WritableComparable> mapOutputValue,
                   Configuration configuration) {

    Configuration mergedConf = overlayConfiguration(getConf(), configuration);

    try {
        final Job job;
        if (State.NONE == this.state) {
            // Create a new job with a reference to mergedConf
            job = Job.getInstance(mergedConf);
            job.setNumReduceTasks(0);
            job.setJobName(makeClassName(mapper));
            HBaseAuthHelper.setHBaseAuthToken(mergedConf, job);
            this.jobs.add(job);
        } else {
            job = this.jobs.get(this.jobs.size() - 1);
            job.setJobName(job.getJobName() + ARROW + makeClassName(mapper));
        }

        if (State.MAPPER == this.state || State.NONE == this.state) {
            ChainMapper.addMapper(job, mapper, NullWritable.class, FaunusVertex.class, mapOutputKey, mapOutputValue, mergedConf);
            /* In case no reducer is defined later for this job, set the job
             * output k/v to match the mapper output k-v.  Output formats that
             * care about their configured k-v classes (such as
             * SequenceFileOutputFormat) require these to be set correctly lest
             * they throw an exception at runtime.
             *
             * ChainReducer.setReducer overwrites these k-v settings, so if a
             * reducer is added onto this job later, these settings will be
             * overridden by the actual reducer's output k-v.
             */
            job.setOutputKeyClass(mapOutputKey);
            job.setOutputValueClass(mapOutputValue);
            this.state = State.MAPPER;
            logger.info("Added mapper " + job.getJobName() + " via ChainMapper with output (" + mapOutputKey + "," + mapOutputValue + "); current state is " + state);
        } else {
            ChainReducer.addMapper(job, mapper, NullWritable.class, FaunusVertex.class, mapOutputKey, mapOutputValue, mergedConf);
            this.state = State.REDUCER;
            logger.info("Added mapper " + job.getJobName() + " via ChainReducer with output (" + mapOutputKey + "," + mapOutputValue + "); current state is " + state);
        }
    } catch (IOException e) {
        throw new RuntimeException(e.getMessage(), e);
    }
}
 
Developer ID: graben1437, Project: titan0.5.4-hbase1.1.1-custom, Lines of code: 48, Source file: Hadoop2Compiler.java
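
A note on the design choice here: which chain class is used depends on a small state machine. While the compiler is still in the map phase of the current job (state NONE or MAPPER), new mappers are appended with ChainMapper.addMapper; once a reducer has been attached to the job (state REDUCER, which is presumably set elsewhere in this compiler), further mappers are appended after it with ChainReducer.addMapper instead. This mirrors the [MAP+ / REDUCE MAP*] composition that Hadoop's chain classes allow within a single job.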


Note: The org.apache.hadoop.mapreduce.lib.chain.ChainReducer.addMapper method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects contributed by their respective authors, and copyright of the source code remains with those authors; please consult the corresponding project's license before distributing or using it, and do not republish this article without permission.