

Java Reducer.Context Method Code Examples

This article collects typical usage examples of org.apache.hadoop.mapreduce.Reducer.Context in Java. If you are wondering what Reducer.Context is, how to use it, or where to find real-world code that exercises it, the curated examples below should help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.mapreduce.Reducer.


Below are 15 code examples of Reducer.Context, ordered by popularity by default.
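
For orientation before the examples: Reducer.Context is the nested context type the MapReduce framework passes into a reducer, and output is emitted through context.write(...). A minimal, self-contained sketch of that basic pattern (not taken from any of the projects below; the class name is illustrative):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Minimal sketch: sums the values for each key and writes the total
// through the Context handed in by the framework.
public class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        result.set(sum);
        context.write(key, result); // Context here is Reducer<...>.Context
    }
}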

Example 1: reduce

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
@Override
public void reduce(final IntermediateProspect prospect, final Iterable<LongWritable> counts, final Date timestamp, final Reducer.Context context) throws IOException, InterruptedException {
    long sum = 0;
    for(final LongWritable count : counts) {
        sum += count.get();
    }

    final String indexType = prospect.getTripleValueType().getIndexType();

    // not sure if this is the best idea..
    if ((sum >= 0) || indexType.equals(TripleValueType.PREDICATE.getIndexType())) {
        final Mutation m = new Mutation(indexType + DELIM + prospect.getData() + DELIM + ProspectorUtils.getReverseIndexDateTime(timestamp));

        final String dataType = prospect.getDataType();
        final ColumnVisibility visibility = new ColumnVisibility(prospect.getVisibility());
        final Value sumValue = new Value(("" + sum).getBytes(StandardCharsets.UTF_8));
        m.put(COUNT, dataType, visibility, timestamp.getTime(), sumValue);

        context.write(null, m);
    }
}
 
Developer ID: apache | Project: incubator-rya | Lines: 22 | Source: CountPlan.java

Example 2: addReducer

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
/**
 * Add reducer that reads from context and writes to a queue
 */
@SuppressWarnings("unchecked")
void addReducer(TaskInputOutputContext inputContext,
    ChainBlockingQueue<KeyValuePair<?, ?>> outputQueue) throws IOException,
    InterruptedException {

  Class<?> keyOutClass = rConf.getClass(REDUCER_OUTPUT_KEY_CLASS,
      Object.class);
  Class<?> valueOutClass = rConf.getClass(REDUCER_OUTPUT_VALUE_CLASS,
      Object.class);
  RecordWriter rw = new ChainRecordWriter(keyOutClass, valueOutClass,
      outputQueue, rConf);
  Reducer.Context reducerContext = createReduceContext(rw,
      (ReduceContext) inputContext, rConf);
  ReduceRunner runner = new ReduceRunner(reducerContext, reducer, rw);
  threads.add(runner);
}
 
Developer ID: ict-carch | Project: hadoop-plus | Lines: 20 | Source: Chain.java
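
As a usage note for the example above: application code normally wires up chained reducers through the ChainReducer helper rather than by calling addReducer directly. A hedged driver-side sketch, where MyReducer and MyPostMapper are hypothetical user classes and the Text/LongWritable types are illustrative assumptions:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;

public class ChainJobDriver {
    // Sketch: a reduce phase consisting of a reducer followed by a chained mapper.
    // MyReducer and MyPostMapper are hypothetical classes, not from the project above.
    public static Job configure(Configuration conf) throws IOException {
        Job job = Job.getInstance(conf, "chain example");
        ChainReducer.setReducer(job, MyReducer.class,
                Text.class, LongWritable.class,   // reducer input key/value
                Text.class, LongWritable.class,   // reducer output key/value
                new Configuration(false));
        ChainReducer.addMapper(job, MyPostMapper.class,
                Text.class, LongWritable.class,
                Text.class, LongWritable.class,
                new Configuration(false));
        return job;
    }
}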

Example 3: setup

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
@Override
public void setup(final Reducer.Context context) throws IOException, InterruptedException {
    faunusConf = ModifiableHadoopConfiguration.of(DEFAULT_COMPAT.getContextConfiguration(context));

    if (!faunusConf.has(LINK_DIRECTION)) {
        Iterator<Entry<String, String>> it = DEFAULT_COMPAT.getContextConfiguration(context).iterator();
        log.error("Broken configuration missing {}", LINK_DIRECTION);
        log.error("---- Start config dump ----");
        while (it.hasNext()) {
            Entry<String,String> ent = it.next();
            log.error("k:{} -> v:{}", ent.getKey(), ent.getValue());
        }
        log.error("---- End config dump   ----");
        throw new NullPointerException();
    }
    direction = faunusConf.get(LINK_DIRECTION).opposite();
}
 
Developer ID: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 18 | Source: LinkMapReduce.java

Example 4: runReducer

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
@SuppressWarnings("unchecked")
<KEYIN, VALUEIN, KEYOUT, VALUEOUT> void runReducer(
    TaskInputOutputContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT> context)
    throws IOException, InterruptedException {
  RecordWriter<KEYOUT, VALUEOUT> rw = new ChainRecordWriter<KEYOUT, VALUEOUT>(
      context);
  Reducer.Context reducerContext = createReduceContext(rw,
      (ReduceContext) context, rConf);
  reducer.run(reducerContext);
  rw.close(context);
}
 
Developer ID: naver | Project: hadoop | Lines: 12 | Source: Chain.java

Example 5: reduce

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
@Test
public void reduce(@Mocked final Reducer.Context defaultContext) throws IOException, InterruptedException {
    BitcoinTransactionReducer reducer = new BitcoinTransactionReducer();
    final Text defaultKey = new Text("Transaction Input Count:");
    final IntWritable oneInt = new IntWritable(1);
    final IntWritable twoInt = new IntWritable(2);
    final LongWritable resultLong = new LongWritable(3);
    final ArrayList<IntWritable> al = new ArrayList<>();
    al.add(oneInt);
    al.add(twoInt);
    new Expectations() {{
        defaultContext.write(defaultKey, resultLong); times = 1;
    }};
    reducer.reduce(defaultKey, al, defaultContext);
}
 
Developer ID: ZuInnoTe | Project: hadoopcryptoledger | Lines: 16 | Source: MapReduceBitcoinTransactionTest.java

Example 6: reduce

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
@Test
public void reduce(@Mocked final Reducer.Context defaultContext) throws IOException, InterruptedException {
    EthereumBlockReducer reducer = new EthereumBlockReducer();
    final Text defaultKey = new Text("Transaction Count:");
    final IntWritable oneInt = new IntWritable(1);
    final IntWritable twoInt = new IntWritable(2);
    final LongWritable resultLong = new LongWritable(3);
    final ArrayList<IntWritable> al = new ArrayList<>();
    al.add(oneInt);
    al.add(twoInt);
    new Expectations() {{
        defaultContext.write(defaultKey, resultLong); times = 1;
    }};
    reducer.reduce(defaultKey, al, defaultContext);
}
 
Developer ID: ZuInnoTe | Project: hadoopcryptoledger | Lines: 16 | Source: MapReduceEthereumBlockTest.java

Example 7: reduce

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
@Test
public void reduce(@Mocked final Reducer.Context defaultContext) throws IOException, InterruptedException {
    BitcoinBlockReducer reducer = new BitcoinBlockReducer();
    final Text defaultKey = new Text("Transaction Count:");
    final IntWritable oneInt = new IntWritable(1);
    final IntWritable twoInt = new IntWritable(2);
    final LongWritable resultLong = new LongWritable(3);
    final ArrayList<IntWritable> al = new ArrayList<>();
    al.add(oneInt);
    al.add(twoInt);
    new Expectations() {{
        defaultContext.write(defaultKey, resultLong); times = 1;
    }};
    reducer.reduce(defaultKey, al, defaultContext);
}
 
Developer ID: ZuInnoTe | Project: hadoopcryptoledger | Lines: 16 | Source: MapReduceBitcoinBlockTest.java
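
Examples 5 through 7 share one JMockit testing pattern worth noting: @Mocked injects a mock Reducer.Context, the Expectations block records that context.write(key, sum) must happen exactly once (times = 1), and the reducer's reduce method is then invoked directly, with no running MapReduce job. The same recipe should work for any reducer whose only observable effect is what it writes to the context.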

Example 8: checkBinaries

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
protected String checkBinaries(Reducer.Context context) throws IOException {
    Logger.DEBUG("Checking for binaries...");
    String binDir = null;
    URI[] localPaths = context.getCacheArchives();
    for(int i = 0; i < localPaths.length; i++ ) {
        Path path = new Path(localPaths[i].getPath());
        if(path.getName().startsWith("bin") && path.getName().endsWith(".tar.gz")) {
            binDir = "./" + path.getName() + "/bin/";
        }
    }
    if(binDir == null)
        throw new IOException("Can't find the binary file; its name should start with 'bin' and end in '.tar.gz'");
    printDirectoryTree(new File(binDir), 0);
    return binDir;
}
 
Developer ID: biointec | Project: halvade | Lines: 16 | Source: HalvadeReducer.java
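
checkBinaries above only scans context.getCacheArchives(); it relies on the job driver having shipped a matching archive beforehand. A minimal sketch of the driver-side setup it assumes (the HDFS path and class name are illustrative, not from the halvade project):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class BinArchiveSetup {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "halvade-style job");
        // Ship an archive whose name starts with "bin" and ends in ".tar.gz",
        // which is exactly the pattern checkBinaries() looks for.
        job.addCacheArchive(new URI("hdfs:///apps/tools/bin.tar.gz"));
    }
}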

Example 9: setContext

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
public void setContext(Reducer.Context context) {
    this.context = context;
//    mem = context.getConfiguration().get("mapreduce.reduce.java.opts");
    tmp = HalvadeConf.getScratchTempDir(context.getConfiguration());
    java.add(javaTmpdir + tmp + "javatmp/");
    mem = "-Xmx" + (int) (0.8 * Integer.parseInt(context.getConfiguration().get("mapreduce.reduce.memory.mb"))) + "m";
    String customArgs = HalvadeConf.getCustomArgs(context.getConfiguration(), "java", "");
    if (customArgs != null)
        java.add(customArgs);
}
 
Developer ID: biointec | Project: halvade | Lines: 11 | Source: GATKTools.java

Example 10: setContext

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
public void setContext(Reducer.Context context) {
    this.context = context;
    mem = "-Xmx" + (int) (0.8 * Integer.parseInt(context.getConfiguration().get("mapreduce.reduce.memory.mb"))) + "m";
//    mem = context.getConfiguration().get("mapreduce.reduce.java.opts");
    java.add(javaTmpdir + HalvadeConf.getScratchTempDir(context.getConfiguration()) + "javatmp/");
    String customArgs = HalvadeConf.getCustomArgs(context.getConfiguration(), "java", "");
    if (customArgs != null)
        java.add(customArgs);
}
 
Developer ID: biointec | Project: halvade | Lines: 10 | Source: PreprocessingTools.java
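
Both setContext variants (examples 9 and 10) derive the reducer JVM heap flag as 80% of the container size in mapreduce.reduce.memory.mb. A worked illustration of just that computation (the 4096 MB figure is only an example value):

public class HeapFlagDemo {
    public static void main(String[] args) {
        int containerMb = 4096; // pretend value of mapreduce.reduce.memory.mb
        String mem = "-Xmx" + (int) (0.8 * containerMb) + "m";
        System.out.println(mem); // prints -Xmx3276m
    }
}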

Example 11: streamElPrep

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
public int streamElPrep(Reducer.Context context, String output, String rg,
        int threads, SAMRecordIterator SAMit,
        SAMFileHeader header, String dictFile, boolean updateRG, boolean keepDups, String RGID)
        throws InterruptedException, IOException, QualityException {
    long startTime = System.currentTimeMillis();
    String customArgs = HalvadeConf.getCustomArgs(context.getConfiguration(), "elprep", "");
    String[] command = CommandGenerator.elPrep(bin, "/dev/stdin", output, threads, true, rg, null, !keepDups, customArgs);
//    runProcessAndWait(command);
    ProcessBuilderWrapper builder = new ProcessBuilderWrapper(command, null);
    builder.startProcess(true);
    BufferedWriter localWriter = builder.getSTDINWriter();

    // write header
    final StringWriter headerTextBuffer = new StringWriter();
    new SAMTextHeaderCodec().encode(headerTextBuffer, header);
    final String headerText = headerTextBuffer.toString();
    localWriter.write(headerText, 0, headerText.length());

    SAMRecord sam;
    int reads = 0;
    while (SAMit.hasNext()) {
        sam = SAMit.next();
        if (updateRG)
            sam.setAttribute(SAMTag.RG.name(), RGID);
        String samString = sam.getSAMString();
        localWriter.write(samString, 0, samString.length());
        reads++;
    }
    localWriter.flush();
    localWriter.close();

    int error = builder.waitForCompletion();
    if (error != 0)
        throw new ProcessException("elPrep", error);
    long estimatedTime = System.currentTimeMillis() - startTime;
    Logger.DEBUG("estimated time: " + estimatedTime / 1000);
    if (context != null)
        context.getCounter(HalvadeCounters.TIME_ELPREP).increment(estimatedTime);
    return reads;
}
 
Developer ID: biointec | Project: halvade | Lines: 41 | Source: PreprocessingTools.java

Example 12: buildNewReducerContext

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
@SuppressWarnings({ "unchecked", "rawtypes" })
private static <K1, V1, K2, V2> Reducer<K1, V1, K2, V2>.Context buildNewReducerContext(
    Configuration configuration, RecordWriter<K2, V2> output,
    Class<K1> keyClass, Class<V1> valueClass) throws Exception {
  Class<?> reduceContextImplClass = Class
      .forName("org.apache.hadoop.mapreduce.task.ReduceContextImpl");
  Constructor<?> cons = reduceContextImplClass.getConstructors()[0];
  Object reduceContextImpl = cons.newInstance(configuration,
      new TaskAttemptID(), new MockIterator(), null, null, output, null,
      new DummyStatusReporter(), null, keyClass, valueClass);

  Class<?> wrappedReducerClass = Class
      .forName("org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer");
  Object wrappedReducer = wrappedReducerClass.newInstance();
  Method getReducerContext = wrappedReducerClass.getMethod(
      "getReducerContext", ReduceContext.class);
  return (Reducer.Context) getReducerContext.invoke(wrappedReducer,
      reduceContextImpl);
}
 
Developer ID: SiddharthMalhotra | Project: sPCA | Lines: 20 | Source: DummyRecordWriter.java

Example 13: buildOldReducerContext

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
@SuppressWarnings({ "unchecked", "rawtypes" })
private static <K1, V1, K2, V2> Reducer<K1, V1, K2, V2>.Context buildOldReducerContext(
    Reducer<K1, V1, K2, V2> reducer, Configuration configuration,
    RecordWriter<K2, V2> output, Class<K1> keyClass, Class<V1> valueClass)
    throws Exception {
  Constructor<?> cons = getNestedContextConstructor(reducer.getClass());
  // first argument to the constructor is the enclosing instance
  return (Reducer.Context) cons.newInstance(reducer, configuration,
      new TaskAttemptID(), new MockIterator(), null, null, output, null,
      new DummyStatusReporter(), null, keyClass, valueClass);
}
 
Developer ID: SiddharthMalhotra | Project: sPCA | Lines: 12 | Source: DummyRecordWriter.java

Example 14: shouldGetGroupFromElementConverter

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
@Test
public void shouldGetGroupFromElementConverter() throws IOException, InterruptedException {
    // Given
    MockAccumuloElementConverter.mock = mock(AccumuloElementConverter.class);
    final Key key = mock(Key.class);
    final List<Value> values = Arrays.asList(mock(Value.class), mock(Value.class));
    final Reducer.Context context = mock(Reducer.Context.class);
    final Configuration conf = mock(Configuration.class);
    final Schema schema = new Schema.Builder()
            .edge(TestGroups.ENTITY, new SchemaEdgeDefinition())
            .build();
    final ByteSequence colFamData = mock(ByteSequence.class);
    final byte[] colFam = StringUtil.toBytes(TestGroups.ENTITY);

    given(context.nextKey()).willReturn(true, false);
    given(context.getCurrentKey()).willReturn(key);
    given(context.getValues()).willReturn(values);
    given(context.getConfiguration()).willReturn(conf);
    given(context.getCounter(any(), any())).willReturn(mock(Counter.class));
    given(conf.get(SCHEMA)).willReturn(StringUtil.toString(schema.toCompactJson()));
    given(conf.get(AccumuloStoreConstants.ACCUMULO_ELEMENT_CONVERTER_CLASS)).willReturn(MockAccumuloElementConverter.class.getName());
    given(colFamData.getBackingArray()).willReturn(colFam);
    given(key.getColumnFamilyData()).willReturn(colFamData);
    given(MockAccumuloElementConverter.mock.getGroupFromColumnFamily(colFam)).willReturn(TestGroups.ENTITY);

    final AccumuloKeyValueReducer reducer = new AccumuloKeyValueReducer();

    // When
    reducer.run(context);

    // Then
    verify(MockAccumuloElementConverter.mock, times(1)).getGroupFromColumnFamily(colFam);
}
 
Developer ID: gchq | Project: Gaffer | Lines: 34 | Source: AccumuloKeyValueReducerTest.java
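
Unlike the JMockit tests in examples 5 through 7, the Gaffer test above drives the whole reducer loop rather than a single reduce call: stubbing context.nextKey() to return true then false makes reducer.run(context) process exactly one key before terminating, which is a handy Mockito pattern for exercising run()-level behavior against a mocked Reducer.Context.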

Example 15: createReduceContext

import org.apache.hadoop.mapreduce.Reducer; // import the package/class the method depends on
@SuppressWarnings({ "rawtypes", "unchecked" })
public Reducer.Context createReduceContext(Configuration conf, TaskAttemptID taskid, RawKeyValueIterator input,
        Counter inputKeyCounter, Counter inputValueCounter, RecordWriter output, OutputCommitter committer,
        StatusReporter reporter, RawComparator comparator, Class keyClass, Class valueClass)
        throws HyracksDataException {
    try {
        return new WrappedReducer().getReducerContext(new ReduceContextImpl(conf, taskid, input, inputKeyCounter,
                inputValueCounter, output, committer, reporter, comparator, keyClass, valueClass));
    } catch (Exception e) {
        throw new HyracksDataException(e);
    }
}
 
Developer ID: apache | Project: incubator-asterixdb-hyracks | Lines: 13 | Source: MRContextUtil.java
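
Examples 12, 13, and 15 all solve the same problem: obtaining a usable Reducer.Context outside a running task. Example 13 reflects on the old nested-Context constructor of the reducer class itself, example 12 looks up ReduceContextImpl and WrappedReducer by class name (apparently so the code also compiles against Hadoop versions that lack those classes), and example 15 calls those same new-API classes directly, which is the simplest route when you can compile against Hadoop 2+.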


Note: The org.apache.hadoop.mapreduce.Reducer.Context examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective developers, and copyright in the source code remains with the original authors. Consult the corresponding project's License before distributing or reusing the code; do not republish without permission.