

Java Reducer.Context Code Examples

This article collects typical usage examples of org.apache.hadoop.mapreduce.Reducer.Context in Java. If you are wondering what Reducer.Context is for and how to use it, the curated code examples below may help. Strictly speaking, Context is the nested context class of Reducer rather than a method: it is the handle a reducer uses to read its configuration, update counters, and emit output. You can also look further into usage examples of the enclosing class, org.apache.hadoop.mapreduce.Reducer.


Fifteen code examples of Reducer.Context are shown below, ordered by popularity.
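Before the collected examples, here is a minimal, self-contained sketch of the most common use of Reducer.Context: a word-count style reducer that sums its input values and emits the result through context.write. The class name and key/value types are illustrative and not taken from any of the projects below.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Context here is Reducer.Context: it exposes write(), getConfiguration(),
        // getCounter() and other task-level services to the reduce call.
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}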

Example 1: reduce

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
@Override
public void reduce(final IntermediateProspect prospect, final Iterable<LongWritable> counts, final Date timestamp, final Reducer.Context context) throws IOException, InterruptedException {
    long sum = 0;
    for(final LongWritable count : counts) {
        sum += count.get();
    }

    final String indexType = prospect.getTripleValueType().getIndexType();

    // not sure if this is the best idea..
    if ((sum >= 0) || indexType.equals(TripleValueType.PREDICATE.getIndexType())) {
        final Mutation m = new Mutation(indexType + DELIM + prospect.getData() + DELIM + ProspectorUtils.getReverseIndexDateTime(timestamp));

        final String dataType = prospect.getDataType();
        final ColumnVisibility visibility = new ColumnVisibility(prospect.getVisibility());
        final Value sumValue = new Value(("" + sum).getBytes(StandardCharsets.UTF_8));
        m.put(COUNT, dataType, visibility, timestamp.getTime(), sumValue);

        context.write(null, m);
    }
}
 
Developer: apache, Project: incubator-rya, Lines: 22, Source: CountPlan.java
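Here the reducer emits Accumulo Mutation objects, and context.write(null, m) sends each mutation to the job's default table, which presumes an Accumulo output format on the job. As a rough, hedged illustration (not taken from incubator-rya; the static configuration methods differ between Accumulo releases, and all connection values are placeholders), the driver-side setup might look like this:

import org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ProspectorDriverSketch {

    // Minimal sketch assuming the Accumulo 1.x mapreduce API; instance name,
    // zookeepers, credentials and table name are all hypothetical.
    public static Job configureOutput(Configuration conf) throws Exception {
        Job job = Job.getInstance(conf, "prospector-count");
        job.setOutputFormatClass(AccumuloOutputFormat.class);
        AccumuloOutputFormat.setConnectorInfo(job, "root", new PasswordToken("secret"));
        AccumuloOutputFormat.setZooKeeperInstance(job, "accumulo-instance", "zk1:2181");
        AccumuloOutputFormat.setDefaultTableName(job, "prospects");
        AccumuloOutputFormat.setCreateTables(job, true);
        return job;
    }
}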

Example 2: addReducer

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
/**
 * Adds a reducer that reads from the context and writes to a queue.
 */
@SuppressWarnings("unchecked")
void addReducer(TaskInputOutputContext inputContext,
    ChainBlockingQueue<KeyValuePair<?, ?>> outputQueue) throws IOException,
    InterruptedException {

  Class<?> keyOutClass = rConf.getClass(REDUCER_OUTPUT_KEY_CLASS,
      Object.class);
  Class<?> valueOutClass = rConf.getClass(REDUCER_OUTPUT_VALUE_CLASS,
      Object.class);
  RecordWriter rw = new ChainRecordWriter(keyOutClass, valueOutClass,
      outputQueue, rConf);
  Reducer.Context reducerContext = createReduceContext(rw,
      (ReduceContext) inputContext, rConf);
  ReduceRunner runner = new ReduceRunner(reducerContext, reducer, rw);
  threads.add(runner);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 20, Source: Chain.java
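addReducer above is internal plumbing of Hadoop's chain package: it builds a Reducer.Context over a queue-backed RecordWriter so chained stages can be piped together. The public entry points that drive this machinery are ChainMapper and ChainReducer. A minimal driver sketch using stock library classes (the rest of the job and input/output setup is assumed to happen elsewhere):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class ChainDriverSketch {

    public static void configureChain(Job job) throws Exception {
        // Tokenize input lines into (word, 1) pairs.
        ChainMapper.addMapper(job, TokenCounterMapper.class,
                Object.class, Text.class, Text.class, IntWritable.class,
                new Configuration(false));
        // Sum the counts; further mappers could be appended after the
        // reducer with ChainReducer.addMapper.
        ChainReducer.setReducer(job, IntSumReducer.class,
                Text.class, IntWritable.class, Text.class, IntWritable.class,
                new Configuration(false));
    }
}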

Example 3: setup

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
@Override
public void setup(final Reducer.Context context) throws IOException, InterruptedException {
    faunusConf = ModifiableHadoopConfiguration.of(DEFAULT_COMPAT.getContextConfiguration(context));

    if (!faunusConf.has(LINK_DIRECTION)) {
        Iterator<Entry<String, String>> it = DEFAULT_COMPAT.getContextConfiguration(context).iterator();
        log.error("Broken configuration missing {}", LINK_DIRECTION);
        log.error("---- Start config dump ----");
        while (it.hasNext()) {
            Entry<String,String> ent = it.next();
            log.error("k:{} -> v:{}", ent.getKey(), ent.getValue());
        }
        log.error("---- End config dump   ----");
        throw new NullPointerException("Missing configuration key: " + LINK_DIRECTION);
    }
    direction = faunusConf.get(LINK_DIRECTION).opposite();
}
 
Developer: graben1437, Project: titan0.5.4-hbase1.1.1-custom, Lines: 18, Source: LinkMapReduce.java

Example 4: runReducer

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
@SuppressWarnings("unchecked")
<KEYIN, VALUEIN, KEYOUT, VALUEOUT> void runReducer(
    TaskInputOutputContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT> context)
    throws IOException, InterruptedException {
  RecordWriter<KEYOUT, VALUEOUT> rw = new ChainRecordWriter<KEYOUT, VALUEOUT>(
      context);
  Reducer.Context reducerContext = createReduceContext(rw,
      (ReduceContext) context, rConf);
  reducer.run(reducerContext);
  rw.close(context);
}
 
Developer: naver, Project: hadoop, Lines: 12, Source: Chain.java

Example 5: reduce

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
@Test
public void reduce(@Mocked final Reducer.Context defaultContext) throws IOException, InterruptedException {
    BitcoinTransactionReducer reducer = new BitcoinTransactionReducer();
    final Text defaultKey = new Text("Transaction Input Count:");
    final IntWritable oneInt = new IntWritable(1);
    final IntWritable twoInt = new IntWritable(2);
    final LongWritable resultLong = new LongWritable(3);
    final ArrayList<IntWritable> al = new ArrayList<>();
    al.add(oneInt);
    al.add(twoInt);
    new Expectations() {{
        defaultContext.write(defaultKey, resultLong); times = 1;
    }};
    reducer.reduce(defaultKey, al, defaultContext);
}
 
Developer: ZuInnoTe, Project: hadoopcryptoledger, Lines: 16, Source: MapReduceBitcoinTransactionTest.java

Example 6: reduce

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
@Test
public void reduce(@Mocked final Reducer.Context defaultContext) throws IOException, InterruptedException {
    EthereumBlockReducer reducer = new EthereumBlockReducer();
    final Text defaultKey = new Text("Transaction Count:");
    final IntWritable oneInt = new IntWritable(1);
    final IntWritable twoInt = new IntWritable(2);
    final LongWritable resultLong = new LongWritable(3);
    final ArrayList<IntWritable> al = new ArrayList<>();
    al.add(oneInt);
    al.add(twoInt);
    new Expectations() {{
        defaultContext.write(defaultKey, resultLong); times = 1;
    }};
    reducer.reduce(defaultKey, al, defaultContext);
}
 
Developer: ZuInnoTe, Project: hadoopcryptoledger, Lines: 16, Source: MapReduceEthereumBlockTest.java

Example 7: reduce

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
@Test
public void reduce(@Mocked final Reducer.Context defaultContext) throws IOException, InterruptedException {
    BitcoinBlockReducer reducer = new BitcoinBlockReducer();
    final Text defaultKey = new Text("Transaction Count:");
    final IntWritable oneInt = new IntWritable(1);
    final IntWritable twoInt = new IntWritable(2);
    final LongWritable resultLong = new LongWritable(3);
    final ArrayList<IntWritable> al = new ArrayList<>();
    al.add(oneInt);
    al.add(twoInt);
    new Expectations() {{
        defaultContext.write(defaultKey, resultLong); times = 1;
    }};
    reducer.reduce(defaultKey, al, defaultContext);
}
 
Developer: ZuInnoTe, Project: hadoopcryptoledger, Lines: 16, Source: MapReduceBitcoinBlockTest.java
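Examples 5-7 verify reducer output by mocking Reducer.Context with JMockit (@Mocked plus an Expectations block). The same check can be expressed with Mockito, which Example 14 below also uses for its context; a minimal sketch against the reducer from Example 7 (BitcoinBlockReducer comes from the hadoopcryptoledger project and is assumed to be on the test classpath):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import java.util.Arrays;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.junit.Test;

public class BitcoinBlockReducerMockitoTest {

    @Test
    @SuppressWarnings("unchecked")
    public void reduceSumsValues() throws Exception {
        // Mock the raw Reducer.Context, as Example 14 below also does.
        final Reducer.Context context = mock(Reducer.Context.class);
        final Text key = new Text("Transaction Count:");
        BitcoinBlockReducer reducer = new BitcoinBlockReducer();
        reducer.reduce(key, Arrays.asList(new IntWritable(1), new IntWritable(2)), context);
        // The reducer is expected to write the summed count exactly once.
        verify(context, times(1)).write(key, new LongWritable(3));
    }
}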

Example 8: checkBinaries

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
protected String checkBinaries(Reducer.Context context) throws IOException {
    Logger.DEBUG("Checking for binaries...");
    String binDir = null;
    URI[] localPaths = context.getCacheArchives();
    for(int i = 0; i < localPaths.length; i++ ) {
        Path path = new Path(localPaths[i].getPath());
        if(path.getName().startsWith("bin") && path.getName().endsWith(".tar.gz")) {
            binDir = "./" + path.getName() + "/bin/";
        }
    }
    if(binDir == null) 
        throw new IOException("Can't find the binary file, the filename should start with 'bin' and end in '.tar.gz'");
    printDirectoryTree(new File(binDir), 0);
    return binDir;
}
 
Developer: biointec, Project: halvade, Lines: 16, Source: HalvadeReducer.java
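checkBinaries looks up a tools archive that was shipped to the task through the distributed cache, via context.getCacheArchives(). The counterpart on the driver side is registering that archive on the Job; a minimal sketch, with a placeholder HDFS path:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CacheArchiveSketch {

    public static Job withBinaries(Configuration conf) throws Exception {
        Job job = Job.getInstance(conf, "halvade-style-job");
        // The archive is unpacked on each task node and exposed in the task
        // working directory under a link named after the file, which is why
        // checkBinaries above expects a name starting with "bin" and ending
        // in ".tar.gz".
        job.addCacheArchive(new URI("hdfs:///apps/tools/bin.tar.gz"));
        return job;
    }
}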

Example 9: setContext

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
public void setContext(Reducer.Context context) {
    this.context = context;
//    mem = context.getConfiguration().get("mapreduce.reduce.java.opts");
    tmp = HalvadeConf.getScratchTempDir(context.getConfiguration());
    java.add(javaTmpdir + tmp + "javatmp/");
    mem = "-Xmx" + (int) (0.8 * Integer.parseInt(context.getConfiguration().get("mapreduce.reduce.memory.mb"))) + "m";
    String customArgs = HalvadeConf.getCustomArgs(context.getConfiguration(), "java", "");
    if (customArgs != null)
        java.add(customArgs);
}
 
Developer: biointec, Project: halvade, Lines: 11, Source: GATKTools.java

Example 10: setContext

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
public void setContext(Reducer.Context context) {
    this.context = context;
    mem = "-Xmx" + (int) (0.8 * Integer.parseInt(context.getConfiguration().get("mapreduce.reduce.memory.mb"))) + "m";
//    mem = context.getConfiguration().get("mapreduce.reduce.java.opts");
    java.add(javaTmpdir + HalvadeConf.getScratchTempDir(context.getConfiguration()) + "javatmp/");
    String customArgs = HalvadeConf.getCustomArgs(context.getConfiguration(), "java", "");
    if (customArgs != null)
        java.add(customArgs);
}
 
Developer: biointec, Project: halvade, Lines: 10, Source: PreprocessingTools.java

Example 11: streamElPrep

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
public int streamElPrep(Reducer.Context context, String output, String rg,
        int threads, SAMRecordIterator SAMit,
        SAMFileHeader header, String dictFile, boolean updateRG, boolean keepDups, String RGID) throws InterruptedException, IOException, QualityException {
    long startTime = System.currentTimeMillis();
    String customArgs = HalvadeConf.getCustomArgs(context.getConfiguration(), "elprep", "");
    String[] command = CommandGenerator.elPrep(bin, "/dev/stdin", output, threads, true, rg, null, !keepDups, customArgs);
//    runProcessAndWait(command);
    ProcessBuilderWrapper builder = new ProcessBuilderWrapper(command, null);
    builder.startProcess(true);
    BufferedWriter localWriter = builder.getSTDINWriter();

    // write header
    final StringWriter headerTextBuffer = new StringWriter();
    new SAMTextHeaderCodec().encode(headerTextBuffer, header);
    final String headerText = headerTextBuffer.toString();
    localWriter.write(headerText, 0, headerText.length());

    SAMRecord sam;
    int reads = 0;
    while (SAMit.hasNext()) {
        sam = SAMit.next();
        if (updateRG)
            sam.setAttribute(SAMTag.RG.name(), RGID);
        String samString = sam.getSAMString();
        localWriter.write(samString, 0, samString.length());
        reads++;
    }
    localWriter.flush();
    localWriter.close();

    int error = builder.waitForCompletion();
    if (error != 0)
        throw new ProcessException("elPrep", error);
    long estimatedTime = System.currentTimeMillis() - startTime;
    Logger.DEBUG("estimated time: " + estimatedTime / 1000);
    if (context != null)
        context.getCounter(HalvadeCounters.TIME_ELPREP).increment(estimatedTime);
    return reads;
}
 
Developer: biointec, Project: halvade, Lines: 41, Source: PreprocessingTools.java

Example 12: buildNewReducerContext

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
@SuppressWarnings({ "unchecked", "rawtypes" })
private static <K1, V1, K2, V2> Reducer<K1, V1, K2, V2>.Context buildNewReducerContext(
    Configuration configuration, RecordWriter<K2, V2> output,
    Class<K1> keyClass, Class<V1> valueClass) throws Exception {
  Class<?> reduceContextImplClass = Class
      .forName("org.apache.hadoop.mapreduce.task.ReduceContextImpl");
  Constructor<?> cons = reduceContextImplClass.getConstructors()[0];
  Object reduceContextImpl = cons.newInstance(configuration,
      new TaskAttemptID(), new MockIterator(), null, null, output, null,
      new DummyStatusReporter(), null, keyClass, valueClass);

  Class<?> wrappedReducerClass = Class
      .forName("org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer");
  Object wrappedReducer = wrappedReducerClass.newInstance();
  Method getReducerContext = wrappedReducerClass.getMethod(
      "getReducerContext", ReduceContext.class);
  return (Reducer.Context) getReducerContext.invoke(wrappedReducer,
      reduceContextImpl);
}
 
Developer: SiddharthMalhotra, Project: sPCA, Lines: 20, Source: DummyRecordWriter.java

Example 13: buildOldReducerContext

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
@SuppressWarnings({ "unchecked", "rawtypes" })
private static <K1, V1, K2, V2> Reducer<K1, V1, K2, V2>.Context buildOldReducerContext(
    Reducer<K1, V1, K2, V2> reducer, Configuration configuration,
    RecordWriter<K2, V2> output, Class<K1> keyClass, Class<V1> valueClass)
    throws Exception {
  Constructor<?> cons = getNestedContextConstructor(reducer.getClass());
  // first argument to the constructor is the enclosing instance
  return (Reducer.Context) cons.newInstance(reducer, configuration,
      new TaskAttemptID(), new MockIterator(), null, null, output, null,
      new DummyStatusReporter(), null, keyClass, valueClass);
}
 
Developer: SiddharthMalhotra, Project: sPCA, Lines: 12, Source: DummyRecordWriter.java
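Examples 12 and 13 target the two historical shapes of Reducer.Context by reflection: buildNewReducerContext goes through ReduceContextImpl plus WrappedReducer (Hadoop 0.21 and later), while buildOldReducerContext instantiates the older nested Context constructor directly, so the same test utility works on both API generations. A hedged usage sketch, assuming DummyRecordWriter exposes a public build(...) helper over these private builders, as Mahout's test utility of the same name does:

import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class DummyRecordWriterSketchTest {

    public void sumsThroughContext() throws Exception {
        IntSumReducer<Text> reducer = new IntSumReducer<>();
        DummyRecordWriter<Text, IntWritable> writer = new DummyRecordWriter<>();
        // Hypothetical helper: assumed to dispatch to buildNewReducerContext
        // or buildOldReducerContext depending on the Hadoop version in use.
        Reducer<Text, IntWritable, Text, IntWritable>.Context context =
                DummyRecordWriter.build(reducer, new Configuration(), writer,
                        Text.class, IntWritable.class);
        reducer.reduce(new Text("key"),
                Arrays.asList(new IntWritable(1), new IntWritable(2)), context);
        // writer can then be inspected to assert that ("key", 3) was written.
    }
}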

Example 14: shouldGetGroupFromElementConverter

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
@Test
public void shouldGetGroupFromElementConverter() throws IOException, InterruptedException {
    // Given
    MockAccumuloElementConverter.mock = mock(AccumuloElementConverter.class);
    final Key key = mock(Key.class);
    final List<Value> values = Arrays.asList(mock(Value.class), mock(Value.class));
    final Reducer.Context context = mock(Reducer.Context.class);
    final Configuration conf = mock(Configuration.class);
    final Schema schema = new Schema.Builder()
            .edge(TestGroups.ENTITY, new SchemaEdgeDefinition())
            .build();
    final ByteSequence colFamData = mock(ByteSequence.class);
    final byte[] colFam = StringUtil.toBytes(TestGroups.ENTITY);

    given(context.nextKey()).willReturn(true, false);
    given(context.getCurrentKey()).willReturn(key);
    given(context.getValues()).willReturn(values);
    given(context.getConfiguration()).willReturn(conf);
    given(context.getCounter(any(), any())).willReturn(mock(Counter.class));
    given(conf.get(SCHEMA)).willReturn(StringUtil.toString(schema.toCompactJson()));
    given(conf.get(AccumuloStoreConstants.ACCUMULO_ELEMENT_CONVERTER_CLASS)).willReturn(MockAccumuloElementConverter.class.getName());
    given(colFamData.getBackingArray()).willReturn(colFam);
    given(key.getColumnFamilyData()).willReturn(colFamData);
    given(MockAccumuloElementConverter.mock.getGroupFromColumnFamily(colFam)).willReturn(TestGroups.ENTITY);

    final AccumuloKeyValueReducer reducer = new AccumuloKeyValueReducer();

    // When
    reducer.run(context);

    // Then
    verify(MockAccumuloElementConverter.mock, times(1)).getGroupFromColumnFamily(colFam);
}
 
Developer: gchq, Project: Gaffer, Lines: 34, Source: AccumuloKeyValueReducerTest.java

Example 15: createReduceContext

import org.apache.hadoop.mapreduce.Reducer; // import the package/class this example depends on
@SuppressWarnings({ "rawtypes", "unchecked" })
public Reducer.Context createReduceContext(Configuration conf, TaskAttemptID taskid, RawKeyValueIterator input,
        Counter inputKeyCounter, Counter inputValueCounter, RecordWriter output, OutputCommitter committer,
        StatusReporter reporter, RawComparator comparator, Class keyClass, Class valueClass)
        throws HyracksDataException {
    try {
        return new WrappedReducer().getReducerContext(new ReduceContextImpl(conf, taskid, input, inputKeyCounter,
                inputValueCounter, output, committer, reporter, comparator, keyClass, valueClass));
    } catch (Exception e) {
        throw new HyracksDataException(e);
    }
}
 
Developer: apache, Project: incubator-asterixdb-hyracks, Lines: 13, Source: MRContextUtil.java


Note: the org.apache.hadoop.mapreduce.Reducer.Context examples in this article were compiled by 純淨天空 from GitHub, MSDocs and other open-source code and documentation platforms. The snippets are selected from open-source projects contributed by their respective developers; copyright of the source code remains with the original authors, and distribution and use are subject to each project's license. Do not republish without permission.