

Java Reducer Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.mapred.Reducer. If you are wondering what the Reducer class is for and how it is used in practice, the curated examples below may help.


The Reducer class belongs to the org.apache.hadoop.mapred package. The sections below present 13 code examples of the class, sorted by popularity by default.

Example 1: combineAndSpill

import org.apache.hadoop.mapred.Reducer; // import the required package/class
private void combineAndSpill(
    RawKeyValueIterator kvIter,
    Counters.Counter inCounter) throws IOException {
  JobConf job = jobConf;
  // The combiner is instantiated reflectively; in the mapred API it is just another Reducer.
  Reducer combiner = ReflectionUtils.newInstance(combinerClass, job);
  Class<K> keyClass = (Class<K>) job.getMapOutputKeyClass();
  Class<V> valClass = (Class<V>) job.getMapOutputValueClass();
  RawComparator<K> comparator =
    (RawComparator<K>) job.getCombinerKeyGroupingComparator();
  try {
    CombineValuesIterator values = new CombineValuesIterator(
        kvIter, comparator, keyClass, valClass, job, Reporter.NULL,
        inCounter);
    // Run each key group through the combiner before the data is spilled to disk.
    while (values.more()) {
      combiner.reduce(values.getKey(), values, combineCollector,
                      Reporter.NULL);
      values.nextKey();
    }
  } finally {
    combiner.close();
  }
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 23, Source: MergeManagerImpl.java
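
For context, combinerClass above is whatever combiner the job registered, which in the mapred API is simply another Reducer implementation. Below is a minimal sketch of such a combiner (the class name SumReducer is hypothetical, not part of the original example); summing is associative, so the same class can serve as both combiner and reducer:

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// A word-count style combiner: sums the partial counts collected for each key.
public class SumReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  public void reduce(Text key, Iterator<IntWritable> values,
      OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}

Registered with job.setCombinerClass(SumReducer.class), this is the kind of class that ReflectionUtils.newInstance(combinerClass, job) instantiates at spill time.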

Example 2: HadoopReduceCombineFunction

import org.apache.hadoop.mapred.Reducer; // import the required package/class
/**
 * Maps two Hadoop Reducers (mapred API) to a combinable Flink GroupReduceFunction.
 *
 * @param hadoopReducer The Hadoop Reducer that is mapped to a GroupReduceFunction.
 * @param hadoopCombiner The Hadoop Reducer that is mapped to the combiner function.
 * @param conf The JobConf that is used to configure both Hadoop Reducers.
 */
public HadoopReduceCombineFunction(Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT> hadoopReducer,
							Reducer<KEYIN, VALUEIN, KEYIN, VALUEIN> hadoopCombiner, JobConf conf) {
	if (hadoopReducer == null) {
		throw new NullPointerException("Reducer may not be null.");
	}
	if (hadoopCombiner == null) {
		throw new NullPointerException("Combiner may not be null.");
	}
	if (conf == null) {
		throw new NullPointerException("JobConf may not be null.");
	}

	this.reducer = hadoopReducer;
	this.combiner = hadoopCombiner;
	this.jobConf = conf;
}
 
Developer ID: axbaretto, Project: flink, Lines: 24, Source: HadoopReduceCombineFunction.java
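
A usage sketch (an addition for illustration, not from the original page): in a Flink DataSet program the wrapper is passed to reduceGroup on a grouped DataSet. SumReducer is the hypothetical combiner/reducer sketched under Example 1, used here for both roles:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.hadoopcompatibility.mapred.HadoopReduceCombineFunction;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;

// words: (word, partial count) pairs produced by an upstream mapper.
public static DataSet<Tuple2<Text, IntWritable>> countWords(DataSet<Tuple2<Text, IntWritable>> words) {
	return words
		.groupBy(0)
		.reduceGroup(new HadoopReduceCombineFunction<Text, IntWritable, Text, IntWritable>(
			new SumReducer(), new SumReducer(), new JobConf()));
}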

Example 3: open

import org.apache.hadoop.mapred.Reducer; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public void open(Configuration parameters) throws Exception {
	super.open(parameters);
	// Pass the JobConf to the wrapped Reducer, mirroring the Hadoop lifecycle.
	this.reducer.configure(jobConf);

	this.reporter = new HadoopDummyReporter();
	this.reduceCollector = new HadoopOutputCollector<KEYOUT, VALUEOUT>();
	// Type parameter 0 of Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT> is the input key type.
	Class<KEYIN> inKeyClass = (Class<KEYIN>) TypeExtractor.getParameterType(Reducer.class, reducer.getClass(), 0);
	TypeSerializer<KEYIN> keySerializer = TypeExtractor.getForClass(inKeyClass).createSerializer(getRuntimeContext().getExecutionConfig());
	this.valueIterator = new HadoopTupleUnwrappingIterator<KEYIN, VALUEIN>(keySerializer);
}
 
Developer ID: axbaretto, Project: flink, Lines: 13, Source: HadoopReduceFunction.java

Example 4: configure

import org.apache.hadoop.mapred.Reducer; // import the required package/class
public void configure(JobConf job) {
    super.configure(job);
    // Optional Mapper applied to the reducer's output (post-hook).
    Class<?> c = job.getClass("stream.reduce.posthook", null, Mapper.class);
    if (c != null) {
        postMapper = (Mapper) ReflectionUtils.newInstance(c, job);
        LOG.info("PostHook=" + c.getName());
    }

    // Optional Reducer run in memory ahead of the streaming reducer (pre-hook).
    c = job.getClass("stream.reduce.prehook", null, Reducer.class);
    if (c != null) {
        preReducer = (Reducer) ReflectionUtils.newInstance(c, job);
        oc = new InmemBufferingOutputCollector();
        LOG.info("PreHook=" + c.getName());
    }
    this.ignoreKey = job.getBoolean("stream.reduce.ignoreKey", false);
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 17, Source: PipeReducer.java
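
The two hooks are read reflectively from plain configuration keys, so a job would register them roughly as below. MyPreReducer and MyPostMapper are hypothetical user classes implementing the mapred Reducer and Mapper interfaces:

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.Reducer;

public static JobConf configureHooks() {
    JobConf job = new JobConf();
    job.setClass("stream.reduce.prehook", MyPreReducer.class, Reducer.class);   // hypothetical class
    job.setClass("stream.reduce.posthook", MyPostMapper.class, Mapper.class);   // hypothetical class
    job.setBoolean("stream.reduce.ignoreKey", true);
    return job;
}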

Example 5: combineAndSpill

import org.apache.hadoop.mapred.Reducer; // import the required package/class
private void combineAndSpill(
    RawKeyValueIterator kvIter,
    Counters.Counter inCounter) throws IOException {
  JobConf job = jobConf;
  Reducer combiner = ReflectionUtils.newInstance(combinerClass, job);
  Class<K> keyClass = (Class<K>) job.getMapOutputKeyClass();
  Class<V> valClass = (Class<V>) job.getMapOutputValueClass();
  // Unlike Example 1, this version groups values with the job's output key comparator.
  RawComparator<K> comparator =
    (RawComparator<K>) job.getOutputKeyComparator();
  try {
    CombineValuesIterator values = new CombineValuesIterator(
        kvIter, comparator, keyClass, valClass, job, Reporter.NULL,
        inCounter);
    while (values.more()) {
      combiner.reduce(values.getKey(), values, combineCollector,
                      Reporter.NULL);
      values.nextKey();
    }
  } finally {
    combiner.close();
  }
}
 
Developer ID: ict-carch, Project: hadoop-plus, Lines: 23, Source: MergeManagerImpl.java

Example 6: runOldCombiner

import org.apache.hadoop.mapred.Reducer; // import the required package/class
private void runOldCombiner(final TezRawKeyValueIterator rawIter, final Writer writer) throws IOException {
  Class<? extends Reducer> reducerClazz = (Class<? extends Reducer>) conf.getClass("mapred.combiner.class", null, Reducer.class);
  
  Reducer combiner = ReflectionUtils.newInstance(reducerClazz, conf);
  
  OutputCollector collector = new OutputCollector() {
    @Override
    public void collect(Object key, Object value) throws IOException {
      writer.append(key, value);
    }
  };
  
  CombinerValuesIterator values = new CombinerValuesIterator(rawIter, keyClass, valClass, comparator);
  
  while (values.moveToNext()) {
    combiner.reduce(values.getKey(), values.getValues().iterator(), collector, reporter);
  }
}
 
Developer ID: apache, Project: incubator-tez, Lines: 19, Source: MRCombiner.java
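
For reference (an addition, not from the page): "mapred.combiner.class" is the key that the old JobConf.setCombinerClass call writes, so the producing side of this lookup can be as simple as the sketch below; newer Hadoop versions keep the old key resolvable through their deprecated-key mapping. SumReducer is the hypothetical combiner from Example 1:

import org.apache.hadoop.mapred.JobConf;

public static JobConf withCombiner() {
  JobConf conf = new JobConf();
  // Stores SumReducer under "mapred.combiner.class", which runOldCombiner reads back.
  conf.setCombinerClass(SumReducer.class);
  return conf;
}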

Example 7: runOldCombiner

import org.apache.hadoop.mapred.Reducer; // import the required package/class
private void runOldCombiner(final TezRawKeyValueIterator rawIter, final Writer writer) throws IOException {
  Class<? extends Reducer> reducerClazz = (Class<? extends Reducer>) conf.getClass("mapred.combiner.class", null, Reducer.class);
  
  Reducer combiner = ReflectionUtils.newInstance(reducerClazz, conf);
  
  OutputCollector collector = new OutputCollector() {
    @Override
    public void collect(Object key, Object value) throws IOException {
      writer.append(key, value);
      combineOutputRecordsCounter.increment(1); // counts combined output records; the only difference from Example 6
    }
  };
  
  CombinerValuesIterator values = new CombinerValuesIterator(rawIter, keyClass, valClass, comparator);
  
  while (values.moveToNext()) {
    combiner.reduce(values.getKey(), values.getValues().iterator(), collector, reporter);
  }
}
 
Developer ID: apache, Project: tez, Lines: 20, Source: MRCombiner.java

Example 8: open

import org.apache.hadoop.mapred.Reducer; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public void open(Configuration parameters) throws Exception {
	super.open(parameters);
	this.reducer.configure(jobConf);
	this.combiner.configure(jobConf);

	this.reporter = new HadoopDummyReporter();
	Class<KEYIN> inKeyClass = (Class<KEYIN>) TypeExtractor.getParameterType(Reducer.class, reducer.getClass(), 0);
	TypeSerializer<KEYIN> keySerializer = TypeExtractor.getForClass(inKeyClass).createSerializer(getRuntimeContext().getExecutionConfig());
	this.valueIterator = new HadoopTupleUnwrappingIterator<>(keySerializer);
	this.combineCollector = new HadoopOutputCollector<>();
	this.reduceCollector = new HadoopOutputCollector<>();
}
 
Developer ID: axbaretto, Project: flink, Lines: 15, Source: HadoopReduceCombineFunction.java

Example 9: getProducedType

import org.apache.hadoop.mapred.Reducer; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public TypeInformation<Tuple2<KEYOUT, VALUEOUT>> getProducedType() {
	// Type parameters 2 and 3 of Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT> are the output key/value types.
	Class<KEYOUT> outKeyClass = (Class<KEYOUT>) TypeExtractor.getParameterType(Reducer.class, reducer.getClass(), 2);
	Class<VALUEOUT> outValClass = (Class<VALUEOUT>) TypeExtractor.getParameterType(Reducer.class, reducer.getClass(), 3);

	final TypeInformation<KEYOUT> keyTypeInfo = TypeExtractor.getForClass(outKeyClass);
	final TypeInformation<VALUEOUT> valueTypeInfo = TypeExtractor.getForClass(outValClass);
	return new TupleTypeInfo<>(keyTypeInfo, valueTypeInfo);
}
 
Developer ID: axbaretto, Project: flink, Lines: 11, Source: HadoopReduceCombineFunction.java

Example 10: readObject

import org.apache.hadoop.mapred.Reducer; // import the required package/class
@SuppressWarnings("unchecked")
private void readObject(final ObjectInputStream in) throws IOException, ClassNotFoundException {

	Class<Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>> reducerClass =
			(Class<Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>>) in.readObject();
	reducer = InstantiationUtil.instantiate(reducerClass);

	Class<Reducer<KEYIN, VALUEIN, KEYIN, VALUEIN>> combinerClass =
			(Class<Reducer<KEYIN, VALUEIN, KEYIN, VALUEIN>>) in.readObject();
	combiner = InstantiationUtil.instantiate(combinerClass);

	jobConf = new JobConf();
	jobConf.readFields(in);
}
 
Developer ID: axbaretto, Project: flink, Lines: 15, Source: HadoopReduceCombineFunction.java
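
The serialization side is not shown on this page. Assuming the usual Java custom-serialization pattern, a matching writeObject in the same class would emit the two classes and the JobConf in exactly the order readObject consumes them; a sketch:

private void writeObject(final ObjectOutputStream out) throws IOException {
	out.writeObject(reducer.getClass());   // read back as reducerClass
	out.writeObject(combiner.getClass());  // read back as combinerClass
	jobConf.write(out);                    // restored via jobConf.readFields(in)
}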

Example 11: HadoopReduceFunction

import org.apache.hadoop.mapred.Reducer; // import the required package/class
/**
 * Maps a Hadoop Reducer (mapred API) to a non-combinable Flink GroupReduceFunction.
 *
 * @param hadoopReducer The Hadoop Reducer to wrap.
 * @param conf The JobConf that is used to configure the Hadoop Reducer.
 */
public HadoopReduceFunction(Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT> hadoopReducer, JobConf conf) {
	if (hadoopReducer == null) {
		throw new NullPointerException("Reducer may not be null.");
	}
	if (conf == null) {
		throw new NullPointerException("JobConf may not be null.");
	}

	this.reducer = hadoopReducer;
	this.jobConf = conf;
}
 
Developer ID: axbaretto, Project: flink, Lines: 18, Source: HadoopReduceFunction.java

Example 12: getProducedType

import org.apache.hadoop.mapred.Reducer; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public TypeInformation<Tuple2<KEYOUT, VALUEOUT>> getProducedType() {
	Class<KEYOUT> outKeyClass = (Class<KEYOUT>) TypeExtractor.getParameterType(Reducer.class, reducer.getClass(), 2);
	Class<VALUEOUT> outValClass = (Class<VALUEOUT>) TypeExtractor.getParameterType(Reducer.class, reducer.getClass(), 3);

	final TypeInformation<KEYOUT> keyTypeInfo = TypeExtractor.getForClass(outKeyClass);
	final TypeInformation<VALUEOUT> valueTypeInfo = TypeExtractor.getForClass(outValClass);
	return new TupleTypeInfo<Tuple2<KEYOUT, VALUEOUT>>(keyTypeInfo, valueTypeInfo);
}
 
Developer ID: axbaretto, Project: flink, Lines: 11, Source: HadoopReduceFunction.java

Example 13: readObject

import org.apache.hadoop.mapred.Reducer; // import the required package/class
@SuppressWarnings("unchecked")
private void readObject(final ObjectInputStream in) throws IOException, ClassNotFoundException {

	Class<Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>> reducerClass =
			(Class<Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>>) in.readObject();
	reducer = InstantiationUtil.instantiate(reducerClass);

	jobConf = new JobConf();
	jobConf.readFields(in);
}
 
Developer ID: axbaretto, Project: flink, Lines: 11, Source: HadoopReduceFunction.java
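
As with Example 10, a matching writeObject is assumed but not shown; here it would write only the reducer class before the JobConf:

private void writeObject(final ObjectOutputStream out) throws IOException {
	out.writeObject(reducer.getClass());
	jobConf.write(out);
}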


Note: The org.apache.hadoop.mapred.Reducer class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers, and copyright of the source code remains with the original authors. Consult the corresponding project's license before using or redistributing the code; do not republish without permission.