

Java HadoopTupleUnwrappingIterator Class Code Examples

This article collects typical usages and code examples of the Java class org.apache.flink.hadoopcompatibility.mapred.wrapper.HadoopTupleUnwrappingIterator. If you are wondering what the HadoopTupleUnwrappingIterator class is for, how to use it, or what working examples look like, the class code examples selected below may help.


The HadoopTupleUnwrappingIterator class belongs to the org.apache.flink.hadoopcompatibility.mapred.wrapper package. A total of 6 code examples of the HadoopTupleUnwrappingIterator class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
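All six examples below construct a HadoopTupleUnwrappingIterator inside a rich function's open() method. The iterator wraps the Iterable<Tuple2<key, value>> group that Flink passes to a reduce operator, so that a wrapped Hadoop Reducer can consume the values through its familiar Iterator interface while the current key is exposed separately. As a minimal sketch of how the iterator prepared in open() is typically consumed (assuming the set(), getCurrentKey(), and setFlinkCollector() methods of Flink's hadoop-compatibility wrappers), the matching reduce() method looks roughly like this:

@Override
public void reduce(final Iterable<Tuple2<KEYIN, VALUEIN>> values, final Collector<Tuple2<KEYOUT, VALUEOUT>> out)
		throws Exception {
	// forward records emitted by the Hadoop reducer to Flink's collector
	reduceCollector.setFlinkCollector(out);
	// wrap the Tuple2 iterator; the key is unwrapped once, the values are streamed
	valueIterator.set(values.iterator());
	// drive the Hadoop reducer with key, value iterator, output collector, and reporter
	reducer.reduce(valueIterator.getCurrentKey(), valueIterator, reduceCollector, reporter);
}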

Example 1: open

import org.apache.flink.hadoopcompatibility.mapred.wrapper.HadoopTupleUnwrappingIterator; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public void open(Configuration parameters) throws Exception {
	super.open(parameters);
	this.reducer.configure(jobConf);
	
	this.reporter = new HadoopDummyReporter();
	this.reduceCollector = new HadoopOutputCollector<KEYOUT, VALUEOUT>();
	Class<KEYIN> inKeyClass = (Class<KEYIN>) TypeExtractor.getParameterType(Reducer.class, reducer.getClass(), 0);
	TypeSerializer<KEYIN> keySerializer = TypeExtractor.getForClass(inKeyClass).createSerializer(getRuntimeContext().getExecutionConfig());
	this.valueIterator = new HadoopTupleUnwrappingIterator<KEYIN, VALUEIN>(keySerializer);
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 13, Source: HadoopReduceFunction.java

Example 2: open

import org.apache.flink.hadoopcompatibility.mapred.wrapper.HadoopTupleUnwrappingIterator; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public void open(Configuration parameters) throws Exception {
	super.open(parameters);
	this.reducer.configure(jobConf);
	this.combiner.configure(jobConf);

	this.reporter = new HadoopDummyReporter();
	Class<KEYIN> inKeyClass = (Class<KEYIN>) TypeExtractor.getParameterType(Reducer.class, reducer.getClass(), 0);
	TypeSerializer<KEYIN> keySerializer = TypeExtractor.getForClass(inKeyClass).createSerializer(getRuntimeContext().getExecutionConfig());
	this.valueIterator = new HadoopTupleUnwrappingIterator<>(keySerializer);
	this.combineCollector = new HadoopOutputCollector<>();
	this.reduceCollector = new HadoopOutputCollector<>();
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 15, Source: HadoopReduceCombineFunction.java
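In the reduce-combine variant (Examples 2 and 4), the same valueIterator is shared between the combine and reduce paths; only the output collector differs. A minimal sketch of the combine side, under the same API assumptions as the reduce() sketch above:

@Override
public void combine(final Iterable<Tuple2<KEYIN, VALUEIN>> values, final Collector<Tuple2<KEYIN, VALUEIN>> out)
		throws Exception {
	// the combiner emits intermediate (key, value) pairs of the input types
	combineCollector.setFlinkCollector(out);
	// unwrap the key once and iterate the values
	valueIterator.set(values.iterator());
	combiner.reduce(valueIterator.getCurrentKey(), valueIterator, combineCollector, reporter);
}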

Example 3: open

import org.apache.flink.hadoopcompatibility.mapred.wrapper.HadoopTupleUnwrappingIterator; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public void open(Configuration parameters) throws Exception {
	super.open(parameters);
	this.reducer.configure(jobConf);

	this.reporter = new HadoopDummyReporter();
	this.reduceCollector = new HadoopOutputCollector<KEYOUT, VALUEOUT>();
	Class<KEYIN> inKeyClass = (Class<KEYIN>) TypeExtractor.getParameterType(Reducer.class, reducer.getClass(), 0);
	TypeSerializer<KEYIN> keySerializer = TypeExtractor.getForClass(inKeyClass).createSerializer(getRuntimeContext().getExecutionConfig());
	this.valueIterator = new HadoopTupleUnwrappingIterator<KEYIN, VALUEIN>(keySerializer);
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 13, Source: HadoopReduceFunction.java

Example 4: open

import org.apache.flink.hadoopcompatibility.mapred.wrapper.HadoopTupleUnwrappingIterator; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public void open(Configuration parameters) throws Exception {
	super.open(parameters);
	this.reducer.configure(jobConf);
	this.combiner.configure(jobConf);
	
	this.reporter = new HadoopDummyReporter();
	Class<KEYIN> inKeyClass = (Class<KEYIN>) TypeExtractor.getParameterType(Reducer.class, reducer.getClass(), 0);
	TypeSerializer<KEYIN> keySerializer = TypeExtractor.getForClass(inKeyClass).createSerializer(getRuntimeContext().getExecutionConfig());
	this.valueIterator = new HadoopTupleUnwrappingIterator<>(keySerializer);
	this.combineCollector = new HadoopOutputCollector<>();
	this.reduceCollector = new HadoopOutputCollector<>();
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 15, Source: HadoopReduceCombineFunction.java

Example 5: open

import org.apache.flink.hadoopcompatibility.mapred.wrapper.HadoopTupleUnwrappingIterator; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public void open(Configuration parameters) throws Exception {
	super.open(parameters);
	this.reducer.configure(jobConf);
	this.combiner.configure(jobConf);
	
	this.reporter = new HadoopDummyReporter();
	Class<KEYIN> inKeyClass = (Class<KEYIN>) TypeExtractor.getParameterType(Reducer.class, reducer.getClass(), 0);
	this.valueIterator = new HadoopTupleUnwrappingIterator<KEYIN, VALUEIN>(inKeyClass);
	this.combineCollector = new HadoopOutputCollector<KEYIN, VALUEIN>();
	this.reduceCollector = new HadoopOutputCollector<KEYOUT, VALUEOUT>();
}
 
Developer ID: citlab, Project: vs.msc.ws14, Lines of code: 14, Source: HadoopReduceCombineFunction.java

Example 6: open

import org.apache.flink.hadoopcompatibility.mapred.wrapper.HadoopTupleUnwrappingIterator; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public void open(Configuration parameters) throws Exception {
	super.open(parameters);
	this.reducer.configure(jobConf);
	
	this.reporter = new HadoopDummyReporter();
	this.reduceCollector = new HadoopOutputCollector<KEYOUT, VALUEOUT>();
	Class<KEYIN> inKeyClass = (Class<KEYIN>) TypeExtractor.getParameterType(Reducer.class, reducer.getClass(), 0);
	this.valueIterator = new HadoopTupleUnwrappingIterator<KEYIN, VALUEIN>(inKeyClass);
}
 
Developer ID: citlab, Project: vs.msc.ws14, Lines of code: 12, Source: HadoopReduceFunction.java


Note: The org.apache.flink.hadoopcompatibility.mapred.wrapper.HadoopTupleUnwrappingIterator class examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. Please refer to each project's license before distributing or using the code, and do not reproduce this article without permission.