

Java DataSet.reduceGroup Method Code Examples

This article collects typical usage examples of the Java method org.apache.flink.api.java.DataSet.reduceGroup. If you are wondering how to use DataSet.reduceGroup in Java, or are looking for concrete examples of it, the curated code examples below may help. You can also explore further usage examples of the enclosing class, org.apache.flink.api.java.DataSet.


The sections below present 6 code examples of the DataSet.reduceGroup method, sorted by popularity by default.
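Before the examples, here is a minimal, self-contained sketch (written for this article, not drawn from the examples below) of the basic reduceGroup pattern: calling reduceGroup on a non-grouped DataSet hands all elements to a single GroupReduceFunction, which may emit any number of result records through the Collector. The class name ReduceGroupSketch and the sample data are illustrative only.

import org.apache.flink.api.common.functions.GroupReduceFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.util.Collector;

public class ReduceGroupSketch {

	public static void main(String[] args) throws Exception {
		final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

		// A small non-grouped DataSet; reduceGroup treats all of its elements as one group.
		DataSet<Integer> numbers = env.fromElements(1, 2, 3, 4, 5);

		// The GroupReduceFunction receives every element of the (single) group at once
		// and may emit any number of results through the Collector.
		DataSet<Integer> sum = numbers.reduceGroup(new GroupReduceFunction<Integer, Integer>() {
			@Override
			public void reduce(Iterable<Integer> values, Collector<Integer> out) {
				int total = 0;
				for (Integer v : values) {
					total += v;
				}
				out.collect(total);
			}
		});

		// print() triggers execution and outputs 15.
		sum.print();
	}
}

The same call also accepts a Java 8 lambda instead of an anonymous class, as Example 5 below shows.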

Example 1: testUngroupedHadoopReducer

import org.apache.flink.api.java.DataSet; // import of the package/class the method depends on
@Test
public void testUngroupedHadoopReducer() throws Exception {
	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

	DataSet<Tuple2<IntWritable, IntWritable>> ds = HadoopTestData.getKVPairDataSet(env).
			map(new Mapper2());

	DataSet<Tuple2<IntWritable, IntWritable>> sum = ds.
			reduceGroup(new HadoopReduceCombineFunction<IntWritable, IntWritable, IntWritable, IntWritable>(
					new SumReducer(), new SumReducer()));

	String resultPath = tempFolder.newFile().toURI().toString();

	sum.writeAsText(resultPath);
	env.execute();

	String expected = "(0,231)\n";

	compareResultsByLinesInMemory(expected, resultPath);
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 21, Source: HadoopReduceCombineFunctionITCase.java

Example 2: testCorrectnessOfAllGroupReduceForTuples

import org.apache.flink.api.java.DataSet; // import of the package/class the method depends on
@Test
public void testCorrectnessOfAllGroupReduceForTuples() throws Exception {
	/*
	 * check correctness of all-groupreduce for tuples
	 */

	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

	DataSet<Tuple3<Integer, Long, String>> ds = CollectionDataSets.get3TupleDataSet(env);
	DataSet<Tuple3<Integer, Long, String>> reduceDs = ds.reduceGroup(new AllAddingTuple3GroupReduce());

	List<Tuple3<Integer, Long, String>> result = reduceDs.collect();

	String expected = "231,91,Hello World\n";

	compareResultAsTuples(result, expected);
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 18, Source: GroupReduceITCase.java

Example 3: testCorrectnessOfAllGroupReduceForCustomTypes

import org.apache.flink.api.java.DataSet; // import of the package/class the method depends on
@Test
public void testCorrectnessOfAllGroupReduceForCustomTypes() throws Exception {
	/*
	 * check correctness of all-groupreduce for custom types
	 */

	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

	DataSet<CustomType> ds = CollectionDataSets.getCustomTypeDataSet(env);
	DataSet<CustomType> reduceDs = ds.reduceGroup(new AllAddingCustomTypeGroupReduce());

	List<CustomType> result = reduceDs.collect();

	String expected = "91,210,Hello!";

	compareResultAsText(result, expected);
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 18, Source: GroupReduceITCase.java

Example 4: testForkingReduceOnNonKeyedDataset

import org.apache.flink.api.java.DataSet; // import of the package/class the method depends on
@Test
public void testForkingReduceOnNonKeyedDataset() throws Exception {

	// set up the execution environment
	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
	env.setParallelism(4);

	// creates the input data and distributes them evenly among the available downstream tasks
	DataSet<Tuple2<Integer, Boolean>> input = createNonKeyedInput(env);

	DataSet<Tuple2<Integer, Boolean>> r1 = input.reduceGroup(new NonKeyedCombReducer());
	DataSet<Tuple2<Integer, Boolean>> r2 = input.reduceGroup(new NonKeyedGroupCombReducer());

	List<Tuple2<Integer, Boolean>> actual = r1.union(r2).collect();
	String expected = "10,true\n10,true\n";
	compareResultAsTuples(actual, expected);
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 18, Source: ReduceWithCombinerITCase.java

Example 5: testProgram

import org.apache.flink.api.java.DataSet; // import of the package/class the method depends on
@Override
protected void testProgram() throws Exception {
	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

	DataSet<String> stringDs = env.fromElements("aa", "ab", "ac", "ad");
	DataSet<String> concatDs = stringDs.reduceGroup((values, out) -> {
		String conc = "";
		for (String s : values) {
			conc = conc.concat(s);
		}
		out.collect(conc);
	});
	concatDs.writeAsText(resultPath);
	env.execute();
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 16, Source: AllGroupReduceITCase.java

Example 6: testUngroupedHadoopReducer

import org.apache.flink.api.java.DataSet; // import of the package/class the method depends on
@Test
public void testUngroupedHadoopReducer() throws Exception {
	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

	DataSet<Tuple2<IntWritable, Text>> ds = HadoopTestData.getKVPairDataSet(env);

	DataSet<Tuple2<IntWritable, IntWritable>> commentCnts = ds.
			reduceGroup(new HadoopReduceFunction<IntWritable, Text, IntWritable, IntWritable>(new AllCommentCntReducer()));

	String resultPath = tempFolder.newFile().toURI().toString();

	commentCnts.writeAsText(resultPath);
	env.execute();

	String expected = "(42,15)\n";

	compareResultsByLinesInMemory(expected, resultPath);
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 19, Source: HadoopReduceFunctionITCase.java


Note: The org.apache.flink.api.java.DataSet.reduceGroup method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by their respective authors, and copyright of the source code remains with the original authors; refer to each project's license before distributing or using the code. Do not reproduce this article without permission.