

Java Order Class Code Examples

This article collects typical usage examples of the Java Order class from org.apache.flink.api.common.operators. If you are wondering what the Order class does, how to use it, or what it looks like in practice, the curated class code examples below may help.


The Order class belongs to the org.apache.flink.api.common.operators package. 15 code examples of the Order class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
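Before the examples, a rough plain-Java analogy may help (this is not Flink code): Order.ASCENDING and Order.DESCENDING correspond to sorting with a natural-order or reversed Comparator, while Order.ANY and Order.NONE impose no particular direction. The sketch below simulates the two directed orders on (key, count) pairs; the class and method names are illustrative only.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class OrderAnalogy {
    /**
     * Simulates Flink's Order.ASCENDING / Order.DESCENDING with a plain
     * Comparator. Each int[] is a (key, count) pair; we sort by count.
     */
    public static List<int[]> sortByCount(List<int[]> pairs, boolean descending) {
        List<int[]> copy = new ArrayList<>(pairs);
        Comparator<int[]> byCount = Comparator.comparingInt(p -> p[1]);
        copy.sort(descending ? byCount.reversed() : byCount);
        return copy;
    }

    public static void main(String[] args) {
        List<int[]> data = new ArrayList<>();
        data.add(new int[]{1, 5});
        data.add(new int[]{2, 9});
        data.add(new int[]{3, 2});
        // DESCENDING by count, analogous to sortPartition(1, Order.DESCENDING)
        System.out.println(sortByCount(data, true).get(0)[0]); // prints 2
    }
}
```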

Example 1: transformation

import org.apache.flink.api.common.operators.Order; // import the required package/class
/**
 * Data transformation.
 * Groups by trackId, sums the number of occurrences per group, sorts the
 * output, and keeps the top elements defined by the user.
 * @param input the input data set
 * @return the top chart results
 */
@Override
public DataSet<ChartsResult> transformation(DataSet<?> input) {
    log.info("Transformation Phase. Computing the tags");
    return input
            .groupBy(0) // Grouping by trackId
            .sum(1) // Sum the occurrences of each grouped item
            .sortPartition(1, Order.DESCENDING).setParallelism(1) // Sort by count
            .first(pipelineConf.args.getLimit())
            .map(t -> {
                    Tuple3<Long, Integer, TagEvent> tuple = (Tuple3<Long, Integer, TagEvent>) t;
                    return new ChartsResult(tuple.f0, tuple.f1, tuple.f2);
            })
            .returns(new TypeHint<ChartsResult>(){});
}
 
Author: aaitor, Project: flink-charts, Lines: 22, Source: SimpleChartsPipeline.java
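The pipeline above (groupBy → sum → sortPartition DESCENDING → first(limit)) is the classic top-N pattern. As a self-contained analogy in plain Java (not Flink code; it assumes the occurrence counts have already been aggregated into a Map, and the names TopNSketch/topN are illustrative), the same logic looks like this:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TopNSketch {
    /**
     * Plain-Java analogy of the pipeline above:
     * groupBy(trackId) -> sum(count) -> sort DESCENDING -> first(limit).
     * The pre-aggregated counts play the role of the grouped/summed DataSet.
     */
    public static List<Map.Entry<Long, Integer>> topN(Map<Long, Integer> occurrences, int limit) {
        return occurrences.entrySet().stream()
                // sort by count, highest first, like sortPartition(1, Order.DESCENDING)
                .sorted(Map.Entry.<Long, Integer>comparingByValue(Comparator.reverseOrder()))
                // keep only the top `limit` entries, like first(limit)
                .limit(limit)
                .collect(Collectors.toList());
    }
}
```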

Example 2: transformation

import org.apache.flink.api.common.operators.Order; // import the required package/class
/**
 * Data transformation.
 * Groups by state and trackId, sums the number of occurrences per group,
 * sorts the output, and keeps the top elements per state as defined by the user.
 * @param input the input data set
 * @return the top chart results per state
 */
@Override
public DataSet<ChartsResult> transformation(DataSet<?> input) {
    final int limit = pipelineConf.getArgs().getLimit();

    log.info("Transformation Phase. Computing the tags");
    SortPartitionOperator<Tuple4<Long, Integer, String, TagEvent>> grouped = (SortPartitionOperator<Tuple4<Long, Integer, String, TagEvent>>) input
            .groupBy(2, 0) // Grouping by state & trackId
            .sum(1) // Sum the occurrences of each grouped item
            .sortPartition(2, Order.ASCENDING).setParallelism(1) // Sort by state
            .sortPartition(1, Order.DESCENDING).setParallelism(1);// Sort by count
    return grouped.reduceGroup(new ReduceLimit(limit, 2)); // Reduce groups, applying the user-specified limit
}
 
Author: aaitor, Project: flink-charts, Lines: 20, Source: StateChartsPipeline.java

Example 3: testSortPartitionByTwoKeyFields

import org.apache.flink.api.common.operators.Order; // import the required package/class
@Test
public void testSortPartitionByTwoKeyFields() throws Exception {
	/*
	 * Test sort partition on two key fields
	 */

	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
	env.setParallelism(2);

	DataSet<Tuple5<Integer, Long, Integer, String, Long>> ds = CollectionDataSets.get5TupleDataSet(env);
	List<Tuple1<Boolean>> result = ds
			.map(new IdMapper<Tuple5<Integer, Long, Integer, String, Long>>()).setParallelism(2) // parallelize input
			.sortPartition(4, Order.ASCENDING)
			.sortPartition(2, Order.DESCENDING)
			.mapPartition(new OrderCheckMapper<>(new Tuple5Checker()))
			.distinct().collect();

	String expected = "(true)\n";

	compareResultAsText(result, expected);
}
 
Author: axbaretto, Project: flink, Lines: 22, Source: SortPartitionITCase.java
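Per the Flink DataSet documentation, chaining sortPartition calls appends sort keys, so the first call supplies the primary order and later calls the secondary. Assuming that reading, a plain-Java equivalent of sorting on f4 ascending then f2 descending, with long[] standing in for the Tuple5 (TwoKeySort is an illustrative name, not Flink API), would be:

```java
import java.util.Comparator;
import java.util.List;

public class TwoKeySort {
    /**
     * Plain-Java analogy for the chained
     * sortPartition(4, ASCENDING).sortPartition(2, DESCENDING) calls:
     * field 4 is the primary key (ascending), field 2 the secondary
     * (descending). Each long[] row stands in for a Tuple5.
     */
    public static void sort(List<long[]> rows) {
        rows.sort(Comparator.<long[]>comparingLong(r -> r[4])
                .thenComparing(Comparator.<long[]>comparingLong(r -> r[2]).reversed()));
    }
}
```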

Example 4: testSortPartitionByTwoFieldExpressions

import org.apache.flink.api.common.operators.Order; // import the required package/class
@Test
public void testSortPartitionByTwoFieldExpressions() throws Exception {
	/*
	 * Test sort partition on two field expressions
	 */

	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
	env.setParallelism(2);

	DataSet<Tuple5<Integer, Long, Integer, String, Long>> ds = CollectionDataSets.get5TupleDataSet(env);
	List<Tuple1<Boolean>> result = ds
			.map(new IdMapper<Tuple5<Integer, Long, Integer, String, Long>>()).setParallelism(2) // parallelize input
			.sortPartition("f4", Order.ASCENDING)
			.sortPartition("f2", Order.DESCENDING)
			.mapPartition(new OrderCheckMapper<>(new Tuple5Checker()))
			.distinct().collect();

	String expected = "(true)\n";

	compareResultAsText(result, expected);
}
 
Author: axbaretto, Project: flink, Lines: 22, Source: SortPartitionITCase.java

Example 5: testSortPartitionByNestedFieldExpression

import org.apache.flink.api.common.operators.Order; // import the required package/class
@Test
public void testSortPartitionByNestedFieldExpression() throws Exception {
	/*
	 * Test sort partition on nested field expressions
	 */

	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
	env.setParallelism(3);

	DataSet<Tuple2<Tuple2<Integer, Integer>, String>> ds = CollectionDataSets.getGroupSortedNestedTupleDataSet(env);
	List<Tuple1<Boolean>> result = ds
			.map(new IdMapper<Tuple2<Tuple2<Integer, Integer>, String>>()).setParallelism(3) // parallelize input
			.sortPartition("f0.f1", Order.ASCENDING)
			.sortPartition("f1", Order.DESCENDING)
			.mapPartition(new OrderCheckMapper<>(new NestedTupleChecker()))
			.distinct().collect();

	String expected = "(true)\n";

	compareResultAsText(result, expected);
}
 
Author: axbaretto, Project: flink, Lines: 22, Source: SortPartitionITCase.java

Example 6: testSortPartitionParallelismChange

import org.apache.flink.api.common.operators.Order; // import the required package/class
@Test
public void testSortPartitionParallelismChange() throws Exception {
	/*
	 * Test sort partition with parallelism change
	 */

	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
	env.setParallelism(3);

	DataSet<Tuple3<Integer, Long, String>> ds = CollectionDataSets.get3TupleDataSet(env);
	List<Tuple1<Boolean>> result = ds
			.sortPartition(1, Order.DESCENDING).setParallelism(3) // change parallelism
			.mapPartition(new OrderCheckMapper<>(new Tuple3Checker()))
			.distinct().collect();

	String expected = "(true)\n";

	compareResultAsText(result, expected);
}
 
Author: axbaretto, Project: flink, Lines: 20, Source: SortPartitionITCase.java

Example 7: testSortPartitionWithKeySelector1

import org.apache.flink.api.common.operators.Order; // import the required package/class
@Test
public void testSortPartitionWithKeySelector1() throws Exception {
	/*
	 * Test sort partition on an extracted key
	 */

	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
	env.setParallelism(4);

	DataSet<Tuple3<Integer, Long, String>> ds = CollectionDataSets.get3TupleDataSet(env);
	List<Tuple1<Boolean>> result = ds
		.map(new IdMapper<Tuple3<Integer, Long, String>>()).setParallelism(4) // parallelize input
		.sortPartition(new KeySelector<Tuple3<Integer, Long, String>, Long>() {
			@Override
			public Long getKey(Tuple3<Integer, Long, String> value) throws Exception {
				return value.f1;
			}
		}, Order.ASCENDING)
		.mapPartition(new OrderCheckMapper<>(new Tuple3AscendingChecker()))
		.distinct().collect();

	String expected = "(true)\n";

	compareResultAsText(result, expected);
}
 
Author: axbaretto, Project: flink, Lines: 26, Source: SortPartitionITCase.java

Example 8: testCorrectnessOfGroupReduceOnTuplesWithKeyFieldSelectorAndGroupSorting

import org.apache.flink.api.common.operators.Order; // import the required package/class
@Test
public void testCorrectnessOfGroupReduceOnTuplesWithKeyFieldSelectorAndGroupSorting() throws Exception {
	/*
	 * check correctness of groupReduce on tuples with key field selector and group sorting
	 */

	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
	env.setParallelism(1);

	DataSet<Tuple3<Integer, Long, String>> ds = CollectionDataSets.get3TupleDataSet(env);
	DataSet<Tuple3<Integer, Long, String>> reduceDs = ds.
			groupBy(1).sortGroup(2, Order.ASCENDING).reduceGroup(new Tuple3SortedGroupReduce());

	List<Tuple3<Integer, Long, String>> result = reduceDs.collect();

	String expected = "1,1,Hi\n"
			+
			"5,2,Hello-Hello world\n" +
			"15,3,Hello world, how are you?-I am fine.-Luke Skywalker\n" +
			"34,4,Comment#1-Comment#2-Comment#3-Comment#4\n" +
			"65,5,Comment#5-Comment#6-Comment#7-Comment#8-Comment#9\n" +
			"111,6,Comment#10-Comment#11-Comment#12-Comment#13-Comment#14-Comment#15\n";

	compareResultAsTuples(result, expected);
}
 
Author: axbaretto, Project: flink, Lines: 26, Source: GroupReduceITCase.java
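Example 8 relies on sortGroup to order elements within each group before the reduce concatenates them with '-'. A hedged plain-Java sketch of the same group-then-sort-within-group-then-join idea (GroupSortSketch and reduceGroups are illustrative names, not part of the Flink API):

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupSortSketch {
    /**
     * Analogy of groupBy(1).sortGroup(2, Order.ASCENDING).reduceGroup(...):
     * group rows by a Long key, sort each group's strings ascending, then
     * join them with '-', mirroring what the sorted group reduce produces.
     */
    public static Map<Long, String> reduceGroups(List<Map.Entry<Long, String>> rows) {
        return rows.stream().collect(Collectors.groupingBy(
                Map.Entry::getKey,
                Collectors.mapping(Map.Entry::getValue,
                        Collectors.collectingAndThen(Collectors.toList(), vs -> {
                            Collections.sort(vs); // ascending group sort
                            return String.join("-", vs);
                        }))));
    }
}
```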

Example 9: testTupleSingleOrderExp

import org.apache.flink.api.common.operators.Order; // import the required package/class
@Test
public void testTupleSingleOrderExp() {

	final ExecutionEnvironment env = ExecutionEnvironment
			.getExecutionEnvironment();
	DataSet<Tuple5<Integer, Long, String, Long, Integer>> tupleDs = env
			.fromCollection(emptyTupleData, tupleTypeInfo);

	// should work
	try {
		tupleDs.writeAsText("/tmp/willNotHappen")
			.sortLocalOutput("f0", Order.ANY);
	} catch (Exception e) {
		Assert.fail();
	}
}
 
Author: axbaretto, Project: flink, Lines: 17, Source: DataSinkTest.java

Example 10: testStringBasedDefinitionOnGroupSort

import org.apache.flink.api.common.operators.Order; // import the required package/class
@Test
public void testStringBasedDefinitionOnGroupSort() throws Exception {
	/*
	 * Test string-based definition on group sort, based on test:
	 * check correctness of groupReduce with descending group sort
	 */
	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
	env.setParallelism(1);

	DataSet<Tuple3<Integer, Long, String>> ds = CollectionDataSets.get3TupleDataSet(env);
	DataSet<Tuple3<Integer, Long, String>> reduceDs = ds.
			groupBy(1).sortGroup("f2", Order.DESCENDING).reduceGroup(new Tuple3SortedGroupReduce());

	List<Tuple3<Integer, Long, String>> result = reduceDs.collect();

	String expected = "1,1,Hi\n"
			+
			"5,2,Hello world-Hello\n" +
			"15,3,Luke Skywalker-I am fine.-Hello world, how are you?\n" +
			"34,4,Comment#4-Comment#3-Comment#2-Comment#1\n" +
			"65,5,Comment#9-Comment#8-Comment#7-Comment#6-Comment#5\n" +
			"111,6,Comment#15-Comment#14-Comment#13-Comment#12-Comment#11-Comment#10\n";

	compareResultAsTuples(result, expected);
}
 
Author: axbaretto, Project: flink, Lines: 26, Source: GroupReduceITCase.java

Example 11: testStringBasedDefinitionOnGroupSortForPartialNestedTuple

import org.apache.flink.api.common.operators.Order; // import the required package/class
@Test
public void testStringBasedDefinitionOnGroupSortForPartialNestedTuple() throws Exception {
	/*
	 * Test string-based definition on group sort, for (partial) nested Tuple DESC
	 */
	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
	env.setParallelism(1);

	DataSet<Tuple2<Tuple2<Integer, Integer>, String>> ds = CollectionDataSets.getGroupSortedNestedTupleDataSet(env);
	// f0.f0 is first integer
	DataSet<String> reduceDs = ds.groupBy("f1").sortGroup("f0.f0", Order.DESCENDING).reduceGroup(new NestedTupleReducer());
	List<String> result = reduceDs.collect();

	String expected = "a--(2,1)-(1,3)-(1,2)-\n" +
			"b--(2,2)-\n"+
			"c--(4,9)-(3,3)-(3,6)-\n";

	compareResultAsText(result, expected);
}
 
Author: axbaretto, Project: flink, Lines: 20, Source: GroupReduceITCase.java

Example 12: testPojoSortingDualParallelism1

import org.apache.flink.api.common.operators.Order; // import the required package/class
@Test
public void testPojoSortingDualParallelism1() throws Exception {
	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

	DataSet<CollectionDataSets.POJO> ds = CollectionDataSets.getMixedPojoDataSet(env);
	ds.writeAsText(resultPath)
		.sortLocalOutput("str", Order.ASCENDING)
		.sortLocalOutput("number", Order.DESCENDING)
		.setParallelism(1);

	env.execute();

	String expected =
			"5 First (11,102,2000,One) 10100\n" +
			"3 First (11,102,3000,One) 10200\n" +
			"1 First (10,100,1000,One) 10100\n" +
			"4 First_ (11,106,1000,One) 10300\n" +
			"2 First_ (10,105,1000,One) 10200\n" +
			"6 Second_ (20,200,2000,Two) 10100\n" +
			"7 Third (31,301,2000,Three) 10200\n" +
			"8 Third_ (30,300,1000,Three) 10100\n";

	compareResultsByLinesInMemoryWithStrictOrder(expected, resultPath);

}
 
Author: axbaretto, Project: flink, Lines: 26, Source: DataSinkITCase.java

Example 13: testPojoSortingNestedParallelism1

import org.apache.flink.api.common.operators.Order; // import the required package/class
@Test
public void testPojoSortingNestedParallelism1() throws Exception {
	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

	DataSet<CollectionDataSets.POJO> ds = CollectionDataSets.getMixedPojoDataSet(env);
	ds.writeAsText(resultPath)
		.sortLocalOutput("nestedTupleWithCustom.f0", Order.ASCENDING)
		.sortLocalOutput("nestedTupleWithCustom.f1.myInt", Order.DESCENDING)
		.sortLocalOutput("nestedPojo.longNumber", Order.ASCENDING)
		.setParallelism(1);

	env.execute();

	String expected =
			"2 First_ (10,105,1000,One) 10200\n" +
			"1 First (10,100,1000,One) 10100\n" +
			"4 First_ (11,106,1000,One) 10300\n" +
			"5 First (11,102,2000,One) 10100\n" +
			"3 First (11,102,3000,One) 10200\n" +
			"6 Second_ (20,200,2000,Two) 10100\n" +
			"8 Third_ (30,300,1000,Three) 10100\n" +
			"7 Third (31,301,2000,Three) 10200\n";

	compareResultsByLinesInMemoryWithStrictOrder(expected, resultPath);
}
 
Author: axbaretto, Project: flink, Lines: 26, Source: DataSinkITCase.java

Example 14: testIdentityWithGroupByAndSort

import org.apache.flink.api.common.operators.Order; // import the required package/class
@Test
public void testIdentityWithGroupByAndSort() throws Exception {

	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

	DataSet<Tuple3<Integer, Long, String>> ds = CollectionDataSets.get3TupleDataSet(env);

	DataSet<Tuple3<Integer, Long, String>> reduceDs = ds
			.groupBy(1)
			.sortGroup(1, Order.DESCENDING)
			// reduce partially
			.combineGroup(new IdentityFunction())
			.groupBy(1)
			.sortGroup(1, Order.DESCENDING)
			// fully reduce
			.reduceGroup(new IdentityFunction());

	List<Tuple3<Integer, Long, String>> result = reduceDs.collect();

	compareResultAsTuples(result, identityResult);
}
 
Author: axbaretto, Project: flink, Lines: 22, Source: GroupCombineITCase.java

Example 15: testSortPartitionByKeyField

import org.apache.flink.api.common.operators.Order; // import the required package/class
@Test
public void testSortPartitionByKeyField() throws Exception {
	/*
	 * Test sort partition on key field
	 */

	final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
	env.setParallelism(4);

	DataSet<Tuple3<Integer, Long, String>> ds = CollectionDataSets.get3TupleDataSet(env);
	List<Tuple1<Boolean>> result = ds
			.map(new IdMapper<Tuple3<Integer, Long, String>>()).setParallelism(4) // parallelize input
			.sortPartition(1, Order.DESCENDING)
			.mapPartition(new OrderCheckMapper<>(new Tuple3Checker()))
			.distinct().collect();

	String expected = "(true)\n";

	compareResultAsText(result, expected);
}
 
Author: axbaretto, Project: flink, Lines: 21, Source: SortPartitionITCase.java


Note: the org.apache.flink.api.common.operators.Order class examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Refer to each project's license before redistributing or using the code. Do not reproduce without permission.