

Java JobExecutionResult.getAccumulatorResult Method Code Examples

This page collects typical usage examples of the Java method org.apache.flink.api.common.JobExecutionResult.getAccumulatorResult. If you are unsure how to call JobExecutionResult.getAccumulatorResult or what it returns, the curated examples below should help. You can also browse further usage examples of org.apache.flink.api.common.JobExecutionResult.


The following presents 10 code examples of JobExecutionResult.getAccumulatorResult, ordered by popularity.
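Before the examples, a minimal self-contained sketch of the call shape. This is plain Java, not Flink's actual implementation: the hypothetical MockResult class below only mirrors the behavior the examples rely on, namely that getAccumulatorResult is a generic lookup by accumulator name that returns null when no accumulator with that name exists.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for JobExecutionResult, illustrating only the
// name -> value lookup and unchecked cast behind getAccumulatorResult.
class MockResult {
    private final Map<String, Object> accumulators = new HashMap<>();

    void put(String name, Object value) {
        accumulators.put(name, value);
    }

    // Generic accessor: the caller chooses the expected result type.
    @SuppressWarnings("unchecked")
    <A> A getAccumulatorResult(String name) {
        return (A) accumulators.get(name);
    }
}

public class AccumulatorLookupDemo {
    public static void main(String[] args) {
        MockResult result = new MockResult();
        result.put("tuples", 42L);

        // Target typing infers Long and unboxes, as in example 1 below:
        long tuples = result.getAccumulatorResult("tuples");

        // Unknown names yield null rather than throwing:
        Object missing = result.getAccumulatorResult("missing");

        System.out.println(tuples + " " + missing); // prints "42 null"
    }
}
```

Because the cast is unchecked, asking for the wrong type only fails later with a ClassCastException at the use site, which is why the examples below are careful to match the accumulator's registered type.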

Example 1: write

import org.apache.flink.api.common.JobExecutionResult; // import required by this example
public static void write(final JobExecutionResult res, final String path) {
	double elapsed = res.getNetRuntime(TimeUnit.NANOSECONDS);		
	long tuples = res.getAccumulatorResult("tuples");
	double latency = elapsed / tuples;
	
	PerformanceWriter.write(path, elapsed, latency);
}
 
Author: 3Cores, project: sostream, source: PerformanceWriter.java

Example 2: testAccumulatorMinMax

import org.apache.flink.api.common.JobExecutionResult; // import required by this example
@Test
	public void testAccumulatorMinMax() throws Exception {

		String input = "";

		Random rand = new Random();

		for (int i = 1; i < 1000; i++) {
			if (rand.nextDouble() < 0.2) {
				input += String.valueOf(rand.nextInt(4)) + "\n";
			} else {
				input += String.valueOf(rand.nextInt(100)) + "\n";
			}
		}

		String inputFile = createTempFile("datapoints.txt", input);

		ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
		env.getConfig().disableSysoutLogging();

		OperatorStatisticsConfig operatorStatisticsConfig =
				new OperatorStatisticsConfig(false);
		operatorStatisticsConfig.collectMax = true;
		operatorStatisticsConfig.collectMin = true;

		env.readTextFile(inputFile).
				flatMap(new StringToInt(operatorStatisticsConfig)).
				output(new DiscardingOutputFormat<Tuple1<Integer>>());

		JobExecutionResult result = env.execute();

		OperatorStatistics globalStats = result.getAccumulatorResult(ACCUMULATOR_NAME);
//		System.out.println("Global Stats");
//		System.out.println(globalStats.toString());

		Assert.assertTrue("Min value for accumulator should not be null",globalStats.getMin()!=null);
		Assert.assertTrue("Max value for accumulator should not be null",globalStats.getMax()!=null);
	}
 
Author: axbaretto, project: flink, source: OperatorStatsAccumulatorTest.java

Example 3: testAccumulatorCountDistinctLinearCounting

import org.apache.flink.api.common.JobExecutionResult; // import required by this example
@Test
	public void testAccumulatorCountDistinctLinearCounting() throws Exception {

		String input = "";

		Random rand = new Random();

		for (int i = 1; i < 1000; i++) {
			if (rand.nextDouble() < 0.2) {
				input += String.valueOf(rand.nextInt(4)) + "\n";
			} else {
				input += String.valueOf(rand.nextInt(100)) + "\n";
			}
		}

		String inputFile = createTempFile("datapoints.txt", input);

		ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
		env.getConfig().disableSysoutLogging();

		OperatorStatisticsConfig operatorStatisticsConfig =
				new OperatorStatisticsConfig(false);
		operatorStatisticsConfig.collectCountDistinct = true;
		operatorStatisticsConfig.countDistinctAlgorithm = OperatorStatisticsConfig.CountDistinctAlgorithm.LINEAR_COUNTING;
		operatorStatisticsConfig.setCountDbitmap(10000);

		env.readTextFile(inputFile).
				flatMap(new StringToInt(operatorStatisticsConfig)).
				output(new DiscardingOutputFormat<Tuple1<Integer>>());

		JobExecutionResult result = env.execute();

		OperatorStatistics globalStats = result.getAccumulatorResult(ACCUMULATOR_NAME);
//		System.out.println("Global Stats");
//		System.out.println(globalStats.toString());

		Assert.assertTrue("Count Distinct for accumulator should not be null",globalStats.countDistinct!=null);
	}
 
Author: axbaretto, project: flink, source: OperatorStatsAccumulatorTest.java

Example 4: testAccumulatorHeavyHitterCountMinSketch

import org.apache.flink.api.common.JobExecutionResult; // import required by this example
@Test
	public void testAccumulatorHeavyHitterCountMinSketch() throws Exception {

		String input = "";

		Random rand = new Random();

		for (int i = 1; i < 1000; i++) {
			if (rand.nextDouble() < 0.2) {
				input += String.valueOf(rand.nextInt(4)) + "\n";
			} else {
				input += String.valueOf(rand.nextInt(100)) + "\n";
			}
		}

		String inputFile = createTempFile("datapoints.txt", input);

		ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
		env.getConfig().disableSysoutLogging();

		OperatorStatisticsConfig operatorStatisticsConfig =
				new OperatorStatisticsConfig(false);
		operatorStatisticsConfig.collectHeavyHitters = true;
		operatorStatisticsConfig.heavyHitterAlgorithm = OperatorStatisticsConfig.HeavyHitterAlgorithm.COUNT_MIN_SKETCH;

		env.readTextFile(inputFile).
				flatMap(new StringToInt(operatorStatisticsConfig)).
				output(new DiscardingOutputFormat<Tuple1<Integer>>());

		JobExecutionResult result = env.execute();

		OperatorStatistics globalStats = result.getAccumulatorResult(ACCUMULATOR_NAME);
//		System.out.println("Global Stats");
//		System.out.println(globalStats.toString());

		Assert.assertTrue("Heavy hitter for accumulator should not be null", globalStats.heavyHitter != null);
	}
 
Author: axbaretto, project: flink, source: OperatorStatsAccumulatorTest.java

Example 5: checksumHashCode

import org.apache.flink.api.common.JobExecutionResult; // import required by this example
/**
 * Convenience method to get the count (number of elements) of a DataSet
 * as well as the checksum (sum over element hashes).
 *
 * @return A ChecksumHashCode that represents the count and checksum of elements in the data set.
 * @deprecated replaced with {@code org.apache.flink.graph.asm.dataset.ChecksumHashCode} in Gelly
 */
@Deprecated
public static <T> Utils.ChecksumHashCode checksumHashCode(DataSet<T> input) throws Exception {
	final String id = new AbstractID().toString();

	input.output(new Utils.ChecksumHashCodeHelper<T>(id)).name("ChecksumHashCode");

	JobExecutionResult res = input.getExecutionEnvironment().execute();
	return res.<Utils.ChecksumHashCode> getAccumulatorResult(id);
}
 
Author: axbaretto, project: flink, source: DataSetUtils.java

Example 6: count

import org.apache.flink.api.common.JobExecutionResult; // import required by this example
/**
 * Convenience method to get the count (number of elements) of a DataSet.
 *
 * @return A long integer that represents the number of elements in the data set.
 */
public long count() throws Exception {
	final String id = new AbstractID().toString();

	output(new Utils.CountHelper<T>(id)).name("count()");

	JobExecutionResult res = getExecutionEnvironment().execute();
	return res.<Long> getAccumulatorResult(id);
}
 
Author: axbaretto, project: flink, source: DataSet.java
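Examples 5 and 6 use the explicit type-witness syntax `res.<Long>getAccumulatorResult(id)` to pin down the generic result type at the call site. The sketch below illustrates why with a hypothetical `lookup` method that mirrors the generic signature `<A> A getAccumulatorResult(String name)`; it is not Flink code.

```java
import java.util.Map;

// Sketch of the explicit type-witness call used in examples 5 and 6.
// lookup is hypothetical; it only mirrors the generic signature of
// <A> A getAccumulatorResult(String name).
public class TypeWitnessDemo {
    @SuppressWarnings("unchecked")
    static <A> A lookup(Map<String, Object> store, String name) {
        return (A) store.get(name);
    }

    public static void main(String[] args) {
        Map<String, Object> store = Map.of("count", 7L);

        // Without a useful target type, A falls back to Object:
        Object asObject = lookup(store, "count");

        // An explicit type witness commits the call site to Long,
        // which is what res.<Long>getAccumulatorResult(id) does:
        long count = TypeWitnessDemo.<Long>lookup(store, "count");

        System.out.println(asObject + " " + count); // prints "7 7"
    }
}
```

In simple assignments the compiler can usually infer the type argument from the target type alone; the witness makes the intended type explicit and keeps the call unambiguous in contexts such as chained expressions where inference would pick Object.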

Example 7: main

import org.apache.flink.api.common.JobExecutionResult; // import required by this example
public static void main(final String[] args) throws Exception {

		final ParameterTool params = ParameterTool.fromArgs(args);

		final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

		// make parameters available in the web interface
		env.getConfig().setGlobalJobParameters(params);

		// get the data set
		final DataSet<StringTriple> file = getDataSet(env, params);

		// filter lines with empty fields
		final DataSet<StringTriple> filteredLines = file.filter(new EmptyFieldFilter());

		// Here, we could do further processing with the filtered lines...
		JobExecutionResult result;
		// output the filtered lines
		if (params.has("output")) {
			filteredLines.writeAsCsv(params.get("output"));
			// execute program
			result = env.execute("Accumulator example");
		} else {
			System.out.println("Printing result to stdout. Use --output to specify output path.");
			filteredLines.print();
			result = env.getLastJobExecutionResult();
		}

		// get the accumulator result via its registration key
		final List<Integer> emptyFields = result.getAccumulatorResult(EMPTY_FIELD_ACCUMULATOR);
		System.out.format("Number of detected empty fields per column: %s\n", emptyFields);
	}
 
Author: axbaretto, project: flink, source: EmptyFieldsCountAccumulator.java

Example 8: main

import org.apache.flink.api.common.JobExecutionResult; // import required by this example
public static void main(final String[] args) throws Exception {

		if (!parseParameters(args)) {
			return;
		}

		final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

		// get the data set
		final DataSet<Tuple> file = getDataSet(env);

		// filter lines with empty fields
		final DataSet<Tuple> filteredLines = file.filter(new EmptyFieldFilter());

		// Here, we could do further processing with the filtered lines...
		
		// output the filtered lines
		if (outputPath == null) {
			filteredLines.print();
		} else {
			filteredLines.writeAsCsv(outputPath);
		}

		// execute program
		final JobExecutionResult result = env.execute("Accumulator example");

		// get the accumulator result via its registration key
		final List<Integer> emptyFields = result.getAccumulatorResult(EMPTY_FIELD_ACCUMULATOR);
		System.out.format("Number of detected empty fields per column: %s\n", emptyFields);

	}
 
Author: citlab, project: vs.msc.ws14, source: EmptyFieldsCountAccumulator.java

Example 9: testAccumulatorAllStatistics

import org.apache.flink.api.common.JobExecutionResult; // import required by this example
@Test
	public void testAccumulatorAllStatistics() throws Exception {

		String input = "";

		Random rand = new Random();

		for (int i = 1; i < 1000; i++) {
			if (rand.nextDouble() < 0.2) {
				input += String.valueOf(rand.nextInt(4)) + "\n";
			} else {
				input += String.valueOf(rand.nextInt(100)) + "\n";
			}
		}

		String inputFile = createTempFile("datapoints.txt", input);

		ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
		env.getConfig().disableSysoutLogging();

		OperatorStatisticsConfig operatorStatisticsConfig =
				new OperatorStatisticsConfig(OperatorStatisticsConfig.CountDistinctAlgorithm.HYPERLOGLOG,
											OperatorStatisticsConfig.HeavyHitterAlgorithm.LOSSY_COUNTING);

		env.readTextFile(inputFile).
				flatMap(new StringToInt(operatorStatisticsConfig)).
				output(new DiscardingOutputFormat<Tuple1<Integer>>());

		JobExecutionResult result = env.execute();

		OperatorStatistics globalStats = result.getAccumulatorResult(ACCUMULATOR_NAME);
//		System.out.println("Global Stats");
//		System.out.println(globalStats.toString());

		OperatorStatistics merged = null;

		Map<String,Object> accResults = result.getAllAccumulatorResults();
		for (String accumulatorName:accResults.keySet()){
			if (accumulatorName.contains(ACCUMULATOR_NAME+"-")){
				OperatorStatistics localStats = (OperatorStatistics) accResults.get(accumulatorName);
				LOG.debug("Local Stats: " + accumulatorName);
				LOG.debug(localStats.toString());
				if (merged == null){
					merged = localStats.clone();
				}else {
					merged.merge(localStats);
				}
			}
		}

		LOG.debug("Local Stats Merged: \n");
		LOG.debug(merged.toString());

		Assert.assertEquals("Global cardinality should be 999", 999, globalStats.cardinality);
		Assert.assertEquals("Count distinct estimate should be around 100 and is "+globalStats.estimateCountDistinct()
				, 100.0, (double)globalStats.estimateCountDistinct(),5.0);
		Assert.assertTrue("The total number of heavy hitters should be between 0 and 5."
				, globalStats.getHeavyHitters().size() > 0 && globalStats.getHeavyHitters().size() <= 5);
		Assert.assertEquals("Min when merging the local accumulators should correspond with the min " +
				"of the global accumulator", merged.getMin(), globalStats.getMin());
		Assert.assertEquals("Max resulting from merging the local accumulators should correspond to the " +
				"max of the global accumulator", merged.getMax(), globalStats.getMax());
		Assert.assertEquals("Count distinct when merging the local accumulators should correspond to the " +
				"count distinct in the global accumulator", merged.estimateCountDistinct(), globalStats.estimateCountDistinct());
		Assert.assertEquals("The number of heavy hitters when merging the local accumulators should correspond " +
				"to the number of heavy hitters in the global accumulator", merged.getHeavyHitters().size(), globalStats.getHeavyHitters().size());
	}
 
Author: axbaretto, project: flink, source: OperatorStatsAccumulatorTest.java

Example 10: getAccumulator

import org.apache.flink.api.common.JobExecutionResult; // import required by this example
/**
 * Gets the accumulator with the given name. Returns {@code null}, if no accumulator with
 * that name was produced.
 *
 * @param accumulatorName The name of the accumulator
 * @param <A> The generic type of the accumulator value
 * @return The value of the accumulator with the given name
 */
public <A> A getAccumulator(ExecutionEnvironment env, String accumulatorName) {
	JobExecutionResult result = env.getLastJobExecutionResult();

	Preconditions.checkNotNull(result, "No result found for job, was execute() called before getting the result?");

	return result.getAccumulatorResult(id + SEPARATOR + accumulatorName);
}
 
Author: axbaretto, project: flink, source: AnalyticHelper.java
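Example 10 guards against a null job result with Guava's Preconditions.checkNotNull. Since getAccumulatorResult itself returns null for unknown names, the same fail-fast pattern is worth applying to the accumulator value. A minimal plain-Java sketch, with the JDK's Objects.requireNonNull standing in for Guava's Preconditions; the NullGuardDemo class and its accumulator map are hypothetical:

```java
import java.util.Map;
import java.util.Objects;

public class NullGuardDemo {
    // Hypothetical accumulator store standing in for a JobExecutionResult.
    private final Map<String, Object> accumulators;

    NullGuardDemo(Map<String, Object> accumulators) {
        this.accumulators = accumulators;
    }

    @SuppressWarnings("unchecked")
    <A> A getAccumulator(String name) {
        Object value = accumulators.get(name);
        // Fail fast with a descriptive message, mirroring example 10's
        // Preconditions.checkNotNull on the job result.
        return (A) Objects.requireNonNull(value,
                "No accumulator named '" + name + "'; was it registered before execute()?");
    }

    public static void main(String[] args) {
        NullGuardDemo res = new NullGuardDemo(Map.of("tuples", 99L));
        long tuples = res.<Long>getAccumulator("tuples");
        System.out.println(tuples); // prints 99
    }
}
```

Failing at lookup time with a message that names the missing accumulator is much easier to debug than the NullPointerException that otherwise surfaces at some later unboxing or method call.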


Note: the org.apache.flink.api.common.JobExecutionResult.getAccumulatorResult examples above were collected from open-source projects hosted on GitHub and similar platforms. The snippets remain under the licenses of their original projects; consult each project's license before reuse.