

Java StreamExecutionEnvironment.createRemoteEnvironment Method Code Examples

This article collects typical usage examples of the Java method org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createRemoteEnvironment. If you are wondering how StreamExecutionEnvironment.createRemoteEnvironment is used in practice, or what it looks like in real code, the curated examples below should help. You can also browse further usage examples of the enclosing class, org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.


The following shows 15 code examples of the StreamExecutionEnvironment.createRemoteEnvironment method, sorted by popularity by default.
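Before going through the individual examples, here is a minimal, self-contained sketch of the typical call pattern: createRemoteEnvironment builds a StreamExecutionEnvironment whose execute() call submits the program to a remote JobManager instead of starting a local environment. The host "localhost", port 6123, and jar path "target/my-job.jar" below are placeholder assumptions; substitute your own cluster address and the jar that actually contains your job classes.

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RemoteEnvironmentSketch {

	public static void main(String[] args) throws Exception {
		// Connect to a remote JobManager. Host, port, and jar path are placeholders;
		// the jar must contain the user code (e.g. the MapFunction below) so that
		// Flink can ship it to the cluster.
		StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
				"localhost",           // JobManager host (assumed)
				6123,                  // JobManager RPC port (assumed)
				"target/my-job.jar");  // jar(s) containing the user code (assumed)

		env.setParallelism(2);

		env.fromElements(1, 2, 3, 4, 5)
			.map(new MapFunction<Integer, Integer>() {
				@Override
				public Integer map(Integer value) {
					return value * 2;
				}
			})
			.print();

		// Submits the job to the remote cluster and blocks until it finishes.
		env.execute("createRemoteEnvironment sketch");
	}
}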

Example 1: runPartitioningProgram

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
private static void runPartitioningProgram(int jobManagerPort, int parallelism) throws Exception {
	StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", jobManagerPort);
	env.setParallelism(parallelism);
	env.getConfig().enableObjectReuse();

	env.setBufferTimeout(5L);
	env.enableCheckpointing(1000, CheckpointingMode.AT_LEAST_ONCE);

	env
		.addSource(new TimeStampingSource())
		.map(new IdMapper<Tuple2<Long, Long>>())
		.keyBy(0)
		.addSink(new TimestampingSink());

	env.execute("Partitioning Program");
}
 
Developer ID: axbaretto; Project: flink; Lines: 17; Source: StreamingScalabilityAndLatency.java

Example 2: testInvalidOffset

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
@Test(timeout = 60000)
public void testInvalidOffset() throws Exception {
	
	final int parallelism = 1;
	
	// write 20 messages into topic:
	final String topic = writeSequence("invalidOffsetTopic", 20, parallelism, 1);

	// set invalid offset:
	CuratorFramework curatorClient = ((KafkaTestEnvironmentImpl)kafkaServer).createCuratorClient();
	ZookeeperOffsetHandler.setOffsetInZooKeeper(curatorClient, standardProps.getProperty("group.id"), topic, 0, 1234);
	curatorClient.close();

	// read from topic
	final int valuesCount = 20;
	final int startFrom = 0;

	final StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort);
	env.getConfig().disableSysoutLogging();
	
	readSequence(env, standardProps, parallelism, topic, valuesCount, startFrom);

	deleteTestTopic(topic);
}
 
Developer ID: axbaretto; Project: flink; Lines: 25; Source: Kafka08ITCase.java

Example 3: runFailOnAutoOffsetResetNone

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
/**
 * Ensure that the consumer is properly failing if "auto.offset.reset" is set to "none".
 * @throws Exception
 */
public void runFailOnAutoOffsetResetNone() throws Exception {
	final String topic = "auto-offset-reset-none-test";
	final int parallelism = 1;

	kafkaServer.createTestTopic(topic, parallelism, 1);

	final StreamExecutionEnvironment env =
			StreamExecutionEnvironment.createRemoteEnvironment("localhost", flink.getLeaderRPCPort());
	env.setParallelism(parallelism);
	env.setRestartStrategy(RestartStrategies.noRestart()); // fail immediately
	env.getConfig().disableSysoutLogging();

	// ----------- add consumer ----------

	Properties customProps = new Properties();
	customProps.putAll(standardProps);
	customProps.putAll(secureProps);
	customProps.setProperty("auto.offset.reset", "none"); // test that "none" leads to an exception
	FlinkKafkaConsumerBase<String> source = kafkaServer.getConsumer(topic, new SimpleStringSchema(), customProps);

	DataStreamSource<String> consuming = env.addSource(source);
	consuming.addSink(new DiscardingSink<String>());

	try {
		env.execute("Test auto offset reset none");
	} catch (Throwable e) {
		// check if correct exception has been thrown
		if (!e.getCause().getCause().getMessage().contains("Unable to find previous offset")  // kafka 0.8
			&& !e.getCause().getCause().getMessage().contains("Undefined offset with no reset policy for partition") // kafka 0.9
				) {
			throw e;
		}
	}

	kafkaServer.deleteTestTopic(topic);
}
 
Developer ID: axbaretto; Project: flink; Lines: 41; Source: KafkaShortRetentionTestBase.java

Example 4: runFailOnAutoOffsetResetNone

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
/**
 * Ensure that the consumer is properly failing if "auto.offset.reset" is set to "none".
 * @throws Exception
 */
public void runFailOnAutoOffsetResetNone() throws Exception {
	final String topic = "auto-offset-reset-none-test";
	final int parallelism = 1;
	
	kafkaServer.createTestTopic(topic, parallelism, 1);

	final StreamExecutionEnvironment env =
			StreamExecutionEnvironment.createRemoteEnvironment("localhost", flink.getLeaderRPCPort());
	env.setParallelism(parallelism);
	env.setRestartStrategy(RestartStrategies.noRestart()); // fail immediately
	env.getConfig().disableSysoutLogging();
	
	// ----------- add consumer ----------

	Properties customProps = new Properties();
	customProps.putAll(standardProps);
	customProps.putAll(secureProps);
	customProps.setProperty("auto.offset.reset", "none"); // test that "none" leads to an exception
	FlinkKafkaConsumerBase<String> source = kafkaServer.getConsumer(topic, new SimpleStringSchema(), customProps);

	DataStreamSource<String> consuming = env.addSource(source);
	consuming.addSink(new DiscardingSink<String>());

	try {
		env.execute("Test auto offset reset none");
	} catch (Throwable e) {
		System.out.println("MESSAGE: " + e.getCause().getCause().getMessage());
		// check if correct exception has been thrown
		if (!e.getCause().getCause().getMessage().contains("Unable to find previous offset")  // kafka 0.8
			&& !e.getCause().getCause().getMessage().contains("Undefined offset with no reset policy for partition") // kafka 0.9
				) {
			throw e;
		}
	}

	kafkaServer.deleteTestTopic(topic);
}
 
Developer ID: axbaretto; Project: flink; Lines: 42; Source: KafkaShortRetentionTestBase.java

Example 5: runAllDeletesTest

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
/**
 * Test delete behavior and metrics for the producer.
 * @throws Exception
 */
public void runAllDeletesTest() throws Exception {
	final String topic = "alldeletestest";
	createTestTopic(topic, 1, 1);
	final int ELEMENT_COUNT = 300;

	// ----------- Write some data into Kafka -------------------

	StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort);
	env.setParallelism(1);
	env.getConfig().setRestartStrategy(RestartStrategies.noRestart());
	env.getConfig().disableSysoutLogging();

	DataStream<Tuple2<byte[], PojoValue>> kvStream = env.addSource(new SourceFunction<Tuple2<byte[], PojoValue>>() {
		@Override
		public void run(SourceContext<Tuple2<byte[], PojoValue>> ctx) throws Exception {
			Random rnd = new Random(1337);
			for (long i = 0; i < ELEMENT_COUNT; i++) {
				final byte[] key = new byte[200];
				rnd.nextBytes(key);
				ctx.collect(new Tuple2<>(key, (PojoValue) null));
			}
		}
		@Override
		public void cancel() {
		}
	});

	TypeInformationKeyValueSerializationSchema<byte[], PojoValue> schema = new TypeInformationKeyValueSerializationSchema<>(byte[].class, PojoValue.class, env.getConfig());

	Properties producerProperties = FlinkKafkaProducerBase.getPropertiesFromBrokerList(brokerConnectionStrings);
	producerProperties.setProperty("retries", "3");
	producerProperties.putAll(secureProps);
	kafkaServer.produceIntoKafka(kvStream, topic, schema, producerProperties, null);

	env.execute("Write deletes to Kafka");

	// ----------- Read the data again -------------------

	env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort);
	env.setParallelism(1);
	env.getConfig().setRestartStrategy(RestartStrategies.noRestart());
	env.getConfig().disableSysoutLogging();

	Properties props = new Properties();
	props.putAll(standardProps);
	props.putAll(secureProps);
	DataStream<Tuple2<byte[], PojoValue>> fromKafka = env.addSource(kafkaServer.getConsumer(topic, schema, props));

	fromKafka.flatMap(new RichFlatMapFunction<Tuple2<byte[], PojoValue>, Object>() {
		long counter = 0;
		@Override
		public void flatMap(Tuple2<byte[], PojoValue> value, Collector<Object> out) throws Exception {
			// ensure that deleted messages are passed as nulls
			assertNull(value.f1);
			counter++;
			if (counter == ELEMENT_COUNT) {
				// we got the right number of elements
				throw new SuccessException();
			}
		}
	});

	tryExecute(env, "Read deletes from Kafka");

	deleteTestTopic(topic);
}
 
Developer ID: axbaretto; Project: flink; Lines: 71; Source: KafkaConsumerTestBase.java

Example 6: runKeyValueTest

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
public void runKeyValueTest() throws Exception {
	final String topic = "keyvaluetest";
	createTestTopic(topic, 1, 1);
	final int ELEMENT_COUNT = 5000;

	// ----------- Write some data into Kafka -------------------

	StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort);
	env.setParallelism(1);
	env.setRestartStrategy(RestartStrategies.noRestart());
	env.getConfig().disableSysoutLogging();

	DataStream<Tuple2<Long, PojoValue>> kvStream = env.addSource(new SourceFunction<Tuple2<Long, PojoValue>>() {
		@Override
		public void run(SourceContext<Tuple2<Long, PojoValue>> ctx) throws Exception {
			Random rnd = new Random(1337);
			for (long i = 0; i < ELEMENT_COUNT; i++) {
				PojoValue pojo = new PojoValue();
				pojo.when = new Date(rnd.nextLong());
				pojo.lon = rnd.nextLong();
				pojo.lat = i;
				// make every second key null to ensure proper "null" serialization
				Long key = (i % 2 == 0) ? null : i;
				ctx.collect(new Tuple2<>(key, pojo));
			}
		}
		@Override
		public void cancel() {
		}
	});

	KeyedSerializationSchema<Tuple2<Long, PojoValue>> schema = new TypeInformationKeyValueSerializationSchema<>(Long.class, PojoValue.class, env.getConfig());
	Properties producerProperties = FlinkKafkaProducerBase.getPropertiesFromBrokerList(brokerConnectionStrings);
	producerProperties.setProperty("retries", "3");
	kafkaServer.produceIntoKafka(kvStream, topic, schema, producerProperties, null);
	env.execute("Write KV to Kafka");

	// ----------- Read the data again -------------------

	env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort);
	env.setParallelism(1);
	env.setRestartStrategy(RestartStrategies.noRestart());
	env.getConfig().disableSysoutLogging();


	KeyedDeserializationSchema<Tuple2<Long, PojoValue>> readSchema = new TypeInformationKeyValueSerializationSchema<>(Long.class, PojoValue.class, env.getConfig());

	Properties props = new Properties();
	props.putAll(standardProps);
	props.putAll(secureProps);
	DataStream<Tuple2<Long, PojoValue>> fromKafka = env.addSource(kafkaServer.getConsumer(topic, readSchema, props));
	fromKafka.flatMap(new RichFlatMapFunction<Tuple2<Long,PojoValue>, Object>() {
		long counter = 0;
		@Override
		public void flatMap(Tuple2<Long, PojoValue> value, Collector<Object> out) throws Exception {
			// the elements should be in order.
			Assert.assertTrue("Wrong value " + value.f1.lat, value.f1.lat == counter );
			if (value.f1.lat % 2 == 0) {
				assertNull("key was not null", value.f0);
			} else {
				Assert.assertTrue("Wrong value " + value.f0, value.f0 == counter);
			}
			counter++;
			if (counter == ELEMENT_COUNT) {
				// we got the right number of elements
				throw new SuccessException();
			}
		}
	});

	tryExecute(env, "Read KV from Kafka");

	deleteTestTopic(topic);
}
 
Developer ID: axbaretto; Project: flink; Lines: 75; Source: KafkaConsumerTestBase.java

Example 7: create

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
public static Thread create(final int totalEventCount,
							final int failAtRecordCount,
							final int parallelism,
							final int checkpointInterval,
							final long restartDelay,
							final String awsAccessKey,
							final String awsSecretKey,
							final String awsRegion,
							final String kinesisStreamName,
							final AtomicReference<Throwable> errorHandler,
							final int flinkPort,
							final Configuration flinkConfig) {
	Runnable exactlyOnceValidationConsumer = new Runnable() {
		@Override
		public void run() {
			try {
				StreamExecutionEnvironment see = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort, flinkConfig);
				see.setParallelism(parallelism);
				see.enableCheckpointing(checkpointInterval);
				// we restart two times
				see.setRestartStrategy(RestartStrategies.fixedDelayRestart(2, restartDelay));

				// consuming topology
				Properties consumerProps = new Properties();
				consumerProps.setProperty(ConsumerConfigConstants.AWS_ACCESS_KEY_ID, awsAccessKey);
				consumerProps.setProperty(ConsumerConfigConstants.AWS_SECRET_ACCESS_KEY, awsSecretKey);
				consumerProps.setProperty(ConsumerConfigConstants.AWS_REGION, awsRegion);
				// start reading from beginning
				consumerProps.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, ConsumerConfigConstants.InitialPosition.TRIM_HORIZON.name());
				DataStream<String> consuming = see.addSource(new FlinkKinesisConsumer<>(kinesisStreamName, new SimpleStringSchema(), consumerProps));
				consuming
					.flatMap(new ArtificialFailOnceFlatMapper(failAtRecordCount))
					// validate consumed records for correctness (use only 1 instance to validate all consumed records)
					.flatMap(new ExactlyOnceValidatingMapper(totalEventCount)).setParallelism(1);

				LOG.info("Starting consuming topology");
				tryExecute(see, "Consuming topo");
				LOG.info("Consuming topo finished");
			} catch (Exception e) {
				LOG.warn("Error while running consuming topology", e);
				errorHandler.set(e);
			}
		}
	};

	return new Thread(exactlyOnceValidationConsumer);
}
 
Developer ID: axbaretto; Project: flink; Lines: 48; Source: ExactlyOnceValidatingConsumerThread.java

Example 8: testTumblingTimeWindow

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
@Test
public void testTumblingTimeWindow() {
	final int NUM_ELEMENTS_PER_KEY = 3000;
	final int WINDOW_SIZE = 100;
	final int NUM_KEYS = 100;
	FailingSource.reset();
	
	try {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
				"localhost", cluster.getLeaderRPCPort());
		
		env.setParallelism(PARALLELISM);
		env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
		env.enableCheckpointing(100);
		env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 0));
		env.getConfig().disableSysoutLogging();
		env.setStateBackend(this.stateBackend);

		env
				.addSource(new FailingSource(NUM_KEYS, NUM_ELEMENTS_PER_KEY, NUM_ELEMENTS_PER_KEY / 3))
				.rebalance()
				.keyBy(0)
				.timeWindow(Time.of(WINDOW_SIZE, MILLISECONDS))
				.apply(new RichWindowFunction<Tuple2<Long, IntType>, Tuple4<Long, Long, Long, IntType>, Tuple, TimeWindow>() {

					private boolean open = false;

					@Override
					public void open(Configuration parameters) {
						assertEquals(PARALLELISM, getRuntimeContext().getNumberOfParallelSubtasks());
						open = true;
					}

					@Override
					public void apply(
							Tuple tuple,
							TimeWindow window,
							Iterable<Tuple2<Long, IntType>> values,
							Collector<Tuple4<Long, Long, Long, IntType>> out) {

						// validate that the function has been opened properly
						assertTrue(open);

						int sum = 0;
						long key = -1;

						for (Tuple2<Long, IntType> value : values) {
							sum += value.f1.value;
							key = value.f0;
						}
						out.collect(new Tuple4<>(key, window.getStart(), window.getEnd(), new IntType(sum)));
					}
				})
				.addSink(new ValidatingSink(NUM_KEYS, NUM_ELEMENTS_PER_KEY / WINDOW_SIZE)).setParallelism(1);


		tryExecute(env, "Tumbling Window Test");
	}
	catch (Exception e) {
		e.printStackTrace();
		fail(e.getMessage());
	}
}
 
Developer ID: axbaretto; Project: flink; Lines: 64; Source: EventTimeWindowCheckpointingITCase.java

Example 9: runBrokerFailureTest

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
public void runBrokerFailureTest() throws Exception {
	final String topic = "brokerFailureTestTopic";

	final int parallelism = 2;
	final int numElementsPerPartition = 1000;
	final int totalElements = parallelism * numElementsPerPartition;
	final int failAfterElements = numElementsPerPartition / 3;


	createTestTopic(topic, parallelism, 2);

	DataGenerators.generateRandomizedIntegerSequence(
			StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort),
			kafkaServer,
			topic, parallelism, numElementsPerPartition, true);

	// find leader to shut down
	int leaderId = kafkaServer.getLeaderToShutDown(topic);

	LOG.info("Leader to shutdown {}", leaderId);


	// run the topology (the consumers must handle the failures)

	DeserializationSchema<Integer> schema =
			new TypeInformationSerializationSchema<>(BasicTypeInfo.INT_TYPE_INFO, new ExecutionConfig());

	StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort);
	env.setParallelism(parallelism);
	env.enableCheckpointing(500);
	env.setRestartStrategy(RestartStrategies.noRestart());
	env.getConfig().disableSysoutLogging();

	Properties props = new Properties();
	props.putAll(standardProps);
	props.putAll(secureProps);
	FlinkKafkaConsumerBase<Integer> kafkaSource = kafkaServer.getConsumer(topic, schema, props);

	env
			.addSource(kafkaSource)
			.map(new PartitionValidatingMapper(parallelism, 1))
			.map(new BrokerKillingMapper<Integer>(leaderId, failAfterElements))
			.addSink(new ValidatingExactlyOnceSink(totalElements)).setParallelism(1);

	BrokerKillingMapper.killedLeaderBefore = false;
	tryExecute(env, "Broker failure once test");

	// start a new broker:
	kafkaServer.restartBroker(leaderId);
}
 
Developer ID: axbaretto; Project: flink; Lines: 51; Source: KafkaConsumerTestBase.java

Example 10: testPreAggregatedTumblingTimeWindow

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
@Test
public void testPreAggregatedTumblingTimeWindow() {
	final int NUM_ELEMENTS_PER_KEY = 3000;
	final int WINDOW_SIZE = 100;
	final int NUM_KEYS = 100;
	FailingSource.reset();

	try {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
				"localhost", cluster.getLeaderRPCPort());

		env.setParallelism(PARALLELISM);
		env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
		env.enableCheckpointing(100);
		env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 0));
		env.getConfig().disableSysoutLogging();
		env.setStateBackend(this.stateBackend);

		env
				.addSource(new FailingSource(NUM_KEYS, NUM_ELEMENTS_PER_KEY, NUM_ELEMENTS_PER_KEY / 3))
				.rebalance()
				.keyBy(0)
				.timeWindow(Time.of(WINDOW_SIZE, MILLISECONDS))
				.reduce(
						new ReduceFunction<Tuple2<Long, IntType>>() {

							@Override
							public Tuple2<Long, IntType> reduce(
									Tuple2<Long, IntType> a,
									Tuple2<Long, IntType> b) {
								return new Tuple2<>(a.f0, new IntType(a.f1.value + b.f1.value));
							}
						},
						new RichWindowFunction<Tuple2<Long, IntType>, Tuple4<Long, Long, Long, IntType>, Tuple, TimeWindow>() {

					private boolean open = false;

					@Override
					public void open(Configuration parameters) {
						assertEquals(PARALLELISM, getRuntimeContext().getNumberOfParallelSubtasks());
						open = true;
					}

					@Override
					public void apply(
							Tuple tuple,
							TimeWindow window,
							Iterable<Tuple2<Long, IntType>> input,
							Collector<Tuple4<Long, Long, Long, IntType>> out) {

						// validate that the function has been opened properly
						assertTrue(open);

						for (Tuple2<Long, IntType> in: input) {
							out.collect(new Tuple4<>(in.f0,
									window.getStart(),
									window.getEnd(),
									in.f1));
						}
					}
				})
				.addSink(new ValidatingSink(NUM_KEYS, NUM_ELEMENTS_PER_KEY / WINDOW_SIZE)).setParallelism(1);


		tryExecute(env, "Tumbling Window Test");
	}
	catch (Exception e) {
		e.printStackTrace();
		fail(e.getMessage());
	}
}
 
Developer ID: axbaretto; Project: flink; Lines: 72; Source: EventTimeWindowCheckpointingITCase.java

Example 11: testPreAggregatedSlidingTimeWindow

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
@Test
public void testPreAggregatedSlidingTimeWindow() {
	final int NUM_ELEMENTS_PER_KEY = 3000;
	final int WINDOW_SIZE = 1000;
	final int WINDOW_SLIDE = 100;
	final int NUM_KEYS = 100;
	FailingSource.reset();

	try {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
				"localhost", cluster.getLeaderRPCPort());

		env.setParallelism(PARALLELISM);
		env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
		env.enableCheckpointing(100);
		env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 0));
		env.getConfig().disableSysoutLogging();
		env.setStateBackend(this.stateBackend);

		env
				.addSource(new FailingSource(NUM_KEYS, NUM_ELEMENTS_PER_KEY, NUM_ELEMENTS_PER_KEY / 3))
				.rebalance()
				.keyBy(0)
				.timeWindow(Time.of(WINDOW_SIZE, MILLISECONDS), Time.of(WINDOW_SLIDE, MILLISECONDS))
				.reduce(
						new ReduceFunction<Tuple2<Long, IntType>>() {

							@Override
							public Tuple2<Long, IntType> reduce(
									Tuple2<Long, IntType> a,
									Tuple2<Long, IntType> b) {

								// pre-aggregate: sum the values for this key
								return new Tuple2<>(a.f0, new IntType(a.f1.value + b.f1.value));
							}
						},
						new RichWindowFunction<Tuple2<Long, IntType>, Tuple4<Long, Long, Long, IntType>, Tuple, TimeWindow>() {

					private boolean open = false;

					@Override
					public void open(Configuration parameters) {
						assertEquals(PARALLELISM, getRuntimeContext().getNumberOfParallelSubtasks());
						open = true;
					}

					@Override
					public void apply(
							Tuple tuple,
							TimeWindow window,
							Iterable<Tuple2<Long, IntType>> input,
							Collector<Tuple4<Long, Long, Long, IntType>> out) {

						// validate that the function has been opened properly
						assertTrue(open);

						for (Tuple2<Long, IntType> in: input) {
							out.collect(new Tuple4<>(in.f0,
									window.getStart(),
									window.getEnd(),
									in.f1));
						}
					}
				})
				.addSink(new ValidatingSink(NUM_KEYS, NUM_ELEMENTS_PER_KEY / WINDOW_SLIDE)).setParallelism(1);


		tryExecute(env, "Tumbling Window Test");
	}
	catch (Exception e) {
		e.printStackTrace();
		fail(e.getMessage());
	}
}
 
Developer ID: axbaretto; Project: flink; Lines: 75; Source: EventTimeWindowCheckpointingITCase.java

Example 12: testTaskManagerFailure

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
@Override
public void testTaskManagerFailure(int jobManagerPort, final File coordinateDir) throws Exception {

	final File tempCheckpointDir = tempFolder.newFolder();

	StreamExecutionEnvironment env = StreamExecutionEnvironment
			.createRemoteEnvironment("localhost", jobManagerPort);
	env.setParallelism(PARALLELISM);
	env.getConfig().disableSysoutLogging();
	env.setRestartStrategy(RestartStrategies.fixedDelayRestart(1, 1000));
	env.enableCheckpointing(200);

	env.setStateBackend(new FsStateBackend(tempCheckpointDir.getAbsoluteFile().toURI()));

	DataStream<Long> result = env.addSource(new SleepyDurableGenerateSequence(coordinateDir, DATA_COUNT))
			// add a non-chained no-op map to test the chain state restore logic
			.map(new MapFunction<Long, Long>() {
				@Override
				public Long map(Long value) throws Exception {
					return value;
				}
			}).startNewChain()
					// populate the coordinate directory so we can proceed to TaskManager failure
			.map(new Mapper(coordinateDir));

	//write result to temporary file
	result.addSink(new CheckpointedSink(DATA_COUNT));

	try {
		// blocking call until execution is done
		env.execute();

		// TODO: Figure out why this fails when ran with other tests
		// Check whether checkpoints have been cleaned up properly
		// assertDirectoryEmpty(tempCheckpointDir);
	}
	finally {
		// clean up
		if (tempCheckpointDir.exists()) {
			FileUtils.deleteDirectory(tempCheckpointDir);
		}
	}
}
 
Developer ID: axbaretto; Project: flink; Lines: 44; Source: TaskManagerProcessFailureStreamingRecoveryITCase.java

Example 13: runFailOnDeployTest

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
/**
 * Tests that the job fails during deployment when more slots are requested than the mini cluster provides.
 */
public void runFailOnDeployTest() throws Exception {
	final String topic = "failOnDeployTopic";

	createTestTopic(topic, 2, 1);

	DeserializationSchema<Integer> schema =
			new TypeInformationSerializationSchema<>(BasicTypeInfo.INT_TYPE_INFO, new ExecutionConfig());

	StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort);
	env.setParallelism(12); // needs to be more than the mini cluster has slots
	env.getConfig().disableSysoutLogging();

	Properties props = new Properties();
	props.putAll(standardProps);
	props.putAll(secureProps);
	FlinkKafkaConsumerBase<Integer> kafkaSource = kafkaServer.getConsumer(topic, schema, props);

	env
			.addSource(kafkaSource)
			.addSink(new DiscardingSink<Integer>());

	try {
		env.execute("test fail on deploy");
		fail("this test should fail with an exception");
	}
	catch (ProgramInvocationException e) {

		// validate that we failed due to a NoResourceAvailableException
		Throwable cause = e.getCause();
		int depth = 0;
		boolean foundResourceException = false;

		while (cause != null && depth++ < 20) {
			if (cause instanceof NoResourceAvailableException) {
				foundResourceException = true;
				break;
			}
			cause = cause.getCause();
		}

		assertTrue("Wrong exception", foundResourceException);
	}

	deleteTestTopic(topic);
}
 
Developer ID: axbaretto; Project: flink; Lines: 49; Source: KafkaConsumerTestBase.java

Example 14: testTimestamps

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
/**
 * Kafka 0.10 specific test, ensuring timestamps are properly written to and read from Kafka.
 */
@Test(timeout = 60000)
public void testTimestamps() throws Exception {

	final String topic = "tstopic";
	createTestTopic(topic, 3, 1);

	// ---------- Produce an event time stream into Kafka -------------------

	StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort);
	env.setParallelism(1);
	env.getConfig().setRestartStrategy(RestartStrategies.noRestart());
	env.getConfig().disableSysoutLogging();
	env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

	DataStream<Long> streamWithTimestamps = env.addSource(new SourceFunction<Long>() {
		boolean running = true;

		@Override
		public void run(SourceContext<Long> ctx) throws Exception {
			long i = 0;
			while (running) {
				ctx.collectWithTimestamp(i, i * 2);
				if (i++ == 1000L) {
					running = false;
				}
			}
		}

		@Override
		public void cancel() {
			running = false;
		}
	});

	final TypeInformationSerializationSchema<Long> longSer = new TypeInformationSerializationSchema<>(TypeInfoParser.<Long>parse("Long"), env.getConfig());
	FlinkKafkaProducer010.FlinkKafkaProducer010Configuration prod = FlinkKafkaProducer010.writeToKafkaWithTimestamps(streamWithTimestamps, topic, new KeyedSerializationSchemaWrapper<>(longSer), standardProps, new KafkaPartitioner<Long>() {
		@Override
		public int partition(Long next, byte[] serializedKey, byte[] serializedValue, int numPartitions) {
			return (int)(next % 3);
		}
	});
	prod.setParallelism(3);
	prod.setWriteTimestampToKafka(true);
	env.execute("Produce some");

	// ---------- Consume stream from Kafka -------------------

	env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort);
	env.setParallelism(1);
	env.getConfig().setRestartStrategy(RestartStrategies.noRestart());
	env.getConfig().disableSysoutLogging();
	env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

	FlinkKafkaConsumer010<Long> kafkaSource = new FlinkKafkaConsumer010<>(topic, new LimitedLongDeserializer(), standardProps);
	kafkaSource.assignTimestampsAndWatermarks(new AssignerWithPunctuatedWatermarks<Long>() {
		@Nullable
		@Override
		public Watermark checkAndGetNextWatermark(Long lastElement, long extractedTimestamp) {
			if (lastElement % 10 == 0) {
				return new Watermark(lastElement);
			}
			return null;
		}

		@Override
		public long extractTimestamp(Long element, long previousElementTimestamp) {
			return previousElementTimestamp;
		}
	});

	DataStream<Long> stream = env.addSource(kafkaSource);
	GenericTypeInfo<Object> objectTypeInfo = new GenericTypeInfo<>(Object.class);
	stream.transform("timestamp validating operator", objectTypeInfo, new TimestampValidatingOperator()).setParallelism(1);

	env.execute("Consume again");

	deleteTestTopic(topic);
}
 
Developer ID: axbaretto; Project: flink; Lines: 82; Source: Kafka010ITCase.java

Example 15: runAutoOffsetResetTest

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; // import the package/class required by this method
public void runAutoOffsetResetTest() throws Exception {
	final String topic = "auto-offset-reset-test";

	final int parallelism = 1;
	final int elementsPerPartition = 50000;

	Properties tprops = new Properties();
	tprops.setProperty("retention.ms", "250");
	kafkaServer.createTestTopic(topic, parallelism, 1, tprops);

	final StreamExecutionEnvironment env =
			StreamExecutionEnvironment.createRemoteEnvironment("localhost", flink.getLeaderRPCPort());
	env.setParallelism(parallelism);
	env.setRestartStrategy(RestartStrategies.noRestart()); // fail immediately
	env.getConfig().disableSysoutLogging();


	// ----------- add producer dataflow ----------


	DataStream<String> stream = env.addSource(new RichParallelSourceFunction<String>() {

		private boolean running = true;

		@Override
		public void run(SourceContext<String> ctx) throws InterruptedException {
			int cnt = getRuntimeContext().getIndexOfThisSubtask() * elementsPerPartition;
			int limit = cnt + elementsPerPartition;


			while (running && !stopProducer && cnt < limit) {
				ctx.collect("element-" + cnt);
				cnt++;
				Thread.sleep(10);
			}
			LOG.info("Stopping producer");
		}

		@Override
		public void cancel() {
			running = false;
		}
	});
	Properties props = new Properties();
	props.putAll(standardProps);
	props.putAll(secureProps);
	kafkaServer.produceIntoKafka(stream, topic, new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()), props, null);

	// ----------- add consumer dataflow ----------

	NonContinousOffsetsDeserializationSchema deserSchema = new NonContinousOffsetsDeserializationSchema();
	FlinkKafkaConsumerBase<String> source = kafkaServer.getConsumer(topic, deserSchema, props);

	DataStreamSource<String> consuming = env.addSource(source);
	consuming.addSink(new DiscardingSink<String>());

	tryExecute(env, "run auto offset reset test");

	kafkaServer.deleteTestTopic(topic);
}
 
Developer ID: axbaretto; Project: flink; Lines: 61; Source: KafkaShortRetentionTestBase.java


Note: The org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createRemoteEnvironment examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub/MSDocs. The code snippets are selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. For distribution and use, please refer to the corresponding project's license. Do not reproduce without permission.