Java DataStream.getType Method Code Examples

This article collects typical usage examples of the Java method org.apache.flink.streaming.api.datastream.DataStream.getType, gathered from open-source projects. If you are asking how DataStream.getType is used in practice, or looking for concrete examples, the curated code samples below may help. You can also explore further usage examples of org.apache.flink.streaming.api.datastream.DataStream, the class this method belongs to.


The following section presents 15 code examples of the DataStream.getType method, sorted by popularity by default.
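
Before the examples, a quick orientation: DataStream.getType() returns the TypeInformation that Flink extracted for the stream's element type; the examples below feed it into serializers and state descriptors. A minimal, self-contained sketch (the sample data and class name are illustrative, not taken from any project below):

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class GetTypeSketch {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

		// fromElements lets Flink infer the TypeInformation for Tuple2<Integer, Long>.
		DataStream<Tuple2<Integer, Long>> stream = env.fromElements(Tuple2.of(1, 10L), Tuple2.of(2, 20L));

		// getType() exposes that inferred TypeInformation, e.g. for building serializers.
		TypeInformation<Tuple2<Integer, Long>> typeInfo = stream.getType();
		System.out.println(typeInfo);
	}
}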

Example 1: addSink

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Writes a DataStream into a Cassandra database.
 *
 * @param input input DataStream
 * @param <IN>  input type
 * @return CassandraSinkBuilder, to further configure the sink
 */
public static <IN> CassandraSinkBuilder<IN> addSink(DataStream<IN> input) {
	TypeInformation<IN> typeInfo = input.getType();
	if (typeInfo instanceof TupleTypeInfo) {
		DataStream<Tuple> tupleInput = (DataStream<Tuple>) input;
		return (CassandraSinkBuilder<IN>) new CassandraTupleSinkBuilder<>(tupleInput, tupleInput.getType(), tupleInput.getType().createSerializer(tupleInput.getExecutionEnvironment().getConfig()));
	}
	if (typeInfo instanceof RowTypeInfo) {
		DataStream<Row> rowInput = (DataStream<Row>) input;
		return (CassandraSinkBuilder<IN>) new CassandraRowSinkBuilder(rowInput, rowInput.getType(), rowInput.getType().createSerializer(rowInput.getExecutionEnvironment().getConfig()));
	}
	if (typeInfo instanceof PojoTypeInfo) {
		return new CassandraPojoSinkBuilder<>(input, input.getType(), input.getType().createSerializer(input.getExecutionEnvironment().getConfig()));
	}
	if (typeInfo instanceof CaseClassTypeInfo) {
		DataStream<Product> productInput = (DataStream<Product>) input;
		return (CassandraSinkBuilder<IN>) new CassandraScalaProductSinkBuilder<>(productInput, productInput.getType(), productInput.getType().createSerializer(input.getExecutionEnvironment().getConfig()));
	}
	throw new IllegalArgumentException("No support for the type of the given DataStream: " + input.getType());
}
 
Developer ID: axbaretto, Project: flink, Lines: 27, Source: CassandraSink.java
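
For context on how the returned builder is typically finished: a hedged usage sketch, assuming the fluent setQuery/setHost/build calls of Flink's Cassandra connector; the keyspace, table, and host are placeholders:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;

public class CassandraSinkUsageSketch {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

		// A tuple stream, so addSink takes the TupleTypeInfo branch shown above.
		DataStream<Tuple2<String, Long>> result = env.fromElements(Tuple2.of("flink", 1L));

		// Placeholder query and host; adapt to your Cassandra setup.
		CassandraSink.addSink(result)
				.setQuery("INSERT INTO example.wordcount(word, count) VALUES (?, ?);")
				.setHost("127.0.0.1")
				.build();

		env.execute("cassandra-sink-sketch");
	}
}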

Example 2: registerStream

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Define siddhi stream with streamId, source <code>DataStream</code> and stream schema.
 *
 * @param streamId Unique siddhi streamId
 * @param dataStream DataStream to bind to the siddhi stream.
 * @param fieldNames Siddhi stream schema field names
 */
public <T> void registerStream(final String streamId, DataStream<T> dataStream, String... fieldNames) {
	Preconditions.checkNotNull(streamId, "streamId");
	Preconditions.checkNotNull(dataStream, "dataStream");
	Preconditions.checkNotNull(fieldNames, "fieldNames");
	if (isStreamDefined(streamId)) {
		throw new DuplicatedStreamException("Input stream: " + streamId + " already exists");
	}
	dataStreams.put(streamId, dataStream);
	SiddhiStreamSchema<T> schema = new SiddhiStreamSchema<>(dataStream.getType(), fieldNames);
	schema.setTypeSerializer(schema.getTypeInfo().createSerializer(dataStream.getExecutionConfig()));
	dataStreamSchemas.put(streamId, schema);
}
 
Developer ID: haoch, Project: flink-siddhi, Lines: 20, Source: SiddhiCEP.java
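
To see registerStream in context, here is a hedged usage sketch modeled on the flink-siddhi README; getSiddhiEnvironment, from, cql, and returnAsMap are the API names documented there, while the stream id, field names, and query are illustrative:

import java.util.Map;

import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.siddhi.SiddhiCEP; // package path varies across flink-siddhi versions

public class SiddhiRegisterStreamSketch {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		SiddhiCEP cep = SiddhiCEP.getSiddhiEnvironment(env);

		DataStream<Tuple3<Integer, String, Double>> input =
				env.fromElements(Tuple3.of(1, "a", 1.0), Tuple3.of(2, "b", 2.0));

		// Bind the DataStream to a Siddhi stream; one field name per tuple field.
		cep.registerStream("inputStream", input, "id", "name", "price");

		DataStream<Map<String, Object>> output = cep
				.from("inputStream")
				.cql("from inputStream select id, name, price insert into outputStream")
				.returnAsMap("outputStream");

		output.print();
		env.execute("siddhi-register-stream-sketch");
	}
}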

Example 3: registerStream

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Define siddhi stream with streamId, source <code>DataStream</code> and stream schema.
 *
 * @param streamId Unique siddhi streamId
 * @param dataStream DataStream to bind to the siddhi stream.
 * @param fieldNames Siddhi stream schema field names
 */
public <T> void registerStream(final String streamId, DataStream<T> dataStream, String... fieldNames) {
    Preconditions.checkNotNull(streamId, "streamId");
    Preconditions.checkNotNull(dataStream, "dataStream");
    Preconditions.checkNotNull(fieldNames, "fieldNames");
    if (isStreamDefined(streamId)) {
        throw new DuplicatedStreamException("Input stream: " + streamId + " already exists");
    }
    dataStreams.put(streamId, dataStream);
    SiddhiStreamSchema<T> schema = new SiddhiStreamSchema<>(dataStream.getType(), fieldNames);
    schema.setTypeSerializer(schema.getTypeInfo().createSerializer(dataStream.getExecutionConfig()));
    dataStreamSchemas.put(streamId, schema);
}
 
Developer ID: apache, Project: bahir-flink, Lines: 20, Source: SiddhiCEP.java

Example 4: testValueState

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Tests simple value state queryable state instance. Each source emits
 * (subtaskIndex, 0)..(subtaskIndex, numElements) tuples, which are then
 * queried. The test succeeds after each subtask index is queried with
 * value numElements (the latest element updated the state).
 */
@Test
public void testValueState() throws Exception {

	final Deadline deadline = TEST_TIMEOUT.fromNow();
	final long numElements = 1024L;

	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
	env.setStateBackend(stateBackend);
	env.setParallelism(maxParallelism);
	// Very important, because cluster is shared between tests and we
	// don't explicitly check that all slots are available before
	// submitting.
	env.setRestartStrategy(RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, 1000L));

	DataStream<Tuple2<Integer, Long>> source = env.addSource(new TestAscendingValueSource(numElements));

	// Value state
	ValueStateDescriptor<Tuple2<Integer, Long>> valueState = new ValueStateDescriptor<>("any", source.getType());

	source.keyBy(new KeySelector<Tuple2<Integer, Long>, Integer>() {
		private static final long serialVersionUID = 7662520075515707428L;

		@Override
		public Integer getKey(Tuple2<Integer, Long> value) {
			return value.f0;
		}
	}).asQueryableState("hakuna", valueState);

	try (AutoCancellableJob autoCancellableJob = new AutoCancellableJob(cluster, env, deadline)) {

		final JobID jobId = autoCancellableJob.getJobId();
		final JobGraph jobGraph = autoCancellableJob.getJobGraph();

		cluster.submitJobDetached(jobGraph);

		executeValueQuery(deadline, client, jobId, "hakuna", valueState, numElements);
	}
}
 
Developer ID: axbaretto, Project: flink, Lines: 45, Source: AbstractQueryableStateTestBase.java
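
TestAscendingValueSource is a helper defined elsewhere in the test base. Based solely on the Javadoc above (each source emits (subtaskIndex, 0)..(subtaskIndex, numElements)), it plausibly looks like this hedged reconstruction; only the emitted behavior, not the exact body, is confirmed by the source:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;

public class TestAscendingValueSourceSketch extends RichParallelSourceFunction<Tuple2<Integer, Long>> {

	private final long maxValue;
	private volatile boolean running = true;

	public TestAscendingValueSourceSketch(long maxValue) {
		this.maxValue = maxValue;
	}

	@Override
	public void run(SourceContext<Tuple2<Integer, Long>> ctx) throws Exception {
		int subtaskIndex = getRuntimeContext().getIndexOfThisSubtask();
		for (long value = 0; value <= maxValue && running; value++) {
			// Hold the checkpoint lock so emission and checkpoints don't interleave.
			synchronized (ctx.getCheckpointLock()) {
				ctx.collect(Tuple2.of(subtaskIndex, value));
			}
		}
		// Stay alive so the queryable state remains available to the test's queries.
		while (running) {
			Thread.sleep(50L);
		}
	}

	@Override
	public void cancel() {
		running = false;
	}
}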

Example 5: addSink

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Writes a DataStream into a Cassandra database.
 *
 * @param input input DataStream
 * @param <IN>  input type
 * @return CassandraSinkBuilder, to further configure the sink
 */
public static <IN, T extends Tuple> CassandraSinkBuilder<IN> addSink(DataStream<IN> input) {
	if (input.getType() instanceof TupleTypeInfo) {
		DataStream<T> tupleInput = (DataStream<T>) input;
		return (CassandraSinkBuilder<IN>) new CassandraTupleSinkBuilder<>(tupleInput, tupleInput.getType(), tupleInput.getType().createSerializer(tupleInput.getExecutionEnvironment().getConfig()));
	} else {
		return new CassandraPojoSinkBuilder<>(input, input.getType(), input.getType().createSerializer(input.getExecutionEnvironment().getConfig()));
	}
}
 
Developer ID: axbaretto, Project: flink, Lines: 16, Source: CassandraSink.java

Example 6: HTMStream

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
HTMStream(final DataStream<T> input, NetworkFactory<T> networkFactory) {
    this.inferenceStreamBuilder = new InferenceStreamBuilder(input, networkFactory);
    this.inputType = input.getType();
}
 
Developer ID: htm-community, Project: flink-htm, Lines: 5, Source: HTMStream.java

Example 7: testValueState

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Tests simple value state queryable state instance. Each source emits
 * (subtaskIndex, 0)..(subtaskIndex, numElements) tuples, which are then
 * queried. The test succeeds after each subtask index is queried with
 * value numElements (the latest element updated the state).
 */
@Test
public void testValueState() throws Exception {
	// Config
	final Deadline deadline = TEST_TIMEOUT.fromNow();

	final int numElements = 1024;

	final QueryableStateClient client = new QueryableStateClient(cluster.configuration());

	JobID jobId = null;
	try {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		env.setStateBackend(stateBackend);
		env.setParallelism(maxParallelism);
		// Very important, because cluster is shared between tests and we
		// don't explicitly check that all slots are available before
		// submitting.
		env.setRestartStrategy(RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, 1000));

		DataStream<Tuple2<Integer, Long>> source = env
				.addSource(new TestAscendingValueSource(numElements));

		// Value state
		ValueStateDescriptor<Tuple2<Integer, Long>> valueState = new ValueStateDescriptor<>(
				"any",
				source.getType());

		QueryableStateStream<Integer, Tuple2<Integer, Long>> queryableState =
				source.keyBy(new KeySelector<Tuple2<Integer, Long>, Integer>() {
					private static final long serialVersionUID = 7662520075515707428L;

					@Override
					public Integer getKey(Tuple2<Integer, Long> value) throws Exception {
						return value.f0;
					}
				}).asQueryableState("hakuna", valueState);

		// Submit the job graph
		JobGraph jobGraph = env.getStreamGraph().getJobGraph();
		jobId = jobGraph.getJobID();

		cluster.submitJobDetached(jobGraph);

		// Now query
		long expected = numElements;

		executeQuery(deadline, client, jobId, "hakuna", valueState, expected);
	} finally {
		// Free cluster resources
		if (jobId != null) {
			Future<CancellationSuccess> cancellation = cluster
					.getLeaderGateway(deadline.timeLeft())
					.ask(new JobManagerMessages.CancelJob(jobId), deadline.timeLeft())
					.mapTo(ClassTag$.MODULE$.<CancellationSuccess>apply(CancellationSuccess.class));

			Await.ready(cancellation, deadline.timeLeft());
		}

		client.shutDown();
	}
}
 
Developer ID: axbaretto, Project: flink, Lines: 68, Source: AbstractQueryableStateITCase.java

Example 8: testQueryNonStartedJobState

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Similar tests as {@link #testValueState()} but before submitting the
 * job, we already issue one request which fails.
 */
@Test
public void testQueryNonStartedJobState() throws Exception {
	// Config
	final Deadline deadline = TEST_TIMEOUT.fromNow();

	final int numElements = 1024;

	final QueryableStateClient client = new QueryableStateClient(cluster.configuration());

	JobID jobId = null;
	try {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		env.setStateBackend(stateBackend);
		env.setParallelism(maxParallelism);
		// Very important, because cluster is shared between tests and we
		// don't explicitly check that all slots are available before
		// submitting.
		env.setRestartStrategy(RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, 1000));

		DataStream<Tuple2<Integer, Long>> source = env
			.addSource(new TestAscendingValueSource(numElements));

		// Value state
		ValueStateDescriptor<Tuple2<Integer, Long>> valueState = new ValueStateDescriptor<>(
			"any",
			source.getType(),
			null);

		QueryableStateStream<Integer, Tuple2<Integer, Long>> queryableState =
			source.keyBy(new KeySelector<Tuple2<Integer, Long>, Integer>() {
				private static final long serialVersionUID = 7480503339992214681L;

				@Override
				public Integer getKey(Tuple2<Integer, Long> value) throws Exception {
					return value.f0;
				}
			}).asQueryableState("hakuna", valueState);

		// Submit the job graph
		JobGraph jobGraph = env.getStreamGraph().getJobGraph();
		jobId = jobGraph.getJobID();

		// Now query
		long expected = numElements;

		// query once
		client.getKvState(
				jobId,
				queryableState.getQueryableStateName(),
				0,
				VoidNamespace.INSTANCE,
				BasicTypeInfo.INT_TYPE_INFO,
				VoidNamespaceTypeInfo.INSTANCE,
				valueState);

		cluster.submitJobDetached(jobGraph);

		executeQuery(deadline, client, jobId, "hakuna", valueState, expected);
	} finally {
		// Free cluster resources
		if (jobId != null) {
			Future<CancellationSuccess> cancellation = cluster
				.getLeaderGateway(deadline.timeLeft())
				.ask(new JobManagerMessages.CancelJob(jobId), deadline.timeLeft())
				.mapTo(ClassTag$.MODULE$.<CancellationSuccess>apply(CancellationSuccess.class));

			Await.ready(cancellation, deadline.timeLeft());
		}

		client.shutDown();
	}
}
 
Developer ID: axbaretto, Project: flink, Lines: 77, Source: AbstractQueryableStateITCase.java

Example 9: testReducingState

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Tests simple reducing state queryable state instance. Each source emits
 * (subtaskIndex, 0)..(subtaskIndex, numElements) tuples, which are then
 * queried. The reducing state instance sums these up. The test succeeds
 * after each subtask index is queried with result n*(n+1)/2.
 */
@Test
public void testReducingState() throws Exception {
	// Config
	final Deadline deadline = TEST_TIMEOUT.fromNow();

	final int numElements = 1024;

	final QueryableStateClient client = new QueryableStateClient(cluster.configuration());

	JobID jobId = null;
	try {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		env.setStateBackend(stateBackend);
		env.setParallelism(maxParallelism);
		// Very important, because cluster is shared between tests and we
		// don't explicitly check that all slots are available before
		// submitting.
		env.setRestartStrategy(RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, 1000));

		DataStream<Tuple2<Integer, Long>> source = env
				.addSource(new TestAscendingValueSource(numElements));

		// Reducing state
		ReducingStateDescriptor<Tuple2<Integer, Long>> reducingState =
				new ReducingStateDescriptor<>(
						"any",
						new SumReduce(),
						source.getType());

		QueryableStateStream<Integer, Tuple2<Integer, Long>> queryableState =
				source.keyBy(new KeySelector<Tuple2<Integer, Long>, Integer>() {
					private static final long serialVersionUID = 8470749712274833552L;

					@Override
					public Integer getKey(Tuple2<Integer, Long> value) throws Exception {
						return value.f0;
					}
				}).asQueryableState("jungle", reducingState);

		// Submit the job graph
		JobGraph jobGraph = env.getStreamGraph().getJobGraph();
		jobId = jobGraph.getJobID();

		cluster.submitJobDetached(jobGraph);

		// Wait until job is running

		// Now query
		long expected = numElements * (numElements + 1) / 2;

		executeQuery(deadline, client, jobId, "jungle", reducingState, expected);
	} finally {
		// Free cluster resources
		if (jobId != null) {
			Future<CancellationSuccess> cancellation = cluster
					.getLeaderGateway(deadline.timeLeft())
					.ask(new JobManagerMessages.CancelJob(jobId), deadline.timeLeft())
					.mapTo(ClassTag$.MODULE$.<CancellationSuccess>apply(CancellationSuccess.class));

			Await.ready(cancellation, deadline.timeLeft());
		}

		client.shutDown();
	}
}
 
Developer ID: axbaretto, Project: flink, Lines: 72, Source: AbstractQueryableStateITCase.java
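
SumReduce is referenced above but defined elsewhere in the test class. Since the test expects each key to reach 1 + 2 + ... + numElements = n*(n+1)/2, the reducer presumably sums the Long field while keeping the key; a hedged reconstruction:

import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.tuple.Tuple2;

public class SumReduceSketch implements ReduceFunction<Tuple2<Integer, Long>> {
	private static final long serialVersionUID = 1L;

	@Override
	public Tuple2<Integer, Long> reduce(Tuple2<Integer, Long> value1, Tuple2<Integer, Long> value2) {
		// Keys are equal within a keyed stream; add up the Long values.
		return Tuple2.of(value1.f0, value1.f1 + value2.f1);
	}
}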

Example 10: testValueState

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Tests simple value state queryable state instance. Each source emits
 * (subtaskIndex, 0)..(subtaskIndex, numElements) tuples, which are then
 * queried. The test succeeds after each subtask index is queried with
 * value numElements (the latest element updated the state).
 */
@Test
public void testValueState() throws Exception {
	// Config
	final Deadline deadline = TEST_TIMEOUT.fromNow();

	final int numElements = 1024;

	final QueryableStateClient client = new QueryableStateClient(cluster.configuration());

	JobID jobId = null;
	try {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		env.setParallelism(NUM_SLOTS);
		// Very important, because cluster is shared between tests and we
		// don't explicitly check that all slots are available before
		// submitting.
		env.setRestartStrategy(RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, 1000));

		DataStream<Tuple2<Integer, Long>> source = env
				.addSource(new TestAscendingValueSource(numElements));

		// Value state
		ValueStateDescriptor<Tuple2<Integer, Long>> valueState = new ValueStateDescriptor<>(
				"any",
				source.getType());

		QueryableStateStream<Integer, Tuple2<Integer, Long>> queryableState =
				source.keyBy(new KeySelector<Tuple2<Integer, Long>, Integer>() {
					@Override
					public Integer getKey(Tuple2<Integer, Long> value) throws Exception {
						return value.f0;
					}
				}).asQueryableState("hakuna", valueState);

		// Submit the job graph
		JobGraph jobGraph = env.getStreamGraph().getJobGraph();
		jobId = jobGraph.getJobID();

		cluster.submitJobDetached(jobGraph);

		// Now query
		long expected = numElements;

		executeValueQuery(deadline, client, jobId, queryableState,
			expected);
	} finally {
		// Free cluster resources
		if (jobId != null) {
			Future<CancellationSuccess> cancellation = cluster
					.getLeaderGateway(deadline.timeLeft())
					.ask(new JobManagerMessages.CancelJob(jobId), deadline.timeLeft())
					.mapTo(ClassTag$.MODULE$.<CancellationSuccess>apply(CancellationSuccess.class));

			Await.ready(cancellation, deadline.timeLeft());
		}

		client.shutDown();
	}
}
 
Developer ID: axbaretto, Project: flink, Lines: 66, Source: QueryableStateITCase.java

Example 11: testQueryNonStartedJobState

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Similar tests as {@link #testValueState()} but before submitting the
 * job, we already issue one request which fails.
 */
@Test
public void testQueryNonStartedJobState() throws Exception {
	// Config
	final Deadline deadline = TEST_TIMEOUT.fromNow();

	final int numElements = 1024;

	final QueryableStateClient client = new QueryableStateClient(cluster.configuration());

	JobID jobId = null;
	try {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		env.setParallelism(NUM_SLOTS);
		// Very important, because cluster is shared between tests and we
		// don't explicitly check that all slots are available before
		// submitting.
		env.setRestartStrategy(RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, 1000));

		DataStream<Tuple2<Integer, Long>> source = env
			.addSource(new TestAscendingValueSource(numElements));

		// Value state
		ValueStateDescriptor<Tuple2<Integer, Long>> valueState = new ValueStateDescriptor<>(
			"any",
			source.getType(),
			null);

		QueryableStateStream<Integer, Tuple2<Integer, Long>> queryableState =
			source.keyBy(new KeySelector<Tuple2<Integer, Long>, Integer>() {
				@Override
				public Integer getKey(Tuple2<Integer, Long> value) throws Exception {
					return value.f0;
				}
			}).asQueryableState("hakuna", valueState);

		// Submit the job graph
		JobGraph jobGraph = env.getStreamGraph().getJobGraph();
		jobId = jobGraph.getJobID();

		// Now query
		long expected = numElements;

		// query once
		client.getKvState(jobId, queryableState.getQueryableStateName(), 0,
			KvStateRequestSerializer.serializeKeyAndNamespace(
				0,
				queryableState.getKeySerializer(),
				VoidNamespace.INSTANCE,
				VoidNamespaceSerializer.INSTANCE));

		cluster.submitJobDetached(jobGraph);

		executeValueQuery(deadline, client, jobId, queryableState,
			expected);
	} finally {
		// Free cluster resources
		if (jobId != null) {
			Future<CancellationSuccess> cancellation = cluster
				.getLeaderGateway(deadline.timeLeft())
				.ask(new JobManagerMessages.CancelJob(jobId), deadline.timeLeft())
				.mapTo(ClassTag$.MODULE$.<CancellationSuccess>apply(CancellationSuccess.class));

			Await.ready(cancellation, deadline.timeLeft());
		}

		client.shutDown();
	}
}
 
Developer ID: axbaretto, Project: flink, Lines: 73, Source: QueryableStateITCase.java

Example 12: testReducingState

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Tests simple reducing state queryable state instance. Each source emits
 * (subtaskIndex, 0)..(subtaskIndex, numElements) tuples, which are then
 * queried. The reducing state instance sums these up. The test succeeds
 * after each subtask index is queried with result n*(n+1)/2.
 */
@Test
public void testReducingState() throws Exception {
	// Config
	final Deadline deadline = TEST_TIMEOUT.fromNow();

	final int numElements = 1024;

	final QueryableStateClient client = new QueryableStateClient(cluster.configuration());

	JobID jobId = null;
	try {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		env.setParallelism(NUM_SLOTS);
		// Very important, because cluster is shared between tests and we
		// don't explicitly check that all slots are available before
		// submitting.
		env.setRestartStrategy(RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, 1000));

		DataStream<Tuple2<Integer, Long>> source = env
				.addSource(new TestAscendingValueSource(numElements));

		// Reducing state
		ReducingStateDescriptor<Tuple2<Integer, Long>> reducingState =
				new ReducingStateDescriptor<>(
						"any",
						new SumReduce(),
						source.getType());

		QueryableStateStream<Integer, Tuple2<Integer, Long>> queryableState =
				source.keyBy(new KeySelector<Tuple2<Integer, Long>, Integer>() {
					@Override
					public Integer getKey(Tuple2<Integer, Long> value) throws Exception {
						return value.f0;
					}
				}).asQueryableState("jungle", reducingState);

		// Submit the job graph
		JobGraph jobGraph = env.getStreamGraph().getJobGraph();
		jobId = jobGraph.getJobID();

		cluster.submitJobDetached(jobGraph);

		// Wait until job is running

		// Now query
		long expected = numElements * (numElements + 1) / 2;

		executeValueQuery(deadline, client, jobId, queryableState,
			expected);
	} finally {
		// Free cluster resources
		if (jobId != null) {
			Future<CancellationSuccess> cancellation = cluster
					.getLeaderGateway(deadline.timeLeft())
					.ask(new JobManagerMessages.CancelJob(jobId), deadline.timeLeft())
					.mapTo(ClassTag$.MODULE$.<CancellationSuccess>apply(CancellationSuccess.class));

			Await.ready(cancellation, deadline.timeLeft());
		}

		client.shutDown();
	}
}
 
Developer ID: axbaretto, Project: flink, Lines: 70, Source: QueryableStateITCase.java

Example 13: testQueryNonStartedJobState

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Similar tests as {@link #testValueState()} but before submitting the
 * job, we already issue one request which fails.
 */
@Test
public void testQueryNonStartedJobState() throws Exception {

	final Deadline deadline = TEST_TIMEOUT.fromNow();
	final long numElements = 1024L;

	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
	env.setStateBackend(stateBackend);
	env.setParallelism(maxParallelism);
	// Very important, because cluster is shared between tests and we
	// don't explicitly check that all slots are available before
	// submitting.
	env.setRestartStrategy(RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, 1000L));

	DataStream<Tuple2<Integer, Long>> source = env.addSource(new TestAscendingValueSource(numElements));

	ValueStateDescriptor<Tuple2<Integer, Long>> valueState = new ValueStateDescriptor<>(
		"any", source.getType(), null);

	QueryableStateStream<Integer, Tuple2<Integer, Long>> queryableState =
			source.keyBy(new KeySelector<Tuple2<Integer, Long>, Integer>() {

				private static final long serialVersionUID = 7480503339992214681L;

				@Override
				public Integer getKey(Tuple2<Integer, Long> value) {
					return value.f0;
				}
			}).asQueryableState("hakuna", valueState);

	try (AutoCancellableJob autoCancellableJob = new AutoCancellableJob(cluster, env, deadline)) {

		final JobID jobId = autoCancellableJob.getJobId();
		final JobGraph jobGraph = autoCancellableJob.getJobGraph();

		long expected = numElements;

		// query once
		client.getKvState(
				autoCancellableJob.getJobId(),
				queryableState.getQueryableStateName(),
				0,
				BasicTypeInfo.INT_TYPE_INFO,
				valueState);

		cluster.submitJobDetached(jobGraph);

		executeValueQuery(deadline, client, jobId, "hakuna", valueState, expected);
	}
}
 
Developer ID: axbaretto, Project: flink, Lines: 55, Source: AbstractQueryableStateTestBase.java
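
executeValueQuery is another helper from the test base. Given the getKvState overload used above (jobId, state name, key, key TypeInformation, state descriptor), it plausibly polls each key until the expected value arrives; a hedged sketch with an illustrative name (pollValueState) and maxParallelism passed in explicitly:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.api.common.JobID;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.queryablestate.client.QueryableStateClient;

import scala.concurrent.duration.Deadline;

public class ValueQuerySketch {

	static void pollValueState(
			Deadline deadline,
			QueryableStateClient client,
			JobID jobId,
			String queryableStateName,
			ValueStateDescriptor<Tuple2<Integer, Long>> descriptor,
			long expected,
			int maxParallelism) throws Exception {

		for (int key = 0; key < maxParallelism; key++) {
			boolean success = false;
			while (deadline.hasTimeLeft() && !success) {
				// Same getKvState overload as in the example above.
				CompletableFuture<ValueState<Tuple2<Integer, Long>>> future = client.getKvState(
						jobId, queryableStateName, key, BasicTypeInfo.INT_TYPE_INFO, descriptor);
				try {
					Tuple2<Integer, Long> value = future
							.get(deadline.timeLeft().toMillis(), TimeUnit.MILLISECONDS)
							.value();
					success = value != null && value.f1 == expected;
				} catch (Exception e) {
					// State may not be registered yet; back off and retry until the deadline.
					Thread.sleep(50L);
				}
			}
			if (!success) {
				throw new AssertionError("Key " + key + " never reached the expected value " + expected);
			}
		}
	}
}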

Example 14: testValueStateDefault

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Tests simple value state queryable state instance with a default value
 * set. Each source emits (subtaskIndex, 0)..(subtaskIndex, numElements)
 * tuples, the key is mapped to 1 but key 0 is queried which should throw
 * a {@link UnknownKeyOrNamespaceException} exception.
 *
 * @throws UnknownKeyOrNamespaceException thrown due querying a non-existent key
 */
@Test(expected = UnknownKeyOrNamespaceException.class)
public void testValueStateDefault() throws Throwable {

	final Deadline deadline = TEST_TIMEOUT.fromNow();
	final long numElements = 1024L;

	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
	env.setStateBackend(stateBackend);
	env.setParallelism(maxParallelism);
	// Very important, because cluster is shared between tests and we
	// don't explicitly check that all slots are available before
	// submitting.
	env.setRestartStrategy(RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, 1000L));

	DataStream<Tuple2<Integer, Long>> source = env.addSource(new TestAscendingValueSource(numElements));

	ValueStateDescriptor<Tuple2<Integer, Long>> valueState = new ValueStateDescriptor<>(
			"any", source.getType(), Tuple2.of(0, 1337L));

	// only expose key "1"
	QueryableStateStream<Integer, Tuple2<Integer, Long>> queryableState = source.keyBy(
			new KeySelector<Tuple2<Integer, Long>, Integer>() {
				private static final long serialVersionUID = 4509274556892655887L;

				@Override
				public Integer getKey(Tuple2<Integer, Long> value) {
					return 1;
				}
			}).asQueryableState("hakuna", valueState);

	try (AutoCancellableJob autoCancellableJob = new AutoCancellableJob(cluster, env, deadline)) {

		final JobID jobId = autoCancellableJob.getJobId();
		final JobGraph jobGraph = autoCancellableJob.getJobGraph();

		cluster.submitJobDetached(jobGraph);

		// Now query
		int key = 0;
		CompletableFuture<ValueState<Tuple2<Integer, Long>>> future = getKvState(
				deadline,
				client,
				jobId,
				queryableState.getQueryableStateName(),
				key,
				BasicTypeInfo.INT_TYPE_INFO,
				valueState,
				true,
				executor);

		try {
			future.get(deadline.timeLeft().toMillis(), TimeUnit.MILLISECONDS);
		} catch (ExecutionException | CompletionException e) {
			// get() on a completedExceptionally future wraps the
			// exception in an ExecutionException.
			throw e.getCause();
		}
	}
}
 
Developer ID: axbaretto, Project: flink, Lines: 68, Source: AbstractQueryableStateTestBase.java

Example 15: testValueState

import org.apache.flink.streaming.api.datastream.DataStream; // import the class that declares this method
/**
 * Tests simple value state queryable state instance. Each source emits
 * (subtaskIndex, 0)..(subtaskIndex, numElements) tuples, which are then
 * queried. The test succeeds after each subtask index is queried with
 * value numElements (the latest element updated the state).
 */
@Test
public void testValueState() throws Exception {
	// Config
	final Deadline deadline = TEST_TIMEOUT.fromNow();

	final long numElements = 1024L;

	final QueryableStateClient client = new QueryableStateClient(
			"localhost",
			Integer.parseInt(QueryableStateOptions.PROXY_PORT_RANGE.defaultValue()));

	JobID jobId = null;
	try {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		env.setStateBackend(stateBackend);
		env.setParallelism(maxParallelism);
		// Very important, because cluster is shared between tests and we
		// don't explicitly check that all slots are available before
		// submitting.
		env.setRestartStrategy(RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, 1000L));

		DataStream<Tuple2<Integer, Long>> source = env
				.addSource(new TestAscendingValueSource(numElements));

		// Value state
		ValueStateDescriptor<Tuple2<Integer, Long>> valueState = new ValueStateDescriptor<>(
				"any",
				source.getType());

		source.keyBy(new KeySelector<Tuple2<Integer, Long>, Integer>() {
			private static final long serialVersionUID = 7662520075515707428L;

			@Override
			public Integer getKey(Tuple2<Integer, Long> value) throws Exception {
				return value.f0;
			}
		}).asQueryableState("hakuna", valueState);

		// Submit the job graph
		JobGraph jobGraph = env.getStreamGraph().getJobGraph();
		jobId = jobGraph.getJobID();

		cluster.submitJobDetached(jobGraph);

		executeValueQuery(deadline, client, jobId, "hakuna", valueState, numElements);
	} finally {
		// Free cluster resources
		if (jobId != null) {
			CompletableFuture<CancellationSuccess> cancellation = FutureUtils.toJava(cluster
					.getLeaderGateway(deadline.timeLeft())
					.ask(new JobManagerMessages.CancelJob(jobId), deadline.timeLeft())
					.mapTo(ClassTag$.MODULE$.<CancellationSuccess>apply(CancellationSuccess.class)));

			cancellation.get(deadline.timeLeft().toMillis(), TimeUnit.MILLISECONDS);
		}

		client.shutdown();
	}
}
 
Developer ID: axbaretto, Project: flink, Lines: 66, Source: AbstractQueryableStateITCase.java


Note: The org.apache.flink.streaming.api.datastream.DataStream.getType method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by many developers; copyright of the source code belongs to the original authors. Please follow the corresponding project's License when distributing or using the code, and do not republish without permission.