

Java DataStream.transform Method Code Examples

This article collects typical usage examples of the Java method org.apache.flink.streaming.api.datastream.DataStream.transform. If you are wondering what DataStream.transform does, how to call it, or what it looks like in real code, the curated examples below may help. You can also explore further usage examples of org.apache.flink.streaming.api.datastream.DataStream, the class this method belongs to.


Below are 6 code examples of DataStream.transform, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code examples.
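For reference, the transform overload used throughout these examples has the following shape on DataStream&lt;IN&gt; in the Flink versions these projects target (signature paraphrased from the DataStream API; later Flink releases add operator-factory overloads, so check the docs for your version):

```java
// Applies a user-defined one-input operator to this stream and returns
// the resulting stream, typed by the explicitly supplied output type.
public <R> SingleOutputStreamOperator<R> transform(
        String operatorName,                    // name shown in the job graph / web UI
        TypeInformation<R> outTypeInfo,         // output type, e.g. BasicTypeInfo.INT_TYPE_INFO
        OneInputStreamOperator<IN, R> operator) // the custom operator implementation
```

The explicit TypeInformation argument is needed because Flink cannot infer the output type of an arbitrary operator; each example below passes it alongside the operator instance.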

Example 1: testOutputTypeConfigurationWithOneInputTransformation

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class this method depends on
/**
 * Test whether an {@link OutputTypeConfigurable} implementation gets called with the correct
 * output type. In this test case the output type must be BasicTypeInfo.INT_TYPE_INFO.
 *
 * @throws Exception
 */
@Test
public void testOutputTypeConfigurationWithOneInputTransformation() throws Exception {
	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

	DataStream<Integer> source = env.fromElements(1, 10);

	OutputTypeConfigurableOperationWithOneInput outputTypeConfigurableOperation = new OutputTypeConfigurableOperationWithOneInput();

	DataStream<Integer> result = source.transform(
		"Single input and output type configurable operation",
		BasicTypeInfo.INT_TYPE_INFO,
		outputTypeConfigurableOperation);

	result.addSink(new DiscardingSink<Integer>());

	env.getStreamGraph();

	assertEquals(BasicTypeInfo.INT_TYPE_INFO, outputTypeConfigurableOperation.getTypeInformation());
}
 
Author: axbaretto, Project: flink, Lines of code: 26, Source file: StreamGraphGeneratorTest.java

Example 2: testOperatorChainedToSource

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class this method depends on
/**
 * Note: this test fails if we don't check for exceptions in the source contexts and do not
 * synchronize in the source contexts.
 */
@Test
public void testOperatorChainedToSource() throws Exception {

	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
	env.setStreamTimeCharacteristic(timeCharacteristic);
	env.setParallelism(1);

	DataStream<String> source = env.addSource(new InfiniteTestSource());

	source.transform("Custom Operator", BasicTypeInfo.STRING_TYPE_INFO, new TimerOperator(ChainingStrategy.ALWAYS));

	boolean testSuccess = false;
	try {
		env.execute("Timer test");
	} catch (JobExecutionException e) {
		if (e.getCause() instanceof TimerException) {
			TimerException te = (TimerException) e.getCause();
			if (te.getCause() instanceof RuntimeException) {
				RuntimeException re = (RuntimeException) te.getCause();
				if (re.getMessage().equals("TEST SUCCESS")) {
					testSuccess = true;
				} else {
					throw e;
				}
			} else {
				throw e;
			}
		} else {
			throw e;
		}
	}
	Assert.assertTrue(testSuccess);
}
 
Author: axbaretto, Project: flink, Lines of code: 38, Source file: StreamTaskTimerITCase.java

Example 3: testOneInputOperatorWithoutChaining

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class this method depends on
/**
 * Note: this test fails if we don't check for exceptions in the source contexts and do not
 * synchronize in the source contexts.
 */
@Test
public void testOneInputOperatorWithoutChaining() throws Exception {
	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
	env.setStreamTimeCharacteristic(timeCharacteristic);
	env.setParallelism(1);

	DataStream<String> source = env.addSource(new InfiniteTestSource());

	source.transform("Custom Operator", BasicTypeInfo.STRING_TYPE_INFO, new TimerOperator(ChainingStrategy.NEVER));

	boolean testSuccess = false;
	try {
		env.execute("Timer test");
	} catch (JobExecutionException e) {
		if (e.getCause() instanceof TimerException) {
			TimerException te = (TimerException) e.getCause();
			if (te.getCause() instanceof RuntimeException) {
				RuntimeException re = (RuntimeException) te.getCause();
				if (re.getMessage().equals("TEST SUCCESS")) {
					testSuccess = true;
				} else {
					throw e;
				}
			} else {
				throw e;
			}
		} else {
			throw e;
		}
	}
	Assert.assertTrue(testSuccess);
}
 
Author: axbaretto, Project: flink, Lines of code: 37, Source file: StreamTaskTimerITCase.java

Example 4: createDataStream

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class this method depends on
@SuppressWarnings("unchecked")
public static <OUT> DataStream<OUT> createDataStream(SiddhiOperatorContext context, DataStream<Tuple2<String, Object>> namedStream) {
	return namedStream.transform(context.getName(), context.getOutputStreamType(), new SiddhiStreamOperator(context));
}
 
Author: haoch, Project: flink-siddhi, Lines of code: 5, Source file: SiddhiStreamFactory.java

Example 5: createDataStream

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class this method depends on
@SuppressWarnings("unchecked")
public static <OUT> DataStream<OUT> createDataStream(SiddhiOperatorContext context, DataStream<Tuple2<String, Object>> namedStream) {
	return namedStream.transform(context.getName(), context.getOutputStreamType(), new SiddhiStreamOperator(context));
}
 
Author: apache, Project: bahir-flink, Lines of code: 5, Source file: SiddhiStreamFactory.java

Example 6: writeToKafkaWithTimestamps

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class this method depends on
/**
 * Creates a FlinkKafkaProducer for a given topic. The sink writes the given DataStream to
 * the topic.
 *
 * This constructor allows writing timestamps to Kafka; it follows approach (b) (see above).
 *
 *  @param inStream The stream to write to Kafka
 *  @param topicId The name of the target topic
 *  @param serializationSchema A serializable serialization schema for turning user objects into a Kafka-consumable byte[], supporting key/value messages
 *  @param producerConfig Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required property.
 *  @param customPartitioner A serializable partitioner for assigning messages to Kafka partitions.
 */
public static <T> FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(DataStream<T> inStream,
																				String topicId,
																				KeyedSerializationSchema<T> serializationSchema,
																				Properties producerConfig,
																				KafkaPartitioner<T> customPartitioner) {

	GenericTypeInfo<Object> objectTypeInfo = new GenericTypeInfo<>(Object.class);
	FlinkKafkaProducer010<T> kafkaProducer = new FlinkKafkaProducer010<>(topicId, serializationSchema, producerConfig, customPartitioner);
	SingleOutputStreamOperator<Object> transformation = inStream.transform("FlinKafkaProducer 0.10.x", objectTypeInfo, kafkaProducer);
	return new FlinkKafkaProducer010Configuration<>(transformation, kafkaProducer);
}
 
Author: axbaretto, Project: flink, Lines of code: 24, Source file: FlinkKafkaProducer010.java


Note: The org.apache.flink.streaming.api.datastream.DataStream.transform method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code belongs to the original authors. Please observe the corresponding project's license when distributing or using the code. Do not reproduce this article without permission.