

Java DataStream.assignTimestampsAndWatermarks Method Code Examples

This article compiles typical usage examples of the Java method org.apache.flink.streaming.api.datastream.DataStream.assignTimestampsAndWatermarks, collected from open-source projects. If you are wondering what DataStream.assignTimestampsAndWatermarks does, how to use it, or what real-world usage looks like, the curated method examples below may help. You can also explore further usage examples of the enclosing class, org.apache.flink.streaming.api.datastream.DataStream.


The following presents 8 code examples of DataStream.assignTimestampsAndWatermarks, ordered by popularity by default.
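
Before diving into the collected examples, here is a minimal, self-contained sketch of the call itself. It is not taken from any of the projects below; the Tuple2 element type and values are made up for illustration, and it assumes the pre-1.11 DataStream API (BoundedOutOfOrdernessTimestampExtractor) that the examples in this article use.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

public class AssignTimestampsSketch {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

		// Made-up (id, epoch-millis) elements standing in for a real source.
		DataStream<Tuple2<String, Long>> events = env.fromElements(
				Tuple2.of("a", 1_000L), Tuple2.of("c", 1_500L), Tuple2.of("b", 2_000L));

		// assignTimestampsAndWatermarks returns a NEW stream that carries the
		// timestamps and watermarks; keep the result, not the original reference.
		DataStream<Tuple2<String, Long>> withTimestamps = events.assignTimestampsAndWatermarks(
				new BoundedOutOfOrdernessTimestampExtractor<Tuple2<String, Long>>(Time.seconds(5)) {
					@Override
					public long extractTimestamp(Tuple2<String, Long> element) {
						return element.f1; // event time in epoch milliseconds
					}
				});

		withTimestamps.print();
		env.execute("assign-timestamps-sketch");
	}
}

The extractor above tolerates events up to five seconds out of order; watermarks trail the highest timestamp seen so far by that bound.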

Example 1: setupKayedRawMessagesStream

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class required by this method
/**
 * Sets up the keyed stream of a raw messages stream.
 * 
 * @param env the stream execution environment
 * @param parsingConfig the parsing configuration
 * @return the keyed stream of raw AIS messages, keyed by message ID
 */
private static KeyedStream<Tuple3<String, Long, String>, Tuple> setupKayedRawMessagesStream(
    final StreamExecutionEnvironment env, String parsingConfig) {
  DataStream<Tuple3<String, Long, String>> rawStream =
      env.addSource(
          new FileLinesStreamSource(configs.getStringProp("aisDataSetFilePath"), parsingConfig, true))
          .flatMap(new RawStreamMapper(parsingConfig)).setParallelism(1);

  // Assign timestamps and watermarks to the AIS messages based on their embedded timestamps
  DataStream<Tuple3<String, Long, String>> rawStreamWithTimeStamp =
      rawStream.assignTimestampsAndWatermarks(new RawMessageTimestampAssigner());

  // Construct the keyed stream (i.e., trajectories stream) of the raw messages by grouping them
  // based on the message ID (MMSI for vessels)
  KeyedStream<Tuple3<String, Long, String>, Tuple> keyedAisMessagesStream =
      rawStreamWithTimeStamp.keyBy(0).process(new RawMessagesSorter()).keyBy(0);
  return keyedAisMessagesStream;
}
 
Developer: ehabqadah, Project: in-situ-processing-datAcron, Lines: 26, Source: RawStreamSimulator.java

Example 2: getEvents

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class required by this method
public static DataStream<EventPostComment> getEvents(StreamExecutionEnvironment env, AppConfiguration config) {
	String postSource = config.getPosts();
	String commentSource = config.getComments();
	
	DataStream<EventPostComment> events = null;
	
	if (postSource == null || commentSource == null) {
		List<EventPostComment> list = EventPostCommentStreamgen.getDefault();
		events = env.fromCollection(list); 
	} else {
		events = env.addSource(new EventPostCommentSource(postSource, commentSource), "events-pc-source");
	}			
	
	// assignTimestampsAndWatermarks returns a new stream; reassign so that the
	// returned stream actually carries the timestamps and watermarks
	events = events.assignTimestampsAndWatermarks(new AscendingTimestamper<EventPostComment>());
	
	return events;		
}
 
Developer: 3Cores, Project: sostream, Lines: 18, Source: EventPostCommentStreamgen.java

Example 3: getEvents

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class required by this method
public static DataStream<EventCommentFriendshipLike> getEvents(StreamExecutionEnvironment env, AppConfiguration config) {
	String commentSource = config.getComments();
	String friendshipSource = config.getFriendships();
	String likeSource = config.getLikes();		
	
	DataStream<EventCommentFriendshipLike> events = null;
	
	if (commentSource == null || friendshipSource == null || likeSource == null) {
		List<EventCommentFriendshipLike> list = EventCommentFriendshipLikeStreamgen.getDefault();
		events = env.fromCollection(list); 
	} else {
		events = env.addSource(new EventCommentFriendshipLikeSource(commentSource, friendshipSource, likeSource), "events-cfl-source");
	}			
	
	// assignTimestampsAndWatermarks returns a new stream; reassign so that the
	// returned stream actually carries the timestamps and watermarks
	events = events.assignTimestampsAndWatermarks(new AscendingTimestamper<EventCommentFriendshipLike>());
	
	return events;		
}
 
Developer: 3Cores, Project: sostream, Lines: 19, Source: EventCommentFriendshipLikeStreamgen.java

Example 4: testUnboundedPojoStreamAndReturnPojo

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class required by this method
@Test
public void testUnboundedPojoStreamAndReturnPojo() throws Exception {
	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
	DataStream<Event> input = env.addSource(new RandomEventSource(5));
	// assignTimestampsAndWatermarks returns a new stream; reassign so the
	// timestamps are visible to the downstream operators
	input = input.assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Event>() {
		@Override
		public long extractAscendingTimestamp(Event element) {
			return element.getTimestamp();
		}
	});

	DataStream<Event> output = SiddhiCEP
		.define("inputStream", input, "id", "name", "price", "timestamp")
		.cql("from inputStream select timestamp, id, name, price insert into  outputStream")
		.returns("outputStream", Event.class);

	String resultPath = tempFolder.newFile().toURI().toString();
	output.writeAsText(resultPath, FileSystem.WriteMode.OVERWRITE);
	env.execute();
	assertEquals(5, getLineCount(resultPath));
}
 
Developer: haoch, Project: flink-siddhi, Lines: 22, Source: SiddhiCEPITCase.java

Example 5: testUnboundedPojoStreamAndReturnPojo

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class required by this method
@Test
public void testUnboundedPojoStreamAndReturnPojo() throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    DataStream<Event> input = env.addSource(new RandomEventSource(5));
    // assignTimestampsAndWatermarks returns a new stream; reassign so the
    // timestamps are visible to the downstream operators
    input = input.assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Event>() {
        @Override
        public long extractAscendingTimestamp(Event element) {
            return element.getTimestamp();
        }
    });

    DataStream<Event> output = SiddhiCEP
        .define("inputStream", input, "id", "name", "price", "timestamp")
        .cql("from inputStream select timestamp, id, name, price insert into  outputStream")
        .returns("outputStream", Event.class);

    String resultPath = tempFolder.newFile().toURI().toString();
    output.writeAsText(resultPath, FileSystem.WriteMode.OVERWRITE);
    env.execute();
    assertEquals(5, getLineCount(resultPath));
}
 
Developer: apache, Project: bahir-flink, Lines: 22, Source: SiddhiCEPITCase.java

Example 6: getAisMessagesStream

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class required by this method
/**
 * Gets the AIS messages stream from a file or a Kafka stream.
 * 
 * @param env the stream execution environment
 * @param streamSource the type of the stream source (KAFKA, FILE, or HDFS)
 * @param filePathOrTopicProperty the property holding the data file path or the topic name of
 *        the input Kafka stream
 * @param parsingConfig the parsing configuration
 * @param outputLineDelimiter the delimiter of the output line fields
 * @return the AIS messages stream with timestamps and watermarks assigned, or null for an
 *         unknown stream source
 */
public static DataStream<AisMessage> getAisMessagesStream(StreamExecutionEnvironment env,
    StreamSourceType streamSource, String filePathOrTopicProperty, String parsingConfig,
    String outputLineDelimiter) {
  DataStream<AisMessage> aisMessagesStream = null;
  String fileOrTopicName = configs.getStringProp(filePathOrTopicProperty);
  switch (streamSource) {
    case KAFKA:
      Properties kafkaProps = getKafkaConsumerProperties();
      // create a Kafka consumer
      FlinkKafkaConsumer010<AisMessage> kafkaConsumer =
          new FlinkKafkaConsumer010<AisMessage>(fileOrTopicName, new AisMessageCsvSchema(
              parsingConfig, outputLineDelimiter), kafkaProps);

      kafkaConsumer.assignTimestampsAndWatermarks(new AisMessagesTimeAssigner());
      aisMessagesStream = env.addSource(kafkaConsumer);
      break;
    case FILE:

      DataStream<AisMessage> aisMessagesStreamWithoutTime =
          env.addSource(new FileLinesStreamSource(fileOrTopicName, parsingConfig))
              .flatMap(new CsvLineToAisMessageMapper(parsingConfig)).setParallelism(1);

      // Assign the timestamp of the AIS messages based on their timestamps
      aisMessagesStream =
          aisMessagesStreamWithoutTime
              .assignTimestampsAndWatermarks(new AisMessagesTimeAssigner());

      break;

    case HDFS:
      aisMessagesStream =
          env.readTextFile(fileOrTopicName).flatMap(new CsvLineToAisMessageMapper(parsingConfig))
              .assignTimestampsAndWatermarks(new AisMessagesTimeAssigner());
      break;
    default:
      return null;
  }
  return aisMessagesStream;
}
 
Developer: ehabqadah, Project: in-situ-processing-datAcron, Lines: 50, Source: AppUtils.java

Example 7: main

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class required by this method
public static void main(String[] args) throws Exception {
    ParameterTool params = ParameterTool.fromArgs(args);
    FlinkPravegaParams helper = new FlinkPravegaParams(params);
    StreamId stream = helper.createStreamFromParam("input", "examples/turbineHeatTest");

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

    // 1. read and decode the sensor events from a Pravega stream
    long startTime = params.getLong("start", 0L);
    FlinkPravegaReader<String> reader = helper.newReader(stream, startTime, String.class);
    DataStream<SensorEvent> events = env.addSource(reader, "input").map(new SensorMapper()).name("events");

    // 2. extract timestamp information to support 'event-time' processing
    SingleOutputStreamOperator<SensorEvent> timestamped = events.assignTimestampsAndWatermarks(
            new BoundedOutOfOrdernessTimestampExtractor<SensorEvent>(Time.seconds(10)) {
        @Override
        public long extractTimestamp(SensorEvent element) {
            return element.getTimestamp();
        }
    });
    timestamped.print();

    // 3. summarize the temperature data for each sensor
    SingleOutputStreamOperator<SensorAggregate> summaries = timestamped
            .keyBy("sensorId")
            .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(8)))
            .fold(null, new SensorAggregator()).name("summaries");

    // 4. save to HDFS and print to stdout.  Refer to the TaskManager's 'Stdout' view in the Flink UI.
    summaries.print().name("stdout");
    if (params.has("output")) {
        summaries.writeAsCsv(params.getRequired("output"), FileSystem.WriteMode.OVERWRITE);
    }

    env.execute("TurbineHeatProcessor_" + stream);
}
 
Developer: pravega, Project: pravega-samples, Lines: 38, Source: TurbineHeatProcessor.java

Example 8: testTimestampExtractorWithAutoInterval

import org.apache.flink.streaming.api.datastream.DataStream; // import the package/class required by this method
/**
 * Tests whether timestamps are properly extracted by the timestamp extractor and whether
 * watermarks are correctly forwarded from it with the auto watermark interval.
 */
@Test
public void testTimestampExtractorWithAutoInterval() throws Exception {
	final int numElements = 10;

	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

	env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
	env.getConfig().setAutoWatermarkInterval(10);
	env.setParallelism(1);
	env.getConfig().disableSysoutLogging();

	DataStream<Integer> source1 = env.addSource(new SourceFunction<Integer>() {
		@Override
		public void run(SourceContext<Integer> ctx) throws Exception {
			int index = 1;
			while (index <= numElements) {
				ctx.collect(index);
				latch.await();
				index++;
			}
		}

		@Override
		public void cancel() {}
	});

	DataStream<Integer> extractOp = source1.assignTimestampsAndWatermarks(
			new AscendingTimestampExtractor<Integer>() {
				@Override
				public long extractAscendingTimestamp(Integer element) {
					return element;
				}
			});

	extractOp
			.transform("Watermark Check", BasicTypeInfo.INT_TYPE_INFO, new CustomOperator(true))
			.transform("Timestamp Check",
					BasicTypeInfo.INT_TYPE_INFO,
					new TimestampCheckingOperator());

	// verify that extractor picks up source parallelism
	Assert.assertEquals(extractOp.getTransformation().getParallelism(), source1.getTransformation().getParallelism());

	env.execute();

	// verify that we get NUM_ELEMENTS watermarks
	for (int j = 0; j < numElements; j++) {
		if (!CustomOperator.finalWatermarks[0].get(j).equals(new Watermark(j))) {
			long wm = CustomOperator.finalWatermarks[0].get(j).getTimestamp();
			Assert.fail("Wrong watermark. Expected: " + j + " Found: " + wm + " All: " + CustomOperator.finalWatermarks[0]);
		}
	}

	// the input is finite, so it should have a MAX Watermark
	assertEquals(Watermark.MAX_WATERMARK,
			CustomOperator.finalWatermarks[0].get(CustomOperator.finalWatermarks[0].size() - 1));
}
 
Developer: axbaretto, Project: flink, Lines: 63, Source: TimestampITCase.java
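
A closing note: in Flink 1.11 and later, the extractor classes used throughout these examples (AscendingTimestampExtractor, BoundedOutOfOrdernessTimestampExtractor) are deprecated in favor of an assignTimestampsAndWatermarks(WatermarkStrategy) overload. Below is a minimal sketch of the equivalent call, again with made-up Tuple2 elements rather than code from any project above.

import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class WatermarkStrategySketch {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

		// Made-up (id, epoch-millis) elements standing in for a real source.
		DataStream<Tuple2<String, Long>> events = env.fromElements(
				Tuple2.of("a", 1_000L), Tuple2.of("b", 2_000L), Tuple2.of("c", 1_500L));

		// Bounded-out-of-orderness watermarks with a 5s bound; the lambda extracts event time.
		DataStream<Tuple2<String, Long>> withTimestamps = events.assignTimestampsAndWatermarks(
				WatermarkStrategy.<Tuple2<String, Long>>forBoundedOutOfOrderness(Duration.ofSeconds(5))
						.withTimestampAssigner((element, previousTimestamp) -> element.f1));

		withTimestamps.print();
		env.execute("watermark-strategy-sketch");
	}
}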


Note: The org.apache.flink.streaming.api.datastream.DataStream.assignTimestampsAndWatermarks method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by their authors; copyright of the source code remains with the original authors. For distribution and use, please refer to the corresponding project's license; do not reproduce without permission.