This article collects typical usage examples of the Java method org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09.assignTimestampsAndWatermarks. If you are wondering how FlinkKafkaConsumer09.assignTimestampsAndWatermarks is used in practice and what it looks like in real code, the curated examples below may help. You can also read further about the enclosing class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09.
Below are 2 code examples of the FlinkKafkaConsumer09.assignTimestampsAndWatermarks method.
Example 1: main
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09; // import the package/class this method depends on

public static void main(String[] args) throws Exception {
    final int popThreshold = 20; // threshold for popular places

    // set up the streaming execution environment
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
    env.getConfig().setAutoWatermarkInterval(1000);

    // configure the Kafka consumer
    Properties kafkaProps = new Properties();
    kafkaProps.setProperty("zookeeper.connect", LOCAL_ZOOKEEPER_HOST);
    kafkaProps.setProperty("bootstrap.servers", LOCAL_KAFKA_BROKER);
    kafkaProps.setProperty("group.id", RIDE_SPEED_GROUP);
    // always read the Kafka topic from the start
    kafkaProps.setProperty("auto.offset.reset", "earliest");

    // create a Kafka consumer
    FlinkKafkaConsumer09<TaxiRide> consumer = new FlinkKafkaConsumer09<>(
            "cleansedRides",
            new TaxiRideSchema(),
            kafkaProps);
    // assign a timestamp extractor to the consumer
    consumer.assignTimestampsAndWatermarks(new TaxiRideTSExtractor());

    // create a TaxiRide data stream
    DataStream<TaxiRide> rides = env.addSource(consumer);

    // find popular places
    DataStream<Tuple5<Float, Float, Long, Boolean, Integer>> popularPlaces = rides
            // match each ride to a grid cell and an event type (start or end)
            .map(new GridCellMatcher())
            // partition by cell id and event type
            .keyBy(0, 1)
            // build a sliding window: 15-minute windows sliding every 5 minutes
            .timeWindow(Time.minutes(15), Time.minutes(5))
            // count ride events in each window
            .apply(new RideCounter())
            // keep only counts at or above the popularity threshold
            .filter(new FilterFunction<Tuple4<Integer, Long, Boolean, Integer>>() {
                @Override
                public boolean filter(Tuple4<Integer, Long, Boolean, Integer> count) throws Exception {
                    return count.f3 >= popThreshold;
                }
            })
            // map the grid cell back to coordinates
            .map(new GridToCoordinates());

    //popularPlaces.print();
    popularPlaces.writeAsText("file:///C:/Users/ht/kafka_java.txt");

    // execute the transformation pipeline
    env.execute("Popular Places from Kafka");
}
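The TaxiRideTSExtractor passed to assignTimestampsAndWatermarks above is not shown in this example. In Flink training code it typically extends BoundedOutOfOrdernessTimestampExtractor, whose periodic watermark logic tracks the largest timestamp seen so far and emits watermarks that lag behind it by a fixed out-of-orderness bound. A minimal, framework-free sketch of that logic (the class and its API are illustrative, not taken from the source above):

```java
// Sketch of the bounded-out-of-orderness watermark logic behind
// BoundedOutOfOrdernessTimestampExtractor-style assigners (no Flink dependency).
public class BoundedOutOfOrdernessSketch {
    private final long maxOutOfOrdernessMillis; // allowed event-time lateness
    private long currentMaxTimestamp = Long.MIN_VALUE;

    public BoundedOutOfOrdernessSketch(long maxOutOfOrdernessMillis) {
        this.maxOutOfOrdernessMillis = maxOutOfOrdernessMillis;
    }

    // called per record, like extractTimestamp(element, previousTimestamp)
    public long extractTimestamp(long eventTimestamp) {
        currentMaxTimestamp = Math.max(currentMaxTimestamp, eventTimestamp);
        return eventTimestamp;
    }

    // called periodically (see setAutoWatermarkInterval(1000) above),
    // like getCurrentWatermark()
    public long getCurrentWatermark() {
        return currentMaxTimestamp - maxOutOfOrdernessMillis;
    }

    public static void main(String[] args) {
        BoundedOutOfOrdernessSketch assigner = new BoundedOutOfOrdernessSketch(60_000); // 60 s bound
        assigner.extractTimestamp(1_000_000);
        assigner.extractTimestamp(990_000); // late record does not move the max
        System.out.println(assigner.getCurrentWatermark()); // prints 940000
    }
}
```

Because the watermark only advances with the maximum timestamp, a 15-minute window in the pipeline above fires once a record arrives whose timestamp exceeds the window end plus the out-of-orderness bound.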
Example 2: main
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09; // import the package/class this method depends on

public static void main(String[] args) throws Exception {
    // set up the streaming execution environment
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // env.enableCheckpointing(5000);
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

    Properties properties = new Properties();
    properties.setProperty("bootstrap.servers", "localhost:9092");
    properties.setProperty("zookeeper.connect", "localhost:2181");
    properties.setProperty("group.id", "test");

    FlinkKafkaConsumer09<String> myConsumer = new FlinkKafkaConsumer09<>("temp", new SimpleStringSchema(),
            properties);
    myConsumer.assignTimestampsAndWatermarks(new CustomWatermarkEmitter());

    DataStream<Tuple2<String, Double>> keyedStream = env.addSource(myConsumer).flatMap(new Splitter()).keyBy(0)
            .timeWindow(Time.seconds(300))
            .apply(new WindowFunction<Tuple2<String, Double>, Tuple2<String, Double>, Tuple, TimeWindow>() {
                @Override
                public void apply(Tuple key, TimeWindow window, Iterable<Tuple2<String, Double>> input,
                        Collector<Tuple2<String, Double>> out) throws Exception {
                    double sum = 0.0;
                    int count = 0;
                    for (Tuple2<String, Double> record : input) {
                        sum += record.f1;
                        count++;
                    }
                    // reuse the first record as the output carrier,
                    // replacing its value with the window average
                    Tuple2<String, Double> result = input.iterator().next();
                    result.f1 = sum / count;
                    out.collect(result);
                }
            });
    keyedStream.print();

    // execute program
    env.execute("Flink Streaming Java API Skeleton");
}
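The WindowFunction above emits one (key, average) pair per 300-second window by summing and counting the Double values of all records in the window. That averaging step can be checked in isolation as plain Java, with the record type simplified to a Map.Entry (the class and method names here are illustrative, not part of the example above):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class WindowAverageSketch {
    // Mirrors the apply() body: sum and count the values,
    // then emit one (key, average) pair for the window.
    public static Map.Entry<String, Double> average(List<Map.Entry<String, Double>> input) {
        double sum = 0.0;
        int count = 0;
        for (Map.Entry<String, Double> record : input) {
            sum += record.getValue();
            count++;
        }
        // like the Flink example, take the key from the first record
        String key = input.get(0).getKey();
        return new SimpleEntry<>(key, sum / count);
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Double>> window = Arrays.asList(
                new SimpleEntry<>("temp", 20.0),
                new SimpleEntry<>("temp", 22.0),
                new SimpleEntry<>("temp", 24.0));
        System.out.println(average(window)); // prints temp=22.0
    }
}
```

Note that the Flink version mutates and re-emits the first input Tuple2 rather than allocating a new one; this sketch allocates a fresh entry instead, which is the safer default outside a tight inner loop.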