This article collects typical usage examples of the Java method org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.writeToKafkaWithTimestamps. If you are wondering how FlinkKafkaProducer010.writeToKafkaWithTimestamps is used in practice, the curated code samples below should help. You can also explore the enclosing class, org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010, for more context.
Four code examples of FlinkKafkaProducer010.writeToKafkaWithTimestamps are shown below, sorted by popularity by default.
Example 1: writeEnrichedStream
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010; // import the class this method depends on
private static void writeEnrichedStream(DataStream<AisMessage> enrichedAisMessagesStream,
    String parsingConfig, boolean writeOutputStreamToFile, String outputLineDelimiter,
    String outputPath, String outputStreamTopic) throws IOException {
  if (writeOutputStreamToFile) {
    enrichedAisMessagesStream.map(new AisMessagesToCsvMapper(outputLineDelimiter))
        .writeAsText(outputPath, WriteMode.OVERWRITE);
    return;
  }
  // Write the enriched stream to Kafka
  Properties producerProps = AppUtils.getKafkaProducerProperties();
  FlinkKafkaProducer010Configuration<AisMessage> myProducerConfig =
      FlinkKafkaProducer010.writeToKafkaWithTimestamps(enrichedAisMessagesStream,
          outputStreamTopic, new AisMessageCsvSchema(parsingConfig, outputLineDelimiter),
          producerProps);
  myProducerConfig.setLogFailuresOnly(false);
  myProducerConfig.setFlushOnCheckpoint(true);
}
Example 2: configuration
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010; // import the class this method depends on
public static void configuration(DataStream<String> stream, String topic, Properties properties) {
  // using Apache Kafka as a sink for serialized generic output
  FlinkKafkaProducer010.FlinkKafkaProducer010Configuration kafkaConfig = FlinkKafkaProducer010
      .writeToKafkaWithTimestamps(
          stream,
          topic,
          new SimpleStringSchema(),
          properties
      );
  kafkaConfig.setLogFailuresOnly(false);
  kafkaConfig.setFlushOnCheckpoint(true);
}
Example 3: configuration
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010; // import the class this method depends on
public static void configuration(DataStream<StreetLamp> stream, Properties properties) {
  // using Apache Kafka as a sink for the "control" topic
  FlinkKafkaProducer010.FlinkKafkaProducer010Configuration kafkaConfig = FlinkKafkaProducer010
      .writeToKafkaWithTimestamps(
          stream,
          "control",
          new ControlSerializationSchema(),
          properties
      );
  kafkaConfig.setLogFailuresOnly(false);
  kafkaConfig.setFlushOnCheckpoint(true);
}
Example 4: main
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010; // import the class this method depends on
/**
 * The main entry method.
 */
public static void main(String[] args) throws Exception {
  String checkpointsPath =
      Paths.get(configs.getStringProp("flinkCheckPointsPath") + "/" + System.currentTimeMillis())
          .toUri().toString();
  int parallelism = configs.getIntProp("parallelism");
  String inputHdfsFile = configs.getStringProp("inputHDFSFilePath");
  String outputTopicName = configs.getStringProp("outputHDFSKafkaTopic");
  // Set up the execution environment
  final StreamExecutionEnvironment env =
      new StreamExecutionEnvBuilder().setParallelism(parallelism).setStateBackend(checkpointsPath)
          .build();
  // Read the HDFS file
  DataStreamSource<String> inputTextStream =
      env.readTextFile(inputHdfsFile).setParallelism(parallelism);
  FlinkKafkaProducer010Configuration<String> myProducerConfig =
      FlinkKafkaProducer010.writeToKafkaWithTimestamps(inputTextStream, outputTopicName,
          new SimpleStringSchema(), AppUtils.getKafkaProducerProperties());
  myProducerConfig.setLogFailuresOnly(false);
  myProducerConfig.setFlushOnCheckpoint(true);
  System.out.println(env.getExecutionPlan());
  try {
    // Print the runtime only on success; reading the result outside the try
    // block would risk a NullPointerException when execute() fails.
    JobExecutionResult executionResult = env.execute("HDFS to Kafka stream producer");
    System.out.println("Full execution time=" + executionResult.getNetRuntime(TimeUnit.MINUTES));
  } catch (Exception e) {
    System.out.println(e.getMessage());
  }
}