

Java FlinkKafkaProducer010.writeToKafkaWithTimestamps Method Code Examples

This article collects typical usage examples of the Java method org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.writeToKafkaWithTimestamps. If you are wondering what FlinkKafkaProducer010.writeToKafkaWithTimestamps does, how to call it, or what it looks like in real code, the curated examples below may help. You can also explore further usage examples of the enclosing class, org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.


Below are 4 code examples of FlinkKafkaProducer010.writeToKafkaWithTimestamps, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code examples.
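Before the examples, here is a minimal, self-contained sketch of the call pattern they all share. It assumes a Flink 1.2–1.4 era classpath (where this static method and its nested `FlinkKafkaProducer010Configuration` existed); the broker address `localhost:9092` and topic `demo-topic` are placeholders, not values from the examples below.

```java
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class WriteToKafkaSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical broker address; replace with your own cluster.
    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092");

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    DataStream<String> stream = env.fromElements("a", "b", "c");

    // Attach the stream to Kafka as a sink; Flink record timestamps are
    // written into the Kafka 0.10 message timestamp field.
    FlinkKafkaProducer010Configuration<String> config =
        FlinkKafkaProducer010.writeToKafkaWithTimestamps(stream, "demo-topic",
            new SimpleStringSchema(), props);
    config.setLogFailuresOnly(false);  // fail the job on send errors instead of only logging them
    config.setFlushOnCheckpoint(true); // flush in-flight records on each checkpoint

    env.execute("writeToKafkaWithTimestamps sketch");
  }
}
```

Note that the method returns a configuration handle rather than a plain sink, which is why every example below tweaks `setLogFailuresOnly` and `setFlushOnCheckpoint` on the returned object.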

Example 1: writeEnrichedStream

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010; // import the package/class this method depends on
private static void writeEnrichedStream(DataStream<AisMessage> enrichedAisMessagesStream,
    String parsingConfig, boolean writeOutputStreamToFile, String outputLineDelimiter,
    String outputPath, String outputStreamTopic) throws IOException {

  if (writeOutputStreamToFile) {
    enrichedAisMessagesStream.map(new AisMessagesToCsvMapper(outputLineDelimiter)).writeAsText(
        outputPath, WriteMode.OVERWRITE);
    return;
  }

  // Write to Kafka
  Properties producerProps = AppUtils.getKafkaProducerProperties();

  FlinkKafkaProducer010Configuration<AisMessage> myProducerConfig =
      FlinkKafkaProducer010.writeToKafkaWithTimestamps(enrichedAisMessagesStream,
          outputStreamTopic, new AisMessageCsvSchema(parsingConfig, outputLineDelimiter),
          producerProps);
  myProducerConfig.setLogFailuresOnly(false);
  myProducerConfig.setFlushOnCheckpoint(true);

}
 
Developer: ehabqadah, Project: in-situ-processing-datAcron, Lines: 22, Source: InSituProcessingApp.java

Example 2: configuration

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010; // import the package/class this method depends on
public static void configuration(DataStream<String> stream, String topic, Properties properties) {

  // using Apache Kafka as a sink for serialized generic output
  FlinkKafkaProducer010.FlinkKafkaProducer010Configuration kafkaConfig =
      FlinkKafkaProducer010.writeToKafkaWithTimestamps(
          stream,
          topic,
          new SimpleStringSchema(),
          properties);

  kafkaConfig.setLogFailuresOnly(false);
  kafkaConfig.setFlushOnCheckpoint(true);
}
 
Developer: ProjectEmber, Project: project-ember, Lines: 14, Source: EmberKafkaProducer.java

Example 3: configuration

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010; // import the package/class this method depends on
public static void configuration(DataStream<StreetLamp> stream, Properties properties) {

  // using Apache Kafka as a sink for the "control" topic
  FlinkKafkaProducer010.FlinkKafkaProducer010Configuration kafkaConfig =
      FlinkKafkaProducer010.writeToKafkaWithTimestamps(
          stream,
          "control",
          new ControlSerializationSchema(),
          properties);

  kafkaConfig.setLogFailuresOnly(false);
  kafkaConfig.setFlushOnCheckpoint(true);
}
 
Developer: ProjectEmber, Project: project-ember, Lines: 15, Source: EmberKafkaControlSink.java

Example 4: main

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010; // import the package/class this method depends on
/**
 * The main entry method.
 */
public static void main(String[] args) throws Exception {

  String checkPointsPath =
      Paths.get(configs.getStringProp("flinkCheckPointsPath") + "/" + System.currentTimeMillis())
          .toUri().toString();

  int parallelism = configs.getIntProp("parallelism");
  String inputHdfsFile = configs.getStringProp("inputHDFSFilePath");
  String outputTopicName = configs.getStringProp("outputHDFSKafkaTopic");

  // Set up the execution environment
  final StreamExecutionEnvironment env =
      new StreamExecutionEnvBuilder().setParallelism(parallelism).setStateBackend(checkPointsPath)
          .build();
  // Read the HDFS file
  DataStreamSource<String> inputTextStream =
      env.readTextFile(inputHdfsFile).setParallelism(parallelism);

  FlinkKafkaProducer010Configuration<String> myProducerConfig =
      FlinkKafkaProducer010.writeToKafkaWithTimestamps(inputTextStream, outputTopicName,
          new SimpleStringSchema(), AppUtils.getKafkaProducerProperties());


  myProducerConfig.setLogFailuresOnly(false);
  myProducerConfig.setFlushOnCheckpoint(true);


  System.out.println(env.getExecutionPlan());

  try {
    JobExecutionResult executionResult = env.execute("HDFS to Kafka stream producer");
    System.out.println("Full execution time=" + executionResult.getNetRuntime(TimeUnit.MINUTES));
  } catch (Exception e) {
    System.out.println(e.getMessage());
  }
}
 
Developer: ehabqadah, Project: in-situ-processing-datAcron, Lines: 45, Source: HdfsToKafkaProducer.java


Note: the org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.writeToKafkaWithTimestamps examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Consult each project's license before distributing or using the code, and do not reproduce this article without permission.