

Java KafkaProducer.flush Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.clients.producer.KafkaProducer.flush. If you are wondering what exactly KafkaProducer.flush does, how to call it, or what a real-world use looks like, the curated examples below should help. You can also explore further usage examples of the enclosing class, org.apache.kafka.clients.producer.KafkaProducer.


The following 8 code examples of KafkaProducer.flush are shown below, sorted by popularity by default.
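Before the harvested examples, here is a minimal sketch of the contract they all rely on: KafkaProducer.send is asynchronous and only appends the record to the producer's in-memory buffer, while KafkaProducer.flush blocks until every record sent up to that point has either been acknowledged or failed. The broker address localhost:9092, the topic example-topic, and the class name FlushSketch below are placeholders for illustration, not taken from any of the projects that follow.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FlushSketch {
    public static void main(String[] args) {
        final Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous: the record is only appended to an in-memory buffer
            producer.send(new ProducerRecord<>("example-topic", "key", "value")); // placeholder topic
            // flush() blocks until every previously sent record has completed or failed
            producer.flush();
        } // close() also flushes remaining records, but an explicit flush() makes the intent clear
    }
}

Note that close() also flushes outstanding records, which is why most of the examples below can safely call close() immediately after flush().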

Example 1: publishDummyData

import org.apache.kafka.clients.producer.KafkaProducer; // import the package/class this method depends on
public void publishDummyData() {
    final String topic = "TestTopic";

    // Create publisher
    final Map<String, Object> config = new HashMap<>();
    config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

    final KafkaProducer<String, String> producer = new KafkaProducer<>(config);
    for (int charCode = 65; charCode < 91; charCode++) {
        final String key = String.valueOf((char) charCode); // 'A' through 'Z'

        producer.send(new ProducerRecord<>(topic, key, key));
    }
    producer.flush();
    producer.close();
}
 
Developer: SourceLabOrg, Project: kafka-webview, Lines: 20, Source: WebKafkaConsumerTest.java

Example 2: publishDummyDataNumbers

import org.apache.kafka.clients.producer.KafkaProducer; // import the package/class this method depends on
public void publishDummyDataNumbers() {
    final String topic = "NumbersTopic";

    // Create publisher
    final Map<String, Object> config = new HashMap<>();
    config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
    config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
    config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

    final KafkaProducer<Integer, Integer> producer = new KafkaProducer<>(config);
    for (int value = 0; value < 10000; value++) {
        producer.send(new ProducerRecord<>(topic, value, value));
    }
    producer.flush();
    producer.close();
}
 
Developer: SourceLabOrg, Project: kafka-webview, Lines: 17, Source: WebKafkaConsumerTest.java

Example 3: flushToTopic

import org.apache.kafka.clients.producer.KafkaProducer; // import the package/class this method depends on
private void flushToTopic(final StatAggregator statAggregator,
                          final String topic,
                          final EventStoreTimeIntervalEnum newInterval,
                          final KafkaProducer<StatEventKey, StatAggregate> producer) {

    Preconditions.checkNotNull(statAggregator);
    Preconditions.checkNotNull(producer);

    // The debug logger takes a lambda, so the start time must be captured in a final local,
    // which is wasted work when debug logging is disabled.
    final Instant startTime = Instant.now();

    // Uplift the StatEventKey to the new aggregation interval and put it on the topic.
    // We never try to uplift a key that is already at the highest aggregation interval,
    // so the RuntimeException that cloneAndChangeInterval can throw should never occur.
    statAggregator.getAggregates().entrySet().stream()
            .map(entry -> new ProducerRecord<>(
                    topic,
                    entry.getKey().cloneAndChangeInterval(newInterval),
                    entry.getValue()))
            .peek(producerRecord -> LOGGER.trace("Putting record {} on topic {}", producerRecord, topic))
            .forEach(producer::send);

    LOGGER.debug(() -> String.format("Flushed %s records from interval %s with new interval %s to topic %s in %sms",
            statAggregator.size(), statAggregator.getAggregationInterval(), newInterval, topic, Duration.between(startTime, Instant.now()).toMillis()));

    producer.flush();
}
 
Developer: gchq, Project: stroom-stats, Lines: 29, Source: StatisticsAggregationProcessor.java

Example 4: testOffsetsNotCommittedOnStop

import org.apache.kafka.clients.producer.KafkaProducer; // import the package/class this method depends on
@Test
public void testOffsetsNotCommittedOnStop() throws Exception {
  String message = "testOffsetsNotCommittedOnStop-" + System.nanoTime();

  KafkaChannel channel = startChannel(false);

  KafkaProducer<String, byte[]> producer =
      new KafkaProducer<String, byte[]>(channel.getProducerProps());
  ProducerRecord<String, byte[]> data =
      new ProducerRecord<String, byte[]>(topic, "header-" + message, message.getBytes());
  producer.send(data).get();
  producer.flush();
  producer.close();

  Event event = takeEventWithoutCommittingTxn(channel);
  Assert.assertNotNull(event);
  Assert.assertTrue(Arrays.equals(message.getBytes(), event.getBody()));

  // Stop the channel without committing the transaction
  channel.stop();

  channel = startChannel(false);

  // Message should still be available
  event = takeEventWithoutCommittingTxn(channel);
  Assert.assertNotNull(event);
  Assert.assertTrue(Arrays.equals(message.getBytes(), event.getBody()));
}
 
Developer: moueimei, Project: flume-release-1.7.0, Lines: 29, Source: TestKafkaChannel.java

Example 5: testProduce

import org.apache.kafka.clients.producer.KafkaProducer; // import the package/class this method depends on
public void testProduce() throws Exception {
    Properties producerProps = new Properties();
    producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, testConfig.bootstrapServer);
    ByteArraySerializer serializer = new ByteArraySerializer();
    KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(producerProps, serializer, serializer);
    ProducerRecord<byte[], byte[]> record1 = new ProducerRecord<>(testConfig.topic, message1);
    Future<RecordMetadata> future1 = producer.send(record1);
    ProducerRecord<byte[], byte[]> record2 = new ProducerRecord<>(testConfig.topic, message2);
    Future<RecordMetadata> future2 = producer.send(record2);
    producer.flush();
    future1.get();
    future2.get();
    producer.close();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 15, Source: ClientCompatibilityTest.java

Example 6: main

import org.apache.kafka.clients.producer.KafkaProducer; // import the package/class this method depends on
public static void main(String[] args) throws IOException {
    // set up the producer
    KafkaProducer<String, String> producer;
    try (InputStream props = Resources.getResource("producer.props").openStream()) {
        Properties properties = new Properties();
        properties.load(props);
        producer = new KafkaProducer<>(properties);
    }

    try {
        int i = 0;
        final File file = new File("/home/leiming/DataFlow/imply-2.2.3/quickstart/wikiticker-2016-06-27-sampled.json");
        final BufferedReader br = new BufferedReader(new FileReader(file));
        String st;
        while ((st = br.readLine()) != null) {
            // send each line of the sample file as its own message
            producer.send(new ProducerRecord<String, String>("leidaxia", st));
            // flushing after every send is simple but slow: it blocks until the record is acknowledged
            producer.flush();
            System.out.println("Sent msg num " + i);
            System.out.println("Sent msg " + st);
            i = i + 1;
        }
        br.close();
    } catch (Throwable throwable) {
        throwable.printStackTrace();
    } finally {
        producer.close();
    }

}
 
Developer: leidaxia, Project: kafka-stream-druid, Lines: 45, Source: Producer.java

Example 7: produceRecords

import org.apache.kafka.clients.producer.KafkaProducer; // import the package/class this method depends on
/**
 * Produce some records into the given kafka topic.
 *
 * @param keysAndValues Records you want to produce.
 * @param topicName the topic to produce into.
 * @param partitionId the partition to produce into.
 * @return List of ProducedKafkaRecords.
 */
public List<ProducedKafkaRecord<byte[], byte[]>> produceRecords(
    final Map<byte[], byte[]> keysAndValues,
    final String topicName,
    final int partitionId
) {
    // This holds the records we produced
    List<ProducerRecord<byte[], byte[]>> producedRecords = Lists.newArrayList();

    // This holds futures returned
    List<Future<RecordMetadata>> producerFutures = Lists.newArrayList();

    KafkaProducer<byte[], byte[]> producer = kafkaTestServer.getKafkaProducer(
        ByteArraySerializer.class,
        ByteArraySerializer.class
    );
    for (Map.Entry<byte[], byte[]> entry: keysAndValues.entrySet()) {
        // Construct the record to produce
        ProducerRecord<byte[], byte[]> record = new ProducerRecord<>(topicName, partitionId, entry.getKey(), entry.getValue());
        producedRecords.add(record);

        // Send it.
        producerFutures.add(producer.send(record));
    }

    // Block until all buffered records reach the topic, then close.
    producer.flush();
    logger.info("Produce completed");
    producer.close();

    // Loop thru the futures, and build KafkaRecord objects
    List<ProducedKafkaRecord<byte[], byte[]>> kafkaRecords = Lists.newArrayList();
    try {
        for (int x = 0; x < keysAndValues.size(); x++) {
            final RecordMetadata metadata = producerFutures.get(x).get();
            final ProducerRecord<byte[], byte[]> producerRecord = producedRecords.get(x);

            kafkaRecords.add(ProducedKafkaRecord.newInstance(metadata, producerRecord));
        }
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
        throw new RuntimeException(e);
    }

    return kafkaRecords;
}
 
Developer: salesforce, Project: kafka-junit, Lines: 54, Source: KafkaTestUtils.java

Example 8: testProducerAndConsumer

import org.apache.kafka.clients.producer.KafkaProducer; // import the package/class this method depends on
/**
 * Test that KafkaServer works as expected!
 *
 * This also serves as a decent example of how to use the producer and consumer.
 */
@Test
public void testProducerAndConsumer() throws Exception {
    final int partitionId = 0;

    // Define our message
    final String expectedKey = "my-key";
    final String expectedValue = "my test message";

    // Define the record we want to produce
    ProducerRecord<String, String> producerRecord = new ProducerRecord<>(topicName, partitionId, expectedKey, expectedValue);

    // Create a new producer
    KafkaProducer<String, String> producer = getKafkaTestServer().getKafkaProducer(StringSerializer.class, StringSerializer.class);

    // Produce it & wait for it to complete.
    Future<RecordMetadata> future = producer.send(producerRecord);
    producer.flush();
    // flush() already blocks until all in-flight sends complete, so this loop is a belt-and-braces wait
    while (!future.isDone()) {
        Thread.sleep(500L);
    }
    logger.info("Produce completed");

    // Close producer!
    producer.close();

    KafkaConsumer<String, String> kafkaConsumer =
        getKafkaTestServer().getKafkaConsumer(StringDeserializer.class, StringDeserializer.class);

    final List<TopicPartition> topicPartitionList = Lists.newArrayList();
    for (final PartitionInfo partitionInfo: kafkaConsumer.partitionsFor(topicName)) {
        topicPartitionList.add(new TopicPartition(partitionInfo.topic(), partitionInfo.partition()));
    }
    kafkaConsumer.assign(topicPartitionList);
    kafkaConsumer.seekToBeginning(topicPartitionList);

    // Pull records from kafka, keep polling until we get nothing back
    ConsumerRecords<String, String> records;
    do {
        records = kafkaConsumer.poll(2000L);
        logger.info("Found {} records in kafka", records.count());
        for (ConsumerRecord<String, String> record: records) {
            // Validate
            assertEquals("Key matches expected", expectedKey, record.key());
            assertEquals("value matches expected", expectedValue, record.value());
        }
    }
    while (!records.isEmpty());

    // close consumer
    kafkaConsumer.close();
}
 
Developer: salesforce, Project: kafka-junit, Lines: 57, Source: KafkaTestServerTest.java


Note: The org.apache.kafka.clients.producer.KafkaProducer.flush examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are selected from open-source projects contributed by the community; copyright of the source code remains with the original authors, and distribution or reuse should follow the corresponding project's License. Do not repost without permission.