

Java Producer.close Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.clients.producer.Producer.close. If you are wondering what Producer.close does, how to call it, or what real-world usage looks like, the curated code examples below should help. You can also explore further usage examples of the containing interface, org.apache.kafka.clients.producer.Producer.


Fifteen code examples of Producer.close are shown below, ordered by popularity by default.
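Before diving into the individual examples, here is a minimal, self-contained sketch (not drawn from any of the projects below; the broker address "localhost:9092" and the topic name "demo-topic" are assumptions for illustration only). It shows the two common ways to reach Producer.close: implicitly via try-with-resources, and explicitly with a bounded timeout via close(Duration), which requires kafka-clients 2.0 or newer.

import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerCloseSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Variant 1: try-with-resources calls close() automatically; close() flushes
        // any buffered records and releases the producer's threads and connections.
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value")); // hypothetical topic
        }

        // Variant 2: bound how long close() may block while flushing in-flight records
        // (close(Duration) is available in kafka-clients 2.0 and newer).
        Producer<String, String> producer = new KafkaProducer<>(props);
        try {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
        } finally {
            producer.close(Duration.ofSeconds(5));
        }
    }
}

Most of the examples below follow the explicit pattern: create the producer, send the records, then call producer.close() once at the end of the method so that buffered records are flushed before the application exits.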

Example 1: main

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
public static void main(String[] args) throws InterruptedException {

        Properties props = new Properties();
        props.put(BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ACKS_CONFIG, "all");
        props.put(RETRIES_CONFIG, 0);
        props.put(BATCH_SIZE_CONFIG, 32000);
        props.put(LINGER_MS_CONFIG, 100);
        props.put(BUFFER_MEMORY_CONFIG, 33554432);
        props.put(KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.LongSerializer");
        props.put(VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.LongSerializer");

        Producer<Long, Long> producer = new KafkaProducer<>(props);

        long t1 = System.currentTimeMillis();

        long i = 0;
        for(; i < 1000000; i++) {

            producer.send(new ProducerRecord<>("produktion", i, i));
        }
        producer.send(new ProducerRecord<Long,Long>("produktion", (long) -1, (long)-1));
        System.out.println("fertig " + i  + " Nachrichten in " + (System.currentTimeMillis() - t1 + " ms"));

        producer.close();
    }
 
Developer ID: predic8, Project: apache-kafka-demos, Lines: 27, Source: PerformanceProducer.java

Example 2: nullKey

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
@Test
public void nullKey() throws Exception {
  Producer<Integer, String> producer = createProducer();

  ProducerRecord<Integer, String> record = new ProducerRecord<>("messages", "test");
  producer.send(record);

  final Map<String, Object> consumerProps = KafkaTestUtils
      .consumerProps("sampleRawConsumer", "false", embeddedKafka);
  consumerProps.put("auto.offset.reset", "earliest");

  final CountDownLatch latch = new CountDownLatch(1);
  createConsumer(latch, null);

  producer.close();
}
 
Developer ID: opentracing-contrib, Project: java-kafka-client, Lines: 17, Source: TracingKafkaTest.java

Example 3: produceRecords

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
private static void produceRecords() {
    Properties properties = new Properties();
    properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class.getName());
    properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

    Producer<Integer, byte[]> producer = new KafkaProducer<>(properties);

    IntStream.rangeClosed(1, 10000).boxed()
            .map(number ->
                    new ProducerRecord<>(
                            TOPIC,
                            1, //Key
                            KafkaProducerUtil.createMessage(1000))) //Value
            .forEach(record -> {
                producer.send(record);
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            });
    producer.close();
}
 
Developer ID: jeqo, Project: talk-kafka-messaging-logs, Lines: 25, Source: Compaction.java

Example 4: publishDataToKafka

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
/**
 * Publish 'numMessages' arbitrary events from live users with the provided delay, to a
 * Kafka topic.
 */
public static void publishDataToKafka(int numMessages, int delayInMillis)
    throws IOException {

  Producer<String, String> producer = new KafkaProducer<>(kafkaProps);

  for (int i = 0; i < Math.max(1, numMessages); i++) {
    Long currTime = System.currentTimeMillis();
    String message = generateEvent(currTime, delayInMillis);
    producer.send(new ProducerRecord<String, String>("game", null, message)); //TODO(fjp): Generalize
    // TODO(fjp): How do we get late data working?
    // if (delayInMillis != 0) {
    //   System.out.println(pubsubMessage.getAttributes());
    //   System.out.println("late data for: " + message);
    // }
    // pubsubMessages.add(pubsubMessage);
  }

  producer.close();
}
 
Developer ID: davorbonaci, Project: beam-portability-demo, Lines: 24, Source: Injector.java

Example 5: produceRecords

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
private static void produceRecords(String bootstrapServers) {
    Properties properties = new Properties();
    properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class.getName());
    properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

    Producer<Integer, String> producer = new KafkaProducer<>(properties);

    IntStream.rangeClosed(1, 10000).boxed()
            .map(number ->
                    new ProducerRecord<>(
                            TOPIC,
                            1, //Key
                            String.format("record-%s", number))) //Value
            .forEach(record -> producer.send(record));
    producer.close();
}
 
Developer ID: jeqo, Project: talk-kafka-messaging-logs, Lines: 18, Source: Compaction.java

Example 6: produceRecords

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
private static void produceRecords(String bootstrapServers) {
    Properties properties = new Properties();
    properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class.getName());
    properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

    Producer<Integer, String> producer = new KafkaProducer<>(properties);

    IntStream
            .rangeClosed(1, 100000).boxed()
            .map(number ->
                    new ProducerRecord<>(
                            TOPIC,
                            1, //Key
                            String.format("record-%s", number))) //Value
            .forEach(record -> producer.send(record));
    producer.close();
}
 
Developer ID: jeqo, Project: talk-kafka-messaging-logs, Lines: 19, Source: Retention.java

Example 7: sendStringMessage

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
public static void  sendStringMessage() throws Exception{
	Properties props = new Properties();
	props.put("bootstrap.servers", servers);
	props.put("acks", "all");
	props.put("retries", 0);
	props.put("batch.size", 16384);
	props.put("linger.ms", 1);
	props.put("buffer.memory", 33554432);
	props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
	props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

	Producer<String, String> producer = new org.apache.kafka.clients.producer.KafkaProducer<>(props);

	//no partition specified, so the default single partition is used; send the messages
	int i=0;
	while(i<1000){
		Thread.sleep(1000L);
		String message = "zhangsan"+i;
		producer.send(new ProducerRecord<>("NL_U_APP_ALARM_APP_STRING",message));
		i++;
		producer.flush();
	}
	producer.close();
}
 
Developer ID: jacktomcat, Project: spark2.0, Lines: 25, Source: KafkaSendMessage.java

Example 8: produceRecords

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
private static void produceRecords(String bootstrapServers) {
    Properties properties = new Properties();
    properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class.getName());
    properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

    Producer<Integer, byte[]> producer = new KafkaProducer<>(properties);

    IntStream.rangeClosed(1, 10000).boxed()
            .map(number ->
                    new ProducerRecord<>(
                            TOPIC,
                            1, //Key
                            KafkaProducerUtil.createMessage(1000))) //Value
            .forEach(record -> {
                producer.send(record);
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            });
    producer.close();
}
 
Developer ID: jeqo, Project: talk-kafka-messaging-logs, Lines: 25, Source: Retention.java

Example 9: sendData

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
public void sendData(String data) {

		Properties props = new Properties();
		props.put("bootstrap.servers", "localhost:9092");
		props.put("acks", "all");
		props.put("retries", 0);
		props.put("batch.size", 16384);
		props.put("linger.ms", 1);
		props.put("buffer.memory", 33554432);
		props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
		props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

		Producer<String, String> producer = new KafkaProducer<>(props);

		Map<MetricName, ? extends Metric> metrics = producer.metrics();
		System.out.println(metrics);

		for (int i = 0; i < 100; i++)
			producer.send(new ProducerRecord<String, String>("video_view", data));

		producer.close();

	}
 
Developer ID: alokawi, Project: spark-cassandra-poc, Lines: 24, Source: KafkaDataProducer.java

Example 10: produceRecords

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
private static void produceRecords(String bootstrapServers) {
    Properties properties = new Properties();
    properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    properties.put(ProducerConfig.ACKS_CONFIG, "0");
    properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class.getName());
    properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

    Producer<Integer, String> producer = new KafkaProducer<>(properties);

    RecordsProducer.produce("kafka_producer_ack_zero_latency", producer, TOPIC);

    producer.close();
}
 
Developer ID: jeqo, Project: talk-kafka-messaging-logs, Lines: 14, Source: ProducerAckZero.java

Example 11: with_parent

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
@Test
public void with_parent() throws Exception {
  Producer<Integer, String> producer = createProducer();

  try (Scope activeSpan = mockTracer.buildSpan("parent").startActive(true)) {
    producer.send(new ProducerRecord<>("messages", 1, "test"));
  }

  final CountDownLatch latch = new CountDownLatch(1);
  createConsumer(latch, 1);

  producer.close();

  List<MockSpan> mockSpans = mockTracer.finishedSpans();
  assertEquals(3, mockSpans.size());

  MockSpan parent = getByOperationName(mockSpans, "parent");
  assertNotNull(parent);

  for (MockSpan span : mockSpans) {
    assertEquals(parent.context().traceId(), span.context().traceId());
  }

  MockSpan sendSpan = getByOperationName(mockSpans, "send");
  assertNotNull(sendSpan);

  MockSpan receiveSpan = getByOperationName(mockSpans, "receive");
  assertNotNull(receiveSpan);

  assertEquals(sendSpan.context().spanId(), receiveSpan.parentId());
  assertEquals(parent.context().spanId(), sendSpan.parentId());

  assertNull(mockTracer.activeSpan());
}
 
Developer ID: opentracing-contrib, Project: java-kafka-client, Lines: 35, Source: TracingKafkaTest.java

Example 12: test

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
@Test
public void test() throws Exception {
  Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);

  Properties config = new Properties();
  config.put(StreamsConfig.APPLICATION_ID_CONFIG, "stream-app");
  config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, senderProps.get("bootstrap.servers"));
  config.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.Integer().getClass());
  config.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

  Producer<Integer, String> producer = createProducer();
  ProducerRecord<Integer, String> record = new ProducerRecord<>("stream-test", 1, "test");
  producer.send(record);

  final Serde<String> stringSerde = Serdes.String();
  final Serde<Integer> intSerde = Serdes.Integer();

  KStreamBuilder builder = new KStreamBuilder();
  KStream<Integer, String> kStream = builder
      .stream(intSerde, stringSerde, "stream-test");

  kStream.map((key, value) -> new KeyValue<>(key, value + "map")).to("stream-out");

  KafkaStreams streams = new KafkaStreams(builder, new StreamsConfig(config),
      new TracingKafkaClientSupplier(mockTracer));
  streams.start();

  await().atMost(15, TimeUnit.SECONDS).until(reportedSpansSize(), equalTo(3));

  streams.close();
  producer.close();

  List<MockSpan> spans = mockTracer.finishedSpans();
  assertEquals(3, spans.size());
  checkSpans(spans);

  assertNull(mockTracer.activeSpan());
}
 
Developer ID: opentracing-contrib, Project: java-kafka-client, Lines: 39, Source: TracingKafkaStreamsTest.java

Example 13: main

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
public static void main(String[] args) throws IOException, ParseException {
    //Kafka Part
    Properties properties = new Properties();

    //set the Kafka bootstrap server
    properties.setProperty("bootstrap.servers", KafkaProperties.KAFKA_SERVER_URL);
    //tell the client how to serialize the key and value (here: strings)
    properties.setProperty("key.serializer", StringSerializer.class.getName());
    properties.setProperty("value.serializer", StringSerializer.class.getName());
    //set the producer's acknowledgement mode (-1, 0 or 1)
    properties.setProperty("acks", "1");
    //how many times the client retries a failed send before giving up
    properties.setProperty("retries", "3");
    //a batch is sent every linger.ms milliseconds; otherwise use producer.flush() below where marked
    properties.setProperty("linger.ms", "1");
    //use a truststore and https
    properties.setProperty("security.protocol",KafkaProperties.SECURITY_PROTOCOL);
    properties.setProperty("ssl.truststore.location", KafkaProperties.TRUSTSTORE_LOCATION);
    properties.setProperty("ssl.truststore.password",KafkaProperties.TRUSTSTORE_PASSWORD);
    properties.setProperty("ssl.endpoint.identification.algorithm",KafkaProperties.ENDPOINT_ALGORITHM);



    Producer<String, String> producer = new org.apache.kafka.clients.producer.KafkaProducer<String, String>(properties);

    //simple single-message alternative to the for loop: ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>("foobar", "2", "Huh!");
    for (int key=0; key < 10; key++){
        //change the topic here
        ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>(KafkaProperties.TOPIC, Integer.toString(key), "My new keys are here: "+ Integer.toString(key));
        producer.send(producerRecord);
    }

    //alternatively, call producer.flush() here to force the buffered messages out
    producer.close();
}
 
Developer ID: koerbaecher, Project: docker-kafka-demo, Lines: 36, Source: KafkaProducer.java

Example 14: main

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
public static void main(String[] args) throws InterruptedException {

        Properties props = new Properties();

        props.put(BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ACKS_CONFIG, "all");
        props.put(RETRIES_CONFIG, 0);
        props.put(BATCH_SIZE_CONFIG, 16000);
        props.put(LINGER_MS_CONFIG, 100);
        props.put(BUFFER_MEMORY_CONFIG, 33554432);
        props.put(KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);

        long t1 = System.currentTimeMillis();

        int i = 0;
        for(; i < 10; i++) {

            String key = String.valueOf(round(random() * 1000));
            double value = new Double(round(random()*10000000L)).intValue()/1000.0;

            JsonObject json = Json.createObjectBuilder()
                    .add("windrad", key)
                    .add("kw",value)
                    .build();

            producer.send(new ProducerRecord<>("produktion", key, json.toString()));
        }
        System.out.println("fertig " + i + " Nachrichten in " + (System.currentTimeMillis() - t1 + " ms"));

        producer.close();
    }
 
Developer ID: predic8, Project: apache-kafka-demos, Lines: 35, Source: SimpleProducer.java

Example 15: sendWrapperMessage

import org.apache.kafka.clients.producer.Producer; // import the package/class this method depends on
public static void  sendWrapperMessage() throws Exception {
	Properties props = new Properties();
	props.put("bootstrap.servers", servers);
	props.put("acks", "all");
	props.put("retries", 0);
	props.put("batch.size", 16384);
	props.put("linger.ms", 1);
	props.put("buffer.memory", 33554432);
	props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
	props.put("value.serializer", "com.gochinatv.spark.kafka.SerializedMessage");
	Producer<String, WrapperAppMessage> producer = new org.apache.kafka.clients.producer.KafkaProducer<>(props);

	//case 1:
	//no partition specified, so the default single partition is used; send the messages
	int i=0;
	while(i<1000){
		Thread.sleep(1000L);
		WrapperAppMessage message = new WrapperAppMessage();
		message.setAgreeId((i+1)%5);
		message.setCityId((i+1)%3);
		message.setConnectType((i+1)%4);
		message.setCount((i+100)%10);
		message.setInstanceId((i+1)%6);
		message.setProvinceId((i+1)%4);
		message.setTimestamp(System.currentTimeMillis());
		message.setValue((float)((i+200)%4));
		producer.send(new ProducerRecord<>("NL_U_APP_ALARM_APP",message));
		System.out.println(message.toString());
		i++;
		producer.flush();
	}
	producer.close();
}
 
Developer ID: jacktomcat, Project: spark2.0, Lines: 34, Source: KafkaSendMessage.java


Note: The org.apache.kafka.clients.producer.Producer.close examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by their respective authors, and copyright of the source code remains with those authors. Please consult the corresponding project's license before distributing or reusing the code; do not republish without permission.