

Java KafkaConsumer.close Method Code Examples

This article collects typical usage examples of the org.apache.kafka.clients.consumer.KafkaConsumer.close method in Java. If you are wondering how KafkaConsumer.close is used in practice, or are looking for concrete examples of it, the curated code samples below may help. You can also explore further usage examples of the enclosing class, org.apache.kafka.clients.consumer.KafkaConsumer.


The following presents 15 code examples of the KafkaConsumer.close method, sorted by popularity by default.
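Before diving into the examples, here is a minimal, self-contained sketch of the typical close pattern. KafkaConsumer implements Closeable, so a try-with-resources block (or a finally block, as several of the examples below use) guarantees the consumer's network resources and group membership are released even if polling throws. The broker address, group id, and topic name in this sketch are placeholders rather than values taken from any of the projects below.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaConsumerCloseSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "close-sketch");              // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        // try-with-resources calls consumer.close() automatically, even if poll() throws.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));     // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}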

Example 1: receive

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
public List<String> receive() {
    KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(properties);
    consumer.subscribe(Arrays.asList(properties.getProperty("topic")));
    List<String> buffer = new ArrayList<String>();
    while (true) {
        System.err.println("consumer receive------------------");
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            buffer.add(record.value());
        }
        // The loop exits after the first poll: close the consumer and return the collected values.
        consumer.close();
        return buffer;
    }
}
 
Developer ID: wanghan0501, Project: WiFiProbeAnalysis, Lines: 18, Source: KafkaConsumers.java

Example 2: readKafkaTopic

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
@GET
@Path("/readKafkaTopic")
public Response readKafkaTopic(Map<String, Object > map) {
    try {
        Properties properties = PropertiesUtils.getProps("consumer.properties");
        properties.setProperty("client.id","readKafkaTopic");
        properties.setProperty("group.id","readKafkaTopic");
        //properties.setProperty("bootstrap.servers", "localhost:9092");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        String topic = map.get("topic").toString();
        //System.out.println("topic="+topic);
        TopicPartition topicPartition = new TopicPartition(topic, 0);
        List<TopicPartition> topics = Arrays.asList(topicPartition);
        consumer.assign(topics);
        consumer.seekToEnd(topics);
        long current = consumer.position(topicPartition);
        long end = current;
        current -= 1000;
        if(current < 0) current = 0;
        consumer.seek(topicPartition, current);
        List<String> result = new ArrayList<>();
        while (current < end) {
            //System.out.println("topic position = "+current);
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                result.add(record.value());
                //System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            }
            current = consumer.position(topicPartition);
        }
        consumer.close();
        return Response.ok().entity(result).build();
    } catch (Exception e) {
        logger.error("Error encountered while readKafkaTopic with parameter:{}", JSON.toJSONString(map), e);
        return Response.status(204).entity(new Result(-1, e.getMessage())).build();
    }
}
 
Developer ID: BriData, Project: DBus, Lines: 38, Source: DataTableResource.java

Example 3: commitInvalidOffsets

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private void commitInvalidOffsets() {
    final KafkaConsumer consumer = new KafkaConsumer(TestUtils.consumerConfig(
        CLUSTER.bootstrapServers(),
        streamsConfiguration.getProperty(StreamsConfig.APPLICATION_ID_CONFIG),
        StringDeserializer.class,
        StringDeserializer.class));

    final Map<TopicPartition, OffsetAndMetadata> invalidOffsets = new HashMap<>();
    invalidOffsets.put(new TopicPartition(TOPIC_1_2, 0), new OffsetAndMetadata(5, null));
    invalidOffsets.put(new TopicPartition(TOPIC_2_2, 0), new OffsetAndMetadata(5, null));
    invalidOffsets.put(new TopicPartition(TOPIC_A_2, 0), new OffsetAndMetadata(5, null));
    invalidOffsets.put(new TopicPartition(TOPIC_C_2, 0), new OffsetAndMetadata(5, null));
    invalidOffsets.put(new TopicPartition(TOPIC_Y_2, 0), new OffsetAndMetadata(5, null));
    invalidOffsets.put(new TopicPartition(TOPIC_Z_2, 0), new OffsetAndMetadata(5, null));

    consumer.commitSync(invalidOffsets);

    consumer.close();
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: KStreamsFineGrainedAutoResetIntegrationTest.java

Example 4: getCount

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * Gets the total message count for the topic.
 * <b>WARNING: Don't use with compacted topics</b>
 */
@SuppressWarnings("unchecked")
public long getCount(String kafkaBrokers, String topic) {
    KafkaConsumer consumer = buildConsumer(kafkaBrokers);
    try {
        @SuppressWarnings("unchecked")
        Map<String, List<PartitionInfo>> topics = consumer.listTopics();
        List<PartitionInfo> partitionInfos = topics.get(topic);
        if (partitionInfos == null) {
            logger.warn("Partition information was not found for topic {}", topic);
            return 0;
        } else {
            Collection<TopicPartition> partitions = new ArrayList<>();
            for (PartitionInfo partitionInfo : partitionInfos) {
                TopicPartition partition = new TopicPartition(topic, partitionInfo.partition());
                partitions.add(partition);
            }
            Map<TopicPartition, Long> endingOffsets = consumer.endOffsets(partitions);
            Map<TopicPartition, Long> beginningOffsets = consumer.beginningOffsets(partitions);
            return diffOffsets(beginningOffsets, endingOffsets);
        }
    } finally {
        consumer.close();
    }
}
 
Developer ID: Sixt, Project: ja-micro, Lines: 29, Source: TopicMessageCounter.java

Example 5: loopUntilRecordReceived

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private static void loopUntilRecordReceived(final String kafka, final boolean eosEnabled) {
    final Properties consumerProperties = new Properties();
    consumerProperties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka);
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "broker-compatibility-consumer");
    consumerProperties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    if (eosEnabled) {
        consumerProperties.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, IsolationLevel.READ_COMMITTED.name().toLowerCase(Locale.ROOT));
    }

    final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProperties);
    consumer.subscribe(Collections.singletonList(SINK_TOPIC));

    while (true) {
        final ConsumerRecords<String, String> records = consumer.poll(100);
        for (final ConsumerRecord<String, String> record : records) {
            if (record.key().equals("key") && record.value().equals("value")) {
                consumer.close();
                return;
            }
        }
    }
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: BrokerCompatibilityTest.java

Example 6: consumeAllRecordsFromTopic

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * This will consume all records from only the partitions given.
 * @param topic Topic to consume from.
 * @param partitionIds Collection of PartitionIds to consume.
 * @return List of ConsumerRecords consumed.
 */
public List<ConsumerRecord<byte[], byte[]>> consumeAllRecordsFromTopic(final String topic, Collection<Integer> partitionIds) {
    // Create topic Partitions
    List<TopicPartition> topicPartitions = new ArrayList<>();
    for (Integer partitionId: partitionIds) {
        topicPartitions.add(new TopicPartition(topic, partitionId));
    }

    // Connect Consumer
    KafkaConsumer<byte[], byte[]> kafkaConsumer =
        kafkaTestServer.getKafkaConsumer(ByteArrayDeserializer.class, ByteArrayDeserializer.class);

    // Assign topic partitions & seek to head of them
    kafkaConsumer.assign(topicPartitions);
    kafkaConsumer.seekToBeginning(topicPartitions);

    // Pull records from kafka, keep polling until we get nothing back
    final List<ConsumerRecord<byte[], byte[]>> allRecords = new ArrayList<>();
    ConsumerRecords<byte[], byte[]> records;
    do {
        // Grab records from kafka
        records = kafkaConsumer.poll(2000L);
        logger.info("Found {} records in kafka", records.count());

        // Add to our array list
        records.forEach(allRecords::add);

    }
    while (!records.isEmpty());

    // close consumer
    kafkaConsumer.close();

    // return all records
    return allRecords;
}
 
Developer ID: salesforce, Project: kafka-junit, Lines: 42, Source: KafkaTestUtils.java

Example 7: retrieveRecordsFromPartitions

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * Return a map containing one List of records per partition.
 * This internally creates a Kafka Consumer using the provided consumer properties.
 *
 * @param topic              Kafka topic to read messages from
 * @param numPtns            Number of partitions to read from the topic
 * @param consumerProperties Properties used to create the Kafka consumer
 * @return A Map of Partitions(Integer) and the resulting List of messages (byte[]) retrieved
 */
public static Map<Integer, List<byte[]>> retrieveRecordsFromPartitions(String topic, int numPtns,
                                                                 Properties consumerProperties) {

  Map<Integer, List<byte[]>> resultsMap = new HashMap<Integer, List<byte[]>>();
  for (int i = 0; i < numPtns; i++) {
    List<byte[]> partitionResults = new ArrayList<byte[]>();
    resultsMap.put(i, partitionResults);
    KafkaConsumer<String, byte[]> consumer =
        new KafkaConsumer<String, byte[]>(consumerProperties);

    TopicPartition partition = new TopicPartition(topic, i);

    consumer.assign(Arrays.asList(partition));

    ConsumerRecords<String, byte[]> records = consumer.poll(1000);
    for (ConsumerRecord<String, byte[]> record : records) {
      partitionResults.add(record.value());
    }
    consumer.close();
  }
  return resultsMap;
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 31, Source: KafkaPartitionTestUtil.java

Example 8: testQuerySubmissionPartitions

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
@Test
public void testQuerySubmissionPartitions() throws IOException, PubSubException {
    BulletConfig config = new BulletConfig("src/test/resources/test_config.yaml");
    config.set(BulletConfig.PUBSUB_CONTEXT_NAME, "QUERY_SUBMISSION");
    KafkaPubSub kafkaPubSub = new KafkaPubSub(new KafkaConfig(config));

    KafkaQueryPublisher publisher = (KafkaQueryPublisher) kafkaPubSub.getPublisher();
    Assert.assertEquals(requestPartitions, publisher.getWritePartitions());
    Assert.assertEquals(responsePartitions, publisher.getReceivePartitions());
    publisher.close();

    KafkaSubscriber subscriber = (KafkaSubscriber) kafkaPubSub.getSubscriber();
    KafkaConsumer<String, byte[]> consumer = subscriber.getConsumer();
    Assert.assertEquals(consumer.assignment(), new HashSet<>(responsePartitions));
    consumer.close();
}
 
Developer ID: yahoo, Project: bullet-kafka, Lines: 17, Source: KafkaPubSubTest.java

Example 9: readKeyValues

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * Returns up to `maxMessages` key-value pairs read from the given topic, using a consumer created
 * from the provided configuration.
 *
 * @param topic          Kafka topic to read messages from
 * @param consumerConfig Kafka consumer configuration
 * @param maxMessages    Maximum number of messages to read via the consumer
 * @return The KeyValue elements retrieved via the consumer
 */
public static <K, V> List<KeyValue<K, V>> readKeyValues(String topic, Properties consumerConfig, int maxMessages) {
  KafkaConsumer<K, V> consumer = new KafkaConsumer<>(consumerConfig);
  consumer.subscribe(Collections.singletonList(topic));
  int pollIntervalMs = 100;
  int maxTotalPollTimeMs = 2000;
  int totalPollTimeMs = 0;
  List<KeyValue<K, V>> consumedValues = new ArrayList<>();
  while (totalPollTimeMs < maxTotalPollTimeMs && continueConsuming(consumedValues.size(), maxMessages)) {
    totalPollTimeMs += pollIntervalMs;
    ConsumerRecords<K, V> records = consumer.poll(pollIntervalMs);
    for (ConsumerRecord<K, V> record : records) {
      consumedValues.add(new KeyValue<>(record.key(), record.value()));
    }
  }
  consumer.close();
  return consumedValues;
}
 
Developer ID: kaiwaehner, Project: kafka-streams-machine-learning-examples, Lines: 27, Source: IntegrationTestUtils.java

Example 10: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
public static void main(String[] args) {
    OffsetSetterConfig config = null;

    try {
        config = createOffsetSetterConfig(args);
    } catch (ParseException e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }

    Map<TopicPartition, OffsetAndMetadata> m = new HashMap<>();
    m.put(new TopicPartition(config.kafkaTopic, config.kafkaPartition), new OffsetAndMetadata(config.kafkaOffset));

    System.out.println("Creating Kafka consumer ...");
    KafkaConsumer<String, String> kc = new org.apache.kafka.clients.consumer.KafkaConsumer<>(config.kafkaProperties);
    System.out.println("Committing offset " + config.kafkaOffset + " to topic " + config.kafkaTopic + ", partition " + config.kafkaPartition + " ...");
    kc.commitSync(m);
    System.out.println("Closing Kafka consumer ...");
    kc.close();
    System.out.println("Done!");
}
 
Developer ID: lovromazgon, Project: kafka-offset-setter, Lines: 22, Source: OffsetSetter.java

Example 11: createKafkaTopic

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private void createKafkaTopic(Properties kafkaProperties, String topicName) {
    Properties localKafkaProperties = (Properties) kafkaProperties.clone();
    localKafkaProperties.put("group.id", "bug-" + UUID.randomUUID());
    localKafkaProperties.put("key.deserializer", "org.apache.kafka.common.serialization.IntegerDeserializer");
    localKafkaProperties.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

    KafkaConsumer consumer = new KafkaConsumer<>(localKafkaProperties);

    // if topic auto create is on then subscription creates the topic
    consumer.subscribe(Collections.singletonList(topicName));
    consumer.poll(100);
    consumer.close();
}
 
Developer ID: Axway, Project: iron, Lines: 14, Source: KafkaTransactionStoreFactory.java

Example 12: run

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
@Override
public TaskReport run(TaskSource taskSource, Schema schema, int taskIndex, PageOutput output) {
    PluginTask task = taskSource.loadTask(PluginTask.class);

    BufferAllocator allocator = task.getBufferAllocator();
    PageBuilder builder = new PageBuilder(allocator, schema, output);
    KafkaInputColumns columns = new KafkaInputColumns(task);

    KafkaProperties props = new KafkaProperties(task);
    KafkaConsumer<?, ?> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(task.getTopics());
    setOffsetPosition(consumer, task);

    long readRecords = 0;
    long showReadRecords = 500;
    while(true) {
        ConsumerRecords<?,?> records = consumer.poll(task.getPollTimeoutSec() * 1000);
        if(records.count() == 0) {
            break;
        }
        readRecords += records.count();
        columns.setOutputRecords(builder, records);
        builder.flush();
        if(readRecords >= showReadRecords) {
            logger.info(String.format("Read %d record(s) in task-%d", readRecords, taskIndex));
            showReadRecords *= 2;
        }
    }
    builder.finish();
    builder.close();
    logger.info(String.format("Finishing task-%d. Total %d record(s) read in this task", taskIndex, readRecords));
    consumer.close();

    return Exec.newTaskReport();
}
 
Developer ID: sasakitoa, Project: embulk-input-kafka, Lines: 36, Source: KafkaInputPlugin.java

Example 13: migrateOffsets

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private void migrateOffsets() {
  ZkUtils zkUtils = ZkUtils.apply(zookeeperConnect, ZK_SESSION_TIMEOUT, ZK_CONNECTION_TIMEOUT,
      JaasUtils.isZkSecurityEnabled());
  KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(consumerProps);
  try {
    Map<TopicPartition, OffsetAndMetadata> kafkaOffsets = getKafkaOffsets(consumer);
    if (!kafkaOffsets.isEmpty()) {
      logger.info("Found Kafka offsets for topic " + topicStr +
          ". Will not migrate from zookeeper");
      logger.debug("Offsets found: {}", kafkaOffsets);
      return;
    }

    logger.info("No Kafka offsets found. Migrating zookeeper offsets");
    Map<TopicPartition, OffsetAndMetadata> zookeeperOffsets = getZookeeperOffsets(zkUtils);
    if (zookeeperOffsets.isEmpty()) {
      logger.warn("No offsets to migrate found in Zookeeper");
      return;
    }

    logger.info("Committing Zookeeper offsets to Kafka");
    logger.debug("Offsets to commit: {}", zookeeperOffsets);
    consumer.commitSync(zookeeperOffsets);
    // Read the offsets to verify they were committed
    Map<TopicPartition, OffsetAndMetadata> newKafkaOffsets = getKafkaOffsets(consumer);
    logger.debug("Offsets committed: {}", newKafkaOffsets);
    if (!newKafkaOffsets.keySet().containsAll(zookeeperOffsets.keySet())) {
      throw new FlumeException("Offsets could not be committed");
    }
  } finally {
    zkUtils.close();
    consumer.close();
  }
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 35, Source: KafkaChannel.java

Example 14: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
public static void main(String[] args) throws Exception {
  CommandLine commandLine = parseCommandLine(args);
  String brokerStatsZk = commandLine.getOptionValue(BROKERSTATS_ZOOKEEPER);
  String brokerStatsTopic = commandLine.getOptionValue(BROKERSTATS_TOPIC);
  String brokerName = commandLine.getOptionValue(BROKERNAME);
  Set<String> brokerNames = new HashSet<>();
  brokerNames.add(brokerName);

  KafkaConsumer kafkaConsumer = KafkaUtils.getKafkaConsumer(brokerStatsZk,
      "org.apache.kafka.common.serialization.ByteArrayDeserializer",
      "org.apache.kafka.common.serialization.ByteArrayDeserializer", 1);

  long startTimestampInMillis = System.currentTimeMillis() - 86400 * 1000L;
  Map<TopicPartition, Long> offsets = ReplicaStatsManager.getProcessingStartOffsets(
      kafkaConsumer, brokerStatsTopic, startTimestampInMillis);
  kafkaConsumer.unsubscribe();
  kafkaConsumer.assign(offsets.keySet());
  Map<TopicPartition, Long> latestOffsets = kafkaConsumer.endOffsets(offsets.keySet());
  kafkaConsumer.close();

  Map<Long, BrokerStats> brokerStatsMap = new TreeMap<>();
  for (TopicPartition topicPartition : offsets.keySet()) {
    LOG.info("Start processing {}", topicPartition);
    long startOffset = offsets.get(topicPartition);
    long endOffset = latestOffsets.get(topicPartition);

    List<BrokerStats> statsList = processOnePartition(brokerStatsZk, topicPartition,
        startOffset, endOffset, brokerNames);
    for (BrokerStats brokerStats : statsList) {
      brokerStatsMap.put(brokerStats.getTimestamp(), brokerStats);
    }
    LOG.info("Finished processing {}, retrieved {} records", topicPartition, statsList.size());
  }

  for (Map.Entry<Long, BrokerStats> entry: brokerStatsMap.entrySet()) {
    System.out.println(entry.getKey() + " : " + entry.getValue());
  }
}
 
Developer ID: pinterest, Project: doctorkafka, Lines: 39, Source: BrokerStatsFilter.java

Example 15: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
public static void main(String[] args) {

    ArrayList<String> topicsList = new ArrayList<String>();
    HashMap<String, Object> kafkaProperties = new HashMap<String, Object>();

    topicsList.add("proteus-flatness");
    kafkaProperties.put("bootstrap.servers", "192.168.4.246:6667,192.168.4.247:6667,192.168.4.248:6667");
    kafkaProperties.put("key.deserializer", "org.apache.kafka.common.serialization.IntegerDeserializer");
    kafkaProperties.put("value.deserializer", ProteusSerializer.class.getName());
    kafkaProperties.put("group.id", "proteus");

    ProteusSerializer myValueDeserializer = new ProteusSerializer();
    IntegerDeserializer keyDeserializer = new IntegerDeserializer();
    KafkaConsumer<Integer, Measurement> kafkaConsumer =
        new KafkaConsumer<Integer, Measurement>(kafkaProperties, keyDeserializer, myValueDeserializer);
    kafkaConsumer.subscribe(topicsList);

    try {
        while (true) {
            ConsumerRecords<Integer, Measurement> records = kafkaConsumer.poll(1);
            for (ConsumerRecord<Integer, Measurement> record : records) {
                System.out.println("traza");
                System.out.println(record);
            }
        }
    } finally {
        // Ensure the consumer is closed even if polling fails or the loop is interrupted.
        kafkaConsumer.close();
    }
}
 
Developer ID: proteus-h2020, Project: proteus-consumer-couchbase, Lines: 34, Source: ExampleHSM.java


Note: The org.apache.kafka.clients.consumer.KafkaConsumer.close method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are selected from open-source projects contributed by their respective developers, and the copyright of the source code belongs to the original authors. For distribution and use, please refer to the License of the corresponding project; do not reproduce without permission.