

Java KafkaConsumer.partitionsFor Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor. If you have been wondering what KafkaConsumer.partitionsFor does, how to use it, or what real-world calls to it look like, the curated code samples below should help. You can also explore further usage examples of the enclosing class, org.apache.kafka.clients.consumer.KafkaConsumer.


Below are 10 code examples of KafkaConsumer.partitionsFor, ordered by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code samples.
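Before the collected examples, here is a minimal, self-contained sketch of the method itself. It is not taken from any of the projects below; the bootstrap server and topic name are placeholders.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PartitionsForDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // partitionsFor fetches partition metadata for a topic without subscribing to it.
            List<PartitionInfo> partitions = consumer.partitionsFor("my-topic"); // placeholder topic
            for (PartitionInfo info : partitions) {
                System.out.printf("topic=%s partition=%d leader=%s%n",
                    info.topic(), info.partition(), info.leader());
            }
        }
    }
}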

Example 1: createConsumerAndSubscribe

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * Create a new KafkaConsumer based on the passed in ClientConfig, and subscribe to the appropriate
 * partitions.
 */
public KafkaConsumer createConsumerAndSubscribe(final ClientConfig clientConfig) {
    final KafkaConsumer kafkaConsumer = createConsumer(clientConfig);

    // Determine which partitions to subscribe to, for now do all
    final List<PartitionInfo> partitionInfos = kafkaConsumer.partitionsFor(clientConfig.getTopicConfig().getTopicName());

    // Pull out partitions, convert to topic partitions
    final Collection<TopicPartition> topicPartitions = new ArrayList<>();
    for (final PartitionInfo partitionInfo: partitionInfos) {
        // Skip filtered partitions
        if (!clientConfig.isPartitionFiltered(partitionInfo.partition())) {
            topicPartitions.add(new TopicPartition(partitionInfo.topic(), partitionInfo.partition()));
        }
    }

    // Assign them.
    kafkaConsumer.assign(topicPartitions);

    // Return the kafka consumer.
    return kafkaConsumer;
}
 
Developer: SourceLabOrg | Project: kafka-webview | Lines: 26 | Source: KafkaConsumerFactory.java
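Example 1 assigns partitions manually with assign() rather than joining a consumer group via subscribe(), so no group rebalancing takes place. A standalone sketch of the same list-filter-assign pattern; the skipPartition parameter is an illustrative stand-in for ClientConfig.isPartitionFiltered:

import java.util.ArrayList;
import java.util.Collection;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public final class AssignWithFilter {
    /** Assign every partition of the topic except one, mirroring Example 1's filtering. */
    public static void assignAllExcept(KafkaConsumer<?, ?> consumer, String topic, int skipPartition) {
        Collection<TopicPartition> assignment = new ArrayList<>();
        for (PartitionInfo info : consumer.partitionsFor(topic)) {
            if (info.partition() != skipPartition) {
                assignment.add(new TopicPartition(info.topic(), info.partition()));
            }
        }
        consumer.assign(assignment);
    }
}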

Example 2: consumeAllRecordsFromTopic

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * This will consume all records from all partitions on the given topic.
 * @param topic Topic to consume from.
 * @return List of ConsumerRecords consumed.
 */
public List<ConsumerRecord<byte[], byte[]>> consumeAllRecordsFromTopic(final String topic) {
    // Connect to broker to determine what partitions are available.
    KafkaConsumer<byte[], byte[]> kafkaConsumer = kafkaTestServer.getKafkaConsumer(
        ByteArrayDeserializer.class,
        ByteArrayDeserializer.class
    );

    final List<Integer> partitionIds = new ArrayList<>();
    for (PartitionInfo partitionInfo: kafkaConsumer.partitionsFor(topic)) {
        partitionIds.add(partitionInfo.partition());
    }
    kafkaConsumer.close();

    return consumeAllRecordsFromTopic(topic, partitionIds);
}
 
Developer: salesforce | Project: kafka-junit | Lines: 21 | Source: KafkaTestUtils.java
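The two-argument overload that Example 2 delegates to is not shown. A plausible sketch, assuming it assigns the given partitions, rewinds to the beginning, and polls until a poll returns nothing (this is a hypothetical reconstruction, not the project's actual code):

import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Hypothetical implementation sketch of the overload used in Example 2.
public static List<ConsumerRecord<byte[], byte[]>> consumeAllFromPartitions(
        final KafkaConsumer<byte[], byte[]> consumer,
        final String topic,
        final List<Integer> partitionIds) {
    final List<TopicPartition> partitions = new ArrayList<>();
    for (final Integer id : partitionIds) {
        partitions.add(new TopicPartition(topic, id));
    }
    consumer.assign(partitions);
    consumer.seekToBeginning(partitions);

    final List<ConsumerRecord<byte[], byte[]>> allRecords = new ArrayList<>();
    ConsumerRecords<byte[], byte[]> batch;
    do {
        batch = consumer.poll(2000L);
        for (final ConsumerRecord<byte[], byte[]> record : batch) {
            allRecords.add(record);
        }
    } while (!batch.isEmpty());
    return allRecords;
}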

Example 3: getKafkaOffsets

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private Map<TopicPartition, OffsetAndMetadata> getKafkaOffsets(
    KafkaConsumer<String, byte[]> client, String topicStr) {
  Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
  List<PartitionInfo> partitions = client.partitionsFor(topicStr);
  for (PartitionInfo partition : partitions) {
    TopicPartition key = new TopicPartition(topicStr, partition.partition());
    OffsetAndMetadata offsetAndMetadata = client.committed(key);
    if (offsetAndMetadata != null) {
      offsets.put(key, offsetAndMetadata);
    }
  }
  return offsets;
}
 
Developer: moueimei | Project: flume-release-1.7.0 | Lines: 14 | Source: KafkaSource.java

Example 4: getKafkaOffsets

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private Map<TopicPartition, OffsetAndMetadata> getKafkaOffsets(
    KafkaConsumer<String, byte[]> client) {
  // Note: unlike Example 3, topicStr here is an instance field of the
  // enclosing KafkaChannel class rather than a method parameter.
  Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
  List<PartitionInfo> partitions = client.partitionsFor(topicStr);
  for (PartitionInfo partition : partitions) {
    TopicPartition key = new TopicPartition(topicStr, partition.partition());
    OffsetAndMetadata offsetAndMetadata = client.committed(key);
    if (offsetAndMetadata != null) {
      offsets.put(key, offsetAndMetadata);
    }
  }
  return offsets;
}
 
Developer: moueimei | Project: flume-release-1.7.0 | Lines: 14 | Source: KafkaChannel.java
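A hypothetical call site for the method in Examples 3 and 4, printing each partition's committed offset (consumer setup is omitted and the names are illustrative):

Map<TopicPartition, OffsetAndMetadata> offsets = getKafkaOffsets(consumer, "my-topic");
for (Map.Entry<TopicPartition, OffsetAndMetadata> entry : offsets.entrySet()) {
    // committed(...) returned non-null only for partitions with a stored offset,
    // so every entry in the map has a real committed position.
    System.out.printf("%s -> offset %d%n", entry.getKey(), entry.getValue().offset());
}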

Example 5: getAllPartitions

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private List<TopicPartition> getAllPartitions(KafkaConsumer<?, ?> consumer, String... topics) {
    ArrayList<TopicPartition> partitions = new ArrayList<>();

    for (String topic : topics) {
        for (PartitionInfo info : consumer.partitionsFor(topic)) {
            partitions.add(new TopicPartition(info.topic(), info.partition()));
        }
    }
    return partitions;
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 11 | Source: SimpleBenchmark.java

Example 6: getAllPartitions

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private static List<TopicPartition> getAllPartitions(KafkaConsumer<?, ?> consumer, String... topics) {
    ArrayList<TopicPartition> partitions = new ArrayList<>();

    for (String topic : topics) {
        for (PartitionInfo info : consumer.partitionsFor(topic)) {
            partitions.add(new TopicPartition(info.topic(), info.partition()));
        }
    }
    return partitions;
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 11 | Source: SmokeTestDriver.java

Example 7: getAllPartitions

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private static List<TopicPartition> getAllPartitions(final KafkaConsumer<?, ?> consumer,
                                                     final String... topics) {
    final ArrayList<TopicPartition> partitions = new ArrayList<>();

    for (final String topic : topics) {
        for (final PartitionInfo info : consumer.partitionsFor(topic)) {
            partitions.add(new TopicPartition(info.topic(), info.partition()));
        }
    }
    return partitions;
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 12 | Source: EosTestDriver.java
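Examples 5 through 7 are the same varargs helper as it appears in three Kafka test drivers. A hypothetical call, collecting partitions across two topics and assigning them in one step (topic names are placeholders):

List<TopicPartition> partitions = getAllPartitions(consumer, "topic-a", "topic-b");
consumer.assign(partitions);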

Example 8: testProducerAndConsumer

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * Test that KafkaServer works as expected!
 *
 * This also serves as a decent example of how to use the producer and consumer.
 */
@Test
public void testProducerAndConsumer() throws Exception {
    final int partitionId = 0;

    // Define our message
    final String expectedKey = "my-key";
    final String expectedValue = "my test message";

    // Define the record we want to produce
    ProducerRecord<String, String> producerRecord = new ProducerRecord<>(topicName, partitionId, expectedKey, expectedValue);

    // Create a new producer
    KafkaProducer<String, String> producer = getKafkaTestServer().getKafkaProducer(StringSerializer.class, StringSerializer.class);

    // Produce it & wait for it to complete.
    Future<RecordMetadata> future = producer.send(producerRecord);
    producer.flush();
    while (!future.isDone()) {
        Thread.sleep(500L);
    }
    logger.info("Produce completed");

    // Close producer!
    producer.close();

    KafkaConsumer<String, String> kafkaConsumer =
        getKafkaTestServer().getKafkaConsumer(StringDeserializer.class, StringDeserializer.class);

    final List<TopicPartition> topicPartitionList = Lists.newArrayList();
    for (final PartitionInfo partitionInfo: kafkaConsumer.partitionsFor(topicName)) {
        topicPartitionList.add(new TopicPartition(partitionInfo.topic(), partitionInfo.partition()));
    }
    kafkaConsumer.assign(topicPartitionList);
    kafkaConsumer.seekToBeginning(topicPartitionList);

    // Pull records from kafka, keep polling until we get nothing back
    ConsumerRecords<String, String> records;
    do {
        records = kafkaConsumer.poll(2000L);
        logger.info("Found {} records in kafka", records.count());
        for (ConsumerRecord<String, String> record: records) {
            // Validate
            assertEquals("Key matches expected", expectedKey, record.key());
            assertEquals("value matches expected", expectedValue, record.value());
        }
    }
    while (!records.isEmpty());

    // close consumer
    kafkaConsumer.close();
}
 
Developer: salesforce | Project: kafka-junit | Lines: 57 | Source: KafkaTestServerTest.java

Example 9: getUnderReplicatedPartitions

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * Call the kafka api to get the list of under-replicated partitions.
 * When a topic partition loses all of its replicas, it will not have a leader broker.
 * We need to handle this special case in detecting under replicated topic partitions.
 */
public static List<PartitionInfo> getUnderReplicatedPartitions(
    String zkUrl, List<String> topics,
    scala.collection.mutable.Map<String, scala.collection.Map<Object, Seq<Object>>>
        partitionAssignments,
    Map<String, Integer> replicationFactors,
    Map<String, Integer> partitionCounts) {
  List<PartitionInfo> underReplicated = new ArrayList<>();
  KafkaConsumer kafkaConsumer = KafkaUtils.getKafkaConsumer(zkUrl);
  for (String topic : topics) {
    List<PartitionInfo> partitionInfoList = kafkaConsumer.partitionsFor(topic);
    if (partitionInfoList == null) {
      LOG.error("Failed to get partition info for {}", topic);
      continue;
    }
    int numPartitions = partitionCounts.get(topic);

    // when a partition loses all replicas and does not have a live leader,
    // kafkaconsumer.partitionsFor(...) will not return info for that partition.
    // the noLeaderFlag array is used to detect partitions that have no leaders
    boolean[] noLeaderFlags = new boolean[numPartitions];
    for (int i = 0; i < numPartitions; i++) {
      noLeaderFlags[i] = true;
    }
    for (PartitionInfo info : partitionInfoList) {
      if (info.inSyncReplicas().length < info.replicas().length &&
          replicationFactors.get(info.topic()) > info.inSyncReplicas().length) {
        underReplicated.add(info);
      }
      noLeaderFlags[info.partition()] = false;
    }

    // deal with the partitions that do not have leaders
    for (int partitionId = 0; partitionId < numPartitions; partitionId++) {
      if (noLeaderFlags[partitionId]) {
        Seq<Object> seq = partitionAssignments.get(topic).get().get(partitionId).get();
        Node[] nodes = JavaConverters.seqAsJavaList(seq).stream()
            .map(val -> new Node((Integer) val, "", -1)).toArray(Node[]::new);
        PartitionInfo partitionInfo =
            new PartitionInfo(topic, partitionId, null, nodes, new Node[0]);
        underReplicated.add(partitionInfo);
      }
    }
  }
  return underReplicated;
}
 
Developer: pinterest | Project: doctorkafka | Lines: 51 | Source: KafkaClusterManager.java
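The under-replication test buried in Example 9's loop, extracted as a standalone predicate for clarity (the replicationFactor parameter corresponds to replicationFactors.get(topic) in the example):

// A partition is under-replicated when its in-sync replica set is smaller than
// both its assigned replica list and the topic's configured replication factor.
static boolean isUnderReplicated(PartitionInfo info, int replicationFactor) {
    return info.inSyncReplicas().length < info.replicas().length
        && info.inSyncReplicas().length < replicationFactor;
}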

Example 10: getProcessingStartOffsets

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * Find the start offsets for the processing windows. We use Kafka 0.10.1.1, which does not
 * support KafkaConsumer.offsetsForTimes(), so start offsets are located by stepping
 * backwards through each partition and checking message timestamps.
 */
public static Map<TopicPartition, Long> getProcessingStartOffsets(KafkaConsumer kafkaConsumer,
                                                                  String brokerStatsTopic,
                                                                  long startTimestampInMillis) {
  List<PartitionInfo> partitionInfos = kafkaConsumer.partitionsFor(brokerStatsTopic);
  LOG.info("Get partition info for {} : {} partitions", brokerStatsTopic, partitionInfos.size());
  List<TopicPartition> topicPartitions = partitionInfos.stream()
      .map(partitionInfo -> new TopicPartition(partitionInfo.topic(), partitionInfo.partition()))
      .collect(Collectors.toList());

  Map<TopicPartition, Long> endOffsets = kafkaConsumer.endOffsets(topicPartitions);
  Map<TopicPartition, Long> beginningOffsets = kafkaConsumer.beginningOffsets(topicPartitions);
  Map<TopicPartition, Long> offsets = new HashMap<>();

  for (TopicPartition tp : topicPartitions) {
    kafkaConsumer.unsubscribe();
    LOG.info("assigning {} to kafkaconsumer", tp);
    List<TopicPartition> tps = new ArrayList<>();
    tps.add(tp);

    kafkaConsumer.assign(tps);
    long endOffset = endOffsets.get(tp);
    long beginningOffset = beginningOffsets.get(tp);
    long offset = Math.max(endOffsets.get(tp) - 10, beginningOffset);
    ConsumerRecord<byte[], byte[]> record = retrieveOneMessage(kafkaConsumer, tp, offset);
    BrokerStats brokerStats = OperatorUtil.deserializeBrokerStats(record);
    if (brokerStats != null) {
      long timestamp = brokerStats.getTimestamp();
      while (timestamp > startTimestampInMillis) {
        offset = Math.max(beginningOffset, offset - 5000);
        record = retrieveOneMessage(kafkaConsumer, tp, offset);
        brokerStats = OperatorUtil.deserializeBrokerStats(record);
        if (brokerStats == null) {
          break;
        }
        timestamp = brokerStats.getTimestamp();
      }
    }
    offsets.put(tp, offset);
    LOG.info("{}: offset = {}, endOffset = {}, # of to-be-processed messages = {}",
        tp, offset, endOffset, endOffset - offset);
  }
  return offsets;
}
 
Developer: pinterest | Project: doctorkafka | Lines: 48 | Source: ReplicaStatsManager.java
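Example 10's helper retrieveOneMessage is not shown. A plausible sketch, assuming it seeks the single assigned partition to the given offset and polls until one record arrives (a hypothetical reconstruction, not the project's actual helper):

// Hypothetical sketch of the helper used in Example 10.
private static ConsumerRecord<byte[], byte[]> retrieveOneMessage(
        KafkaConsumer<byte[], byte[]> kafkaConsumer, TopicPartition tp, long offset) {
    kafkaConsumer.seek(tp, offset);
    ConsumerRecords<byte[], byte[]> records;
    do {
        records = kafkaConsumer.poll(1000L);
    } while (records.isEmpty());
    return records.iterator().next();
}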


Note: the org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers; copyright of the source code remains with the original authors. Refer to each project's license before distributing or using the code, and do not reproduce without permission.