

Java KafkaConsumer.partitionsFor Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor. If you are wondering what KafkaConsumer.partitionsFor does, how to call it, or what real-world usage looks like, the curated examples below may help. You can also explore further examples for the containing class, org.apache.kafka.clients.consumer.KafkaConsumer.


Below are 10 code examples of KafkaConsumer.partitionsFor, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code samples.
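Most of the examples below share one pattern: call `partitionsFor` to enumerate a topic's partitions, map each `PartitionInfo` to a `TopicPartition`, then `assign`. The mapping step itself needs no broker; here is a minimal, stdlib-only sketch where `PartitionInfoStub` is a hypothetical stand-in for Kafka's `PartitionInfo`, not the real API:

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionMapping {
    /** Hypothetical stand-in for org.apache.kafka.common.PartitionInfo. */
    static class PartitionInfoStub {
        final String topic;
        final int partition;
        PartitionInfoStub(String topic, int partition) {
            this.topic = topic;
            this.partition = partition;
        }
    }

    /** Map partition metadata to "topic-partition" keys, mirroring the PartitionInfo -> TopicPartition step. */
    static List<String> toTopicPartitions(List<PartitionInfoStub> infos) {
        List<String> result = new ArrayList<>();
        for (PartitionInfoStub info : infos) {
            result.add(info.topic + "-" + info.partition);
        }
        return result;
    }
}
```

With a real consumer, the loop body would build `new TopicPartition(info.topic(), info.partition())` instead of a string key, exactly as the examples below do.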

Example 1: createConsumerAndSubscribe

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * Create a new KafkaConsumer based on the passed in ClientConfig, and subscribe to the appropriate
 * partitions.
 */
public KafkaConsumer createConsumerAndSubscribe(final ClientConfig clientConfig) {
    final KafkaConsumer kafkaConsumer = createConsumer(clientConfig);

    // Determine which partitions to subscribe to, for now do all
    final List<PartitionInfo> partitionInfos = kafkaConsumer.partitionsFor(clientConfig.getTopicConfig().getTopicName());

    // Pull out partitions, convert to topic partitions
    final Collection<TopicPartition> topicPartitions = new ArrayList<>();
    for (final PartitionInfo partitionInfo: partitionInfos) {
        // Skip filtered partitions
        if (!clientConfig.isPartitionFiltered(partitionInfo.partition())) {
            topicPartitions.add(new TopicPartition(partitionInfo.topic(), partitionInfo.partition()));
        }
    }

    // Assign them.
    kafkaConsumer.assign(topicPartitions);

    // Return the kafka consumer.
    return kafkaConsumer;
}
 
Developer ID: SourceLabOrg, Project: kafka-webview, Lines: 26, Source: KafkaConsumerFactory.java

Example 2: consumeAllRecordsFromTopic

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * This will consume all records from all partitions on the given topic.
 * @param topic Topic to consume from.
 * @return List of ConsumerRecords consumed.
 */
public List<ConsumerRecord<byte[], byte[]>> consumeAllRecordsFromTopic(final String topic) {
    // Connect to broker to determine what partitions are available.
    KafkaConsumer<byte[], byte[]> kafkaConsumer = kafkaTestServer.getKafkaConsumer(
        ByteArrayDeserializer.class,
        ByteArrayDeserializer.class
    );

    final List<Integer> partitionIds = new ArrayList<>();
    for (PartitionInfo partitionInfo: kafkaConsumer.partitionsFor(topic)) {
        partitionIds.add(partitionInfo.partition());
    }
    kafkaConsumer.close();

    return consumeAllRecordsFromTopic(topic, partitionIds);
}
 
Developer ID: salesforce, Project: kafka-junit, Lines: 21, Source: KafkaTestUtils.java

Example 3: getKafkaOffsets

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private Map<TopicPartition, OffsetAndMetadata> getKafkaOffsets(
    KafkaConsumer<String, byte[]> client, String topicStr) {
  Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
  List<PartitionInfo> partitions = client.partitionsFor(topicStr);
  for (PartitionInfo partition : partitions) {
    TopicPartition key = new TopicPartition(topicStr, partition.partition());
    OffsetAndMetadata offsetAndMetadata = client.committed(key);
    if (offsetAndMetadata != null) {
      offsets.put(key, offsetAndMetadata);
    }
  }
  return offsets;
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 14, Source: KafkaSource.java

Example 4: getKafkaOffsets

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private Map<TopicPartition, OffsetAndMetadata> getKafkaOffsets(
    KafkaConsumer<String, byte[]> client) {
  Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
  List<PartitionInfo> partitions = client.partitionsFor(topicStr);
  for (PartitionInfo partition : partitions) {
    TopicPartition key = new TopicPartition(topicStr, partition.partition());
    OffsetAndMetadata offsetAndMetadata = client.committed(key);
    if (offsetAndMetadata != null) {
      offsets.put(key, offsetAndMetadata);
    }
  }
  return offsets;
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 14, Source: KafkaChannel.java
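Examples 3 and 4 share one detail worth noting: `committed(key)` returns null for a partition with no committed offset, and those entries are skipped rather than stored. That null-filtering step can be sketched broker-free with a stdlib map and a lookup function (names here are illustrative, not the Kafka API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class CommittedOffsets {
    /** Keep only partitions that actually have a committed offset (non-null lookup result). */
    static Map<Integer, Long> collectCommitted(Iterable<Integer> partitions,
                                               Function<Integer, Long> committedLookup) {
        Map<Integer, Long> offsets = new HashMap<>();
        for (int partition : partitions) {
            Long offset = committedLookup.apply(partition);
            if (offset != null) {
                offsets.put(partition, offset);
            }
        }
        return offsets;
    }
}
```

In the real code, `committedLookup` corresponds to `client.committed(new TopicPartition(topicStr, partition))`.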

Example 5: getAllPartitions

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private List<TopicPartition> getAllPartitions(KafkaConsumer<?, ?> consumer, String... topics) {
    ArrayList<TopicPartition> partitions = new ArrayList<>();

    for (String topic : topics) {
        for (PartitionInfo info : consumer.partitionsFor(topic)) {
            partitions.add(new TopicPartition(info.topic(), info.partition()));
        }
    }
    return partitions;
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 11, Source: SimpleBenchmark.java

Example 6: getAllPartitions

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private static List<TopicPartition> getAllPartitions(KafkaConsumer<?, ?> consumer, String... topics) {
    ArrayList<TopicPartition> partitions = new ArrayList<>();

    for (String topic : topics) {
        for (PartitionInfo info : consumer.partitionsFor(topic)) {
            partitions.add(new TopicPartition(info.topic(), info.partition()));
        }
    }
    return partitions;
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 11, Source: SmokeTestDriver.java

Example 7: getAllPartitions

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private static List<TopicPartition> getAllPartitions(final KafkaConsumer<?, ?> consumer,
                                                     final String... topics) {
    final ArrayList<TopicPartition> partitions = new ArrayList<>();

    for (final String topic : topics) {
        for (final PartitionInfo info : consumer.partitionsFor(topic)) {
            partitions.add(new TopicPartition(info.topic(), info.partition()));
        }
    }
    return partitions;
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 12, Source: EosTestDriver.java

Example 8: testProducerAndConsumer

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * Test that KafkaServer works as expected!
 *
 * This also serves as a decent example of how to use the producer and consumer.
 */
@Test
public void testProducerAndConsumer() throws Exception {
    final int partitionId = 0;

    // Define our message
    final String expectedKey = "my-key";
    final String expectedValue = "my test message";

    // Define the record we want to produce
    ProducerRecord<String, String> producerRecord = new ProducerRecord<>(topicName, partitionId, expectedKey, expectedValue);

    // Create a new producer
    KafkaProducer<String, String> producer = getKafkaTestServer().getKafkaProducer(StringSerializer.class, StringSerializer.class);

    // Produce it & wait for it to complete.
    Future<RecordMetadata> future = producer.send(producerRecord);
    producer.flush();
    while (!future.isDone()) {
        Thread.sleep(500L);
    }
    logger.info("Produce completed");

    // Close producer!
    producer.close();

    KafkaConsumer<String, String> kafkaConsumer =
        getKafkaTestServer().getKafkaConsumer(StringDeserializer.class, StringDeserializer.class);

    final List<TopicPartition> topicPartitionList = Lists.newArrayList();
    for (final PartitionInfo partitionInfo: kafkaConsumer.partitionsFor(topicName)) {
        topicPartitionList.add(new TopicPartition(partitionInfo.topic(), partitionInfo.partition()));
    }
    kafkaConsumer.assign(topicPartitionList);
    kafkaConsumer.seekToBeginning(topicPartitionList);

    // Pull records from kafka, keep polling until we get nothing back
    ConsumerRecords<String, String> records;
    do {
        records = kafkaConsumer.poll(2000L);
        logger.info("Found {} records in kafka", records.count());
        for (ConsumerRecord<String, String> record: records) {
            // Validate
            assertEquals("Key matches expected", expectedKey, record.key());
            assertEquals("value matches expected", expectedValue, record.value());
        }
    }
    while (!records.isEmpty());

    // close consumer
    kafkaConsumer.close();
}
 
Developer ID: salesforce, Project: kafka-junit, Lines: 57, Source: KafkaTestServerTest.java
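The consume loop in Example 8 keeps polling until an empty batch comes back. That drain pattern is client-agnostic and can be modelled with a queue of batches standing in for successive `poll` results (stdlib only; hypothetical names):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class DrainLoop {
    /** Keep "polling" batches until an empty one is returned, collecting every record seen. */
    static List<String> drain(Deque<List<String>> batches) {
        List<String> all = new ArrayList<>();
        List<String> records;
        do {
            // an exhausted source yields an empty batch, which terminates the loop
            records = batches.isEmpty() ? new ArrayList<>() : batches.poll();
            all.addAll(records);
        } while (!records.isEmpty());
        return all;
    }
}
```

The do/while shape matters: the body runs at least once, so a topic that is already empty still gets one poll before the loop exits, just as in the test above.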

Example 9: getUnderReplicatedPartitions

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * Call the kafka api to get the list of under-replicated partitions.
 * When a topic partition loses all of its replicas, it will not have a leader broker.
 * We need to handle this special case in detecting under replicated topic partitions.
 */
public static List<PartitionInfo> getUnderReplicatedPartitions(
    String zkUrl, List<String> topics,
    scala.collection.mutable.Map<String, scala.collection.Map<Object, Seq<Object>>>
        partitionAssignments,
    Map<String, Integer> replicationFactors,
    Map<String, Integer> partitionCounts) {
  List<PartitionInfo> underReplicated = new ArrayList<>();
  KafkaConsumer kafkaConsumer = KafkaUtils.getKafkaConsumer(zkUrl);
  for (String topic : topics) {
    List<PartitionInfo> partitionInfoList = kafkaConsumer.partitionsFor(topic);
    if (partitionInfoList == null) {
      LOG.error("Failed to get partition info for {}", topic);
      continue;
    }
    int numPartitions = partitionCounts.get(topic);

    // when a partition loses all replicas and does not have a live leader,
    // kafkaconsumer.partitionsFor(...) will not return info for that partition.
    // the noLeaderFlag array is used to detect partitions that have no leaders
    boolean[] noLeaderFlags = new boolean[numPartitions];
    for (int i = 0; i < numPartitions; i++) {
      noLeaderFlags[i] = true;
    }
    for (PartitionInfo info : partitionInfoList) {
      if (info.inSyncReplicas().length < info.replicas().length &&
          replicationFactors.get(info.topic()) > info.inSyncReplicas().length) {
        underReplicated.add(info);
      }
      noLeaderFlags[info.partition()] = false;
    }

    // deal with the partitions that do not have leaders
    for (int partitionId = 0; partitionId < numPartitions; partitionId++) {
      if (noLeaderFlags[partitionId]) {
        Seq<Object> seq = partitionAssignments.get(topic).get().get(partitionId).get();
        Node[] nodes = JavaConverters.seqAsJavaList(seq).stream()
            .map(val -> new Node((Integer) val, "", -1)).toArray(Node[]::new);
        PartitionInfo partitionInfo =
            new PartitionInfo(topic, partitionId, null, nodes, new Node[0]);
        underReplicated.add(partitionInfo);
      }
    }
  }
  return underReplicated;
}
 
Developer ID: pinterest, Project: doctorkafka, Lines: 51, Source: KafkaClusterManager.java
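The detection logic in Example 9 is independent of the Kafka client: a partition is under-replicated when its in-sync replica count is below its replica count (and below the configured replication factor), and a partition is leaderless when `partitionsFor` returned no metadata for its id. A minimal sketch of that bookkeeping (stdlib only; method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class UnderReplicationCheck {
    /** Partition ids in [0, numPartitions) for which no metadata was returned (no live leader). */
    static List<Integer> leaderlessPartitions(int numPartitions, List<Integer> reportedIds) {
        boolean[] seen = new boolean[numPartitions];
        for (int id : reportedIds) {
            seen[id] = true;
        }
        List<Integer> missing = new ArrayList<>();
        for (int id = 0; id < numPartitions; id++) {
            if (!seen[id]) {
                missing.add(id);
            }
        }
        return missing;
    }

    /** Mirrors the ISR check: under-replicated when in-sync replicas < replicas and < replication factor. */
    static boolean isUnderReplicated(int inSyncReplicas, int replicas, int replicationFactor) {
        return inSyncReplicas < replicas && replicationFactor > inSyncReplicas;
    }
}
```

The `seen` array plays the role of the inverted `noLeaderFlags` array in the original: any id never reported by the broker must have lost all its replicas.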

Example 10: getProcessingStartOffsets

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * Find the start offsets for the processing windows. We use Kafka 0.10.1.1, which does not
 * support KafkaConsumer.
 */
public static Map<TopicPartition, Long> getProcessingStartOffsets(KafkaConsumer kafkaConsumer,
                                                                  String brokerStatsTopic,
                                                                  long startTimestampInMillis) {
  List<PartitionInfo> partitionInfos = kafkaConsumer.partitionsFor(brokerStatsTopic);
  LOG.info("Get partition info for {} : {} partitions", brokerStatsTopic, partitionInfos.size());
  List<TopicPartition> topicPartitions = partitionInfos.stream()
      .map(partitionInfo -> new TopicPartition(partitionInfo.topic(), partitionInfo.partition()))
      .collect(Collectors.toList());

  Map<TopicPartition, Long> endOffsets = kafkaConsumer.endOffsets(topicPartitions);
  Map<TopicPartition, Long> beginningOffsets = kafkaConsumer.beginningOffsets(topicPartitions);
  Map<TopicPartition, Long> offsets = new HashMap<>();

  for (TopicPartition tp : topicPartitions) {
    kafkaConsumer.unsubscribe();
    LOG.info("assigning {} to kafkaconsumer", tp);
    List<TopicPartition> tps = new ArrayList<>();
    tps.add(tp);

    kafkaConsumer.assign(tps);
    long endOffset = endOffsets.get(tp);
    long beginningOffset = beginningOffsets.get(tp);
    long offset = Math.max(endOffsets.get(tp) - 10, beginningOffset);
    ConsumerRecord<byte[], byte[]> record = retrieveOneMessage(kafkaConsumer, tp, offset);
    BrokerStats brokerStats = OperatorUtil.deserializeBrokerStats(record);
    if (brokerStats != null) {
      long timestamp = brokerStats.getTimestamp();
      while (timestamp > startTimestampInMillis) {
        offset = Math.max(beginningOffset, offset - 5000);
        record = retrieveOneMessage(kafkaConsumer, tp, offset);
        brokerStats = OperatorUtil.deserializeBrokerStats(record);
        if (brokerStats == null) {
          break;
        }
        timestamp = brokerStats.getTimestamp();
      }
    }
    offsets.put(tp, offset);
    LOG.info("{}: offset = {}, endOffset = {}, # of to-be-processed messages = {}",
        tp, offset, endOffset, endOffset - offset);
  }
  return offsets;
}
 
Developer ID: pinterest, Project: doctorkafka, Lines: 48, Source: ReplicaStatsManager.java
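The backward scan in Example 10 steps the read offset down in fixed chunks of 5 000 until the record timestamp at the current offset is no longer newer than the window start, clamping at the beginning offset. The loop can be sketched against a hypothetical offset-to-timestamp lookup standing in for "fetch one record and read its timestamp" (stdlib only; this is a sketch of the search, not the doctorkafka API):

```java
import java.util.function.LongUnaryOperator;

public class OffsetBackScan {
    /**
     * Walk backwards from startOffset in steps of `step` until the record timestamp
     * at the current offset is <= startTimestampMillis, never going below beginningOffset.
     * `timestampAt` is a stand-in for retrieving one message and extracting its timestamp.
     */
    static long findStartOffset(long beginningOffset, long startOffset,
                                long startTimestampMillis, long step,
                                LongUnaryOperator timestampAt) {
        long offset = startOffset;
        while (timestampAt.applyAsLong(offset) > startTimestampMillis && offset > beginningOffset) {
            offset = Math.max(beginningOffset, offset - step);
        }
        return offset;
    }
}
```

The extra `offset > beginningOffset` guard terminates the loop once the scan is pinned at the beginning of the partition, even if that first record is still newer than the window start.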


Note: the org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are selected from open-source projects contributed by their authors, and copyright remains with the original authors; consult each project's license before using or redistributing the code. Do not reproduce without permission.