

Java Cluster.partitionCountForTopic Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.common.Cluster.partitionCountForTopic. If you are wondering what Cluster.partitionCountForTopic does, how to use it, or where to find examples, the curated code examples below should help. You can also explore further usage examples of the enclosing class, org.apache.kafka.common.Cluster.


The following presents 3 code examples of Cluster.partitionCountForTopic, ordered by popularity.
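Before diving into the examples, here is a minimal sketch of the method itself: given a Cluster snapshot (normally obtained from a client's cached metadata), partitionCountForTopic returns the number of partitions for a topic, or null if the topic is not present in the metadata. The helper class and topic handling below are illustrative assumptions, not part of any of the projects quoted later.

import org.apache.kafka.common.Cluster;

public final class PartitionCountSketch {
    // Returns the partition count for the topic, or a fallback value when the
    // cluster metadata does not (yet) contain the topic.
    public static int partitionCountOrDefault(Cluster cluster, String topic, int fallback) {
        Integer count = cluster.partitionCountForTopic(topic); // null if the topic is unknown
        return count != null ? count : fallback;
    }
}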

Example 1: assign

import org.apache.kafka.common.Cluster; // import of the class the method belongs to
@Override
public Map<String, Assignment> assign(Cluster metadata, Map<String, Subscription> subscriptions) {
    Set<String> allSubscribedTopics = new HashSet<>();
    // collect the full set of subscribed topics from each subscription entry; the per-member userData is not used here
    for (Map.Entry<String, Subscription> subscriptionEntry : subscriptions.entrySet())
        allSubscribedTopics.addAll(subscriptionEntry.getValue().topics());

    Map<String, Integer> partitionsPerTopic = new HashMap<>();
    // look up the partition count for each subscribed topic
    for (String topic : allSubscribedTopics) {
        Integer numPartitions = metadata.partitionCountForTopic(topic);
        if (numPartitions != null && numPartitions > 0)
            partitionsPerTopic.put(topic, numPartitions);
        else
            log.debug("Skipping assignment for topic {} since no metadata is available", topic);
    }
    // the actual assignment is delegated to the concrete subclass
    Map<String, List<TopicPartition>> rawAssignments = assign(partitionsPerTopic, subscriptions);

    // this class maintains no user data, so just wrap the results
    Map<String, Assignment> assignments = new HashMap<>();
    // wrap the raw assignment results into Assignment objects
    for (Map.Entry<String, List<TopicPartition>> assignmentEntry : rawAssignments.entrySet())
        assignments.put(assignmentEntry.getKey(), new Assignment(assignmentEntry.getValue()));
    return assignments;
}
 
Developer: YMCoding, project: kafka-0.11.0.0-src-with-comment, lines: 27, source file: AbstractPartitionAssignor.java
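The call to assign(partitionsPerTopic, subscriptions) above is abstract and left to concrete assignors such as RangeAssignor and RoundRobinAssignor. Purely as an illustrative sketch (not Kafka's actual implementation), a naive subclass that spreads each topic's partitions over its subscribers round-robin could look like this:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.internals.AbstractPartitionAssignor;
import org.apache.kafka.clients.consumer.internals.PartitionAssignor.Subscription;
import org.apache.kafka.common.TopicPartition;

// Hypothetical example only: a simplistic round-robin assignor built on
// AbstractPartitionAssignor; it makes no attempt at fairness across topics.
public class NaiveRoundRobinAssignor extends AbstractPartitionAssignor {
    @Override
    public Map<String, List<TopicPartition>> assign(Map<String, Integer> partitionsPerTopic,
                                                    Map<String, Subscription> subscriptions) {
        Map<String, List<TopicPartition>> assignment = new HashMap<>();
        for (String memberId : subscriptions.keySet())
            assignment.put(memberId, new ArrayList<TopicPartition>());

        for (Map.Entry<String, Integer> entry : partitionsPerTopic.entrySet()) {
            String topic = entry.getKey();
            // members subscribed to this topic
            List<String> subscribers = new ArrayList<>();
            for (Map.Entry<String, Subscription> s : subscriptions.entrySet())
                if (s.getValue().topics().contains(topic))
                    subscribers.add(s.getKey());
            if (subscribers.isEmpty())
                continue;
            // hand out partition p to subscriber p % subscribers.size()
            for (int p = 0; p < entry.getValue(); p++)
                assignment.get(subscribers.get(p % subscribers.size()))
                          .add(new TopicPartition(topic, p));
        }
        return assignment;
    }

    @Override
    public String name() {
        return "naive-roundrobin";
    }
}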

Example 2: waitOnMetadata

import org.apache.kafka.common.Cluster; // import of the class the method belongs to
/**
 * Wait for cluster metadata including partitions for the given topic to be available.
 *
 * @param topic     The topic we want metadata for
 * @param partition A specific partition expected to exist in metadata, or null if there's no preference
 * @param maxWaitMs The maximum time in ms for waiting on the metadata
 * @return The cluster containing topic metadata and the amount of time we waited in ms
 */
// wakes the sender thread so it refreshes the Kafka cluster metadata cached in the Metadata object
private ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long maxWaitMs) throws InterruptedException {
    // add topic to metadata topic list if it is not there already and reset expiry
    metadata.add(topic);
    Cluster cluster = metadata.fetch();

    // check whether the cached metadata already contains a partition count for this topic
    Integer partitionsCount = cluster.partitionCountForTopic(topic);
    // Return cached metadata if we have it, and if the record's partition is either undefined
    // or within the known partition range
    if (partitionsCount != null && (partition == null || partition < partitionsCount))
        return new ClusterAndWaitTime(cluster, 0);

    long begin = time.milliseconds();
    long remainingWaitMs = maxWaitMs;
    long elapsed;
    // Issue metadata requests until we have metadata for the topic or maxWaitTimeMs is exceeded.
    // In case we already have cached metadata for the topic, but the requested partition is greater
    // than expected, issue an update request only once. This is necessary in case the metadata
    // is stale and the number of partitions for this topic has increased in the meantime.
    do {
        log.trace("Requesting metadata update for topic {}.", topic);
        metadata.add(topic);
        // mark the metadata as needing an update and record the current metadata version
        int version = metadata.requestUpdate();
        // wake up the sender thread to perform the update
        sender.wakeup();
        try {
            // block until the metadata version advances past the recorded one or the wait times out
            metadata.awaitUpdate(version, remainingWaitMs);
        } catch (TimeoutException ex) {
            // Rethrow with original maxWaitMs to prevent logging exception with remainingWaitMs
            throw new TimeoutException("Failed to update metadata after " + maxWaitMs + " ms.");
        }
        cluster = metadata.fetch();
        elapsed = time.milliseconds() - begin;
        // check whether the overall wait has timed out
        if (elapsed >= maxWaitMs)
            throw new TimeoutException("Failed to update metadata after " + maxWaitMs + " ms.");
        // check topic authorization
        if (cluster.unauthorizedTopics().contains(topic))
            throw new TopicAuthorizationException(topic);
        remainingWaitMs = maxWaitMs - elapsed;
        partitionsCount = cluster.partitionCountForTopic(topic);
    } while (partitionsCount == null);

    if (partition != null && partition >= partitionsCount) {
        throw new KafkaException(
                String.format("Invalid partition given with record: %d is not in the range [0...%d).", partition, partitionsCount));
    }

    return new ClusterAndWaitTime(cluster, elapsed);
}
 
Developer: YMCoding, project: kafka-0.11.0.0-src-with-comment, lines: 62, source file: KafkaProducer.java
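From the application's point of view this waiting is hidden inside KafkaProducer.send(): the producer passes its max.block.ms setting as the maxWaitMs argument above, so a send to a topic with no cached metadata simply blocks until the partition count becomes known or the timeout fires. A minimal usage sketch, with placeholder broker address and topic name:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerMetadataWaitExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // upper bound on how long send() may block while waitOnMetadata runs
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "10000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // if metadata for "my-topic" is not cached yet, this call blocks inside
            // waitOnMetadata until the partition count is known or max.block.ms elapses
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}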

Example 3: partition

import org.apache.kafka.common.Cluster; // import of the class the method belongs to
/**
 * Compute the partition for the given record.
 *
 * @param topic      The topic name
 * @param key        The key to partition on (or null if no key)
 * @param keyBytes   The serialized key to partition on (or null if no key)
 * @param value      The value to partition on or null
 * @param valueBytes The serialized value to partition on or null
 * @param cluster    The current cluster metadata
 */
@Override
public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster)
{
    int partitionNum = cluster.partitionCountForTopic(topic);
    return partitionByMod((Long) key, partitionNum);
}
 
Developer: dbiir, project: paraflow, lines: 17, source file: DefaultKafkaPartitioner.java
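The partitionByMod helper is not shown in the excerpt above. A plausible implementation, stated only as an assumption about what the original paraflow code intends (map the long key onto [0, partitionNum) via modulo, guarding against negative keys), might be:

// Hypothetical sketch of the helper referenced above; not taken from the paraflow source.
private static int partitionByMod(long key, int partitionNum)
{
    int p = (int) (key % partitionNum);
    return p < 0 ? p + partitionNum : p;
}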


Note: the org.apache.kafka.common.Cluster.partitionCountForTopic examples in this article were collected from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from community-contributed open-source projects; copyright of the source code remains with the original authors, and any distribution or use should follow the corresponding project's license. Do not reproduce without permission.