

Java MockConsumer.updatePartitions Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.clients.consumer.MockConsumer.updatePartitions. If you are wondering what MockConsumer.updatePartitions does, or how and where to use it, the curated examples below should help. You can also browse further usage examples of the enclosing class, org.apache.kafka.clients.consumer.MockConsumer.


Four code examples of the MockConsumer.updatePartitions method are shown below, ordered by popularity.
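Before walking through the examples, here is a minimal, self-contained sketch of the basic pattern (it is not taken from the Kafka code base; the class name and the topic name "demo-topic" are made up for illustration, and it assumes the 0.11-era API used throughout this page, e.g. poll(long)): updatePartitions registers partition metadata on a MockConsumer so that metadata lookups such as partitionsFor()/listTopics() work without a broker, while updateBeginningOffsets() and addRecord() make the consumer actually pollable.

import java.util.Collections;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.MockConsumer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class MockConsumerUpdatePartitionsSketch {
    public static void main(final String[] args) {
        final MockConsumer<byte[], byte[]> consumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);

        // Register metadata for partition 0 of "demo-topic" so that
        // partitionsFor()/listTopics() can answer without a real broker.
        consumer.updatePartitions("demo-topic",
            Collections.singletonList(new PartitionInfo("demo-topic", 0, null, new Node[0], new Node[0])));
        System.out.println(consumer.partitionsFor("demo-topic"));

        // To actually poll, assign the partition and seed a beginning offset plus a record.
        final TopicPartition tp = new TopicPartition("demo-topic", 0);
        consumer.assign(Collections.singleton(tp));
        consumer.updateBeginningOffsets(Collections.singletonMap(tp, 0L));
        consumer.addRecord(new ConsumerRecord<>("demo-topic", 0, 0L, "key".getBytes(), "value".getBytes()));

        final ConsumerRecords<byte[], byte[]> records = consumer.poll(0);
        System.out.println("polled " + records.count() + " record(s)");
    }
}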

Example 1: shouldRequestPartitionInfoIfItDoesntExist

import org.apache.kafka.clients.consumer.MockConsumer; // import the package/class this method depends on
@SuppressWarnings("unchecked")
@Test
public void shouldRequestPartitionInfoIfItDoesntExist() throws Exception {
    final MockConsumer<byte[], byte[]> consumer = new MockConsumer(OffsetResetStrategy.EARLIEST) {
        @Override
        public Map<String, List<PartitionInfo>> listTopics() {
            // Return no topics, so the changelog reader cannot see the partition
            // via listTopics() and must request its metadata explicitly.
            return Collections.emptyMap();
        }
    };

    consumer.updatePartitions(topicPartition.topic(), Collections.singletonList(partitionInfo));
    final StoreChangelogReader changelogReader = new StoreChangelogReader(consumer, Time.SYSTEM, 5000);
    changelogReader.validatePartitionExists(topicPartition, "store");
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 15, Source file: StoreChangelogReaderTest.java

Example 2: shouldInitializeRestoreConsumerWithOffsetsFromStandbyTasks

import org.apache.kafka.clients.consumer.MockConsumer; // import the package/class this method depends on
@Test
public void shouldInitializeRestoreConsumerWithOffsetsFromStandbyTasks() throws Exception {
    final KStreamBuilder builder = new KStreamBuilder();
    builder.setApplicationId(applicationId);
    builder.stream("t1").groupByKey().count("count-one");
    builder.stream("t2").groupByKey().count("count-two");

    final StreamThread thread = new StreamThread(
        builder,
        config,
        clientSupplier,
        applicationId,
        clientId,
        processId,
        metrics,
        mockTime,
        new StreamsMetadataState(builder, StreamsMetadataState.UNKNOWN_HOST),
        0);

    final MockConsumer<byte[], byte[]> restoreConsumer = clientSupplier.restoreConsumer;
    restoreConsumer.updatePartitions("stream-thread-test-count-one-changelog",
                                     Collections.singletonList(new PartitionInfo("stream-thread-test-count-one-changelog",
                                                                                 0,
                                                                                 null,
                                                                                 new Node[0],
                                                                                 new Node[0])));
    restoreConsumer.updatePartitions("stream-thread-test-count-two-changelog",
                                     Collections.singletonList(new PartitionInfo("stream-thread-test-count-two-changelog",
                                                                                 0,
                                                                                 null,
                                                                                 new Node[0],
                                                                                 new Node[0])));

    final Map<TaskId, Set<TopicPartition>> standbyTasks = new HashMap<>();
    final TopicPartition t1 = new TopicPartition("t1", 0);
    standbyTasks.put(new TaskId(0, 0), Utils.mkSet(t1));

    thread.setPartitionAssignor(new StreamPartitionAssignor() {
        @Override
        Map<TaskId, Set<TopicPartition>> standbyTasks() {
            return standbyTasks;
        }
    });

    thread.rebalanceListener.onPartitionsRevoked(Collections.<TopicPartition>emptyList());
    thread.rebalanceListener.onPartitionsAssigned(Collections.<TopicPartition>emptyList());

    assertThat(restoreConsumer.assignment(), equalTo(Utils.mkSet(new TopicPartition("stream-thread-test-count-one-changelog", 0))));

    // assign an existing standby plus a new one
    standbyTasks.put(new TaskId(1, 0), Utils.mkSet(new TopicPartition("t2", 0)));
    thread.rebalanceListener.onPartitionsRevoked(Collections.<TopicPartition>emptyList());
    thread.rebalanceListener.onPartitionsAssigned(Collections.<TopicPartition>emptyList());

    assertThat(restoreConsumer.assignment(), equalTo(Utils.mkSet(new TopicPartition("stream-thread-test-count-one-changelog", 0),
                                                                 new TopicPartition("stream-thread-test-count-two-changelog", 0))));
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 58, Source file: StreamThreadTest.java

Example 3: shouldCloseSuspendedTasksThatAreNoLongerAssignedToThisStreamThreadBeforeCreatingNewTasks

import org.apache.kafka.clients.consumer.MockConsumer; // import the package/class this method depends on
@Test
public void shouldCloseSuspendedTasksThatAreNoLongerAssignedToThisStreamThreadBeforeCreatingNewTasks() throws Exception {
    final KStreamBuilder builder = new KStreamBuilder();
    builder.setApplicationId(applicationId);
    builder.stream("t1").groupByKey().count("count-one");
    builder.stream("t2").groupByKey().count("count-two");

    final StreamThread thread = new StreamThread(
        builder,
        config,
        clientSupplier,
        applicationId,
        clientId,
        processId,
        metrics,
        mockTime,
        new StreamsMetadataState(builder, StreamsMetadataState.UNKNOWN_HOST),
        0);
    final MockConsumer<byte[], byte[]> restoreConsumer = clientSupplier.restoreConsumer;
    restoreConsumer.updatePartitions("stream-thread-test-count-one-changelog",
                                     Collections.singletonList(new PartitionInfo("stream-thread-test-count-one-changelog",
                                                                                 0,
                                                                                 null,
                                                                                 new Node[0],
                                                                                 new Node[0])));
    restoreConsumer.updatePartitions("stream-thread-test-count-two-changelog",
                                     Collections.singletonList(new PartitionInfo("stream-thread-test-count-two-changelog",
                                                                                 0,
                                                                                 null,
                                                                                 new Node[0],
                                                                                 new Node[0])));


    final HashMap<TopicPartition, Long> offsets = new HashMap<>();
    offsets.put(new TopicPartition("stream-thread-test-count-one-changelog", 0), 0L);
    offsets.put(new TopicPartition("stream-thread-test-count-two-changelog", 0), 0L);
    restoreConsumer.updateEndOffsets(offsets);
    restoreConsumer.updateBeginningOffsets(offsets);

    final Map<TaskId, Set<TopicPartition>> standbyTasks = new HashMap<>();
    final TopicPartition t1 = new TopicPartition("t1", 0);
    standbyTasks.put(new TaskId(0, 0), Utils.mkSet(t1));

    final Map<TaskId, Set<TopicPartition>> activeTasks = new HashMap<>();
    final TopicPartition t2 = new TopicPartition("t2", 0);
    activeTasks.put(new TaskId(1, 0), Utils.mkSet(t2));

    thread.setPartitionAssignor(new StreamPartitionAssignor() {
        @Override
        Map<TaskId, Set<TopicPartition>> standbyTasks() {
            return standbyTasks;
        }

        @Override
        Map<TaskId, Set<TopicPartition>> activeTasks() {
            return activeTasks;
        }
    });

    thread.rebalanceListener.onPartitionsRevoked(Collections.<TopicPartition>emptyList());
    thread.rebalanceListener.onPartitionsAssigned(Utils.mkSet(t2));

    // swap the assignment around and make sure we don't get any exceptions
    standbyTasks.clear();
    activeTasks.clear();
    standbyTasks.put(new TaskId(1, 0), Utils.mkSet(t2));
    activeTasks.put(new TaskId(0, 0), Utils.mkSet(t1));

    thread.rebalanceListener.onPartitionsRevoked(Collections.<TopicPartition>emptyList());
    thread.rebalanceListener.onPartitionsAssigned(Utils.mkSet(t1));
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 72, Source file: StreamThreadTest.java
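Examples 2 and 3 share the same preparation pattern: register the changelog topic's partition metadata with updatePartitions, and, when the partition will actually be read during restoration, seed matching beginning and end offsets. The condensed helper below is only a sketch of that pattern; the class and method names are hypothetical and not part of the Kafka test code.

import java.util.Collections;
import java.util.Map;

import org.apache.kafka.clients.consumer.MockConsumer;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

final class ChangelogMockSetup {

    // Make partition 0 of the changelog topic discoverable and give it
    // beginning/end offsets so a restore consumer can seek and poll it.
    static void prepareChangelog(final MockConsumer<byte[], byte[]> restoreConsumer,
                                 final String changelogTopic) {
        restoreConsumer.updatePartitions(changelogTopic,
            Collections.singletonList(new PartitionInfo(changelogTopic, 0, null, new Node[0], new Node[0])));

        final Map<TopicPartition, Long> offsets =
            Collections.singletonMap(new TopicPartition(changelogTopic, 0), 0L);
        restoreConsumer.updateBeginningOffsets(offsets);
        restoreConsumer.updateEndOffsets(offsets);
    }
}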

Example 4: ProcessorTopologyTestDriver

import org.apache.kafka.clients.consumer.MockConsumer; // import the package/class this method depends on
/**
 * Create a new test driver instance.
 * @param config the stream configuration for the topology
 * @param builder the topology builder that will be used to create the topology instance
 */
public ProcessorTopologyTestDriver(final StreamsConfig config,
                                   final TopologyBuilder builder) {
    topology = builder.setApplicationId(APPLICATION_ID).build(null);
    final ProcessorTopology globalTopology  = builder.buildGlobalStateTopology();

    // Set up the consumer and producer ...
    final Consumer<byte[], byte[]> consumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);
    final Serializer<byte[]> bytesSerializer = new ByteArraySerializer();
    producer = new MockProducer<byte[], byte[]>(true, bytesSerializer, bytesSerializer) {
        @Override
        public List<PartitionInfo> partitionsFor(final String topic) {
            return Collections.singletonList(new PartitionInfo(topic, PARTITION_ID, null, null, null));
        }
    };

    // Identify internal topics for forwarding in process ...
    for (final TopologyBuilder.TopicsInfo topicsInfo : builder.topicGroups().values()) {
        internalTopics.addAll(topicsInfo.repartitionSourceTopics.keySet());
    }

    // Set up all of the topic+partition information and subscribe the consumer to each ...
    for (final String topic : topology.sourceTopics()) {
        final TopicPartition tp = new TopicPartition(topic, PARTITION_ID);
        partitionsByTopic.put(topic, tp);
        offsetsByTopicPartition.put(tp, new AtomicLong());
    }

    consumer.assign(offsetsByTopicPartition.keySet());

    final StateDirectory stateDirectory = new StateDirectory(APPLICATION_ID, TestUtils.tempDirectory().getPath(), Time.SYSTEM);
    final StreamsMetrics streamsMetrics = new MockStreamsMetrics(new Metrics());
    final ThreadCache cache = new ThreadCache("mock", 1024 * 1024, streamsMetrics);

    if (globalTopology != null) {
        final MockConsumer<byte[], byte[]> globalConsumer = createGlobalConsumer();
        for (final String topicName : globalTopology.sourceTopics()) {
            final List<PartitionInfo> partitionInfos = new ArrayList<>();
            partitionInfos.add(new PartitionInfo(topicName, 1, null, null, null));
            globalConsumer.updatePartitions(topicName, partitionInfos);
            final TopicPartition partition = new TopicPartition(topicName, 1);
            globalConsumer.updateEndOffsets(Collections.singletonMap(partition, 0L));
            globalPartitionsByTopic.put(topicName, partition);
            offsetsByTopicPartition.put(partition, new AtomicLong());
        }
        final GlobalStateManagerImpl stateManager = new GlobalStateManagerImpl(globalTopology, globalConsumer, stateDirectory);
        globalStateTask = new GlobalStateUpdateTask(globalTopology,
                                                    new GlobalProcessorContextImpl(config, stateManager, streamsMetrics, cache),
                                                    stateManager
        );
        globalStateTask.initialize();
    }

    if (!partitionsByTopic.isEmpty()) {
        task = new StreamTask(TASK_ID,
                              APPLICATION_ID,
                              partitionsByTopic.values(),
                              topology,
                              consumer,
                              new StoreChangelogReader(
                                  createRestoreConsumer(topology.storeToChangelogTopic()),
                                  Time.SYSTEM,
                                  5000),
                              config,
                              streamsMetrics, stateDirectory,
                              cache,
                              new MockTime(),
                              producer);
    }
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 75, Source file: ProcessorTopologyTestDriver.java


Note: The org.apache.kafka.clients.consumer.MockConsumer.updatePartitions examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective developers, and the copyright of the source code remains with the original authors. Please follow the corresponding project's license when redistributing or using the code, and do not reproduce this article without permission.