

Java TestUtils.waitForCondition Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.test.TestUtils.waitForCondition. If you are unsure what TestUtils.waitForCondition does, how to call it, or what working code looks like, the curated examples below should help. You can also browse more usage examples for the enclosing class, org.apache.kafka.test.TestUtils.


The following shows 15 code examples of TestUtils.waitForCondition, sorted by popularity by default.
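Every example below follows the same pattern: wrap a boolean check in a TestCondition and pass it to TestUtils.waitForCondition, which polls the condition until it returns true or until the timeout elapses, at which point it fails the test with an AssertionError carrying the supplied message. As a minimal self-contained sketch of that pattern (the class name, the AtomicInteger counter, and the 30-second timeout are invented here for illustration; only TestUtils and TestCondition come from the Kafka test artifact):

import org.apache.kafka.test.TestCondition;
import org.apache.kafka.test.TestUtils;

import java.util.concurrent.atomic.AtomicInteger;

public class WaitForConditionSketch {

    public static void main(final String[] args) throws InterruptedException {
        final AtomicInteger processed = new AtomicInteger();

        // Simulate background work that eventually satisfies the condition.
        new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 10; i++) {
                    processed.incrementAndGet();
                }
            }
        }).start();

        // Poll until the condition holds; on timeout, waitForCondition
        // throws an AssertionError carrying the message below.
        TestUtils.waitForCondition(new TestCondition() {
            @Override
            public boolean conditionMet() {
                return processed.get() >= 10;
            }
        }, 30000L, "never saw 10 processed records");
    }
}

Example 5 additionally uses a two-argument overload that omits the explicit timeout and falls back to a default maximum wait.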

Example 1: waitUntilMinKeyValueRecordsReceived

import org.apache.kafka.test.TestUtils; // import the class that provides the method
/**
 * Wait until enough data (key-value records) has been consumed.
 *
 * @param consumerConfig     Kafka Consumer configuration
 * @param topic              Topic to consume from
 * @param expectedNumRecords Minimum number of expected records
 * @param waitTime           Upper bound on the waiting time in milliseconds
 * @return All the records consumed (never {@code null})
 * @throws InterruptedException if the wait is interrupted
 * @throws AssertionError       if the given wait time elapses before enough records are received
 */
public static <K, V> List<KeyValue<K, V>> waitUntilMinKeyValueRecordsReceived(final Properties consumerConfig,
                                                                              final String topic,
                                                                              final int expectedNumRecords,
                                                                              final long waitTime) throws InterruptedException {
    final List<KeyValue<K, V>> accumData = new ArrayList<>();
    try (final Consumer<K, V> consumer = createConsumer(consumerConfig)) {
        final TestCondition valuesRead = new TestCondition() {
            @Override
            public boolean conditionMet() {
                final List<KeyValue<K, V>> readData =
                    readKeyValues(topic, consumer, waitTime, expectedNumRecords);
                accumData.addAll(readData);
                return accumData.size() >= expectedNumRecords;
            }
        };
        final String conditionDetails =
            "Expecting " + expectedNumRecords + " records from topic " + topic +
                " while only received " + accumData.size() + ": " + accumData;
        TestUtils.waitForCondition(valuesRead, waitTime, conditionDetails);
    }
    return accumData;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 34, Source: IntegrationTestUtils.java
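A side note on this example (the same applies to Example 2): conditionDetails is an ordinary String built before waitForCondition starts polling, so a timeout message reports accumData.size() as it was at construction time, i.e. 0. As a sketch of one way to defer the message until the wait has actually failed, the last two statements of the try block could be rewritten as follows (a suggested variation, not part of the original source):

try {
    TestUtils.waitForCondition(valuesRead, waitTime,
        "Expecting " + expectedNumRecords + " records from topic " + topic);
} catch (final AssertionError timeout) {
    // Rebuild the failure message after the wait, so it reflects what was actually read.
    throw new AssertionError("Expecting " + expectedNumRecords + " records from topic "
        + topic + " while only received " + accumData.size() + ": " + accumData, timeout);
}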

Example 2: waitUntilMinValuesRecordsReceived

import org.apache.kafka.test.TestUtils; // import the class that provides the method
/**
 * Wait until enough data (value records) has been consumed.
 *
 * @param consumerConfig     Kafka Consumer configuration
 * @param topic              Topic to consume from
 * @param expectedNumRecords Minimum number of expected records
 * @param waitTime           Upper bound on the waiting time in milliseconds
 * @return All the records consumed (never {@code null})
 * @throws InterruptedException if the wait is interrupted
 * @throws AssertionError       if the given wait time elapses before enough records are received
 */
public static <V> List<V> waitUntilMinValuesRecordsReceived(final Properties consumerConfig,
                                                            final String topic,
                                                            final int expectedNumRecords,
                                                            final long waitTime) throws InterruptedException {
    final List<V> accumData = new ArrayList<>();
    try (final Consumer<Object, V> consumer = createConsumer(consumerConfig)) {
        final TestCondition valuesRead = new TestCondition() {
            @Override
            public boolean conditionMet() {
                final List<V> readData =
                    readValues(topic, consumer, waitTime, expectedNumRecords);
                accumData.addAll(readData);
                return accumData.size() >= expectedNumRecords;
            }
        };
        final String conditionDetails =
            "Expecting " + expectedNumRecords + " records from topic " + topic +
                " while only received " + accumData.size() + ": " + accumData;
        TestUtils.waitForCondition(valuesRead, waitTime, conditionDetails);
    }
    return accumData;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 34, Source: IntegrationTestUtils.java

Example 3: waitUntilMetadataIsPropagated

import org.apache.kafka.test.TestUtils; // import the class that provides the method
public static void waitUntilMetadataIsPropagated(final List<KafkaServer> servers,
                                                 final String topic,
                                                 final int partition,
                                                 final long timeout) throws InterruptedException {
    TestUtils.waitForCondition(new TestCondition() {
        @Override
        public boolean conditionMet() {
            for (final KafkaServer server : servers) {
                final MetadataCache metadataCache = server.apis().metadataCache();
                final Option<PartitionStateInfo> partitionInfo =
                        metadataCache.getPartitionInfo(topic, partition);
                if (partitionInfo.isEmpty()) {
                    return false;
                }
                final PartitionStateInfo partitionStateInfo = partitionInfo.get();
                if (!Request.isValidBrokerId(partitionStateInfo.leaderIsrAndControllerEpoch().leaderAndIsr().leader())) {
                    return false;
                }
            }
            return true;
        }
    }, timeout, "metadata for topic=" + topic + " partition=" + partition + " not propagated to all brokers");
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: IntegrationTestUtils.java

Example 4: shouldAddStateStoreToRegexDefinedSource

import org.apache.kafka.test.TestUtils; // import the class that provides the method
@Test
public void shouldAddStateStoreToRegexDefinedSource() throws Exception {

    final ProcessorSupplier<String, String> processorSupplier = new MockProcessorSupplier<>();
    final MockStateStoreSupplier stateStoreSupplier = new MockStateStoreSupplier("testStateStore", false);
    final long thirtySecondTimeout = 30 * 1000;

    final TopologyBuilder builder = new TopologyBuilder()
            .addSource("ingest", Pattern.compile("topic-\\d+"))
            .addProcessor("my-processor", processorSupplier, "ingest")
            .addStateStore(stateStoreSupplier, "my-processor");


    final KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration);
    try {
        streams.start();

        final TestCondition stateStoreNameBoundToSourceTopic = new TestCondition() {
            @Override
            public boolean conditionMet() {
                final Map<String, List<String>> stateStoreToSourceTopic = builder.stateStoreNameToSourceTopics();
                final List<String> topicNamesList = stateStoreToSourceTopic.get("testStateStore");
                return topicNamesList != null && !topicNamesList.isEmpty() && topicNamesList.get(0).equals("topic-1");
            }
        };

        TestUtils.waitForCondition(stateStoreNameBoundToSourceTopic, thirtySecondTimeout, "Did not find topic: [topic-1] connected to state store: [testStateStore]");

    } finally {
        streams.close();
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 33, Source: RegexSourceIntegrationTest.java

Example 5: shouldThrowStreamsExceptionNoResetSpecified

import org.apache.kafka.test.TestUtils; // import the class that provides the method
@Test
public void shouldThrowStreamsExceptionNoResetSpecified() throws Exception {
    Properties props = new Properties();
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");

    Properties localConfig = StreamsTestUtils.getStreamsConfig(
            "testAutoOffsetWithNone",
            CLUSTER.bootstrapServers(),
            STRING_SERDE_CLASSNAME,
            STRING_SERDE_CLASSNAME,
            props);

    final KStreamBuilder builder = new KStreamBuilder();
    final KStream<String, String> exceptionStream = builder.stream(NOOP);

    exceptionStream.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);

    KafkaStreams streams = new KafkaStreams(builder, localConfig);

    final TestingUncaughtExceptionHandler uncaughtExceptionHandler = new TestingUncaughtExceptionHandler();

    final TestCondition correctExceptionThrownCondition = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return uncaughtExceptionHandler.correctExceptionThrown;
        }
    };

    streams.setUncaughtExceptionHandler(uncaughtExceptionHandler);
    streams.start();
    TestUtils.waitForCondition(correctExceptionThrownCondition, "The expected NoOffsetForPartitionException was never thrown");
    streams.close();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 34, Source: KStreamsFineGrainedAutoResetIntegrationTest.java

Example 6: deleteTopicsAndWait

import org.apache.kafka.test.TestUtils; // import the class that provides the method
/**
 * Deletes multiple topics and blocks until all of them have been deleted.
 *
 * @param timeoutMs the maximum time to wait for the topics to be deleted (does not block if {@code <= 0})
 * @param topics    the names of the topics to delete
 */
public void deleteTopicsAndWait(final long timeoutMs, final String... topics) throws Exception {
    for (final String topic : topics) {
        try {
            brokers[0].deleteTopic(topic);
        } catch (final UnknownTopicOrPartitionException e) {
            // the topic was already deleted or never existed; nothing to do
        }
    }

    if (timeoutMs > 0) {
        TestUtils.waitForCondition(new TopicsDeletedCondition(topics), timeoutMs, "Topics not deleted after " + timeoutMs + " milliseconds.");
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 18, Source: EmbeddedKafkaCluster.java

Example 7: cleanup

import org.apache.kafka.test.TestUtils; // import the class that provides the method
@Before
public void cleanup() throws Exception {
    ++testNo;

    if (adminClient == null) {
        adminClient = AdminClient.createSimplePlaintext(CLUSTER.bootstrapServers());
    }

    // busy wait until the cluster (i.e., the ConsumerGroupCoordinator) is available
    while (true) {
        Thread.sleep(50);

        try {
            TestUtils.waitForCondition(consumerGroupInactive, TIMEOUT_MULTIPLIER * CLEANUP_CONSUMER_TIMEOUT,
                    "Test consumer group active even after waiting " + (TIMEOUT_MULTIPLIER * CLEANUP_CONSUMER_TIMEOUT) + " ms.");
        } catch (final TimeoutException e) {
            continue;
        }
        break;
    }

    prepareInputData();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 24, Source: ResetIntegrationTest.java

Example 8: awaitFirstHeartbeat

import org.apache.kafka.test.TestUtils; // import the class that provides the method
private void awaitFirstHeartbeat(final AtomicBoolean heartbeatReceived) throws Exception {
    mockTime.sleep(HEARTBEAT_INTERVAL_MS);
    TestUtils.waitForCondition(new TestCondition() {
        @Override
        public boolean conditionMet() {
            return heartbeatReceived.get();
        }
    }, 3000, "Should have received a heartbeat request after joining the group");
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 10, Source: AbstractCoordinatorTest.java

Example 9: waitForRequests

import org.apache.kafka.test.TestUtils; // import the class that provides the method
public void waitForRequests(final int minRequests, long maxWaitMs) throws InterruptedException {
    TestUtils.waitForCondition(new TestCondition() {
        @Override
        public boolean conditionMet() {
            return requests.size() >= minRequests;
        }
    }, maxWaitMs, "Expected requests have not been sent");
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 9, Source: MockClient.java

Example 10: testClose

import org.apache.kafka.test.TestUtils; // import the class that provides the method
private void testClose(SecurityProtocol securityProtocol, ChannelBuilder clientChannelBuilder) throws Exception {
    String node = "0";
    server = createEchoServer(securityProtocol);
    clientChannelBuilder.configure(sslClientConfigs);
    this.selector = new Selector(5000, new Metrics(), new MockTime(), "MetricGroup", clientChannelBuilder);
    InetSocketAddress addr = new InetSocketAddress("localhost", server.port());
    selector.connect(node, addr, BUFFER_SIZE, BUFFER_SIZE);

    NetworkTestUtils.waitForChannelReady(selector, node);

    final ByteArrayOutputStream bytesOut = new ByteArrayOutputStream();
    server.outputChannel(Channels.newChannel(bytesOut));
    server.selector().muteAll();
    byte[] message = TestUtils.randomString(100).getBytes();
    int count = 20;
    final int totalSendSize = count * (message.length + 4);
    for (int i = 0; i < count; i++) {
        selector.send(new NetworkSend(node, ByteBuffer.wrap(message)));
        do {
            selector.poll(0L);
        } while (selector.completedSends().isEmpty());
    }
    server.selector().unmuteAll();
    selector.close(node);
    TestUtils.waitForCondition(new TestCondition() {
        @Override
        public boolean conditionMet() {
            return bytesOut.toByteArray().length == totalSendSize;
        }
    }, 5000, "All requests sent were not processed");
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 32, Source: SslTransportLayerTest.java

Example 11: testRegexMatchesTopicsAWhenCreated

import org.apache.kafka.test.TestUtils; // import the class that provides the method
@Test
public void testRegexMatchesTopicsAWhenCreated() throws Exception {

    final Serde<String> stringSerde = Serdes.String();
    final List<String> expectedFirstAssignment = Arrays.asList("TEST-TOPIC-1");
    final List<String> expectedSecondAssignment = Arrays.asList("TEST-TOPIC-1", "TEST-TOPIC-2");

    final StreamsConfig streamsConfig = new StreamsConfig(streamsConfiguration);

    CLUSTER.createTopic("TEST-TOPIC-1");

    final KStreamBuilder builder = new KStreamBuilder();

    final KStream<String, String> pattern1Stream = builder.stream(Pattern.compile("TEST-TOPIC-\\d"));

    pattern1Stream.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);

    final KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration);

    final Field streamThreadsField = streams.getClass().getDeclaredField("threads");
    streamThreadsField.setAccessible(true);
    final StreamThread[] streamThreads = (StreamThread[]) streamThreadsField.get(streams);
    final StreamThread originalThread = streamThreads[0];

    final TestStreamThread testStreamThread = new TestStreamThread(builder, streamsConfig,
        new DefaultKafkaClientSupplier(),
        originalThread.applicationId, originalThread.clientId, originalThread.processId, new Metrics(), Time.SYSTEM);

    final TestCondition oneTopicAdded = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return testStreamThread.assignedTopicPartitions.equals(expectedFirstAssignment);
        }
    };

    streamThreads[0] = testStreamThread;
    streams.start();

    TestUtils.waitForCondition(oneTopicAdded, STREAM_TASKS_NOT_UPDATED);

    CLUSTER.createTopic("TEST-TOPIC-2");

    final TestCondition secondTopicAdded = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return testStreamThread.assignedTopicPartitions.equals(expectedSecondAssignment);
        }
    };

    TestUtils.waitForCondition(secondTopicAdded, STREAM_TASKS_NOT_UPDATED);

    streams.close();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 54, Source: RegexSourceIntegrationTest.java

Example 12: testRegexMatchesTopicsAWhenDeleted

import org.apache.kafka.test.TestUtils; // import the class that provides the method
@Test
public void testRegexMatchesTopicsAWhenDeleted() throws Exception {

    final Serde<String> stringSerde = Serdes.String();
    final List<String> expectedFirstAssignment = Arrays.asList("TEST-TOPIC-A", "TEST-TOPIC-B");
    final List<String> expectedSecondAssignment = Arrays.asList("TEST-TOPIC-B");

    final StreamsConfig streamsConfig = new StreamsConfig(streamsConfiguration);

    CLUSTER.createTopics("TEST-TOPIC-A", "TEST-TOPIC-B");

    final KStreamBuilder builder = new KStreamBuilder();

    final KStream<String, String> pattern1Stream = builder.stream(Pattern.compile("TEST-TOPIC-[A-Z]"));

    pattern1Stream.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);

    final KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration);

    final Field streamThreadsField = streams.getClass().getDeclaredField("threads");
    streamThreadsField.setAccessible(true);
    final StreamThread[] streamThreads = (StreamThread[]) streamThreadsField.get(streams);
    final StreamThread originalThread = streamThreads[0];

    final TestStreamThread testStreamThread = new TestStreamThread(builder, streamsConfig,
        new DefaultKafkaClientSupplier(),
        originalThread.applicationId, originalThread.clientId, originalThread.processId, new Metrics(), Time.SYSTEM);

    streamThreads[0] = testStreamThread;

    final TestCondition bothTopicsAdded = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return testStreamThread.assignedTopicPartitions.equals(expectedFirstAssignment);
        }
    };
    streams.start();

    TestUtils.waitForCondition(bothTopicsAdded, STREAM_TASKS_NOT_UPDATED);

    CLUSTER.deleteTopic("TEST-TOPIC-A");

    final TestCondition oneTopicRemoved = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return testStreamThread.assignedTopicPartitions.equals(expectedSecondAssignment);
        }
    };

    TestUtils.waitForCondition(oneTopicRemoved, STREAM_TASKS_NOT_UPDATED);

    streams.close();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 54, Source: RegexSourceIntegrationTest.java

Example 13: testMultipleConsumersCanReadFromPartitionedTopic

import org.apache.kafka.test.TestUtils; // import the class that provides the method
@Test
public void testMultipleConsumersCanReadFromPartitionedTopic() throws Exception {

    final Serde<String> stringSerde = Serdes.String();
    final KStreamBuilder builderLeader = new KStreamBuilder();
    final KStreamBuilder builderFollower = new KStreamBuilder();
    final List<String> expectedAssignment = Arrays.asList(PARTITIONED_TOPIC_1,  PARTITIONED_TOPIC_2);

    final KStream<String, String> partitionedStreamLeader = builderLeader.stream(Pattern.compile("partitioned-\\d"));
    final KStream<String, String> partitionedStreamFollower = builderFollower.stream(Pattern.compile("partitioned-\\d"));


    partitionedStreamLeader.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);
    partitionedStreamFollower.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);

    final KafkaStreams partitionedStreamsLeader  = new KafkaStreams(builderLeader, streamsConfiguration);
    final KafkaStreams partitionedStreamsFollower  = new KafkaStreams(builderFollower, streamsConfiguration);

    final StreamsConfig streamsConfig = new StreamsConfig(streamsConfiguration);


    final Field leaderStreamThreadsField = partitionedStreamsLeader.getClass().getDeclaredField("threads");
    leaderStreamThreadsField.setAccessible(true);
    final StreamThread[] leaderStreamThreads = (StreamThread[]) leaderStreamThreadsField.get(partitionedStreamsLeader);
    final StreamThread originalLeaderThread = leaderStreamThreads[0];

    final TestStreamThread leaderTestStreamThread = new TestStreamThread(builderLeader, streamsConfig,
            new DefaultKafkaClientSupplier(),
            originalLeaderThread.applicationId, originalLeaderThread.clientId, originalLeaderThread.processId, new Metrics(), Time.SYSTEM);

    leaderStreamThreads[0] = leaderTestStreamThread;

    final TestCondition bothTopicsAddedToLeader = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return leaderTestStreamThread.assignedTopicPartitions.equals(expectedAssignment);
        }
    };

    final Field followerStreamThreadsField = partitionedStreamsFollower.getClass().getDeclaredField("threads");
    followerStreamThreadsField.setAccessible(true);
    final StreamThread[] followerStreamThreads = (StreamThread[]) followerStreamThreadsField.get(partitionedStreamsFollower);
    final StreamThread originalFollowerThread = followerStreamThreads[0];

    final TestStreamThread followerTestStreamThread = new TestStreamThread(builderFollower, streamsConfig,
            new DefaultKafkaClientSupplier(),
            originalFollowerThread.applicationId, originalFollowerThread.clientId, originalFollowerThread.processId, new Metrics(), Time.SYSTEM);

    followerStreamThreads[0] = followerTestStreamThread;


    final TestCondition bothTopicsAddedToFollower = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return followerTestStreamThread.assignedTopicPartitions.equals(expectedAssignment);
        }
    };

    partitionedStreamsLeader.start();
    TestUtils.waitForCondition(bothTopicsAddedToLeader, "Topics never assigned to leader stream");


    partitionedStreamsFollower.start();
    TestUtils.waitForCondition(bothTopicsAddedToFollower, "Topics never assigned to follower stream");

    partitionedStreamsLeader.close();
    partitionedStreamsFollower.close();

}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 72, Source: RegexSourceIntegrationTest.java

Example 14: shouldKStreamGlobalKTableLeftJoin

import org.apache.kafka.test.TestUtils; // import the class that provides the method
@Test
public void shouldKStreamGlobalKTableLeftJoin() throws Exception {
    final KStream<String, String> streamTableJoin = stream.leftJoin(globalTable, keyMapper, joiner);
    streamTableJoin.foreach(foreachAction);
    produceInitialGlobalTableValues();
    startStreams();
    produceTopicValues(inputStream);

    final Map<String, String> expected = new HashMap<>();
    expected.put("a", "1+A");
    expected.put("b", "2+B");
    expected.put("c", "3+C");
    expected.put("d", "4+D");
    expected.put("e", "5+null");

    TestUtils.waitForCondition(new TestCondition() {
        @Override
        public boolean conditionMet() {
            return results.equals(expected);
        }
    }, 30000L, "waiting for initial values");


    produceGlobalTableValues();

    final ReadOnlyKeyValueStore<Long, String> replicatedStore = kafkaStreams.store(globalStore, QueryableStoreTypes.<Long, String>keyValueStore());

    TestUtils.waitForCondition(new TestCondition() {
        @Override
        public boolean conditionMet() {
            return "J".equals(replicatedStore.get(5L));
        }
    }, 30000, "waiting for data in replicated store");
    produceTopicValues(inputStream);

    expected.put("a", "1+F");
    expected.put("b", "2+G");
    expected.put("c", "3+H");
    expected.put("d", "4+I");
    expected.put("e", "5+J");

    TestUtils.waitForCondition(new TestCondition() {
        @Override
        public boolean conditionMet() {
            return results.equals(expected);
        }
    }, 30000L, "waiting for final values");
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 49, Source: GlobalKTableIntegrationTest.java

Example 15: shouldKStreamGlobalKTableJoin

import org.apache.kafka.test.TestUtils; // import the class that provides the method
@Test
public void shouldKStreamGlobalKTableJoin() throws Exception {
    final KStream<String, String> streamTableJoin = stream.join(globalTable, keyMapper, joiner);
    streamTableJoin.foreach(foreachAction);
    produceInitialGlobalTableValues();
    startStreams();
    produceTopicValues(inputStream);

    final Map<String, String> expected = new HashMap<>();
    expected.put("a", "1+A");
    expected.put("b", "2+B");
    expected.put("c", "3+C");
    expected.put("d", "4+D");

    TestUtils.waitForCondition(new TestCondition() {
        @Override
        public boolean conditionMet() {
            return results.equals(expected);
        }
    }, 30000L, "waiting for initial values");


    produceGlobalTableValues();

    final ReadOnlyKeyValueStore<Long, String> replicatedStore = kafkaStreams.store(globalStore, QueryableStoreTypes.<Long, String>keyValueStore());

    TestUtils.waitForCondition(new TestCondition() {
        @Override
        public boolean conditionMet() {
            return "J".equals(replicatedStore.get(5L));
        }
    }, 30000, "waiting for data in replicated store");

    produceTopicValues(inputStream);

    expected.put("a", "1+F");
    expected.put("b", "2+G");
    expected.put("c", "3+H");
    expected.put("d", "4+I");
    expected.put("e", "5+J");

    TestUtils.waitForCondition(new TestCondition() {
        @Override
        public boolean conditionMet() {
            return results.equals(expected);
        }
    }, 30000L, "waiting for final values");
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 49, Source: GlobalKTableIntegrationTest.java


Note: The org.apache.kafka.test.TestUtils.waitForCondition method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by the community; copyright belongs to the original authors. When redistributing or using them, please observe the corresponding project's License. Do not reproduce without permission.