

Java KafkaStreams.close Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.streams.KafkaStreams.close, gathered from open-source projects. If you are wondering what KafkaStreams.close does, how to call it, or what real-world uses look like, the curated examples below should help. You can also browse further usage examples of the enclosing class, org.apache.kafka.streams.KafkaStreams.


The following 15 code examples of the KafkaStreams.close method are sorted by popularity by default.
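Before turning to the examples, here is a minimal sketch of the close pattern recommended by the Kafka Streams documentation: register a JVM shutdown hook that calls close() and releases a latch, so the topology is shut down cleanly on Ctrl-C as well as on normal termination. It targets the same 0.11 KStreamBuilder API used in the examples below; the application id and topic names are placeholders.

import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class CloseOnShutdownDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "close-on-shutdown-demo"); // placeholder id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        KStreamBuilder builder = new KStreamBuilder();
        builder.stream("demo-input").to("demo-output"); // placeholder topics

        final KafkaStreams streams = new KafkaStreams(builder, props);
        final CountDownLatch latch = new CountDownLatch(1);

        // close the topology when the JVM shuts down, then let main() return
        Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
            @Override
            public void run() {
                streams.close();
                latch.countDown();
            }
        });

        streams.start();
        latch.await(); // blocks until the shutdown hook has run
    }
}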

Example 1: main

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    // setting offset reset to earliest so that we can re-run the demo code with the same pre-loaded data
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    KStreamBuilder builder = new KStreamBuilder();

    builder.stream("streams-file-input").to("streams-pipe-output");

    KafkaStreams streams = new KafkaStreams(builder, props);
    streams.start();

    // usually the stream application would be running forever,
    // in this example we just let it run for some time and stop since the input data is finite.
    Thread.sleep(5000L);

    streams.close();
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 24, Source: PipeDemo.java
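One note on this example: the no-argument close() used above blocks until every stream thread has terminated. Since Kafka 0.10.2, KafkaStreams also offers close(long, TimeUnit), which waits at most the given time and returns false if the threads did not stop in time. A minimal sketch of a bounded shutdown, with illustrative names:

import java.util.concurrent.TimeUnit;
import org.apache.kafka.streams.KafkaStreams;

final class BoundedShutdown {
    // returns true if all stream threads stopped within the timeout
    static boolean closeWithTimeout(final KafkaStreams streams, final long seconds) {
        final boolean stopped = streams.close(seconds, TimeUnit.SECONDS);
        if (!stopped) {
            System.err.println("KafkaStreams did not shut down within " + seconds + "s");
        }
        return stopped;
    }
}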

Example 2: runGenericBenchmark

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
private void runGenericBenchmark(final KafkaStreams streams, final String nameOfBenchmark, final CountDownLatch latch) {
    streams.start();

    long startTime = System.currentTimeMillis();

    while (latch.getCount() > 0) {
        try {
            latch.await();
        } catch (InterruptedException ex) {
            //ignore
        }
    }
    long endTime = System.currentTimeMillis();
    printResults(nameOfBenchmark, endTime - startTime);

    streams.close();
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 18, Source: SimpleBenchmark.java

Example 3: test

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
@Test
public void test() throws Exception {
  Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);

  Properties config = new Properties();
  config.put(StreamsConfig.APPLICATION_ID_CONFIG, "stream-app");
  config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, senderProps.get("bootstrap.servers"));
  config.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.Integer().getClass());
  config.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

  Producer<Integer, String> producer = createProducer();
  ProducerRecord<Integer, String> record = new ProducerRecord<>("stream-test", 1, "test");
  producer.send(record);

  final Serde<String> stringSerde = Serdes.String();
  final Serde<Integer> intSerde = Serdes.Integer();

  KStreamBuilder builder = new KStreamBuilder();
  KStream<Integer, String> kStream = builder
      .stream(intSerde, stringSerde, "stream-test");

  kStream.map((key, value) -> new KeyValue<>(key, value + "map")).to("stream-out");

  KafkaStreams streams = new KafkaStreams(builder, new StreamsConfig(config),
      new TracingKafkaClientSupplier(mockTracer));
  streams.start();

  await().atMost(15, TimeUnit.SECONDS).until(reportedSpansSize(), equalTo(3));

  streams.close();
  producer.close();

  List<MockSpan> spans = mockTracer.finishedSpans();
  assertEquals(3, spans.size());
  checkSpans(spans);

  assertNull(mockTracer.activeSpan());
}
 
Author: opentracing-contrib, Project: java-kafka-client, Lines: 39, Source: TracingKafkaStreamsTest.java

Example 4: main

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount-processor");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    // setting offset reset to earliest so that we can re-run the demo code with the same pre-loaded data
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    TopologyBuilder builder = new TopologyBuilder();

    builder.addSource("Source", "streams-file-input");

    builder.addProcessor("Process", new MyProcessorSupplier(), "Source");
    builder.addStateStore(Stores.create("Counts").withStringKeys().withIntegerValues().inMemory().build(), "Process");

    builder.addSink("Sink", "streams-wordcount-processor-output", "Process");

    KafkaStreams streams = new KafkaStreams(builder, props);
    streams.start();

    // usually the stream application would be running forever,
    // in this example we just let it run for some time and stop since the input data is finite.
    Thread.sleep(5000L);

    streams.close();
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 30, Source: WordCountProcessorDemo.java

Example 5: shouldAddStateStoreToRegexDefinedSource

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
@Test
public void shouldAddStateStoreToRegexDefinedSource() throws Exception {

    final ProcessorSupplier<String, String> processorSupplier = new MockProcessorSupplier<>();
    final MockStateStoreSupplier stateStoreSupplier = new MockStateStoreSupplier("testStateStore", false);
    final long thirtySecondTimeout = 30 * 1000;

    final TopologyBuilder builder = new TopologyBuilder()
            .addSource("ingest", Pattern.compile("topic-\\d+"))
            .addProcessor("my-processor", processorSupplier, "ingest")
            .addStateStore(stateStoreSupplier, "my-processor");


    final KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration);
    try {
        streams.start();

        final TestCondition stateStoreNameBoundToSourceTopic = new TestCondition() {
            @Override
            public boolean conditionMet() {
                final Map<String, List<String>> stateStoreToSourceTopic = builder.stateStoreNameToSourceTopics();
                final List<String> topicNamesList = stateStoreToSourceTopic.get("testStateStore");
                return topicNamesList != null && !topicNamesList.isEmpty() && topicNamesList.get(0).equals("topic-1");
            }
        };

        TestUtils.waitForCondition(stateStoreNameBoundToSourceTopic, thirtySecondTimeout, "Did not find topic: [topic-1] connected to state store: [testStateStore]");

    } finally {
        streams.close();
    }
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 33, Source: RegexSourceIntegrationTest.java

Example 6: shouldThrowStreamsExceptionNoResetSpecified

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
@Test
public void shouldThrowStreamsExceptionNoResetSpecified() throws Exception {
    Properties props = new Properties();
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");

    Properties localConfig = StreamsTestUtils.getStreamsConfig(
            "testAutoOffsetWithNone",
            CLUSTER.bootstrapServers(),
            STRING_SERDE_CLASSNAME,
            STRING_SERDE_CLASSNAME,
            props);

    final KStreamBuilder builder = new KStreamBuilder();
    final KStream<String, String> exceptionStream = builder.stream(NOOP);

    exceptionStream.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);

    KafkaStreams streams = new KafkaStreams(builder, localConfig);

    final TestingUncaughtExceptionHandler uncaughtExceptionHandler = new TestingUncaughtExceptionHandler();

    final TestCondition correctExceptionThrownCondition = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return uncaughtExceptionHandler.correctExceptionThrown;
        }
    };

    streams.setUncaughtExceptionHandler(uncaughtExceptionHandler);
    streams.start();
    TestUtils.waitForCondition(correctExceptionThrownCondition, "The expected NoOffsetForPartitionException was never thrown");
    streams.close();
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 34, Source: KStreamsFineGrainedAutoResetIntegrationTest.java
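The TestingUncaughtExceptionHandler used above is a helper from the Kafka test sources. For reference, a minimal sketch of the same wiring with a plain Thread.UncaughtExceptionHandler lambda; the handler body here is purely illustrative. As in the test, the handler is registered before start(), which is where the API expects it.

import org.apache.kafka.streams.KafkaStreams;

final class FailureAwareStart {
    // registers a simple handler for exceptions that kill a StreamThread, then starts the client
    static void startWithHandler(final KafkaStreams streams) {
        streams.setUncaughtExceptionHandler((thread, throwable) ->
                System.err.println("Stream thread " + thread.getName() + " died: " + throwable));
        streams.start();
    }
}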

Example 7: main

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
public static void main(String[] args) throws Exception {

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        // setting offset reset to earliest so that we can re-run the demo code with the same pre-loaded data
        // Note: To re-run the demo, you need to use the offset reset tool:
        // https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+Application+Reset+Tool
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        // work-around for an issue around timing of creating internal topics
        // Fixed in Kafka 0.10.2.0
        // don't use in large production apps - this increases network load
        // props.put(CommonClientConfigs.METADATA_MAX_AGE_CONFIG, 500);

        KStreamBuilder builder = new KStreamBuilder();

        KStream<String, String> source = builder.stream("wordcount-input");


        final Pattern pattern = Pattern.compile("\\W+");
        KStream<Object, String> counts = source
                .flatMapValues(value -> Arrays.asList(pattern.split(value.toLowerCase())))
                .map((key, value) -> new KeyValue<Object, Object>(value, value))
                .filter((key, value) -> !value.equals("the"))
                .groupByKey()
                .count("CountStore")
                .mapValues(value -> Long.toString(value))
                .toStream();
        counts.to("wordcount-output");

        KafkaStreams streams = new KafkaStreams(builder, props);

        // This is for reset to work. Don't use in production - it causes the app to re-load the state from Kafka on every start
        streams.cleanUp();

        streams.start();

        // usually the stream application would be running forever,
        // in this example we just let it run for some time and stop since the input data is finite.
        Thread.sleep(5000L);

        streams.close();

    }
 
Author: gwenshap, Project: kafka-streams-wordcount, Lines: 46, Source: WordCountExample.java
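A note on the cleanUp() call in this example: it deletes the application's local state directory, and the API only permits it while the instance is not running, i.e. before start() or after close() has returned. A minimal sketch of both valid positions, with placeholder names:

import org.apache.kafka.streams.KafkaStreams;

final class StateCleanup {
    // runs the topology for a fixed time and wipes local state on both sides of the run
    static void runOnce(final KafkaStreams streams, final long runMillis) throws InterruptedException {
        streams.cleanUp();   // allowed: the instance has not been started yet
        streams.start();
        Thread.sleep(runMillis);
        streams.close();
        streams.cleanUp();   // allowed again once close() has returned
    }
}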

Example 8: testRegexMatchesTopicsAWhenCreated

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
@Test
public void testRegexMatchesTopicsAWhenCreated() throws Exception {

    final Serde<String> stringSerde = Serdes.String();
    final List<String> expectedFirstAssignment = Arrays.asList("TEST-TOPIC-1");
    final List<String> expectedSecondAssignment = Arrays.asList("TEST-TOPIC-1", "TEST-TOPIC-2");

    final StreamsConfig streamsConfig = new StreamsConfig(streamsConfiguration);

    CLUSTER.createTopic("TEST-TOPIC-1");

    final KStreamBuilder builder = new KStreamBuilder();

    final KStream<String, String> pattern1Stream = builder.stream(Pattern.compile("TEST-TOPIC-\\d"));

    pattern1Stream.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);

    final KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration);

    final Field streamThreadsField = streams.getClass().getDeclaredField("threads");
    streamThreadsField.setAccessible(true);
    final StreamThread[] streamThreads = (StreamThread[]) streamThreadsField.get(streams);
    final StreamThread originalThread = streamThreads[0];

    final TestStreamThread testStreamThread = new TestStreamThread(builder, streamsConfig,
        new DefaultKafkaClientSupplier(),
        originalThread.applicationId, originalThread.clientId, originalThread.processId, new Metrics(), Time.SYSTEM);

    final TestCondition oneTopicAdded = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return testStreamThread.assignedTopicPartitions.equals(expectedFirstAssignment);
        }
    };

    streamThreads[0] = testStreamThread;
    streams.start();

    TestUtils.waitForCondition(oneTopicAdded, STREAM_TASKS_NOT_UPDATED);

    CLUSTER.createTopic("TEST-TOPIC-2");

    final TestCondition secondTopicAdded = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return testStreamThread.assignedTopicPartitions.equals(expectedSecondAssignment);
        }
    };

    TestUtils.waitForCondition(secondTopicAdded, STREAM_TASKS_NOT_UPDATED);

    streams.close();
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 54, Source: RegexSourceIntegrationTest.java

Example 9: testRegexMatchesTopicsAWhenDeleted

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
@Test
public void testRegexMatchesTopicsAWhenDeleted() throws Exception {

    final Serde<String> stringSerde = Serdes.String();
    final List<String> expectedFirstAssignment = Arrays.asList("TEST-TOPIC-A", "TEST-TOPIC-B");
    final List<String> expectedSecondAssignment = Arrays.asList("TEST-TOPIC-B");

    final StreamsConfig streamsConfig = new StreamsConfig(streamsConfiguration);

    CLUSTER.createTopics("TEST-TOPIC-A", "TEST-TOPIC-B");

    final KStreamBuilder builder = new KStreamBuilder();

    final KStream<String, String> pattern1Stream = builder.stream(Pattern.compile("TEST-TOPIC-[A-Z]"));

    pattern1Stream.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);

    final KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration);

    final Field streamThreadsField = streams.getClass().getDeclaredField("threads");
    streamThreadsField.setAccessible(true);
    final StreamThread[] streamThreads = (StreamThread[]) streamThreadsField.get(streams);
    final StreamThread originalThread = streamThreads[0];

    final TestStreamThread testStreamThread = new TestStreamThread(builder, streamsConfig,
        new DefaultKafkaClientSupplier(),
        originalThread.applicationId, originalThread.clientId, originalThread.processId, new Metrics(), Time.SYSTEM);

    streamThreads[0] = testStreamThread;

    final TestCondition bothTopicsAdded = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return testStreamThread.assignedTopicPartitions.equals(expectedFirstAssignment);
        }
    };
    streams.start();

    TestUtils.waitForCondition(bothTopicsAdded, STREAM_TASKS_NOT_UPDATED);

    CLUSTER.deleteTopic("TEST-TOPIC-A");

    final TestCondition oneTopicRemoved = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return testStreamThread.assignedTopicPartitions.equals(expectedSecondAssignment);
        }
    };

    TestUtils.waitForCondition(oneTopicRemoved, STREAM_TASKS_NOT_UPDATED);

    streams.close();
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 54, Source: RegexSourceIntegrationTest.java

Example 10: testShouldReadFromRegexAndNamedTopics

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
@Test
public void testShouldReadFromRegexAndNamedTopics() throws Exception {

    final String topic1TestMessage = "topic-1 test";
    final String topic2TestMessage = "topic-2 test";
    final String topicATestMessage = "topic-A test";
    final String topicCTestMessage = "topic-C test";
    final String topicYTestMessage = "topic-Y test";
    final String topicZTestMessage = "topic-Z test";


    final Serde<String> stringSerde = Serdes.String();

    final KStreamBuilder builder = new KStreamBuilder();

    final KStream<String, String> pattern1Stream = builder.stream(Pattern.compile("topic-\\d"));
    final KStream<String, String> pattern2Stream = builder.stream(Pattern.compile("topic-[A-D]"));
    final KStream<String, String> namedTopicsStream = builder.stream(TOPIC_Y, TOPIC_Z);

    pattern1Stream.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);
    pattern2Stream.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);
    namedTopicsStream.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);

    final KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration);
    streams.start();

    final Properties producerConfig = TestUtils.producerConfig(CLUSTER.bootstrapServers(), StringSerializer.class, StringSerializer.class);

    IntegrationTestUtils.produceValuesSynchronously(TOPIC_1, Arrays.asList(topic1TestMessage), producerConfig, mockTime);
    IntegrationTestUtils.produceValuesSynchronously(TOPIC_2, Arrays.asList(topic2TestMessage), producerConfig, mockTime);
    IntegrationTestUtils.produceValuesSynchronously(TOPIC_A, Arrays.asList(topicATestMessage), producerConfig, mockTime);
    IntegrationTestUtils.produceValuesSynchronously(TOPIC_C, Arrays.asList(topicCTestMessage), producerConfig, mockTime);
    IntegrationTestUtils.produceValuesSynchronously(TOPIC_Y, Arrays.asList(topicYTestMessage), producerConfig, mockTime);
    IntegrationTestUtils.produceValuesSynchronously(TOPIC_Z, Arrays.asList(topicZTestMessage), producerConfig, mockTime);

    final Properties consumerConfig = TestUtils.consumerConfig(CLUSTER.bootstrapServers(), StringDeserializer.class, StringDeserializer.class);

    final List<String> expectedReceivedValues = Arrays.asList(topicATestMessage, topic1TestMessage, topic2TestMessage, topicCTestMessage, topicYTestMessage, topicZTestMessage);
    final List<KeyValue<String, String>> receivedKeyValues = IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(consumerConfig, DEFAULT_OUTPUT_TOPIC, 6);
    final List<String> actualValues = new ArrayList<>(6);

    for (final KeyValue<String, String> receivedKeyValue : receivedKeyValues) {
        actualValues.add(receivedKeyValue.value);
    }

    streams.close();
    Collections.sort(actualValues);
    Collections.sort(expectedReceivedValues);
    assertThat(actualValues, equalTo(expectedReceivedValues));
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 51, Source: RegexSourceIntegrationTest.java

Example 11: testMultipleConsumersCanReadFromPartitionedTopic

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
@Test
public void testMultipleConsumersCanReadFromPartitionedTopic() throws Exception {

    final Serde<String> stringSerde = Serdes.String();
    final KStreamBuilder builderLeader = new KStreamBuilder();
    final KStreamBuilder builderFollower = new KStreamBuilder();
    final List<String> expectedAssignment = Arrays.asList(PARTITIONED_TOPIC_1,  PARTITIONED_TOPIC_2);

    final KStream<String, String> partitionedStreamLeader = builderLeader.stream(Pattern.compile("partitioned-\\d"));
    final KStream<String, String> partitionedStreamFollower = builderFollower.stream(Pattern.compile("partitioned-\\d"));


    partitionedStreamLeader.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);
    partitionedStreamFollower.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);

    final KafkaStreams partitionedStreamsLeader  = new KafkaStreams(builderLeader, streamsConfiguration);
    final KafkaStreams partitionedStreamsFollower  = new KafkaStreams(builderFollower, streamsConfiguration);

    final StreamsConfig streamsConfig = new StreamsConfig(streamsConfiguration);


    final Field leaderStreamThreadsField = partitionedStreamsLeader.getClass().getDeclaredField("threads");
    leaderStreamThreadsField.setAccessible(true);
    final StreamThread[] leaderStreamThreads = (StreamThread[]) leaderStreamThreadsField.get(partitionedStreamsLeader);
    final StreamThread originalLeaderThread = leaderStreamThreads[0];

    final TestStreamThread leaderTestStreamThread = new TestStreamThread(builderLeader, streamsConfig,
            new DefaultKafkaClientSupplier(),
            originalLeaderThread.applicationId, originalLeaderThread.clientId, originalLeaderThread.processId, new Metrics(), Time.SYSTEM);

    leaderStreamThreads[0] = leaderTestStreamThread;

    final TestCondition bothTopicsAddedToLeader = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return leaderTestStreamThread.assignedTopicPartitions.equals(expectedAssignment);
        }
    };



    final Field followerStreamThreadsField = partitionedStreamsFollower.getClass().getDeclaredField("threads");
    followerStreamThreadsField.setAccessible(true);
    final StreamThread[] followerStreamThreads = (StreamThread[]) followerStreamThreadsField.get(partitionedStreamsFollower);
    final StreamThread originalFollowerThread = followerStreamThreads[0];

    final TestStreamThread followerTestStreamThread = new TestStreamThread(builderFollower, streamsConfig,
            new DefaultKafkaClientSupplier(),
            originalFollowerThread.applicationId, originalFollowerThread.clientId, originalFollowerThread.processId, new Metrics(), Time.SYSTEM);

    followerStreamThreads[0] = followerTestStreamThread;


    final TestCondition bothTopicsAddedToFollower = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return followerTestStreamThread.assignedTopicPartitions.equals(expectedAssignment);
        }
    };

    partitionedStreamsLeader.start();
    TestUtils.waitForCondition(bothTopicsAddedToLeader, "Topics never assigned to leader stream");


    partitionedStreamsFollower.start();
    TestUtils.waitForCondition(bothTopicsAddedToFollower, "Topics never assigned to follower stream");

    partitionedStreamsLeader.close();
    partitionedStreamsFollower.close();

}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 72, Source: RegexSourceIntegrationTest.java

Example 12: shouldOnlyReadForEarliest

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
private void shouldOnlyReadForEarliest(
    final String topicSuffix,
    final String topic1,
    final String topic2,
    final String topicA,
    final String topicC,
    final String topicY,
    final String topicZ,
    final String outputTopic,
    final List<String> expectedReceivedValues) throws Exception {

    final KStreamBuilder builder = new KStreamBuilder();

    final KStream<String, String> pattern1Stream = builder.stream(KStreamBuilder.AutoOffsetReset.EARLIEST, Pattern.compile("topic-\\d" + topicSuffix));
    final KStream<String, String> pattern2Stream = builder.stream(KStreamBuilder.AutoOffsetReset.LATEST, Pattern.compile("topic-[A-D]" + topicSuffix));
    final KStream<String, String> namedTopicsStream = builder.stream(topicY, topicZ);

    pattern1Stream.to(stringSerde, stringSerde, outputTopic);
    pattern2Stream.to(stringSerde, stringSerde, outputTopic);
    namedTopicsStream.to(stringSerde, stringSerde, outputTopic);

    final Properties producerConfig = TestUtils.producerConfig(CLUSTER.bootstrapServers(), StringSerializer.class, StringSerializer.class);

    IntegrationTestUtils.produceValuesSynchronously(topic1, Collections.singletonList(topic1TestMessage), producerConfig, mockTime);
    IntegrationTestUtils.produceValuesSynchronously(topic2, Collections.singletonList(topic2TestMessage), producerConfig, mockTime);
    IntegrationTestUtils.produceValuesSynchronously(topicA, Collections.singletonList(topicATestMessage), producerConfig, mockTime);
    IntegrationTestUtils.produceValuesSynchronously(topicC, Collections.singletonList(topicCTestMessage), producerConfig, mockTime);
    IntegrationTestUtils.produceValuesSynchronously(topicY, Collections.singletonList(topicYTestMessage), producerConfig, mockTime);
    IntegrationTestUtils.produceValuesSynchronously(topicZ, Collections.singletonList(topicZTestMessage), producerConfig, mockTime);

    final Properties consumerConfig = TestUtils.consumerConfig(CLUSTER.bootstrapServers(), StringDeserializer.class, StringDeserializer.class);

    final KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration);
    streams.start();

    final List<KeyValue<String, String>> receivedKeyValues = IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(consumerConfig, outputTopic, expectedReceivedValues.size());
    final List<String> actualValues = new ArrayList<>(expectedReceivedValues.size());

    for (final KeyValue<String, String> receivedKeyValue : receivedKeyValues) {
        actualValues.add(receivedKeyValue.value);
    }

    streams.close();
    Collections.sort(actualValues);
    Collections.sort(expectedReceivedValues);
    assertThat(actualValues, equalTo(expectedReceivedValues));
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 48, Source: KStreamsFineGrainedAutoResetIntegrationTest.java

Example 13: runSimpleCopyTest

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
private void runSimpleCopyTest(final int numberOfRestarts,
                               final String inputTopic,
                               final String throughTopic,
                               final String outputTopic) throws Exception {
    final KStreamBuilder builder = new KStreamBuilder();
    final KStream<Long, Long> input = builder.stream(inputTopic);
    KStream<Long, Long> output = input;
    if (throughTopic != null) {
        output = input.through(throughTopic);
    }
    output.to(outputTopic);

    for (int i = 0; i < numberOfRestarts; ++i) {
        final long factor = i;
        final KafkaStreams streams = new KafkaStreams(
            builder,
            StreamsTestUtils.getStreamsConfig(
                applicationId,
                CLUSTER.bootstrapServers(),
                Serdes.LongSerde.class.getName(),
                Serdes.LongSerde.class.getName(),
                new Properties() {
                    {
                        put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG), 1);
                        put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
                    }
                }));

        try {
            streams.start();

            final List<KeyValue<Long, Long>> inputData = prepareData(factor * 100, factor * 100 + 10L, 0L, 1L);

            IntegrationTestUtils.produceKeyValuesSynchronously(
                inputTopic,
                inputData,
                TestUtils.producerConfig(CLUSTER.bootstrapServers(), LongSerializer.class, LongSerializer.class),
                CLUSTER.time
            );

            final List<KeyValue<Long, Long>> committedRecords
                = IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(
                TestUtils.consumerConfig(
                    CLUSTER.bootstrapServers(),
                    CONSUMER_GROUP_ID,
                    LongDeserializer.class,
                    LongDeserializer.class,
                    new Properties() {
                        {
                            put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, IsolationLevel.READ_COMMITTED.name().toLowerCase(Locale.ROOT));
                        }
                    }),
                inputTopic,
                inputData.size()
            );

            checkResultPerKey(committedRecords, inputData);
        } finally {
            streams.close();
        }
    }
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 63, Source: EosIntegrationTest.java

Example 14: shouldBeAbleToPerformMultipleTransactions

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
@Test
public void shouldBeAbleToPerformMultipleTransactions() throws Exception {
    final KStreamBuilder builder = new KStreamBuilder();
    builder.stream(SINGLE_PARTITION_INPUT_TOPIC).to(SINGLE_PARTITION_OUTPUT_TOPIC);

    final KafkaStreams streams = new KafkaStreams(
        builder,
        StreamsTestUtils.getStreamsConfig(
            applicationId,
            CLUSTER.bootstrapServers(),
            Serdes.LongSerde.class.getName(),
            Serdes.LongSerde.class.getName(),
            new Properties() {
                {
                    put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
                }
            }));

    try {
        streams.start();

        final List<KeyValue<Long, Long>> firstBurstOfData = prepareData(0L, 5L, 0L);
        final List<KeyValue<Long, Long>> secondBurstOfData = prepareData(5L, 8L, 0L);

        IntegrationTestUtils.produceKeyValuesSynchronously(
            SINGLE_PARTITION_INPUT_TOPIC,
            firstBurstOfData,
            TestUtils.producerConfig(CLUSTER.bootstrapServers(), LongSerializer.class, LongSerializer.class),
            CLUSTER.time
        );

        final List<KeyValue<Long, Long>> firstCommittedRecords
            = IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(
            TestUtils.consumerConfig(
                CLUSTER.bootstrapServers(),
                CONSUMER_GROUP_ID,
                LongDeserializer.class,
                LongDeserializer.class,
                new Properties() {
                    {
                        put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, IsolationLevel.READ_COMMITTED.name().toLowerCase(Locale.ROOT));
                    }
                }),
            SINGLE_PARTITION_OUTPUT_TOPIC,
            firstBurstOfData.size()
        );

        assertThat(firstCommittedRecords, equalTo(firstBurstOfData));

        IntegrationTestUtils.produceKeyValuesSynchronously(
            SINGLE_PARTITION_INPUT_TOPIC,
            secondBurstOfData,
            TestUtils.producerConfig(CLUSTER.bootstrapServers(), LongSerializer.class, LongSerializer.class),
            CLUSTER.time
        );

        final List<KeyValue<Long, Long>> secondCommittedRecords
            = IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(
            TestUtils.consumerConfig(
                CLUSTER.bootstrapServers(),
                CONSUMER_GROUP_ID,
                LongDeserializer.class,
                LongDeserializer.class,
                new Properties() {
                    {
                        put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, IsolationLevel.READ_COMMITTED.name().toLowerCase(Locale.ROOT));
                    }
                }),
            SINGLE_PARTITION_OUTPUT_TOPIC,
            secondBurstOfData.size()
        );

        assertThat(secondCommittedRecords, equalTo(secondBurstOfData));
    } finally {
        streams.close();
    }
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 78, Source: EosIntegrationTest.java

Example 15: shouldNotViolateEosIfOneTaskFails

import org.apache.kafka.streams.KafkaStreams; // import the package/class the method depends on
@Test
public void shouldNotViolateEosIfOneTaskFails() throws Exception {
    // this test writes 10 + 5 + 5 records per partition (running with 2 partitions)
    // the app is supposed to copy all 40 records into the output topic
    // the app commits after each 10 records per partition, and thus will have 2*5 uncommitted writes
    //
    // the failure gets injected after 20 committed and 30 uncommitted records have been received
    // -> the failure only kills one thread
    // after failover, we should read 40 committed records (even if 50 records got written)

    final KafkaStreams streams = getKafkaStreams(false, "appDir", 2);
    try {
        streams.start();

        final List<KeyValue<Long, Long>> committedDataBeforeFailure = prepareData(0L, 10L, 0L, 1L);
        final List<KeyValue<Long, Long>> uncommittedDataBeforeFailure = prepareData(10L, 15L, 0L, 1L);

        final List<KeyValue<Long, Long>> dataBeforeFailure = new ArrayList<>();
        dataBeforeFailure.addAll(committedDataBeforeFailure);
        dataBeforeFailure.addAll(uncommittedDataBeforeFailure);

        final List<KeyValue<Long, Long>> dataAfterFailure = prepareData(15L, 20L, 0L, 1L);

        writeInputData(committedDataBeforeFailure);

        TestUtils.waitForCondition(new TestCondition() {
            @Override
            public boolean conditionMet() {
                return commitRequested.get() == 2;
            }
        }, MAX_WAIT_TIME_MS, "StreamsTasks did not request commit.");

        writeInputData(uncommittedDataBeforeFailure);

        final List<KeyValue<Long, Long>> uncommittedRecords = readResult(dataBeforeFailure.size(), null);
        final List<KeyValue<Long, Long>> committedRecords = readResult(committedDataBeforeFailure.size(), CONSUMER_GROUP_ID);

        checkResultPerKey(committedRecords, committedDataBeforeFailure);
        checkResultPerKey(uncommittedRecords, dataBeforeFailure);

        errorInjected.set(true);
        writeInputData(dataAfterFailure);

        TestUtils.waitForCondition(new TestCondition() {
            @Override
            public boolean conditionMet() {
                return uncaughtException != null;
            }
        }, MAX_WAIT_TIME_MS, "Should receive uncaught exception from one StreamThread.");

        final List<KeyValue<Long, Long>> allCommittedRecords = readResult(
            committedDataBeforeFailure.size() + uncommittedDataBeforeFailure.size() + dataAfterFailure.size(),
            CONSUMER_GROUP_ID + "_ALL");

        final List<KeyValue<Long, Long>> committedRecordsAfterFailure = readResult(
            uncommittedDataBeforeFailure.size() + dataAfterFailure.size(),
            CONSUMER_GROUP_ID);

        final List<KeyValue<Long, Long>> allExpectedCommittedRecordsAfterRecovery = new ArrayList<>();
        allExpectedCommittedRecordsAfterRecovery.addAll(committedDataBeforeFailure);
        allExpectedCommittedRecordsAfterRecovery.addAll(uncommittedDataBeforeFailure);
        allExpectedCommittedRecordsAfterRecovery.addAll(dataAfterFailure);

        final List<KeyValue<Long, Long>> expectedCommittedRecordsAfterRecovery = new ArrayList<>();
        expectedCommittedRecordsAfterRecovery.addAll(uncommittedDataBeforeFailure);
        expectedCommittedRecordsAfterRecovery.addAll(dataAfterFailure);

        checkResultPerKey(allCommittedRecords, allExpectedCommittedRecordsAfterRecovery);
        checkResultPerKey(committedRecordsAfterFailure, expectedCommittedRecordsAfterRecovery);
    } finally {
        streams.close();
    }
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 74, Source: EosIntegrationTest.java


Note: The org.apache.kafka.streams.KafkaStreams.close examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are drawn from community-contributed open-source projects; copyright remains with the original authors, and distribution and use are governed by each project's License. Do not reproduce without permission.