

Java KafkaStreams.start Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.streams.KafkaStreams.start. If you are wondering how to use KafkaStreams.start in Java, what it does in practice, or what real-world calls look like, the curated method examples below should help. You can also explore further usage examples of the enclosing class, org.apache.kafka.streams.KafkaStreams.


The following presents 15 code examples of the KafkaStreams.start method, sorted by popularity by default.
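Before the individual examples, here is a minimal, self-contained sketch of the typical lifecycle around KafkaStreams.start using the StreamsBuilder/Topology API (Kafka 1.0+): build a topology, configure the instance, call start(), and close it on JVM shutdown. The application id, broker address, and topic names ("start-example", "localhost:9092", "input-topic", "output-topic") are illustrative placeholders, not values taken from the examples below.

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class StartExample {
    public static void main(String[] args) {
        // Minimal configuration; application id and broker address are placeholders.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "start-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Trivial topology: copy records from an input topic to an output topic.
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("input-topic").to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);

        // start() launches the stream threads and returns immediately;
        // processing continues in the background until close() is called.
        streams.start();

        // Close the instance cleanly when the JVM shuts down.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}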

Example 1: main

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
public static void main(String[] args) throws CertificateException, NoSuchAlgorithmException,
    KeyStoreException, IOException, URISyntaxException {
  Properties streamsConfig = new AggregatorConfig().getProperties();

  final StreamsBuilder builder = new StreamsBuilder();

  final KStream<Windowed<String>, String> words =
      builder.stream(String.format("%swords", HEROKU_KAFKA_PREFIX));

  words
      .groupBy((key, word) -> word)
      .windowedBy(TimeWindows.of(TimeUnit.SECONDS.toMillis(10)))
      .count(Materialized.as("windowed-counts"))
      .toStream()
      .process(PostgresSink::new);

  final KafkaStreams streams = new KafkaStreams(builder.build(), streamsConfig);

  streams.cleanUp();
  streams.start();

  Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
}
 
Author ID: kissaten, Project: kafka-streams-on-heroku, Lines of code: 24, Source: Aggregator.java

Example 2: worker

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
@Override
public ReadOnlyKeyValueStore<Long, byte[]> worker() {
    Properties config = super.configBuilder()//
            .put(StreamsConfig.APPLICATION_ID_CONFIG, MallConstants.ORDER_COMMITED_TOPIC)//
            .put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap)//
            .put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.Long().getClass())//
            .put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArray().getClass())//
            .build();

    StreamsBuilder builder = new StreamsBuilder();
    KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config));
    streams.setUncaughtExceptionHandler((Thread t, Throwable e) -> {
        // log any exception that escapes a stream thread
        log.error(e.getMessage());
    });
    streams.start();

    return this.worker = // k-v query
            streams.store(queryableStoreName, QueryableStoreTypes.<Long, byte[]>keyValueStore());
}
 
Author ID: jiumao-org, Project: wechat-mall, Lines of code: 21, Source: OrderTable.java

Example 3: runGenericBenchmark

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
private void runGenericBenchmark(final KafkaStreams streams, final String nameOfBenchmark, final CountDownLatch latch) {
    streams.start();

    long startTime = System.currentTimeMillis();

    while (latch.getCount() > 0) {
        try {
            latch.await();
        } catch (InterruptedException ex) {
            //ignore
        }
    }
    long endTime = System.currentTimeMillis();
    printResults(nameOfBenchmark, endTime - startTime);

    streams.close();
}
 
Author ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 18, Source: SimpleBenchmark.java

Example 4: main

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
public static void main(String[] args) {

        Properties config = new Properties();
        config.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-starter-app");
        config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        config.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        config.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        KStreamBuilder builder = new KStreamBuilder();

        KStream<String, String> kStream = builder.stream("streams-file-input");
        // do stuff
        kStream.to("streams-wordcount-output");

        KafkaStreams streams = new KafkaStreams(builder, config);
        streams.cleanUp(); // only do this in dev - not in prod
        streams.start();

        // print the topology
        System.out.println(streams.toString());

        // shutdown hook to correctly close the streams application
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));

    }
 
Author ID: kaiwaehner, Project: kafka-streams-machine-learning-examples, Lines of code: 27, Source: StreamsStarterApp.java

Example 5: init

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
@PostConstruct
public void init() {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafka-streams-processor");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
    props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "zookeeper:2181");
    props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class);
    props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://schema-registry:8081");

    KStreamBuilder builder = new KStreamBuilder();

    builder.stream("tweets")
            .map((k, v) -> {
                Tweet tweet = (Tweet) SpecificData.get().deepCopy(Tweet.getClassSchema(), v);
                return new KeyValue<>(tweet.getId(), tweet.getText().toString());
            })
            .to(Serdes.Long(), Serdes.String(), "processed-tweets");

    KafkaStreams streams = new KafkaStreams(builder, props);
    streams.start();
}
 
Author ID: jeqo, Project: talk-kafka-messaging-logs, Lines of code: 22, Source: KafkaTweetProcessor.java

Example 6: test

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
@Test
public void test() throws Exception {
  Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);

  Properties config = new Properties();
  config.put(StreamsConfig.APPLICATION_ID_CONFIG, "stream-app");
  config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, senderProps.get("bootstrap.servers"));
  config.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.Integer().getClass());
  config.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

  Producer<Integer, String> producer = createProducer();
  ProducerRecord<Integer, String> record = new ProducerRecord<>("stream-test", 1, "test");
  producer.send(record);

  final Serde<String> stringSerde = Serdes.String();
  final Serde<Integer> intSerde = Serdes.Integer();

  KStreamBuilder builder = new KStreamBuilder();
  KStream<Integer, String> kStream = builder
      .stream(intSerde, stringSerde, "stream-test");

  kStream.map((key, value) -> new KeyValue<>(key, value + "map")).to("stream-out");

  KafkaStreams streams = new KafkaStreams(builder, new StreamsConfig(config),
      new TracingKafkaClientSupplier(mockTracer));
  streams.start();

  await().atMost(15, TimeUnit.SECONDS).until(reportedSpansSize(), equalTo(3));

  streams.close();
  producer.close();

  List<MockSpan> spans = mockTracer.finishedSpans();
  assertEquals(3, spans.size());
  checkSpans(spans);

  assertNull(mockTracer.activeSpan());
}
 
Author ID: opentracing-contrib, Project: java-kafka-client, Lines of code: 39, Source: TracingKafkaStreamsTest.java

Example 7: main

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
public static void main(String[] args) throws InterruptedException {

        Properties props = new Properties();
        props.put(APPLICATION_ID_CONFIG, "my-stream-processing-application");
        props.put(BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.serializer", JsonPOJOSerializer.class.getName());
        props.put("value.deserializer", JsonPOJODeserializer.class.getName());

        Map<String, Object> serdeProps = new HashMap<>();
        serdeProps.put("JsonPOJOClass", Messung.class);

        final Serializer<Messung> serializer = new JsonPOJOSerializer<>();
        serializer.configure(serdeProps, false);

        final Deserializer<Messung> deserializer = new JsonPOJODeserializer<>();
        deserializer.configure(serdeProps, false);

        final Serde<Messung> serde = Serdes.serdeFrom(serializer, deserializer);

        StreamsConfig config = new StreamsConfig(props);

        KStreamBuilder builder = new KStreamBuilder();

        builder.stream(Serdes.String(), serde, "produktion")
                .filter( (k,v) -> v.type.equals("Biogas"))
                .to(Serdes.String(), serde,"produktion2");

        KafkaStreams streams = new KafkaStreams(builder, config);
        streams.start();
    }
 
Author ID: predic8, Project: apache-kafka-demos, Lines of code: 33, Source: FilterStream.java

Example 8: shouldAddStateStoreToRegexDefinedSource

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
@Test
public void shouldAddStateStoreToRegexDefinedSource() throws Exception {

    final ProcessorSupplier<String, String> processorSupplier = new MockProcessorSupplier<>();
    final MockStateStoreSupplier stateStoreSupplier = new MockStateStoreSupplier("testStateStore", false);
    final long thirtySecondTimeout = 30 * 1000;

    final TopologyBuilder builder = new TopologyBuilder()
            .addSource("ingest", Pattern.compile("topic-\\d+"))
            .addProcessor("my-processor", processorSupplier, "ingest")
            .addStateStore(stateStoreSupplier, "my-processor");


    final KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration);
    try {
        streams.start();

        final TestCondition stateStoreNameBoundToSourceTopic = new TestCondition() {
            @Override
            public boolean conditionMet() {
                final Map<String, List<String>> stateStoreToSourceTopic = builder.stateStoreNameToSourceTopics();
                final List<String> topicNamesList = stateStoreToSourceTopic.get("testStateStore");
                return topicNamesList != null && !topicNamesList.isEmpty() && topicNamesList.get(0).equals("topic-1");
            }
        };

        TestUtils.waitForCondition(stateStoreNameBoundToSourceTopic, thirtySecondTimeout, "Did not find topic: [topic-1] connected to state store: [testStateStore]");

    } finally {
        streams.close();
    }
}
 
Author ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 33, Source: RegexSourceIntegrationTest.java

Example 9: main

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    final StreamsBuilder builder = new StreamsBuilder();

    builder.<String, String>stream("streams-plaintext-input")
           .flatMapValues(value -> Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+")))
           .groupBy((key, value) -> value)
           .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store"))
           .toStream()
           .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));

    final Topology topology = builder.build();
    final KafkaStreams streams = new KafkaStreams(topology, props);
    final CountDownLatch latch = new CountDownLatch(1);

    // attach shutdown handler to catch control-c
    Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
        @Override
        public void run() {
            streams.close();
            latch.countDown();
        }
    });

    try {
        streams.start();
        latch.await();
    } catch (Throwable e) {
        System.exit(1);
    }
    System.exit(0);
}
 
Author ID: smarcu, Project: datastreaming-presentation, Lines of code: 38, Source: WordCount.java

Example 10: main

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount-processor");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    // setting offset reset to earliest so that we can re-run the demo code with the same pre-loaded data
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    TopologyBuilder builder = new TopologyBuilder();

    builder.addSource("Source", "streams-file-input");

    builder.addProcessor("Process", new MyProcessorSupplier(), "Source");
    builder.addStateStore(Stores.create("Counts").withStringKeys().withIntegerValues().inMemory().build(), "Process");

    builder.addSink("Sink", "streams-wordcount-processor-output", "Process");

    KafkaStreams streams = new KafkaStreams(builder, props);
    streams.start();

    // usually the stream application would be running forever,
    // in this example we just let it run for some time and stop since the input data is finite.
    Thread.sleep(5000L);

    streams.close();
}
 
Author ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 30, Source: WordCountProcessorDemo.java

Example 11: shouldBeAbleToPerformMultipleTransactions

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
@Test
public void shouldBeAbleToPerformMultipleTransactions() throws Exception {
    final KStreamBuilder builder = new KStreamBuilder();
    builder.stream(SINGLE_PARTITION_INPUT_TOPIC).to(SINGLE_PARTITION_OUTPUT_TOPIC);

    final KafkaStreams streams = new KafkaStreams(
        builder,
        StreamsTestUtils.getStreamsConfig(
            applicationId,
            CLUSTER.bootstrapServers(),
            Serdes.LongSerde.class.getName(),
            Serdes.LongSerde.class.getName(),
            new Properties() {
                {
                    put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
                }
            }));

    try {
        streams.start();

        final List<KeyValue<Long, Long>> firstBurstOfData = prepareData(0L, 5L, 0L);
        final List<KeyValue<Long, Long>> secondBurstOfData = prepareData(5L, 8L, 0L);

        IntegrationTestUtils.produceKeyValuesSynchronously(
            SINGLE_PARTITION_INPUT_TOPIC,
            firstBurstOfData,
            TestUtils.producerConfig(CLUSTER.bootstrapServers(), LongSerializer.class, LongSerializer.class),
            CLUSTER.time
        );

        final List<KeyValue<Long, Long>> firstCommittedRecords
            = IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(
            TestUtils.consumerConfig(
                CLUSTER.bootstrapServers(),
                CONSUMER_GROUP_ID,
                LongDeserializer.class,
                LongDeserializer.class,
                new Properties() {
                    {
                        put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, IsolationLevel.READ_COMMITTED.name().toLowerCase(Locale.ROOT));
                    }
                }),
            SINGLE_PARTITION_OUTPUT_TOPIC,
            firstBurstOfData.size()
        );

        assertThat(firstCommittedRecords, equalTo(firstBurstOfData));

        IntegrationTestUtils.produceKeyValuesSynchronously(
            SINGLE_PARTITION_INPUT_TOPIC,
            secondBurstOfData,
            TestUtils.producerConfig(CLUSTER.bootstrapServers(), LongSerializer.class, LongSerializer.class),
            CLUSTER.time
        );

        final List<KeyValue<Long, Long>> secondCommittedRecords
            = IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(
            TestUtils.consumerConfig(
                CLUSTER.bootstrapServers(),
                CONSUMER_GROUP_ID,
                LongDeserializer.class,
                LongDeserializer.class,
                new Properties() {
                    {
                        put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, IsolationLevel.READ_COMMITTED.name().toLowerCase(Locale.ROOT));
                    }
                }),
            SINGLE_PARTITION_OUTPUT_TOPIC,
            secondBurstOfData.size()
        );

        assertThat(secondCommittedRecords, equalTo(secondBurstOfData));
    } finally {
        streams.close();
    }
}
 
Author ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 78, Source: EosIntegrationTest.java

Example 12: testMultipleConsumersCanReadFromPartitionedTopic

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
@Test
public void testMultipleConsumersCanReadFromPartitionedTopic() throws Exception {

    final Serde<String> stringSerde = Serdes.String();
    final KStreamBuilder builderLeader = new KStreamBuilder();
    final KStreamBuilder builderFollower = new KStreamBuilder();
    final List<String> expectedAssignment = Arrays.asList(PARTITIONED_TOPIC_1,  PARTITIONED_TOPIC_2);

    final KStream<String, String> partitionedStreamLeader = builderLeader.stream(Pattern.compile("partitioned-\\d"));
    final KStream<String, String> partitionedStreamFollower = builderFollower.stream(Pattern.compile("partitioned-\\d"));


    partitionedStreamLeader.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);
    partitionedStreamFollower.to(stringSerde, stringSerde, DEFAULT_OUTPUT_TOPIC);

    final KafkaStreams partitionedStreamsLeader  = new KafkaStreams(builderLeader, streamsConfiguration);
    final KafkaStreams partitionedStreamsFollower  = new KafkaStreams(builderFollower, streamsConfiguration);

    final StreamsConfig streamsConfig = new StreamsConfig(streamsConfiguration);


    final Field leaderStreamThreadsField = partitionedStreamsLeader.getClass().getDeclaredField("threads");
    leaderStreamThreadsField.setAccessible(true);
    final StreamThread[] leaderStreamThreads = (StreamThread[]) leaderStreamThreadsField.get(partitionedStreamsLeader);
    final StreamThread originalLeaderThread = leaderStreamThreads[0];

    final TestStreamThread leaderTestStreamThread = new TestStreamThread(builderLeader, streamsConfig,
            new DefaultKafkaClientSupplier(),
            originalLeaderThread.applicationId, originalLeaderThread.clientId, originalLeaderThread.processId, new Metrics(), Time.SYSTEM);

    leaderStreamThreads[0] = leaderTestStreamThread;

    final TestCondition bothTopicsAddedToLeader = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return leaderTestStreamThread.assignedTopicPartitions.equals(expectedAssignment);
        }
    };



    final Field followerStreamThreadsField = partitionedStreamsFollower.getClass().getDeclaredField("threads");
    followerStreamThreadsField.setAccessible(true);
    final StreamThread[] followerStreamThreads = (StreamThread[]) followerStreamThreadsField.get(partitionedStreamsFollower);
    final StreamThread originalFollowerThread = followerStreamThreads[0];

    final TestStreamThread followerTestStreamThread = new TestStreamThread(builderFollower, streamsConfig,
            new DefaultKafkaClientSupplier(),
            originalFollowerThread.applicationId, originalFollowerThread.clientId, originalFollowerThread.processId, new Metrics(), Time.SYSTEM);

    followerStreamThreads[0] = followerTestStreamThread;


    final TestCondition bothTopicsAddedToFollower = new TestCondition() {
        @Override
        public boolean conditionMet() {
            return followerTestStreamThread.assignedTopicPartitions.equals(expectedAssignment);
        }
    };

    partitionedStreamsLeader.start();
    TestUtils.waitForCondition(bothTopicsAddedToLeader, "Topics never assigned to leader stream");


    partitionedStreamsFollower.start();
    TestUtils.waitForCondition(bothTopicsAddedToFollower, "Topics never assigned to follower stream");

    partitionedStreamsLeader.close();
    partitionedStreamsFollower.close();

}
 
Author ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 72, Source: RegexSourceIntegrationTest.java

Example 13: shouldNotViolateEosIfOneTaskFails

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
@Test
public void shouldNotViolateEosIfOneTaskFails() throws Exception {
    // this test writes 10 + 5 + 5 records per partition (running with 2 partitions)
    // the app is supposed to copy all 40 records into the output topic
    // the app commits after each 10 records per partition, and thus will have 2*5 uncommitted writes
    //
    // the failure gets injected after 20 committed and 30 uncommitted records have been received
    // -> the failure only kills one thread
    // after failover, we should read 40 committed records (even if 50 records got written)

    final KafkaStreams streams = getKafkaStreams(false, "appDir", 2);
    try {
        streams.start();

        final List<KeyValue<Long, Long>> committedDataBeforeFailure = prepareData(0L, 10L, 0L, 1L);
        final List<KeyValue<Long, Long>> uncommittedDataBeforeFailure = prepareData(10L, 15L, 0L, 1L);

        final List<KeyValue<Long, Long>> dataBeforeFailure = new ArrayList<>();
        dataBeforeFailure.addAll(committedDataBeforeFailure);
        dataBeforeFailure.addAll(uncommittedDataBeforeFailure);

        final List<KeyValue<Long, Long>> dataAfterFailure = prepareData(15L, 20L, 0L, 1L);

        writeInputData(committedDataBeforeFailure);

        TestUtils.waitForCondition(new TestCondition() {
            @Override
            public boolean conditionMet() {
                return commitRequested.get() == 2;
            }
        }, MAX_WAIT_TIME_MS, "StreamTasks did not request commit.");

        writeInputData(uncommittedDataBeforeFailure);

        final List<KeyValue<Long, Long>> uncommittedRecords = readResult(dataBeforeFailure.size(), null);
        final List<KeyValue<Long, Long>> committedRecords = readResult(committedDataBeforeFailure.size(), CONSUMER_GROUP_ID);

        checkResultPerKey(committedRecords, committedDataBeforeFailure);
        checkResultPerKey(uncommittedRecords, dataBeforeFailure);

        errorInjected.set(true);
        writeInputData(dataAfterFailure);

        TestUtils.waitForCondition(new TestCondition() {
            @Override
            public boolean conditionMet() {
                return uncaughtException != null;
            }
        }, MAX_WAIT_TIME_MS, "Should receive uncaught exception from one StreamThread.");

        final List<KeyValue<Long, Long>> allCommittedRecords = readResult(
            committedDataBeforeFailure.size() + uncommittedDataBeforeFailure.size() + dataAfterFailure.size(),
            CONSUMER_GROUP_ID + "_ALL");

        final List<KeyValue<Long, Long>> committedRecordsAfterFailure = readResult(
            uncommittedDataBeforeFailure.size() + dataAfterFailure.size(),
            CONSUMER_GROUP_ID);

        final List<KeyValue<Long, Long>> allExpectedCommittedRecordsAfterRecovery = new ArrayList<>();
        allExpectedCommittedRecordsAfterRecovery.addAll(committedDataBeforeFailure);
        allExpectedCommittedRecordsAfterRecovery.addAll(uncommittedDataBeforeFailure);
        allExpectedCommittedRecordsAfterRecovery.addAll(dataAfterFailure);

        final List<KeyValue<Long, Long>> expectedCommittedRecordsAfterRecovery = new ArrayList<>();
        expectedCommittedRecordsAfterRecovery.addAll(uncommittedDataBeforeFailure);
        expectedCommittedRecordsAfterRecovery.addAll(dataAfterFailure);

        checkResultPerKey(allCommittedRecords, allExpectedCommittedRecordsAfterRecovery);
        checkResultPerKey(committedRecordsAfterFailure, expectedCommittedRecordsAfterRecovery);
    } finally {
        streams.close();
    }
}
 
Author ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 74, Source: EosIntegrationTest.java

Example 14: shouldBeAbleToQueryMapValuesState

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
@Test
public void shouldBeAbleToQueryMapValuesState() throws Exception {
    streamsConfiguration.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    streamsConfiguration.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    final KStreamBuilder builder = new KStreamBuilder();
    final String[] keys = {"hello", "goodbye", "welcome", "go", "kafka"};
    final Set<KeyValue<String, String>> batch1 = new HashSet<>();
    batch1.addAll(Arrays.asList(
        new KeyValue<>(keys[0], "1"),
        new KeyValue<>(keys[1], "1"),
        new KeyValue<>(keys[2], "3"),
        new KeyValue<>(keys[3], "5"),
        new KeyValue<>(keys[4], "2")));

    IntegrationTestUtils.produceKeyValuesSynchronously(
        streamOne,
        batch1,
        TestUtils.producerConfig(
            CLUSTER.bootstrapServers(),
            StringSerializer.class,
            StringSerializer.class,
            new Properties()),
        mockTime);

    final KTable<String, String> t1 = builder.table(streamOne);
    final KTable<String, Long> t2 = t1.mapValues(new ValueMapper<String, Long>() {
        @Override
        public Long apply(final String value) {
            return Long.valueOf(value);
        }
    }, Serdes.Long(), "queryMapValues");
    t2.to(Serdes.String(), Serdes.Long(), outputTopic);

    kafkaStreams = new KafkaStreams(builder, streamsConfiguration);
    kafkaStreams.start();

    waitUntilAtLeastNumRecordProcessed(outputTopic, 1);

    final ReadOnlyKeyValueStore<String, Long>
        myMapStore = kafkaStreams.store("queryMapValues",
        QueryableStoreTypes.<String, Long>keyValueStore());
    for (final KeyValue<String, String> batchEntry : batch1) {
        assertEquals(myMapStore.get(batchEntry.key), Long.valueOf(batchEntry.value));
    }
}
 
Author ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 46, Source: QueryableStateIntegrationTest.java

Example 15: verifyCanQueryState

import org.apache.kafka.streams.KafkaStreams; // import the package/class this method depends on
private void verifyCanQueryState(final int cacheSizeBytes) throws java.util.concurrent.ExecutionException, InterruptedException {
    streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, cacheSizeBytes);
    final KStreamBuilder builder = new KStreamBuilder();
    final String[] keys = {"hello", "goodbye", "welcome", "go", "kafka"};

    final Set<KeyValue<String, String>> batch1 = new TreeSet<>(stringComparator);
    batch1.addAll(Arrays.asList(
        new KeyValue<>(keys[0], "hello"),
        new KeyValue<>(keys[1], "goodbye"),
        new KeyValue<>(keys[2], "welcome"),
        new KeyValue<>(keys[3], "go"),
        new KeyValue<>(keys[4], "kafka")));


    final Set<KeyValue<String, Long>> expectedCount = new TreeSet<>(stringLongComparator);
    for (final String key : keys) {
        expectedCount.add(new KeyValue<>(key, 1L));
    }

    IntegrationTestUtils.produceKeyValuesSynchronously(
            streamOne,
            batch1,
            TestUtils.producerConfig(
            CLUSTER.bootstrapServers(),
            StringSerializer.class,
            StringSerializer.class,
            new Properties()),
            mockTime);

    final KStream<String, String> s1 = builder.stream(streamOne);

    // Non Windowed
    s1.groupByKey().count("my-count").to(Serdes.String(), Serdes.Long(), outputTopic);

    s1.groupByKey().count(TimeWindows.of(WINDOW_SIZE), "windowed-count");
    kafkaStreams = new KafkaStreams(builder, streamsConfiguration);
    kafkaStreams.start();

    waitUntilAtLeastNumRecordProcessed(outputTopic, 1);

    final ReadOnlyKeyValueStore<String, Long>
        myCount = kafkaStreams.store("my-count", QueryableStoreTypes.<String, Long>keyValueStore());

    final ReadOnlyWindowStore<String, Long> windowStore =
        kafkaStreams.store("windowed-count", QueryableStoreTypes.<String, Long>windowStore());
    verifyCanGetByKey(keys,
        expectedCount,
        expectedCount,
        windowStore,
        myCount);

    verifyRangeAndAll(expectedCount, myCount);
}
 
Author ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 54, Source: QueryableStateIntegrationTest.java


Note: The org.apache.kafka.streams.KafkaStreams.start method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by many developers; copyright of the source code remains with the original authors. Please consult the corresponding project's License before distributing or reusing the code, and do not reproduce this article without permission.