

Java KStreamBuilder Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.streams.kstream.KStreamBuilder. If you are wondering what KStreamBuilder is for, how to use it, or what working code looks like, the curated class code examples below may help.


The KStreamBuilder class belongs to the org.apache.kafka.streams.kstream package. The 15 code examples below demonstrate its typical usage, ordered by popularity by default.
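Note: KStreamBuilder is the builder of the original (pre-1.0) Kafka Streams DSL; starting with Kafka 1.0 it is deprecated in favor of org.apache.kafka.streams.StreamsBuilder. Purely as a rough sketch (topic names and config values are placeholders), the pipe topology from Example 1 below would look roughly like this against the newer API:

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class PipeWithStreamsBuilder {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // StreamsBuilder replaces KStreamBuilder in Kafka 1.0+; the topology
        // is built explicitly with builder.build() before starting the app.
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("streams-file-input").to("streams-pipe-output");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}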

Example 1: main

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    // setting offset reset to earliest so that we can re-run the demo code with the same pre-loaded data
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    KStreamBuilder builder = new KStreamBuilder();

    builder.stream("streams-file-input").to("streams-pipe-output");

    KafkaStreams streams = new KafkaStreams(builder, props);
    streams.start();

    // usually the stream application would be running forever,
    // in this example we just let it run for some time and stop since the input data is finite.
    Thread.sleep(5000L);

    streams.close();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 24, Source: PipeDemo.java

Example 2: main

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
public static void main(String[] args) throws InterruptedException {
    Properties props = new Properties();
    props.put(APPLICATION_ID_CONFIG, "my-stream-processing-application");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    StreamsConfig config = new StreamsConfig(props);

    KStreamBuilder builder = new KStreamBuilder();

    builder.stream("produktion")
            .to("produktion2");

    KafkaStreams streams = new KafkaStreams(builder, config);
    streams.start();
}
 
Developer: predic8, Project: apache-kafka-demos, Lines: 21, Source: SimpleStream.java

Example 3: main

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
public static void main(String[] args) throws Exception {
    Properties kafkaStreamProperties = new Properties();
    kafkaStreamProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, "IP-Fraud-Detection");
    kafkaStreamProperties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    kafkaStreamProperties.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "localhost:2181");
    kafkaStreamProperties.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
    kafkaStreamProperties.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

    Serde<String> stringSerde = Serdes.String();

    KStreamBuilder fraudDetectionTopology = new KStreamBuilder();

    KStream<String, String> ipRecords = fraudDetectionTopology.stream(stringSerde, stringSerde, propertyReader.getPropertyValue("topic"));

    KStream<String, String> fraudIpRecords = ipRecords
            .filter((k, v) -> isFraud(v));

    fraudIpRecords.to(propertyReader.getPropertyValue("output_topic"));

    KafkaStreams streamManager = new KafkaStreams(fraudDetectionTopology, kafkaStreamProperties);
    streamManager.start();

    Runtime.getRuntime().addShutdownHook(new Thread(streamManager::close));
}
 
Developer: PacktPublishing, Project: Building-Data-Streaming-Applications-with-Apache-Kafka, Lines: 25, Source: IPFraudKafkaStreamApp.java
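Example 3 depends on an isFraud helper and a propertyReader that are defined elsewhere in the project and are not shown above. Purely as an illustration (hypothetical, not the project's actual logic), such a helper might do a simple prefix lookup against a set of flagged IP ranges:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical helper for illustration only; the real project's rules may differ.
private static final Set<String> fraudIpPrefixes =
        new HashSet<>(Arrays.asList("192.168.", "10.0."));

private static boolean isFraud(String record) {
    // Assumes each record starts with the client IP, followed by a space.
    String ip = record.split(" ")[0];
    return fraudIpPrefixes.stream().anyMatch(ip::startsWith);
}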

Example 4: init

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
@PostConstruct
public void init() {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafka-streams-repo");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
    props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "zookeeper:2181");
    props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class);
    props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://schema-registry:8081");

    KStreamBuilder builder = new KStreamBuilder();
    builder.table(Serdes.Long(), Serdes.String(), "processed-tweets", STORE_NAME);

    streams = new KafkaStreams(builder, props);

    streams.start();
}
 
Developer: jeqo, Project: talk-kafka-messaging-logs, Lines: 17, Source: KafkaTweetRepository.java

Example 5: joinTopology

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
public static KStreamBuilder joinTopology(KStreamBuilder builder) {
    KStream<String, Integer> kStreamA = builder.stream(stringSerde, integerSerde, INPUT_TOPIC_A);
    KStream<String, Integer> kStreamB = builder.stream(stringSerde, integerSerde, INPUT_TOPIC_B);

    KTable<String, Integer> table = kStreamA
        .groupByKey(stringSerde, integerSerde)
        .aggregate(() -> 0, (k, v, t) -> v, integerSerde, STORAGE_NAME);

    kStreamB
        .leftJoin(table, (v1, v2) -> v1 + v2, stringSerde, integerSerde)
        .to(stringSerde, integerSerde, OUTPUT_TOPIC_A);

    kStreamB
        .leftJoin(table, (v1, v2) -> v1 - v2, stringSerde, integerSerde)
        .to(stringSerde, integerSerde, OUTPUT_TOPIC_B);

    return builder;
}
 
Developer: carlosmenezes, Project: mockafka, Lines: 19, Source: TopologyUtil.java
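A hypothetical way to run the topology assembled by joinTopology (the application id, broker address, and surrounding setup below are assumptions, not part of the mockafka project):

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "join-topology-demo");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

// joinTopology wires the two input streams and both output topics onto the builder.
KafkaStreams streams = new KafkaStreams(TopologyUtil.joinTopology(new KStreamBuilder()), props);
streams.start();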

Example 6: merge

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
public static <K, V> KStream<K, V> merge(KStreamBuilder topology, KStream<K, V>[] streams) {
    if (streams == null || streams.length == 0) {
        throw new IllegalArgumentException("Parameter <streams> must not be null or has length zero");
    }

    String name = topology.newName(MERGE_NAME);
    String[] parentNames = new String[streams.length];
    Set<String> allSourceNodes = new HashSet<>();
    boolean requireRepartitioning = false;

    for (int i = 0; i < streams.length; i++) {
        KStreamImpl stream = (KStreamImpl) streams[i];

        parentNames[i] = stream.name;
        requireRepartitioning |= stream.repartitionRequired;
        allSourceNodes.addAll(stream.sourceNodes);
    }

    topology.addProcessor(name, new KStreamPassThrough<>(), parentNames);

    return new KStreamImpl<>(topology, name, allSourceNodes, requireRepartitioning);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 23, Source: KStreamImpl.java

Example 7: shouldNotThrowUnsupportedOperationExceptionWhenInitializingStateStores

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
@Test
public void shouldNotThrowUnsupportedOperationExceptionWhenInitializingStateStores() throws Exception {
    final String changelogName = "test-application-my-store-changelog";
    final List<TopicPartition> partitions = Utils.mkList(new TopicPartition(changelogName, 0));
    consumer.assign(partitions);
    final Map<TopicPartition, OffsetAndMetadata> committedOffsets = new HashMap<>();
    committedOffsets.put(new TopicPartition(changelogName, 0), new OffsetAndMetadata(0L));
    consumer.commitSync(committedOffsets);

    restoreStateConsumer.updatePartitions(changelogName, Utils.mkList(
            new PartitionInfo(changelogName, 0, Node.noNode(), new Node[0], new Node[0])));
    final KStreamBuilder builder = new KStreamBuilder();
    builder.stream("topic").groupByKey().count("my-store");
    final ProcessorTopology topology = builder.setApplicationId(applicationId).build(0);
    StreamsConfig config = createConfig(baseDir);

    new StandbyTask(taskId, applicationId, partitions, topology, consumer, changelogReader, config,
        new MockStreamsMetrics(new Metrics()), stateDirectory);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: StandbyTaskTest.java

Example 8: createCountStream

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
/**
 * Creates a typical word count topology
 */
private KafkaStreams createCountStream(final String inputTopic, final String outputTopic, final Properties streamsConfiguration) {
    final KStreamBuilder builder = new KStreamBuilder();
    final Serde<String> stringSerde = Serdes.String();
    final KStream<String, String> textLines = builder.stream(stringSerde, stringSerde, inputTopic);

    final KGroupedStream<String, String> groupedByWord = textLines
        .flatMapValues(new ValueMapper<String, Iterable<String>>() {
            @Override
            public Iterable<String> apply(final String value) {
                return Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+"));
            }
        })
        .groupBy(MockKeyValueMapper.<String, String>SelectValueMapper());

    // Create a State Store for the all time word count
    groupedByWord.count("word-count-store-" + inputTopic).to(Serdes.String(), Serdes.Long(), outputTopic);

    // Create a Windowed State Store that contains the word count for every 1 minute
    groupedByWord.count(TimeWindows.of(WINDOW_SIZE), "windowed-word-count-store-" + inputTopic);

    return new KafkaStreams(builder, streamsConfiguration);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 26, Source: QueryableStateIntegrationTest.java
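The flatMapValues and groupBy steps in Example 8 use anonymous inner classes plus a test-only MockKeyValueMapper. With Java 8 lambdas the same word count can be sketched roughly as follows (topic and store names are placeholders):

import java.util.Arrays;
import java.util.Locale;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStreamBuilder;

final KStreamBuilder builder = new KStreamBuilder();
final Serde<String> stringSerde = Serdes.String();

builder.stream(stringSerde, stringSerde, "lines-input")
    // split each line into lower-case words
    .flatMapValues(value -> Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+")))
    // re-key each record by the word itself before counting
    .groupBy((key, word) -> word, stringSerde, stringSerde)
    .count("word-count-store")
    .to(Serdes.String(), Serdes.Long(), "word-count-output");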

Example 9: testNotSendingOldValues

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
@Test
public void testNotSendingOldValues() throws Exception {
    final KStreamBuilder builder = new KStreamBuilder();

    final int[] expectedKeys = new int[]{0, 1, 2, 3};

    final KTable<Integer, String> table1;
    final KTable<Integer, String> table2;
    final KTable<Integer, String> joined;
    final MockProcessorSupplier<Integer, String> proc;

    table1 = builder.table(intSerde, stringSerde, topic1, storeName1);
    table2 = builder.table(intSerde, stringSerde, topic2, storeName2);
    joined = table1.join(table2, MockValueJoiner.TOSTRING_JOINER);
    proc = new MockProcessorSupplier<>();
    builder.addProcessor("proc", proc, ((KTableImpl<?, ?, ?>) joined).name);

    doTestSendingOldValues(builder, expectedKeys, table1, table2, proc, joined, false);

}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: KTableKTableJoinTest.java

Example 10: testQueryableNotSendingOldValues

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
@Test
public void testQueryableNotSendingOldValues() throws Exception {
    final KStreamBuilder builder = new KStreamBuilder();

    final int[] expectedKeys = new int[]{0, 1, 2, 3};

    final KTable<Integer, String> table1;
    final KTable<Integer, String> table2;
    final KTable<Integer, String> joined;
    final MockProcessorSupplier<Integer, String> proc;

    table1 = builder.table(intSerde, stringSerde, topic1, storeName1);
    table2 = builder.table(intSerde, stringSerde, topic2, storeName2);
    joined = table1.join(table2, MockValueJoiner.TOSTRING_JOINER, Serdes.String(), "anyQueryableName");
    proc = new MockProcessorSupplier<>();
    builder.addProcessor("proc", proc, ((KTableImpl<?, ?, ?>) joined).name);

    doTestSendingOldValues(builder, expectedKeys, table1, table2, proc, joined, false);

}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: KTableKTableJoinTest.java

Example 11: testSendingOldValues

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
@Test
public void testSendingOldValues() throws Exception {
    final KStreamBuilder builder = new KStreamBuilder();

    final int[] expectedKeys = new int[]{0, 1, 2, 3};

    final KTable<Integer, String> table1;
    final KTable<Integer, String> table2;
    final KTable<Integer, String> joined;
    final MockProcessorSupplier<Integer, String> proc;

    table1 = builder.table(intSerde, stringSerde, topic1, storeName1);
    table2 = builder.table(intSerde, stringSerde, topic2, storeName2);
    joined = table1.join(table2, MockValueJoiner.TOSTRING_JOINER);

    proc = new MockProcessorSupplier<>();
    builder.addProcessor("proc", proc, ((KTableImpl<?, ?, ?>) joined).name);

    doTestSendingOldValues(builder, expectedKeys, table1, table2, proc, joined, true);

}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 22, Source: KTableKTableJoinTest.java

Example 12: testKTable

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
@Test
public void testKTable() {
    final KStreamBuilder builder = new KStreamBuilder();

    String topic1 = "topic1";

    KTable<String, Integer> table1 = builder.table(stringSerde, intSerde, topic1, "anyStoreName");

    MockProcessorSupplier<String, Integer> proc1 = new MockProcessorSupplier<>();
    table1.toStream().process(proc1);

    driver = new KStreamTestDriver(builder, stateDir);
    driver.process(topic1, "A", 1);
    driver.process(topic1, "B", 2);
    driver.process(topic1, "C", 3);
    driver.process(topic1, "D", 4);
    driver.flushState();
    driver.process(topic1, "A", null);
    driver.process(topic1, "B", null);
    driver.flushState();

    assertEquals(Utils.mkList("A:1", "B:2", "C:3", "D:4", "A:null", "B:null"), proc1.processed);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 24, Source: KTableSourceTest.java

Example 13: testKTable

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
@Test
public void testKTable() {
    final KStreamBuilder builder = new KStreamBuilder();

    String topic1 = "topic1";

    KTable<String, String> table1 = builder.table(stringSerde, stringSerde, topic1, "anyStoreName");
    KTable<String, Integer> table2 = table1.mapValues(new ValueMapper<CharSequence, Integer>() {
        @Override
        public Integer apply(CharSequence value) {
            return value.charAt(0) - 48;
        }
    });

    MockProcessorSupplier<String, Integer> proc2 = new MockProcessorSupplier<>();
    table2.toStream().process(proc2);

    doTestKTable(builder, topic1, proc2);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: KTableMapValuesTest.java

Example 14: shouldObserveStreamElements

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
@Test
public void shouldObserveStreamElements() {
    final KStreamBuilder builder = new KStreamBuilder();
    final KStream<Integer, String> stream = builder.stream(intSerd, stringSerd, topicName);
    final List<KeyValue<Integer, String>> peekObserved = new ArrayList<>(), streamObserved = new ArrayList<>();
    stream.peek(collect(peekObserved)).foreach(collect(streamObserved));

    driver = new KStreamTestDriver(builder);
    final List<KeyValue<Integer, String>> expected = new ArrayList<>();
    for (int key = 0; key < 32; key++) {
        final String value = "V" + key;
        driver.process(topicName, key, value);
        expected.add(new KeyValue<>(key, value));
    }

    assertEquals(expected, peekObserved);
    assertEquals(expected, streamObserved);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 19, Source: KStreamPeekTest.java

Example 15: testCountHelper

import org.apache.kafka.streams.kstream.KStreamBuilder; // import the required package/class
private void testCountHelper(final KStreamBuilder builder, final String input, final MockProcessorSupplier<String, Long> proc) throws IOException {
    driver = new KStreamTestDriver(builder, stateDir);

    driver.process(input, "A", "green");
    driver.flushState();
    driver.process(input, "B", "green");
    driver.flushState();
    driver.process(input, "A", "blue");
    driver.flushState();
    driver.process(input, "C", "yellow");
    driver.flushState();
    driver.process(input, "D", "green");
    driver.flushState();
    driver.flushState();


    assertEquals(Utils.mkList(
        "green:1",
        "green:2",
        "green:1", "blue:1",
        "yellow:1",
        "green:2"
    ), proc.processed);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: KTableAggregateTest.java
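The expected list in Example 15 follows from KTable count semantics when records are grouped by their value (the colour), which is presumably how the calling test sets up the aggregation: "A"/"green" and "B"/"green" raise the green count to 2; when key "A" then changes to "blue", the old group is decremented ("green:1") and the new group incremented ("blue:1") in the same pass; "C"/"yellow" adds "yellow:1" and "D"/"green" brings the count back to "green:2".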


Note: The org.apache.kafka.streams.kstream.KStreamBuilder class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are selected from open-source projects contributed by their respective developers; copyright of the source code remains with the original authors. Please consult the corresponding project's License before distributing or using the code, and do not reproduce this article without permission.