Java KafkaStreams Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.streams.KafkaStreams. If you are wondering what the KafkaStreams class is for and how it is used in practice, the curated examples below may help.


The KafkaStreams class belongs to the org.apache.kafka.streams package. Fifteen code examples of the class are presented below, sorted by popularity by default.
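
Before the examples, here is a minimal, self-contained sketch of the typical KafkaStreams lifecycle (build a topology, start it, close it on shutdown), written against the newer StreamsBuilder API introduced in Kafka 1.0; the topic names are placeholders. Most of the examples below use the older KStreamBuilder API from the 0.10/0.11 era.

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class MinimalStreamsApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "minimal-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // pipe records through unchanged

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // close the streams instance cleanly when the JVM shuts down
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}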

Example 1: worker

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
@Override
public ReadOnlyKeyValueStore<Long, byte[]> worker() {
    Properties config = super.configBuilder()//
            .put(StreamsConfig.APPLICATION_ID_CONFIG, MallConstants.ORDER_COMMITED_TOPIC)//
            .put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap)//
            .put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.Long().getClass())//
            .put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArray().getClass())//
            .build();

    StreamsBuilder builder = new StreamsBuilder();
    KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config));
    streams.setUncaughtExceptionHandler((Thread t, Throwable e) -> {
        // log any uncaught exception from a stream thread
        log.error(e.getMessage(), e);
    });
    streams.start();

    return this.worker = // k-v query
            streams.store(queryableStoreName, QueryableStoreTypes.<Long, byte[]>keyValueStore());
}
 
Developer ID: jiumao-org, Project: wechat-mall, Lines: 21, Source: OrderTable.java
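
Note that this example calls streams.store(...) immediately after streams.start(); until the instance reaches the RUNNING state, Kafka Streams throws InvalidStateStoreException. A common workaround is a small retry loop; a sketch (the retry interval is an arbitrary choice):

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreType;

// Retry until the requested state store becomes queryable.
private static <T> T waitForStore(KafkaStreams streams, String storeName,
                                  QueryableStoreType<T> storeType) throws InterruptedException {
    while (true) {
        try {
            return streams.store(storeName, storeType);
        } catch (InvalidStateStoreException storeNotReadyYet) {
            Thread.sleep(100L); // still rebalancing or restoring; back off and retry
        }
    }
}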

Example 2: notFoundWithNoResult

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
@Test
public void notFoundWithNoResult(TestContext context){
    KafkaStreams streamMock = mock(KafkaStreams.class);
    ReadOnlyKeyValueStore<Object, Object> storeMock = mock(ReadOnlyKeyValueStore.class);
    KeyValueIterator<Object, Object> iteratorMock = mock(KeyValueIterator.class);
    when(streamMock.store(eq("store"), any(QueryableStoreType.class))).thenReturn(storeMock);
    SimpleKeyValueIterator iterator = new SimpleKeyValueIterator();
    when(storeMock.range(any(), any())).thenReturn(iterator);


    rule.vertx().deployVerticle(new RangeKeyValueQueryVerticle("host", streamMock), context.asyncAssertSuccess(deployment->{

        RangeKeyValueQuery query = new RangeKeyValueQuery("store", Serdes.String().getClass().getName(), Serdes.String().getClass().getName(), "key".getBytes(), "key".getBytes());

        rule.vertx().eventBus().send(Config.RANGE_KEY_VALUE_QUERY_ADDRESS_PREFIX + "host", query, context.asyncAssertSuccess(reply ->{

            context.assertTrue(reply.body() instanceof MultiValuedKeyValueQueryResponse);
            MultiValuedKeyValueQueryResponse response = (MultiValuedKeyValueQueryResponse) reply.body();
            context.assertEquals(0, response.getResults().size());
            context.assertTrue(iterator.closed);

        }));

    }));

}
 
Developer ID: ftrossbach, Project: kiqr, Lines: 27, Source: RangeKeyValuesQueryVerticleTest.java
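
The test depends on a SimpleKeyValueIterator helper whose definition is not shown. Judging from the assertions (an empty result set, and iterator.closed flipped to true), it is presumably an empty KeyValueIterator that records whether close() was called; a hypothetical reconstruction:

import java.util.NoSuchElementException;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;

// Hypothetical test helper (not part of the kiqr snippet above):
// an empty iterator that tracks whether it has been closed.
class SimpleKeyValueIterator implements KeyValueIterator<Object, Object> {
    boolean closed = false;

    @Override public boolean hasNext() { return false; }
    @Override public KeyValue<Object, Object> next() { throw new NoSuchElementException(); }
    @Override public Object peekNextKey() { throw new NoSuchElementException(); }
    @Override public void close() { closed = true; }
}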

Example 3: main

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    // setting offset reset to earliest so that we can re-run the demo code with the same pre-loaded data
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    KStreamBuilder builder = new KStreamBuilder();

    builder.stream("streams-file-input").to("streams-pipe-output");

    KafkaStreams streams = new KafkaStreams(builder, props);
    streams.start();

    // usually a streams application would run forever; in this example we
    // let it run for a while and then stop, since the input data is finite
    Thread.sleep(5000L);

    streams.close();
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 24, Source: PipeDemo.java
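
For comparison, the same pipe topology on the post-1.0 API, where KStreamBuilder is deprecated in favor of StreamsBuilder and KafkaStreams takes a Topology; a sketch reusing the props from above:

// Equivalent pipe topology on the post-1.0 API (sketch; same topics and props as above)
StreamsBuilder builder = new StreamsBuilder();
builder.stream("streams-file-input").to("streams-pipe-output");

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();
Thread.sleep(5000L);
streams.close();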

Example 4: getLocalMetrics

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
/**
 * Query local state store to extract metrics
 *
 * @return local Metrics
 */
private Metrics getLocalMetrics() {
    HostInfo thisInstance = GlobalAppState.getInstance().getHostPortInfo();
    KafkaStreams ks = GlobalAppState.getInstance().getKafkaStreams();

    String source = thisInstance.host() + ":" + thisInstance.port();
    Metrics localMetrics = new Metrics();

    ReadOnlyKeyValueStore<String, Double> averageStore = ks
            .store(storeName,
                    QueryableStoreTypes.<String, Double>keyValueStore());

    LOGGER.log(Level.INFO, "Entries in store {0}", averageStore.approximateNumEntries());
    // close the iterator to release the underlying store resources
    try (KeyValueIterator<String, Double> storeIterator = averageStore.all()) {
        while (storeIterator.hasNext()) {
            KeyValue<String, Double> kv = storeIterator.next();
            localMetrics.add(source, kv.key, String.valueOf(kv.value));
        }
    }
    LOGGER.log(Level.INFO, "Local store state {0}", localMetrics);
    return localMetrics;
}
 
Developer ID: abhirockzz, Project: docker-kafka-streams, Lines: 28, Source: MetricsResource.java

Example 5: getMachineMetric

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
/**
 * Metrics for a machine
 *
 * @param machine machine identifier
 * @return the metric
 */
@GET
@Path("{machine}")
@Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
public Response getMachineMetric(@PathParam("machine") String machine) {
    LOGGER.log(Level.INFO, "Fetching metrics for machine {0}", machine);

    KafkaStreams ks = GlobalAppState.getInstance().getKafkaStreams();
    HostInfo thisInstance = GlobalAppState.getInstance().getHostPortInfo();

    Metrics metrics = null;

    StreamsMetadata metadataForMachine = ks.metadataForKey(storeName, machine, new StringSerializer());

    if (metadataForMachine.host().equals(thisInstance.host()) && metadataForMachine.port() == thisInstance.port()) {
        LOGGER.log(Level.INFO, "Querying local store for machine {0}", machine);
        metrics = getLocalMetrics(machine);
    } else {
        LOGGER.log(Level.INFO, "Querying remote store for machine {0}", machine);
        String url = "http://" + metadataForMachine.host() + ":" + metadataForMachine.port() + "/metrics/remote/" + machine;
        metrics = Utils.getRemoteStoreState(url, 2, TimeUnit.SECONDS);
        LOGGER.log(Level.INFO, "Metric from remote store at {0} == {1}", new Object[]{url, metrics});
    }

    return Response.ok(metrics).build();
}
 
Developer ID: abhirockzz, Project: docker-kafka-streams, Lines: 32, Source: MetricsResource.java
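
Utils.getRemoteStoreState(url, 2, TimeUnit.SECONDS) is not shown in the snippet. Presumably it issues an HTTP GET against the other instance's /metrics/remote/{machine} endpoint with a timeout; a hypothetical sketch using the JAX-RS 2.1 client API:

import java.util.concurrent.TimeUnit;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

// Hypothetical reconstruction of the remote-fetch helper (the real Utils class may differ)
public static Metrics getRemoteStoreState(String url, long timeout, TimeUnit unit) {
    Client client = ClientBuilder.newBuilder()
            .connectTimeout(timeout, unit) // connectTimeout/readTimeout require JAX-RS 2.1
            .readTimeout(timeout, unit)
            .build();
    try {
        return client.target(url).request(MediaType.APPLICATION_JSON).get(Metrics.class);
    } finally {
        client.close();
    }
}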

Example 6: main

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
public static void main(String[] args) throws InterruptedException {

        Properties props = new Properties();
        props.put(APPLICATION_ID_CONFIG, "my-stream-processing-application");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");


        StreamsConfig config = new StreamsConfig(props);

        KStreamBuilder builder = new KStreamBuilder();

        builder.stream("produktion")
                .to("produktion2");

        KafkaStreams streams = new KafkaStreams(builder, config);
        streams.start();
    }
 
Developer ID: predic8, Project: apache-kafka-demos, Lines: 21, Source: SimpleStream.java

Example 7: main

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
public static void main(String[] args) throws Exception {
    Properties kafkaStreamProperties = new Properties();
    kafkaStreamProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafka-stream-wordCount");
    kafkaStreamProperties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    kafkaStreamProperties.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "localhost:2181");
    kafkaStreamProperties.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
    kafkaStreamProperties.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

    Serde<String> stringSerde = Serdes.String();
    Serde<Long> longSerde = Serdes.Long();

    KStreamBuilder streamTopology = new KStreamBuilder();
    KStream<String, String> topicRecords = streamTopology.stream(stringSerde, stringSerde, "input");
    KStream<String, Long> wordCounts = topicRecords
            .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
            .map((key, word) -> new KeyValue<>(word, word))
            .countByKey("Count")
            .toStream();
    wordCounts.to(stringSerde, longSerde, "wordCount");

    KafkaStreams streamManager = new KafkaStreams(streamTopology, kafkaStreamProperties);
    streamManager.start();

    Runtime.getRuntime().addShutdownHook(new Thread(streamManager::close));
}
 
Developer ID: PacktPublishing, Project: Building-Data-Streaming-Applications-with-Apache-Kafka, Lines: 26, Source: KafkaStreamWordCount.java
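
This example targets the pre-0.10.1 API; KStreamBuilder, countByKey, and ZOOKEEPER_CONNECT_CONFIG have all since been removed. On the current DSL (roughly 2.1+), the same word count would look like this sketch (it needs Consumed, Grouped, Materialized, and Produced from org.apache.kafka.streams.kstream):

// Word count on the modern DSL (sketch; topic and store names as in the example above)
StreamsBuilder streamTopology = new StreamsBuilder();
KStream<String, String> topicRecords =
        streamTopology.stream("input", Consumed.with(Serdes.String(), Serdes.String()));

topicRecords
        .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
        .groupBy((key, word) -> word, Grouped.with(Serdes.String(), Serdes.String()))
        .count(Materialized.as("Count"))
        .toStream()
        .to("wordCount", Produced.with(Serdes.String(), Serdes.Long()));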

Example 8: main

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
public static void main(String[] args) throws Exception {
    Properties kafkaStreamProperties = new Properties();
    kafkaStreamProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, "IP-Fraud-Detection");
    kafkaStreamProperties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    kafkaStreamProperties.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "localhost:2181");
    kafkaStreamProperties.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
    kafkaStreamProperties.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

    Serde<String> stringSerde = Serdes.String();

    KStreamBuilder fraudDetectionTopology = new KStreamBuilder();

    KStream<String, String> ipRecords = fraudDetectionTopology.stream(stringSerde, stringSerde, propertyReader.getPropertyValue("topic"));

    KStream<String, String> fraudIpRecords = ipRecords
            .filter((k, v) -> isFraud(v));

    fraudIpRecords.to(propertyReader.getPropertyValue("output_topic"));

    KafkaStreams streamManager = new KafkaStreams(fraudDetectionTopology, kafkaStreamProperties);
    streamManager.start();

    Runtime.getRuntime().addShutdownHook(new Thread(streamManager::close));
}
 
Developer ID: PacktPublishing, Project: Building-Data-Streaming-Applications-with-Apache-Kafka, Lines: 25, Source: IPFraudKafkaStreamApp.java
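
isFraud(v) and propertyReader belong to the surrounding class and are not shown. A hypothetical predicate that flags records whose leading IP falls in a blacklisted range might look like:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for the project's isFraud check (the real logic is not shown):
// flag a record whose first two IP octets match a blacklisted prefix.
private static final Set<String> BLACKLISTED_PREFIXES =
        new HashSet<>(Arrays.asList("10.12", "192.168"));

private static boolean isFraud(String record) {
    String ip = record.split(" ")[0];                   // assume the record starts with the IP
    int secondDot = ip.indexOf('.', ip.indexOf('.') + 1);
    return secondDot > 0 && BLACKLISTED_PREFIXES.contains(ip.substring(0, secondDot));
}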

Example 9: getLocalMetrics

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
/**
 * Get metrics for a machine from the local state store
 *
 * @param machine machine identifier
 * @return the machine's metrics
 */
private Metrics getLocalMetrics(String machine) {
    LOGGER.log(Level.INFO, "Getting Metrics for machine {0}", machine);
    
    HostInfo thisInstance = GlobalAppState.getInstance().getHostPortInfo();
    KafkaStreams ks = GlobalAppState.getInstance().getKafkaStreams();

    String source = thisInstance.host() + ":" + thisInstance.port();
    Metrics localMetrics = new Metrics();

    ReadOnlyKeyValueStore<String, Double> averageStore = ks
            .store(storeName,
                    QueryableStoreTypes.<String, Double>keyValueStore());

    LOGGER.log(Level.INFO, "Entries in store {0}", averageStore.approximateNumEntries());

    localMetrics.add(source, machine, String.valueOf(averageStore.get(machine)));

    LOGGER.log(Level.INFO, "Metrics for machine {0} - {1}", new Object[]{machine, localMetrics});
    return localMetrics;
}
 
Developer ID: abhirockzz, Project: kafka-streams-example, Lines: 26, Source: MetricsResource.java

Example 10: init

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
@PostConstruct
public void init() {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafka-streams-repo");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
    props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "zookeeper:2181");
    props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class);
    props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://schema-registry:8081");

    KStreamBuilder builder = new KStreamBuilder();
    builder.table(Serdes.Long(), Serdes.String(), "processed-tweets", STORE_NAME);

    streams = new KafkaStreams(builder, props);

    streams.start();
}
 
Developer ID: jeqo, Project: talk-kafka-messaging-logs, Lines: 17, Source: KafkaTweetRepository.java
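
The builder.table(...) call materializes the processed-tweets topic into the local state store STORE_NAME; a repository would presumably expose lookups against it, roughly like this sketch (the method name is an assumption):

// Sketch of a point lookup against the store materialized above (findText is a hypothetical name)
public String findText(long tweetId) {
    ReadOnlyKeyValueStore<Long, String> store =
            streams.store(STORE_NAME, QueryableStoreTypes.<Long, String>keyValueStore());
    return store.get(tweetId);
}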

Example 11: init

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
@PostConstruct
public void init() {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafka-streams-processor");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
    props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "zookeeper:2181");
    props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class);
    props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://schema-registry:8081");

    KStreamBuilder builder = new KStreamBuilder();

    builder.stream("tweets")
            .map((k, v) -> {
                Tweet tweet = (Tweet) SpecificData.get().deepCopy(Tweet.getClassSchema(), v);
                return new KeyValue<>(tweet.getId(), tweet.getText().toString());
            })
            .to(Serdes.Long(), Serdes.String(), "processed-tweets");

    KafkaStreams streams = new KafkaStreams(builder, props);
    streams.start();
}
 
Developer ID: jeqo, Project: talk-kafka-messaging-logs, Lines: 22, Source: KafkaTweetProcessor.java

Example 12: main

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
public static void main(String[] args) {

        Properties config = new Properties();
        config.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-starter-app");
        config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        config.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        config.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        KStreamBuilder builder = new KStreamBuilder();

        KStream<String, String> kStream = builder.stream("streams-file-input");
        // do stuff
        kStream.to("streams-wordcount-output");

        KafkaStreams streams = new KafkaStreams(builder, config);
        streams.cleanUp(); // only do this in dev - not in prod
        streams.start();

        // print the topology
        System.out.println(streams.toString());

        // shutdown hook to correctly close the streams application
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));

    }
 
Developer ID: kaiwaehner, Project: kafka-streams-machine-learning-examples, Lines: 27, Source: StreamsStarterApp.java

Example 13: createCountStream

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
/**
 * Creates a typical word count topology
 */
private KafkaStreams createCountStream(final String inputTopic, final String outputTopic, final Properties streamsConfiguration) {
    final KStreamBuilder builder = new KStreamBuilder();
    final Serde<String> stringSerde = Serdes.String();
    final KStream<String, String> textLines = builder.stream(stringSerde, stringSerde, inputTopic);

    final KGroupedStream<String, String> groupedByWord = textLines
        .flatMapValues(new ValueMapper<String, Iterable<String>>() {
            @Override
            public Iterable<String> apply(final String value) {
                return Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+"));
            }
        })
        .groupBy(MockKeyValueMapper.<String, String>SelectValueMapper());

    // Create a State Store for the all time word count
    groupedByWord.count("word-count-store-" + inputTopic).to(Serdes.String(), Serdes.Long(), outputTopic);

    // Create a Windowed State Store that contains the word count for every 1 minute
    groupedByWord.count(TimeWindows.of(WINDOW_SIZE), "windowed-word-count-store-" + inputTopic);

    return new KafkaStreams(builder, streamsConfiguration);
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 26, Source: QueryableStateIntegrationTest.java
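
The windowed store created above can be read back through interactive queries; a minimal sketch against the same 0.11 API (the key and the five-minute range are arbitrary):

// Query the 1-minute windowed counts for one word over the last five minutes (sketch)
ReadOnlyWindowStore<String, Long> windowStore = streams.store(
        "windowed-word-count-store-" + inputTopic,
        QueryableStoreTypes.<String, Long>windowStore());

long now = System.currentTimeMillis();
try (WindowStoreIterator<Long> windows = windowStore.fetch("kafka", now - 5 * 60 * 1000L, now)) {
    while (windows.hasNext()) {
        KeyValue<Long, Long> entry = windows.next(); // entry.key is the window start timestamp
        System.out.println("window " + entry.key + " -> " + entry.value);
    }
}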

Example 14: kStreamKTableJoin

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
/**
 * Measure the performance of a KStream-KTable left join. The setup is such that each
 * KStream record joins to exactly one element in the KTable
 */
public void kStreamKTableJoin(String kStreamTopic, String kTableTopic) throws Exception {
    if (maybeSetupPhase(kStreamTopic, "simple-benchmark-produce-kstream", false)) {
        maybeSetupPhase(kTableTopic, "simple-benchmark-produce-ktable", false);
        return;
    }

    CountDownLatch latch = new CountDownLatch(1);

    // setup join
    Properties props = setStreamProperties("simple-benchmark-kstream-ktable-join");
    final KafkaStreams streams = createKafkaStreamsKStreamKTableJoin(props, kStreamTopic, kTableTopic, latch);

    // run benchmark
    runGenericBenchmark(streams, "Streams KStreamKTable LeftJoin Performance [records/latency/rec-sec/MB-sec joined]: ", latch);
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: SimpleBenchmark.java
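
maybeSetupPhase, setStreamProperties, createKafkaStreamsKStreamKTableJoin, and runGenericBenchmark are internal helpers of SimpleBenchmark and are not shown. The latch-driven pattern presumably amounts to something like this sketch (an assumption, not the actual Kafka source):

import java.util.concurrent.CountDownLatch;
import org.apache.kafka.streams.KafkaStreams;

// Hypothetical shape of a latch-driven benchmark runner: the topology counts the
// latch down once it has consumed the last record, and we time start-to-latch.
private static void runLatchedBenchmark(KafkaStreams streams, String label,
                                        CountDownLatch latch) throws InterruptedException {
    long startMs = System.currentTimeMillis();
    streams.start();
    latch.await();                      // released by a processor inside the topology
    long elapsedMs = System.currentTimeMillis() - startMs;
    System.out.println(label + elapsedMs + " ms");
    streams.close();
}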

Example 15: kStreamKStreamJoin

import org.apache.kafka.streams.KafkaStreams; // import the required package/class
/**
 * Measure the performance of a KStream-KStream left join. The setup is such that each
 * KStream record joins to exactly one element in the other KStream
 */
public void kStreamKStreamJoin(String kStreamTopic1, String kStreamTopic2) throws Exception {
    if (maybeSetupPhase(kStreamTopic1, "simple-benchmark-produce-kstream-topic1", false)) {
        maybeSetupPhase(kStreamTopic2, "simple-benchmark-produce-kstream-topic2", false);
        return;
    }

    CountDownLatch latch = new CountDownLatch(1);

    // setup join
    Properties props = setStreamProperties("simple-benchmark-kstream-kstream-join");
    final KafkaStreams streams = createKafkaStreamsKStreamKStreamJoin(props, kStreamTopic1, kStreamTopic2, latch);

    // run benchmark
    runGenericBenchmark(streams, "Streams KStreamKStream LeftJoin Performance [records/latency/rec-sec/MB-sec joined]: ", latch);
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: SimpleBenchmark.java


Note: The org.apache.kafka.streams.KafkaStreams class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective developers, and copyright in the source code remains with the original authors. Consult the corresponding project's license before using or redistributing the code; do not republish without permission.