

Java Stores Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.streams.state.Stores. If you are wondering what the Stores class is for, how to use it, or where to find working examples, the curated code samples below may help.


The Stores class belongs to the org.apache.kafka.streams.state package. Fifteen code examples are shown below, sorted by popularity by default.
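
A note before the examples: most of them target Kafka 0.10/0.11, where stores are declared through the fluent Stores.create(...) factory and StateStoreSupplier. That API was deprecated in Kafka 1.0 (KIP-182) and removed in later releases in favor of StoreBuilder. For reference, here is a minimal sketch of the same kind of store definition on the current API; the store name and the String/Long serdes are illustrative assumptions, not taken from any example below.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

public class ModernStoresSketch {
    public static void main(String[] args) {
        // Persistent (RocksDB-backed) store: the modern counterpart of
        // Stores.create("my-store").withStringKeys().withLongValues().persistent().build()
        StoreBuilder<KeyValueStore<String, Long>> persistent =
                Stores.keyValueStoreBuilder(
                        Stores.persistentKeyValueStore("my-store"),
                        Serdes.String(),
                        Serdes.Long());

        // In-memory store with change-logging disabled: the counterpart of
        // .inMemory().disableLogging()
        StoreBuilder<KeyValueStore<String, Long>> inMemory =
                Stores.keyValueStoreBuilder(
                        Stores.inMemoryKeyValueStore("my-store"),
                        Serdes.String(),
                        Serdes.Long())
                .withLoggingDisabled();
    }
}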

Example 1: shouldDriveGlobalStore

import org.apache.kafka.streams.state.Stores; // import the required package/class
@SuppressWarnings("unchecked")
@Test
public void shouldDriveGlobalStore() throws Exception {
    final StateStoreSupplier storeSupplier = Stores.create("my-store")
            .withStringKeys().withStringValues().inMemory().disableLogging().build();
    final String global = "global";
    final String topic = "topic";
    final TopologyBuilder topologyBuilder = this.builder
            .addGlobalStore(storeSupplier, global, STRING_DESERIALIZER, STRING_DESERIALIZER, topic, "processor", define(new StatefulProcessor("my-store")));

    driver = new ProcessorTopologyTestDriver(config, topologyBuilder);
    final KeyValueStore<String, String> globalStore = (KeyValueStore<String, String>) topologyBuilder.globalStateStores().get("my-store");
    driver.process(topic, "key1", "value1", STRING_SERIALIZER, STRING_SERIALIZER);
    driver.process(topic, "key2", "value2", STRING_SERIALIZER, STRING_SERIALIZER);
    assertEquals("value1", globalStore.get("key1"));
    assertEquals("value2", globalStore.get("key2"));
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 18 | Source: ProcessorTopologyTest.java
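
For comparison, the same global-store wiring on the post-1.0 Topology API would look roughly like the sketch below. This is an assumption-laden sketch, not Kafka's own test code: StatefulProcessor is the test helper used above, and the store, source, and topic names are copied from the example.

// Sketch: example 1's global store expressed with StoreBuilder and Topology.
StoreBuilder<KeyValueStore<String, String>> storeBuilder =
        Stores.keyValueStoreBuilder(
                Stores.inMemoryKeyValueStore("my-store"),
                Serdes.String(),
                Serdes.String())
        .withLoggingDisabled(); // global stores must not be change-logged

Topology topology = new Topology();
topology.addGlobalStore(storeBuilder, "global",
        new StringDeserializer(), new StringDeserializer(), "topic",
        "processor", () -> new StatefulProcessor("my-store"));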

Example 2: main

import org.apache.kafka.streams.state.Stores; // import the required package/class
public static void main(String[] args) {
    Topology builder = new Topology();
    builder.addSource("SOURCE", "src-topic")//
            // register the store with no processors attached here; it is wired
            // to "PROCESS2" below via connectProcessorAndStateStores
            .addStateStore(//
                    Stores.keyValueStoreBuilder(//
                            Stores.persistentKeyValueStore(OrderConstants.GOOD_ORDER_TOPIC), //
                            new Serdes.LongSerde(), //
                            new Serdes.ByteArraySerde()))//
            .addProcessor("PROCESS1", GoodOrderProcessor::new, "SOURCE")//
            .addProcessor("PROCESS2", GoodOrderProcessor::new, "PROCESS1")//
            .addProcessor("PROCESS3", GoodOrderProcessor::new, "PROCESS1")//
            // connect the state store (named OrderConstants.GOOD_ORDER_TOPIC) with processor "PROCESS2"
            .connectProcessorAndStateStores("PROCESS2", OrderConstants.GOOD_ORDER_TOPIC)//
            .addSink("SINK1", "sink-topic1", "PROCESS1")//
            .addSink("SINK2", "sink-topic2", "PROCESS2")//
            .addSink("SINK3", "sink-topic3", "PROCESS3");
}
 
Developer: jiumao-org | Project: wechat-mall | Lines: 22 | Source: OrderAnalysis.java
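
Note that the main method above only assembles the topology; nothing runs until it is handed to a KafkaStreams instance. A minimal sketch of starting it, assuming a placeholder application id and broker address:

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-analysis");    // assumed id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker

KafkaStreams streams = new KafkaStreams(builder, props); // builder is the Topology built above
streams.start();
Runtime.getRuntime().addShutdownHook(new Thread(streams::close)); // close cleanly on shutdown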

Example 3: createKeyValueStore

import org.apache.kafka.streams.state.Stores; // import the required package/class
@SuppressWarnings("unchecked")
@Override
protected <K, V> KeyValueStore<K, V> createKeyValueStore(
        ProcessorContext context,
        Class<K> keyClass,
        Class<V> valueClass,
        boolean useContextSerdes) {

    StateStoreSupplier supplier;
    if (useContextSerdes) {
        supplier = Stores.create("my-store").withKeys(context.keySerde()).withValues(context.valueSerde()).inMemory().maxEntries(10).build();
    } else {
        supplier = Stores.create("my-store").withKeys(keyClass).withValues(valueClass).inMemory().maxEntries(10).build();
    }

    KeyValueStore<K, V> store = (KeyValueStore<K, V>) supplier.get();
    store.init(context, store);
    return store;
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 20 | Source: InMemoryLRUCacheStoreTest.java
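
On current releases, the deprecated inMemory().maxEntries(10) chain maps to the Stores.lruMap supplier, which bounds the store and evicts the least-recently-used entry once the limit is reached. A sketch of an equivalent store, assuming String serdes for illustration:

// Sketch: bounded in-memory LRU store on the modern API (serdes assumed).
StoreBuilder<KeyValueStore<String, String>> lruBuilder =
        Stores.keyValueStoreBuilder(
                Stores.lruMap("my-store", 10), // at most 10 entries, LRU eviction
                Serdes.String(),
                Serdes.String());
KeyValueStore<String, String> store = lruBuilder.build();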

Example 4: createKeyValueStore

import org.apache.kafka.streams.state.Stores; // import the required package/class
@SuppressWarnings("unchecked")
@Override
protected <K, V> KeyValueStore<K, V> createKeyValueStore(final ProcessorContext context,
                                                         final Class<K> keyClass,
                                                         final Class<V> valueClass,
                                                         final boolean useContextSerdes) {
    final Stores.PersistentKeyValueFactory<?, ?> factory;
    if (useContextSerdes) {
        factory = Stores
                .create("my-store")
                .withKeys(context.keySerde())
                .withValues(context.valueSerde())
                .persistent();

    } else {
        factory = Stores
                .create("my-store")
                .withKeys(keyClass)
                .withValues(valueClass)
                .persistent();
    }

    final KeyValueStore<K, V> store = (KeyValueStore<K, V>) factory.build().get();
    store.init(context, store);
    return store;
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 27 | Source: RocksDBKeyValueStoreTest.java

Example 5: createKeyValueStore

import org.apache.kafka.streams.state.Stores; // import the required package/class
@SuppressWarnings("unchecked")
@Override
protected <K, V> KeyValueStore<K, V> createKeyValueStore(
        ProcessorContext context,
        Class<K> keyClass,
        Class<V> valueClass,
        boolean useContextSerdes) {

    StateStoreSupplier supplier;
    if (useContextSerdes) {
        supplier = Stores.create("my-store").withKeys(context.keySerde()).withValues(context.valueSerde()).inMemory().build();
    } else {
        supplier = Stores.create("my-store").withKeys(keyClass).withValues(valueClass).inMemory().build();
    }

    KeyValueStore<K, V> store = (KeyValueStore<K, V>) supplier.get();
    store.init(context, store);
    return store;
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 20 | Source: InMemoryKeyValueStoreTest.java

Example 6: shouldThroughOnUnassignedStateStoreAccess

import org.apache.kafka.streams.state.Stores; // import the required package/class
@Test(expected = TopologyBuilderException.class)
public void shouldThroughOnUnassignedStateStoreAccess() {
    final String sourceNodeName = "source";
    final String goodNodeName = "goodGuy";
    final String badNodeName = "badGuy";

    final Properties config = new Properties();
    config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "host:1");
    config.put(StreamsConfig.APPLICATION_ID_CONFIG, "appId");
    final StreamsConfig streamsConfig = new StreamsConfig(config);

    try {
        final TopologyBuilder builder = new TopologyBuilder();
        builder
            .addSource(sourceNodeName, "topic")
            .addProcessor(goodNodeName, new LocalMockProcessorSupplier(), sourceNodeName)
            .addStateStore(
                Stores.create(LocalMockProcessorSupplier.STORE_NAME).withStringKeys().withStringValues().inMemory().build(),
                goodNodeName)
            .addProcessor(badNodeName, new LocalMockProcessorSupplier(), sourceNodeName);

        final ProcessorTopologyTestDriver driver = new ProcessorTopologyTestDriver(streamsConfig, builder);
        driver.process("topic", null, null);
    } catch (final StreamsException e) {
        final Throwable cause = e.getCause();
        if (cause != null
            && cause instanceof TopologyBuilderException
            && cause.getMessage().equals("Invalid topology building: Processor " + badNodeName + " has no access to StateStore " + LocalMockProcessorSupplier.STORE_NAME)) {
            throw (TopologyBuilderException) cause;
        } else {
            throw new RuntimeException("Did expect different exception. Did catch:", e);
        }
    }
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 35 | Source: TopologyBuilderTest.java

Example 7: initStreams

import org.apache.kafka.streams.state.Stores; // import the required package/class
@Override
protected KafkaStreams initStreams() {

    UUIDSerializer keySerializer = new UUIDSerializer();
    DataSerializer<Command> commandSerializer = new DataSerializer<>(
            Command.class);
    DataSerializer<Result> resultSerializer = new DataSerializer<>(
            Result.class);

    producer = new KafkaProducer<>(getConfig(), keySerializer,
            commandSerializer);

    KStreamBuilder builder = new KStreamBuilder();

    StateStore resultStore = Stores.create(STORE_NAME)
            .withKeys(keySerializer) //
            .withValues(resultSerializer) //
            .persistent() //
            .disableLogging() //
            .build().get();

    builder.addGlobalStore(resultStore, SOURCE_NAME, keySerializer,
            resultSerializer, buildResultTopic(application), PROCESSOR_NAME,
            ResultProcessor::new);

    localStreams = new KafkaStreams(builder,
            new StreamsConfig(getConfig()));

    return localStreams;
}
 
Developer: servicecatalog | Project: service-tools | Lines: 31 | Source: CommandProducer.java

Example 8: afterPropertiesSet

import org.apache.kafka.streams.state.Stores; // import the required package/class
/**
 * {@inheritDoc}
 * @see org.springframework.beans.factory.InitializingBean#afterPropertiesSet()
 */
@Override
public void afterPropertiesSet() throws Exception {
	if(keySerde==null) {
		if(keySerializer==null) throw new Exception("No Key Serializer Defined");
		if(keyDeserializer==null) throw new Exception("No Key Deserializer Defined");
		keySerde = new StatelessSerde<K>(keySerializer, keyDeserializer);			
	}
	if(valueSerde==null) {
		if(valueSerializer==null) throw new Exception("No Value Serializer Defined");
		if(valueDeserializer==null) throw new Exception("No Value Deserializer Defined");
		valueSerde = new StatelessSerde<V>(valueSerializer, valueDeserializer);			
	}		
	final Stores.KeyValueFactory<K, V> factory = Stores.create(name)
		.withKeys(keySerde)
		.withValues(valueSerde);
	if(inMemory) {
		stateStoreSupplier = factory.inMemory().build();
	} else {
		stateStoreSupplier = factory.persistent().build();
	}
}
 
Developer: nickman | Project: HeliosStreams | Lines: 26 | Source: StateStoreDefinition.java

Example 9: main

import org.apache.kafka.streams.state.Stores; // import the required package/class
public static void main(String[] args) throws IOException {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount-processor");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka0:19092");
    props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "zookeeper0:12181/kafka");
    props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.Integer().getClass());
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    TopologyBuilder builder = new TopologyBuilder();
    builder.addSource("SOURCE", new StringDeserializer(), new StringDeserializer(), "words")
            .addProcessor("WordCountProcessor", WordCountProcessor::new, "SOURCE")
            // addStateStore already connects "Counts" to "WordCountProcessor",
            // so a separate connectProcessorAndStateStores call is unnecessary
            .addStateStore(Stores.create("Counts").withStringKeys().withIntegerValues().inMemory().build(), "WordCountProcessor")
            .addSink("SINK", "count", new StringSerializer(), new IntegerSerializer(), "WordCountProcessor");

    KafkaStreams stream = new KafkaStreams(builder, props);
    stream.start();
    System.in.read(); // run until the user presses Enter
    stream.close();
    stream.cleanUp(); // wipe local state after a clean shutdown
}
 
Developer: habren | Project: KafkaExample | Lines: 23 | Source: WordCountTopology.java
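
Two configuration details in this example are tied to old releases: StreamsConfig.ZOOKEEPER_CONNECT_CONFIG was deprecated in 0.10.2, once Kafka Streams stopped depending on ZooKeeper, and later removed; and KEY_SERDE_CLASS_CONFIG / VALUE_SERDE_CLASS_CONFIG were superseded by the DEFAULT_* variants (as used in Example 13 below). A sketch of the same properties against a current release:

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount-processor");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka0:19092");
// no ZooKeeper setting: modern clients talk only to the brokers
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Integer().getClass());
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");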

Example 10: init

import org.apache.kafka.streams.state.Stores; // import the required package/class (note: the two-argument create(...) and two-argument withValues(...) calls below do not match the Kafka Stores API and appear to come from a project-local wrapper)
@Override
protected void init() {
    this.aggregateResponses = Stores.create(AGGREGATE_RESPONSE_STORE_NAME, context())
                                    .withStringKeys()
                                    .withValues(Serdes.document(), Serdes.document())
                                    .inMemory()
                                    .build();
    this.inputOffsets = Stores.create(AGGREGATE_INPUTS_STORE_NAME, context())
                              .withIntegerKeys()
                              .withLongValues()
                              .inMemory()
                              .build();

    // Load the models from the store, removing any that are too old ...
    Set<String> oldKeys = new HashSet<>();
    this.aggregateResponses.all().forEachRemaining(entry -> {
        // If the response is completed and old, then mark it for deletion ...
        if (Message.isAggregateResponseCompletedAndExpired(entry.value(), this::isExpired)) {
            oldKeys.add(entry.key());
        }
    });
    // And finally remove all of the expired responses ...
    oldKeys.forEach(this.aggregateResponses::delete);
}
 
Developer: rhauch | Project: debezium-proto | Lines: 25 | Source: ResponseAccumulatorService.java

Example 11: processingTopologyBuilder

import org.apache.kafka.streams.state.Stores; // import the required package/class
private TopologyBuilder processingTopologyBuilder() {
    //create state store
    StateStoreSupplier machineToAvgCPUUsageStore
            = Stores.create(AVG_STORE_NAME)
                    .withStringKeys()
                    .withDoubleValues()
                    .inMemory()
                    .build();

    StateStoreSupplier machineToNumOfRecordsReadStore
            = Stores.create(NUM_RECORDS_STORE_NAME)
                    .withStringKeys()
                    .withIntegerValues()
                    .inMemory()
                    .build();

    TopologyBuilder builder = new TopologyBuilder();

    builder.addSource(SOURCE_NAME, TOPIC_NAME)
            .addProcessor(PROCESSOR_NAME, new ProcessorSupplier() {
                @Override
                public Processor get() {
                    return new CPUCumulativeAverageProcessor();
                }
            }, SOURCE_NAME)
            .addStateStore(machineToAvgCPUUsageStore, PROCESSOR_NAME)
            .addStateStore(machineToNumOfRecordsReadStore, PROCESSOR_NAME);

    LOGGER.info("Kafka streams processing topology ready");

    return builder;
}
 
Developer: abhirockzz | Project: docker-kafka-streams | Lines: 33 | Source: CPUMetricStreamHandler.java

Example 12: processingTopologyBuilder

import org.apache.kafka.streams.state.Stores; // import the required package/class
private TopologyBuilder processingTopologyBuilder() {

    StateStoreSupplier machineToAvgCPUUsageStore
            = Stores.create(AVG_STORE_NAME)
                    .withStringKeys()
                    .withDoubleValues()
                    .inMemory()
                    .build();

    StateStoreSupplier machineToNumOfRecordsReadStore
            = Stores.create(NUM_RECORDS_STORE_NAME)
                    .withStringKeys()
                    .withIntegerValues()
                    .inMemory()
                    .build();

    TopologyBuilder builder = new TopologyBuilder();

    builder.addSource(SOURCE_NAME, TOPIC_NAME)
            .addProcessor(PROCESSOR_NAME, new ProcessorSupplier() {
                @Override
                public Processor get() {
                    return new CPUCumulativeAverageProcessor();
                }
            }, SOURCE_NAME)
            .addStateStore(machineToAvgCPUUsageStore, PROCESSOR_NAME)
            .addStateStore(machineToNumOfRecordsReadStore, PROCESSOR_NAME);

    LOGGER.info("Kafka streams processing topology ready");

    return builder;
}
 
Developer: abhirockzz | Project: kafka-streams-example | Lines: 33 | Source: CPUMetricStreamHandler.java

Example 13: main

import org.apache.kafka.streams.state.Stores; // import the required package/class
public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount-processor");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    // setting offset reset to earliest so that we can re-run the demo code with the same pre-loaded data
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    TopologyBuilder builder = new TopologyBuilder();

    builder.addSource("Source", "streams-file-input");

    builder.addProcessor("Process", new MyProcessorSupplier(), "Source");
    builder.addStateStore(Stores.create("Counts").withStringKeys().withIntegerValues().inMemory().build(), "Process");

    builder.addSink("Sink", "streams-wordcount-processor-output", "Process");

    KafkaStreams streams = new KafkaStreams(builder, props);
    streams.start();

    // usually the stream application would be running forever,
    // in this example we just let it run for some time and stop since the input data is finite.
    Thread.sleep(5000L);

    streams.close();
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 30 | Source: WordCountProcessorDemo.java

Example 14: createWindowedStateStore

import org.apache.kafka.streams.state.Stores; // import the required package/class
private static <K, V> StateStoreSupplier createWindowedStateStore(final JoinWindows windows,
                                                                 final Serde<K> keySerde,
                                                                 final Serde<V> valueSerde,
                                                                 final String storeName) {
    return Stores.create(storeName)
        .withKeys(keySerde)
        .withValues(valueSerde)
        .persistent()
        .windowed(windows.size(), windows.maintainMs(), windows.segments, true)
        .build();
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 12 | Source: KStreamImpl.java
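
The deprecated persistent().windowed(...) chain packs the window size, retention period, segment count, and duplicate-retention flag into one call. On current releases the same store is declared with Stores.persistentWindowStore plus Stores.windowStoreBuilder (the segment interval is now derived automatically). A sketch with an assumed store name and String serdes, reusing the windows accessors from the method above:

// Sketch: windowed join store on the modern API (name and serdes assumed).
StoreBuilder<WindowStore<String, String>> joinStoreBuilder =
        Stores.windowStoreBuilder(
                Stores.persistentWindowStore(
                        "join-store",
                        Duration.ofMillis(windows.maintainMs()), // retention period
                        Duration.ofMillis(windows.size()),       // window size
                        true),                                   // retain duplicates, as joins require
                Serdes.String(),
                Serdes.String());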

Example 15: storeFactory

import org.apache.kafka.streams.state.Stores; // import the required package/class
static <T, K> Stores.PersistentKeyValueFactory<K, T> storeFactory(final Serde<K> keySerde,
                                                                  final Serde<T> aggValueSerde,
                                                                  final String storeName) {
    return Stores.create(storeName)
            .withKeys(keySerde)
            .withValues(aggValueSerde)
            .persistent()
            .enableCaching();
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 10 | Source: AbstractStream.java
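
enableCaching() on the old factory corresponds to withCachingEnabled() on a StoreBuilder; caching buffers writes so that downstream forwarding happens on commit or cache eviction rather than per record. A rough modern counterpart of this helper follows; the signature is an assumption for illustration, not taken from the Kafka sources.

// Sketch: StoreBuilder-based replacement for storeFactory (hypothetical).
static <T, K> StoreBuilder<KeyValueStore<K, T>> storeBuilder(final Serde<K> keySerde,
                                                             final Serde<T> aggValueSerde,
                                                             final String storeName) {
    return Stores.keyValueStoreBuilder(
                    Stores.persistentKeyValueStore(storeName),
                    keySerde,
                    aggValueSerde)
            .withCachingEnabled();
}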


Note: the org.apache.kafka.streams.state.Stores examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from community open-source projects; copyright of the source code belongs to the original authors, and distribution and use should follow each project's license. Do not reproduce without permission.