

Java Serdes.serdeFrom Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.common.serialization.Serdes.serdeFrom. If you are unsure what Serdes.serdeFrom does or how to use it, the curated examples below should help. You can also explore further usage examples of the enclosing class, org.apache.kafka.common.serialization.Serdes.


The following presents 11 code examples of the Serdes.serdeFrom method, sorted by popularity by default.
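Before the examples, here is a minimal, self-contained sketch of what Serdes.serdeFrom does: it pairs an existing Serializer and Deserializer into a single Serde object. The topic name and payload below are illustrative only.

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class SerdeFromSketch {
    public static void main(String[] args) {
        // pair a Serializer and a Deserializer into one Serde
        Serde<String> serde = Serdes.serdeFrom(new StringSerializer(), new StringDeserializer());

        byte[] bytes = serde.serializer().serialize("demo-topic", "hello");
        String roundTripped = serde.deserializer().deserialize("demo-topic", bytes);
        System.out.println(roundTripped); // prints "hello"

        serde.close(); // closes the wrapped serializer and deserializer
    }
}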

Example 1: main

import org.apache.kafka.common.serialization.Serdes; // import the class this method depends on
public static void main(String[] args) throws InterruptedException {

        Properties props = new Properties();
        props.put(APPLICATION_ID_CONFIG, "my-stream-processing-application");
        props.put(BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // note: the (de)serializer properties below are plain producer/consumer settings;
        // the topology below passes its serdes explicitly, so they are not strictly needed
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.serializer", JsonPOJOSerializer.class.getName());
        props.put("value.deserializer", JsonPOJODeserializer.class.getName());

        Map<String, Object> serdeProps = new HashMap<>();
        serdeProps.put("JsonPOJOClass", Messung.class);

        final Serializer<Messung> serializer = new JsonPOJOSerializer<>();
        serializer.configure(serdeProps, false); // false = configuring a value (de)serializer, not a key

        final Deserializer<Messung> deserializer = new JsonPOJODeserializer<>();
        deserializer.configure(serdeProps, false);

        final Serde<Messung> serde = Serdes.serdeFrom(serializer, deserializer);

        StreamsConfig config = new StreamsConfig(props);

        KStreamBuilder builder = new KStreamBuilder();

        builder.stream(Serdes.String(), serde, "produktion")
                .filter((k, v) -> v.type.equals("Biogas"))
                .to(Serdes.String(), serde, "produktion2");

        KafkaStreams streams = new KafkaStreams(builder, config);
        streams.start();
    }
 
Developer: predic8 | Project: apache-kafka-demos | Lines: 33 | Source: FilterStream.java

Example 2: buildBasicSerde

import org.apache.kafka.common.serialization.Serdes; // import the class this method depends on
/**
 * Builds a Serde for T using a basic Serializer and Deserializer that implement neither
 * configure nor close (see Example 3 below for a usage).
 */
public static <T> Serde<T> buildBasicSerde(final SerializeFunc<T> serializeFunc, final DeserializeFunc<T> deserializeFunc) {
    return Serdes.serdeFrom(buildBasicSerializer(serializeFunc), buildBasicDeserializer(deserializeFunc));
}
 
Developer: gchq | Project: stroom-stats | Lines: 7 | Source: SerdeUtils.java

Example 3: consume

import org.apache.kafka.common.serialization.Serdes; // import the class this method depends on
private void consume() {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "stroom.kafka:9092");
        consumerProps.put("group.id", "consumerGroup");
        consumerProps.put("enable.auto.commit", "true");
        consumerProps.put("auto.commit.interval.ms", "1000");
        consumerProps.put("session.timeout.ms", "30000");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Serde<String> stringSerde = Serdes.String();
        Serde<Long> longSerde = Serdes.Long();
        // the aggregator serde could equally be built directly:
        // Serdes.serdeFrom(new LongAggregatorSerializer(), new LongAggregatorDeserializer())
        Serde<LongAggregator> longAggregatorSerde = SerdeUtils.buildBasicSerde(
                (topic, data) -> Bytes.toBytes(data.getAggregateVal()),
                (topic, bData) -> new LongAggregator(Bytes.toLong(bData)));

        SerdeUtils.verify(longAggregatorSerde, new LongAggregator(123));

        WindowedSerializer<Long> longWindowedSerializer = new WindowedSerializer<>(longSerde.serializer());
        WindowedDeserializer<Long> longWindowedDeserializer = new WindowedDeserializer<>(longSerde.deserializer());
        Serde<Windowed<Long>> windowedSerde = Serdes.serdeFrom(longWindowedSerializer, longWindowedDeserializer);

        KafkaConsumer<Windowed<Long>, LongAggregator> consumer = new KafkaConsumer<>(
                consumerProps,
                windowedSerde.deserializer(),
                longAggregatorSerde.deserializer());

        consumer.subscribe(Collections.singletonList(DEST_TOPIC));

        ExecutorService executorService = Executors.newSingleThreadExecutor();


        @SuppressWarnings("FutureReturnValueIgnored")
        Future<?> future = executorService.submit(() -> {
            LOGGER.info("Consumer about to poll");
            Instant terminationTime = null;
            // keep polling until 10s after termination is signalled, then close the consumer
            while (terminationTime == null || Instant.now().isBefore(terminationTime.plusSeconds(10))) {
                try {
                    ConsumerRecords<Windowed<Long>, LongAggregator> records = consumer.poll(100);
                    for (ConsumerRecord<Windowed<Long>, LongAggregator> record : records) {
                        LOGGER.info("Received message key: {} winStart: {} winEnd {} winDuration: {} val: {}",
                                epochMsToString(record.key().key()),
                                epochMsToString(record.key().window().start()),
                                epochMsToString(record.key().window().end()),
                                record.key().window().end() - record.key().window().start(),
                                record.value().getAggregateVal());
                        outputData.computeIfAbsent(record.key().key(), aLong -> new AtomicLong())
                                .addAndGet(record.value().getAggregateVal());
                    }
                } catch (Exception e) {
                    LOGGER.error("Error polling topic {} ", DEST_TOPIC, e);
                }
                if (isTerminated.get() && terminationTime == null) {
                    terminationTime = Instant.now();
                }
            }
            consumer.close();
            LOGGER.info("Consumer closed");
        });
        LOGGER.info("Consumer started");
    }
 
Developer: gchq | Project: stroom-stats | Lines: 75 | Source: KafkaStreamsSandbox.java

Example 4: main

import org.apache.kafka.common.serialization.Serdes; // import the class this method depends on
public static void main(String[] args) {

        String bootstrapServers = System.getenv("KAFKA_BOOTSTRAP_SERVERS");
        LOG.info("KAFKA_BOOTSTRAP_SERVERS = {}", bootstrapServers);

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, APP_NAME);
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        KStreamBuilder builder = new KStreamBuilder();

        KStream<String, String> source = builder.stream(sourceAddress);

        KStream<Windowed<String>, String> max = source
                .selectKey((key, value) -> "temp")
                .groupByKey()
                .reduce((a,b) -> {
                    if (Integer.parseInt(a) > Integer.parseInt(b))
                        return a;
                    else
                        return b;
                }, TimeWindows.of(TimeUnit.SECONDS.toMillis(5000))) // note: a 5000-second window; toMillis(5) would give 5 seconds
                .toStream();

        WindowedSerializer<String> windowedSerializer = new WindowedSerializer<>(Serdes.String().serializer());
        WindowedDeserializer<String> windowedDeserializer = new WindowedDeserializer<>(Serdes.String().deserializer());
        Serde<Windowed<String>> windowedSerde = Serdes.serdeFrom(windowedSerializer, windowedDeserializer);

        // need to override key serde to Windowed<String> type
        max.to(windowedSerde, Serdes.String(), destinationAddress);

        final KafkaStreams streams = new KafkaStreams(builder, props);

        final CountDownLatch latch = new CountDownLatch(1);

        // attach shutdown handler to catch control-c
        Runtime.getRuntime().addShutdownHook(new Thread("streams-temperature-shutdown-hook") {
            @Override
            public void run() {
                streams.close();
                latch.countDown();
            }
        });

        try {
            streams.start();
            latch.await();
        } catch (Throwable e) {
            System.exit(1);
        }
        System.exit(0);
    }
 
Developer: ppatierno | Project: enmasse-iot-demo | Lines: 65 | Source: KafkaTemperature.java

Example 5: main

import org.apache.kafka.common.serialization.Serdes; // import the class this method depends on
public static void main(String[] args) {
    if(args.length < 3) {
        System.exit(1);
    }

    boolean joinOperationEnabled = (args.length == 4);

    String broker = args[0];
    String inputTopic = args[1];
    String outputTopic = args[2];

    Properties properties = defaultConsumingProperties();
    properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, broker);

    StreamsConfig streamingConfig = new StreamsConfig(properties);

    JSONDeserializer<StreamState> streamStateDeserializer = new JSONDeserializer<>(StreamState.class);
    JSONSerializer<StreamState> streamStateSerializer = new JSONSerializer<>();

    Serde<StreamState> streamStateSerde = Serdes.serdeFrom(streamStateSerializer, streamStateDeserializer);
    TopologyBuilder builder = new TopologyBuilder();
    if(joinOperationEnabled) {
        /*  Join operations can be performed between KStreams (and/or KTables). */
        /*  We should initialize a KStreamBuilder instead of a TopologyBuilder. */
        /*  TO-DO:  -   Some stuff. */
    }
    else {
        builder.addSource(SOURCE, new StringDeserializer(), new StringDeserializer(), inputTopic)
                .addProcessor(MESSAGE_PROCESSOR, MessageProcessor::new, SOURCE)
                .addProcessor(STREAM_AGGREGATOR_PROCESSOR, StreamAggregator::new, MESSAGE_PROCESSOR)
                .addStateStore(Stores.create(STREAM_STATE).withStringKeys().withValues(streamStateSerde).inMemory().build(), STREAM_AGGREGATOR_PROCESSOR)
                .addProcessor(STREAM_APPLY_REDUCTION_PROCESSOR, StreamReduction::new, STREAM_AGGREGATOR_PROCESSOR)
                .addProcessor(RESULT_PROCESSOR, ResultProcessor::new, STREAM_APPLY_REDUCTION_PROCESSOR)
                .addSink(SINK, outputTopic, new StringSerializer(), new StringSerializer(), RESULT_PROCESSOR);
    }

    KafkaStreams kafkaStreams = new KafkaStreams(builder, streamingConfig);
    kafkaStreams.start();
}
 
Developer: gdibernardo | Project: streaming-engines-benchmark | Lines: 43 | Source: Consumer.java

Example 6: SpecificAvroSerde

import org.apache.kafka.common.serialization.Serdes; // import the class this method depends on
/**
 * Constructor used by Kafka Streams.
 */
public SpecificAvroSerde() {
    inner = Serdes.serdeFrom(new SpecificAvroSerializer<>(), new SpecificAvroDeserializer<>());
}
 
Developer: jeqo | Project: talk-kafka-messaging-logs | Lines: 7 | Source: SpecificAvroSerde.java

Example 7: GenericAvroSerde

import org.apache.kafka.common.serialization.Serdes; // import the class this method depends on
/**
 * Constructor used by Kafka Streams.
 */
public GenericAvroSerde() {
    inner = Serdes.serdeFrom(new GenericAvroSerializer(), new GenericAvroDeserializer());
}
 
Developer: jeqo | Project: talk-kafka-messaging-logs | Lines: 7 | Source: GenericAvroSerde.java
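Examples 6 and 7 show only the constructors. The surrounding wrapper class (not shown in the snippets) typically delegates every Serde method to the inner serde built by Serdes.serdeFrom. A minimal sketch of that pattern, using a hypothetical MySerde with caller-supplied serializer and deserializer in place of the Avro classes from the original project:

import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;

public class MySerde<T> implements Serde<T> {

    private final Serde<T> inner;

    public MySerde(Serializer<T> serializer, Deserializer<T> deserializer) {
        // Serdes.serdeFrom pairs the two halves into a single Serde
        inner = Serdes.serdeFrom(serializer, deserializer);
    }

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        inner.configure(configs, isKey);
    }

    @Override
    public void close() {
        inner.close();
    }

    @Override
    public Serializer<T> serializer() {
        return inner.serializer();
    }

    @Override
    public Deserializer<T> deserializer() {
        return inner.deserializer();
    }
}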

Example 8: main

import org.apache.kafka.common.serialization.Serdes; // import the class this method depends on
public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pageview-untyped");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG, JsonTimestampExtractor.class);
    props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);

    // setting offset reset to earliest so that we can re-run the demo code with the same pre-loaded data
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    KStreamBuilder builder = new KStreamBuilder();

    final Serializer<JsonNode> jsonSerializer = new JsonSerializer();
    final Deserializer<JsonNode> jsonDeserializer = new JsonDeserializer();
    final Serde<JsonNode> jsonSerde = Serdes.serdeFrom(jsonSerializer, jsonDeserializer);

    KStream<String, JsonNode> views = builder.stream(Serdes.String(), jsonSerde, "streams-pageview-input");

    KTable<String, JsonNode> users = builder.table(Serdes.String(), jsonSerde,
        "streams-userprofile-input", "streams-userprofile-store-name");

    KTable<String, String> userRegions = users.mapValues(new ValueMapper<JsonNode, String>() {
        @Override
        public String apply(JsonNode record) {
            return record.get("region").textValue();
        }
    });

    KStream<JsonNode, JsonNode> regionCount = views
            .leftJoin(userRegions, new ValueJoiner<JsonNode, String, JsonNode>() {
                @Override
                public JsonNode apply(JsonNode view, String region) {
                    ObjectNode jNode = JsonNodeFactory.instance.objectNode();

                    return jNode.put("user", view.get("user").textValue())
                            .put("page", view.get("page").textValue())
                            .put("region", region == null ? "UNKNOWN" : region);
                }
            })
            .map(new KeyValueMapper<String, JsonNode, KeyValue<String, JsonNode>>() {
                @Override
                public KeyValue<String, JsonNode> apply(String user, JsonNode viewRegion) {
                    return new KeyValue<>(viewRegion.get("region").textValue(), viewRegion);
                }
            })
            .groupByKey(Serdes.String(), jsonSerde)
            .count(TimeWindows.of(7 * 24 * 60 * 60 * 1000L).advanceBy(1000), "RollingSevenDaysOfPageViewsByRegion")
            // TODO: we can merge this toStream().map(...) into a single toStream(...)
            .toStream()
            .map(new KeyValueMapper<Windowed<String>, Long, KeyValue<JsonNode, JsonNode>>() {
                @Override
                public KeyValue<JsonNode, JsonNode> apply(Windowed<String> key, Long value) {
                    ObjectNode keyNode = JsonNodeFactory.instance.objectNode();
                    keyNode.put("window-start", key.window().start())
                            .put("region", key.key());

                    ObjectNode valueNode = JsonNodeFactory.instance.objectNode();
                    valueNode.put("count", value);

                    return new KeyValue<>((JsonNode) keyNode, (JsonNode) valueNode);
                }
            });

    // write to the result topic
    regionCount.to(jsonSerde, jsonSerde, "streams-pageviewstats-untyped-output");

    KafkaStreams streams = new KafkaStreams(builder, props);
    streams.start();

    // usually the stream application would be running forever,
    // in this example we just let it run for some time and stop since the input data is finite.
    Thread.sleep(5000L);

    streams.close();
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 76 | Source: PageViewUntypedDemo.java

Example 9: withBuiltinTypes

import org.apache.kafka.common.serialization.Serdes; // import the class this method depends on
/**
 * Create a new instance of {@link StateSerdes} for the given state name and key-/value-type classes.
 *
 * @param topic      the topic name
 * @param keyClass   the class of the key type
 * @param valueClass the class of the value type
 * @param <K>        the key type
 * @param <V>        the value type
 * @return a new instance of {@link StateSerdes}
 */
public static <K, V> StateSerdes<K, V> withBuiltinTypes(
    final String topic,
    final Class<K> keyClass,
    final Class<V> valueClass) {
    return new StateSerdes<>(topic, Serdes.serdeFrom(keyClass), Serdes.serdeFrom(valueClass));
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 17 | Source: StateSerdes.java
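The single-argument overload Serdes.serdeFrom(Class) used above resolves one of Kafka's built-in serdes from a type token (String, Integer, Long, Double, byte[], ByteBuffer, and a few others) and throws an exception for unsupported classes. A hedged usage sketch; note that StateSerdes is an internal Kafka Streams class, and the store name is illustrative:

import org.apache.kafka.streams.state.StateSerdes;

// both String.class and Long.class resolve to built-in serdes via Serdes.serdeFrom(Class)
StateSerdes<String, Long> serdes =
        StateSerdes.withBuiltinTypes("my-store-changelog", String.class, Long.class);

byte[] rawKey = serdes.rawKey("user-42"); // serialize a key
String key = serdes.keyFrom(rawKey);      // ...and round-trip it back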

Example 10: create

import org.apache.kafka.common.serialization.Serdes; // import the class this method depends on
/**
 * Create a driver object that will have a {@link #context()} that records messages
 * {@link ProcessorContext#forward(Object, Object) forwarded} by the store and that provides the specified serializers and
 * deserializers. This can be used when the store is created to rely upon the ProcessorContext's default key and value
 * serializers and deserializers.
 *
 *
 * @param keySerializer the key serializer for the {@link ProcessorContext}; may not be null
 * @param keyDeserializer the key deserializer for the {@link ProcessorContext}; may not be null
 * @param valueSerializer the value serializer for the {@link ProcessorContext}; may not be null
 * @param valueDeserializer the value deserializer for the {@link ProcessorContext}; may not be null
 * @return the test driver; never null
 */
public static <K, V> KeyValueStoreTestDriver<K, V> create(final Serializer<K> keySerializer,
                                                          final Deserializer<K> keyDeserializer,
                                                          final Serializer<V> valueSerializer,
                                                          final Deserializer<V> valueDeserializer) {
    final StateSerdes<K, V> serdes = new StateSerdes<>(
        "unexpected",
        Serdes.serdeFrom(keySerializer, keyDeserializer),
        Serdes.serdeFrom(valueSerializer, valueDeserializer));
    return new KeyValueStoreTestDriver<>(serdes);
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 23 | Source: KeyValueStoreTestDriver.java
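A hedged usage sketch of this test-driver factory; the serializer choices are illustrative, and KeyValueStoreTestDriver is an internal Kafka Streams test utility:

import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.processor.ProcessorContext;

// build a driver whose ProcessorContext defaults to String keys and Integer values
KeyValueStoreTestDriver<String, Integer> driver = KeyValueStoreTestDriver.create(
        new StringSerializer(), new StringDeserializer(),
        new IntegerSerializer(), new IntegerDeserializer());

// stores created against this context pick up the serdes supplied above
ProcessorContext context = driver.context();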

Example 11: serdFrom

import org.apache.kafka.common.serialization.Serdes; // import the class this method depends on
/**
 * @param <T> the POJO type; the class must have a no-argument constructor and a
 *            getter and setter for every member variable
 * @param pojoClass the POJO class
 * @return an instance of {@link Serde}
 */
public static <T> Serde<T> serdFrom(Class<T> pojoClass) {
    return Serdes.serdeFrom(new GenericSerializer<T>(pojoClass), new GenericDeserializer<T>(pojoClass));
}
 
Developer: jiumao-org | Project: wechat-mall | Lines: 10 | Source: SerdesFactory.java
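A hedged usage sketch of this factory; Order is a hypothetical POJO, the topic name is illustrative, and GenericSerializer/GenericDeserializer come from the wechat-mall project:

// a POJO with a no-arg constructor and a getter/setter per field, as the Javadoc requires
public class Order {
    private String id;
    private long amount;

    public Order() { }
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public long getAmount() { return amount; }
    public void setAmount(long amount) { this.amount = amount; }
}

// build the serde and round-trip an instance
Serde<Order> orderSerde = SerdesFactory.serdFrom(Order.class);
Order order = new Order();
order.setId("A-1");
order.setAmount(42L);
byte[] bytes = orderSerde.serializer().serialize("orders", order);
Order copy = orderSerde.deserializer().deserialize("orders", bytes);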


Note: the org.apache.kafka.common.serialization.Serdes.serdeFrom method examples in this article were compiled by 纯净天空 from open-source code hosted on GitHub, MSDocs, and similar platforms. The snippets are selected from open-source projects and remain the copyright of their original authors; refer to each project's License before distributing or reusing the code, and do not republish without permission.