

Java Serde.serializer Method Code Examples

This page collects typical usage examples of the Java method org.apache.kafka.common.serialization.Serde.serializer, gathered from open-source projects. If you are wondering what Serde.serializer does, how to call it, or where to find it in real code, the curated examples below should help. You can also browse further usage examples of org.apache.kafka.common.serialization.Serde, the class this method belongs to.


Five code examples of the Serde.serializer method are shown below, ordered by popularity.
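Before the examples, here is a minimal, self-contained sketch of what Serde.serializer returns: the Serializer half of a serializer/deserializer pair, matched by deserializer(). The topic name "demo-topic" is purely illustrative.

import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;

public class SerdeRoundTrip {
    public static void main(String[] args) {
        Serde<String> serde = Serdes.String();

        // serializer() and deserializer() return the two halves of the Serde pair
        Serializer<String> serializer = serde.serializer();
        Deserializer<String> deserializer = serde.deserializer();

        byte[] bytes = serializer.serialize("demo-topic", "hello");
        String roundTripped = deserializer.deserialize("demo-topic", bytes);

        System.out.println(roundTripped); // prints "hello"
    }
}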

Example 1: buildProducer

import org.apache.kafka.common.serialization.Serde; // import the package/class this method depends on
private KafkaProducer<StatEventKey, StatAggregate> buildProducer() {

    // Configure the producer
    Map<String, Object> producerProps = getProducerProps();

    Serde<StatEventKey> statKeySerde = StatEventKeySerde.instance();
    Serde<StatAggregate> statAggregateSerde = StatAggregateSerde.instance();

    try {
        // Hand the serializer halves of the two serdes straight to the producer
        return new KafkaProducer<>(
                producerProps,
                statKeySerde.serializer(),
                statAggregateSerde.serializer());
    } catch (Exception e) {
        try {
            String props = producerProps.entrySet().stream()
                    .map(entry -> "  " + entry.getKey() + "=" + entry.getValue())
                    .collect(Collectors.joining("\n"));
            LOGGER.error("Error initialising kafka producer with props:\n{}", props, e);
        } catch (Exception e1) {
            // Only fall back to the props-free message if dumping the properties itself failed
            LOGGER.error("Error initialising kafka producer, unable to dump property values", e);
        }
        throw e;
    }
}
 
Developer: gchq, Project: stroom-stats, Lines: 26, Source: StatisticsAggregationService.java
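For reference, a stripped-down analogue of the pattern above using only the built-in serdes (StatEventKeySerde and StatAggregateSerde are stroom-stats classes; the broker address and topic name here are assumptions for illustration):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.Serdes;

public class ProducerFromSerdes {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker

        // Same pattern as buildProducer() above: hand the serializer halves
        // of two serdes straight to the KafkaProducer constructor
        try (KafkaProducer<String, Long> producer = new KafkaProducer<>(
                props,
                Serdes.String().serializer(),
                Serdes.Long().serializer())) {
            producer.send(new ProducerRecord<>("stats-topic", "key-1", 42L));
        }
    }
}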

Example 2: to

import org.apache.kafka.common.serialization.Serde; //import the package/class this method depends on
@SuppressWarnings("unchecked")
@Override
public void to(final Serde<K> keySerde, final Serde<V> valSerde, StreamPartitioner<? super K, ? super V> partitioner, final String topic) {
    Objects.requireNonNull(topic, "topic can't be null");
    final String name = topology.newName(SINK_NAME);

    // A null serde means "fall back to the default serializer configured in StreamsConfig"
    final Serializer<K> keySerializer = keySerde == null ? null : keySerde.serializer();
    final Serializer<V> valSerializer = valSerde == null ? null : valSerde.serializer();

    // Windowed keys need the matching WindowedStreamPartitioner so records are
    // partitioned the same way as the original windowed stream
    if (partitioner == null && keySerializer instanceof WindowedSerializer) {
        final WindowedSerializer<Object> windowedSerializer = (WindowedSerializer<Object>) keySerializer;
        partitioner = (StreamPartitioner<K, V>) new WindowedStreamPartitioner<Object, V>(topic, windowedSerializer);
    }

    topology.addSink(name, topic, keySerializer, valSerializer, partitioner, this.name);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 17, Source: KStreamImpl.java
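From the caller's side, this overload is typically reached through the pre-1.0 KStream API, roughly as in the sketch below (matching the Kafka 0.11 source this excerpt comes from; topic names are illustrative):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class ToExample {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, Long> counts = builder.stream(Serdes.String(), Serdes.Long(), "counts-in");

        // Reaches the method above: each serde is unwrapped to its serializer at
        // the sink, and the partitioner argument defaults to null (non-windowed keys)
        counts.to(Serdes.String(), Serdes.Long(), "counts-out");
    }
}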

Example 3: createReparitionedSource

import org.apache.kafka.common.serialization.Serde; //import the package/class this method depends on
static <K1, V1> String createReparitionedSource(AbstractStream<K1> stream,
                                                Serde<K1> keySerde,
                                                Serde<V1> valSerde,
                                                final String topicNamePrefix) {
    // Null serdes fall through to the defaults configured in StreamsConfig
    Serializer<K1> keySerializer = keySerde != null ? keySerde.serializer() : null;
    Serializer<V1> valSerializer = valSerde != null ? valSerde.serializer() : null;
    Deserializer<K1> keyDeserializer = keySerde != null ? keySerde.deserializer() : null;
    Deserializer<V1> valDeserializer = valSerde != null ? valSerde.deserializer() : null;
    String baseName = topicNamePrefix != null ? topicNamePrefix : stream.name;

    String repartitionTopic = baseName + REPARTITION_TOPIC_SUFFIX;
    String sinkName = stream.topology.newName(SINK_NAME);
    String filterName = stream.topology.newName(FILTER_NAME);
    String sourceName = stream.topology.newName(SOURCE_NAME);

    stream.topology.addInternalTopic(repartitionTopic);
    // Drop records with null keys: they cannot be meaningfully repartitioned
    stream.topology.addProcessor(filterName, new KStreamFilter<>(new Predicate<K1, V1>() {
        @Override
        public boolean test(final K1 key, final V1 value) {
            return key != null;
        }
    }, false), stream.name);

    // Write the filtered stream to the repartition topic and read it back,
    // creating a new source node that is co-partitioned by key
    stream.topology.addSink(sinkName, repartitionTopic, keySerializer,
            valSerializer, filterName);
    stream.topology.addSource(sourceName, keyDeserializer, valDeserializer,
            repartitionTopic);

    return sourceName;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 31, Source: KStreamImpl.java
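The null-checked extraction above is a recurring pattern in these examples; pulled out as a tiny utility it might look like this (class and method names are ours, not Kafka's):

import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;

final class SerdeComponents {
    // A null Serde yields a null Serializer/Deserializer, which Kafka Streams
    // treats as "use the default serde configured in StreamsConfig"
    static <T> Serializer<T> serializerOrNull(Serde<T> serde) {
        return serde == null ? null : serde.serializer();
    }

    static <T> Deserializer<T> deserializerOrNull(Serde<T> serde) {
        return serde == null ? null : serde.deserializer();
    }
}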

Example 4: input

import org.apache.kafka.common.serialization.Serde; //import the package/class this method depends on
public <K, V> MockafkaBuilder input(String topic, Serde<K> keySerde, Serde<V> valueSerde, Message<K, V>... data) {
    Serializer<K> keySerializer = keySerde.serializer();
    Serializer<V> valueSerializer = valueSerde.serializer();

    // Serialize each typed message into raw byte arrays, mirroring what a real broker stores
    List<Message<byte[], byte[]>> convertedData = Stream.of(data)
        .map(m -> new Message<>(keySerializer.serialize(topic, m.getKey()), valueSerializer.serialize(topic, m.getValue())))
        .collect(toList());

    inputs.put(topic, convertedData);
    return this;
}
 
Developer: carlosmenezes, Project: mockafka, Lines: 12, Source: MockafkaBuilder.java
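The same key/value-to-bytes conversion, sketched without the Mockafka types (topic name and values are illustrative):

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;

public class RawBytesDemo {
    public static void main(String[] args) {
        Serde<String> keySerde = Serdes.String();
        Serde<Integer> valueSerde = Serdes.Integer();

        // Turn a typed key/value pair into the raw byte arrays a broker stores
        byte[] rawKey = keySerde.serializer().serialize("orders", "order-1");
        byte[] rawValue = valueSerde.serializer().serialize("orders", 99);

        System.out.println(rawKey.length + " key bytes, " + rawValue.length + " value bytes");
    }
}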

Example 5: consume

import org.apache.kafka.common.serialization.Serde; //import the package/class this method depends on
private void consume() {
    Properties consumerProps = new Properties();
    consumerProps.put("bootstrap.servers", "stroom.kafka:9092");
    consumerProps.put("group.id", "consumerGroup");
    consumerProps.put("enable.auto.commit", "true");
    consumerProps.put("auto.commit.interval.ms", "1000");
    consumerProps.put("session.timeout.ms", "30000");

    Serde<Long> longSerde = Serdes.Long();

    // Build a Serde for the custom aggregate type from a serializer/deserializer lambda pair
    Serde<LongAggregator> longAggregatorSerde = SerdeUtils.buildBasicSerde(
            (topic, data) -> Bytes.toBytes(data.getAggregateVal()),
            (topic, bData) -> new LongAggregator(Bytes.toLong(bData)));

    SerdeUtils.verify(longAggregatorSerde, new LongAggregator(123));

    // Wrap the plain Long serde so it can handle Windowed<Long> keys
    WindowedSerializer<Long> longWindowedSerializer = new WindowedSerializer<>(longSerde.serializer());
    WindowedDeserializer<Long> longWindowedDeserializer = new WindowedDeserializer<>(longSerde.deserializer());
    Serde<Windowed<Long>> windowedSerde = Serdes.serdeFrom(longWindowedSerializer, longWindowedDeserializer);

    // The deserializers are passed straight to the constructor, so there is no
    // need to set key.deserializer/value.deserializer in the properties
    KafkaConsumer<Windowed<Long>, LongAggregator> consumer = new KafkaConsumer<>(
            consumerProps,
            windowedSerde.deserializer(),
            longAggregatorSerde.deserializer());

    consumer.subscribe(Collections.singletonList(DEST_TOPIC));

    ExecutorService executorService = Executors.newSingleThreadExecutor();

    @SuppressWarnings("FutureReturnValueIgnored")
    Future<?> future = executorService.submit(() -> {
        LOGGER.info("Consumer about to poll");
        // Poll until the enclosing process is shut down
        while (true) {
            try {
                ConsumerRecords<Windowed<Long>, LongAggregator> records = consumer.poll(100);
                for (ConsumerRecord<Windowed<Long>, LongAggregator> record : records) {
                    LOGGER.info("Received message key: {} winStart: {} winEnd: {} winDuration: {} val: {}",
                            epochMsToString(record.key().key()),
                            epochMsToString(record.key().window().start()),
                            epochMsToString(record.key().window().end()),
                            record.key().window().end() - record.key().window().start(),
                            record.value().getAggregateVal());
                    outputData.computeIfAbsent(record.key().key(), aLong -> new AtomicLong())
                            .addAndGet(record.value().getAggregateVal());
                }
            } catch (Exception e) {
                LOGGER.error("Error polling topic {}", DEST_TOPIC, e);
            }
        }
    });
    LOGGER.info("Consumer started");
}
 
Developer: gchq, Project: stroom-stats, Lines: 75, Source: KafkaStreamsSandbox.java
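SerdeUtils.buildBasicSerde and Bytes are stroom-stats/HBase helpers, not part of the Kafka client. A plain-Kafka equivalent can be assembled with Serdes.serdeFrom; the sketch below uses Long as a stand-in for the project's LongAggregator, with class names of our own choosing:

import java.nio.ByteBuffer;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;

class LongAggregatorSerializer implements Serializer<Long> {
    @Override public void configure(Map<String, ?> configs, boolean isKey) { }
    @Override public byte[] serialize(String topic, Long data) {
        // Encode the aggregate value as 8 big-endian bytes
        return data == null ? null : ByteBuffer.allocate(Long.BYTES).putLong(data).array();
    }
    @Override public void close() { }
}

class LongAggregatorDeserializer implements Deserializer<Long> {
    @Override public void configure(Map<String, ?> configs, boolean isKey) { }
    @Override public Long deserialize(String topic, byte[] data) {
        return data == null ? null : ByteBuffer.wrap(data).getLong();
    }
    @Override public void close() { }
}

public class CustomSerdeDemo {
    public static void main(String[] args) {
        // serdeFrom pairs the two halves into a Serde, just like the helper above
        Serde<Long> aggregateSerde =
                Serdes.serdeFrom(new LongAggregatorSerializer(), new LongAggregatorDeserializer());

        byte[] bytes = aggregateSerde.serializer().serialize("stats", 123L);
        System.out.println(aggregateSerde.deserializer().deserialize("stats", bytes)); // 123
    }
}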


Note: The org.apache.kafka.common.serialization.Serde.serializer method examples on this page were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by many developers; copyright remains with the original authors, and any use or redistribution should follow each project's License. Please do not repost without permission.