

Java Deserializer Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.common.serialization.Deserializer. If you are wondering what the Deserializer class is for, how to use it, or what real-world usage looks like, the curated code examples below should help.


The Deserializer class belongs to the org.apache.kafka.common.serialization package. Fifteen code examples of the class are presented below, ordered by popularity by default.
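Before working through the examples, here is a minimal sketch of the interface itself, assuming the Kafka 0.11-era API in which configure and close must be implemented explicitly (a hypothetical UTF-8 String deserializer, not taken from any of the projects below):

import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;

// Minimal sketch: a Deserializer converts the raw bytes of a record key or
// value back into a typed object.
public class Utf8StringDeserializer implements Deserializer<String> {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // no configuration required
    }

    @Override
    public String deserialize(String topic, byte[] data) {
        // Kafka passes null for null record values; preserve that
        return data == null ? null : new String(data, StandardCharsets.UTF_8);
    }

    @Override
    public void close() {
        // nothing to release
    }
}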

Example 1: KmqClient

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
public KmqClient(KmqConfig config, KafkaClients clients,
                 Class<? extends Deserializer<K>> keyDeserializer,
                 Class<? extends Deserializer<V>> valueDeserializer,
                 long msgPollTimeout) {

    this.config = config;
    this.msgPollTimeout = msgPollTimeout;

    this.msgConsumer = clients.createConsumer(config.getMsgConsumerGroupId(), keyDeserializer, valueDeserializer);
    // Using the custom partitioner, each offset-partition will contain markers only from a single queue-partition.
    this.markerProducer = clients.createProducer(
            MarkerKey.MarkerKeySerializer.class, MarkerValue.MarkerValueSerializer.class,
            Collections.singletonMap(ProducerConfig.PARTITIONER_CLASS_CONFIG, ParititionFromMarkerKey.class));

    LOG.info(String.format("Subscribing to topic: %s, using group id: %s", config.getMsgTopic(), config.getMsgConsumerGroupId()));
    msgConsumer.subscribe(Collections.singletonList(config.getMsgTopic()));
}
 
Author: softwaremill, Project: kmq, Lines: 18, Source: KmqClient.java
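A hedged usage sketch (not from the kmq sources): the constructor takes Class tokens rather than deserializer instances, so a caller might wire it up as follows, assuming config and clients are built elsewhere and a 100 ms poll timeout:

import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

static KmqClient<String, byte[]> newClient(KmqConfig config, KafkaClients clients) {
    // Class tokens are passed so KmqClient can instantiate the deserializers
    // itself when it creates the underlying message consumer.
    return new KmqClient<>(config, clients,
            StringDeserializer.class, ByteArrayDeserializer.class,
            100 /* msgPollTimeout in ms, an assumed value */);
}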

Example 2: afterPropertiesSet

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@Override
@SuppressWarnings("unchecked")
public void afterPropertiesSet() throws Exception {
    if (topics == null && topicPatternString == null) {
        throw new IllegalArgumentException("topic info must not be null");
    }
    Assert.notEmpty(configs, "configs must not be null");
    Assert.notNull(payloadListener, "payloadListener must not be null");
    String valueDeserializerKlass = (String) configs.get("value.deserializer");
    configs.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    Consumer<String, byte[]> consumer = new KafkaConsumer<>(configs);

    Deserializer valueDeserializer = createDeserializer(valueDeserializerKlass);
    valueDeserializer.configure(configs, false);

    if (topics != null) {
        listenableConsumer =
                new ListenableTracingConsumer<>(consumer, Arrays.asList(topics), valueDeserializer);
    } else {
        listenableConsumer =
                new ListenableTracingConsumer<>(consumer, Pattern.compile(topicPatternString), valueDeserializer);
    }
    if (payloadListener != null) {
        listenableConsumer.addListener(payloadListener);
    }
    listenableConsumer.start();
}
 
Author: YanXs, Project: nighthawk, Lines: 28, Source: ListenableConsumerFactoryBean.java

Example 3: buildBasicDeserializer

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
/**
 * Builds a Deserializer of T from the given stateless function, with no-op configure and close implementations
 */
public static <T> Deserializer<T> buildBasicDeserializer(final DeserializeFunc<T> deserializeFunc) {
    return new Deserializer<T>() {
        @Override
        public void configure(final Map<String, ?> configs, final boolean isKey) {
        }

        @Override
        public T deserialize(final String topic, final byte[] bData) {
            return deserializeFunc.deserialize(topic, bData);
        }

        @Override
        public void close() {
        }
    };
}
 
Author: gchq, Project: stroom-stats, Lines: 20, Source: SerdeUtils.java
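Since DeserializeFunc is effectively a single-method interface here, the helper pairs naturally with a lambda. A hypothetical usage sketch (the UTF-8 example is assumed, not from the stroom-stats sources):

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.serialization.Deserializer;

static Deserializer<String> utf8Deserializer() {
    // Build a String deserializer from a stateless lambda.
    return SerdeUtils.buildBasicDeserializer(
            (topic, data) -> data == null ? null : new String(data, StandardCharsets.UTF_8));
}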

Example 4: OldApiTopicConsumer

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
/**
 * @param context consumer context carrying the Kafka properties, message handlers and thread limits
 */
@SuppressWarnings("unchecked")
public OldApiTopicConsumer(ConsumerContext context) {

    this.consumerContext = context;
    try {
        Class<?> deserializerClass = Class
            .forName(context.getProperties().getProperty("value.deserializer"));
        deserializer = (Deserializer<Object>) deserializerClass.newInstance();
    } catch (Exception e) {
        // fail fast rather than silently continuing with a null deserializer
        throw new IllegalStateException("Unable to instantiate value.deserializer", e);
    }
    this.connector = kafka.consumer.Consumer
        .createJavaConsumerConnector(new ConsumerConfig(context.getProperties()));

    int poolSize = consumerContext.getMessageHandlers().size();
    this.fetchExecutor = new StandardThreadExecutor(poolSize, poolSize, 0, TimeUnit.SECONDS,
        poolSize, new StandardThreadFactory("KafkaFetcher"));

    this.defaultProcessExecutor = new StandardThreadExecutor(1, context.getMaxProcessThreads(),
        30, TimeUnit.SECONDS, context.getMaxProcessThreads(),
        new StandardThreadFactory("KafkaProcessor"), new PoolFullRunsPolicy());

    logger.info(
        "Kafka Consumer ThreadPool initialized, fetchPool size:{}, defaultProcessPool size:{}",
        poolSize, context.getMaxProcessThreads());
}
 
Author: warlock-china, Project: azeroth, Lines: 32, Source: OldApiTopicConsumer.java

Example 5: configure

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
    if (inner == null) {
        String propertyName = isKey ? "key.deserializer.inner.class" : "value.deserializer.inner.class";
        Object innerDeserializerClass = configs.get(propertyName);
        propertyName = (innerDeserializerClass == null) ? "deserializer.inner.class" : propertyName;
        String value = null;
        try {
            value = (String) configs.get(propertyName);
            inner = Deserializer.class.cast(Utils.newInstance(value, Deserializer.class));
            inner.configure(configs, isKey);
        } catch (ClassNotFoundException e) {
            throw new ConfigException(propertyName, value, "Class " + value + " could not be found.");
        }
    }
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 17, Source: WindowedDeserializer.java

Example 6: init

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public void init(ProcessorContext context) {
    super.init(context);
    this.context = context;

    // if deserializers are null, get the default ones from the context
    if (this.keyDeserializer == null)
        this.keyDeserializer = ensureExtended((Deserializer<K>) context.keySerde().deserializer());
    if (this.valDeserializer == null)
        this.valDeserializer = ensureExtended((Deserializer<V>) context.valueSerde().deserializer());

    // if value deserializers are for {@code Change} values, set the inner deserializer when necessary
    if (this.valDeserializer instanceof ChangedDeserializer &&
            ((ChangedDeserializer) this.valDeserializer).inner() == null)
        ((ChangedDeserializer) this.valDeserializer).setInner(context.valueSerde().deserializer());
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 18, Source: SourceNode.java

Example 7: receiveMessages

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
private <K, V> List<KeyValue<K, V>> receiveMessages(final Deserializer<K>
                                                        keyDeserializer,
                                                    final Deserializer<V>
                                                        valueDeserializer,
                                                    final int numMessages)
    throws InterruptedException {
    final Properties consumerProperties = new Properties();
    consumerProperties
        .setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, CLUSTER.bootstrapServers());
    consumerProperties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "kgroupedstream-test-" + testNo);
    consumerProperties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerProperties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass().getName());
    consumerProperties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass().getName());
    return IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(
        consumerProperties,
        outputTopic,
        numMessages,
        60 * 1000);

}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: KStreamAggregationIntegrationTest.java
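As an aside, the round trip through class-name config strings can be avoided: KafkaConsumer also offers a constructor overload that accepts Deserializer instances directly. A sketch of that alternative, using the standard Kafka client API:

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.Deserializer;

static <K, V> KafkaConsumer<K, V> newConsumer(Properties props,
                                              Deserializer<K> keyDeserializer,
                                              Deserializer<V> valueDeserializer) {
    // With instances supplied here, the key/value deserializer class
    // properties do not need to be present in props.
    return new KafkaConsumer<>(props, keyDeserializer, valueDeserializer);
}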

Example 8: receiveMessages

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
private List<String> receiveMessages(final Deserializer<?> valueDeserializer,
                                     final int numMessages, final String topic) throws InterruptedException {

    final Properties config = new Properties();

    config.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, CLUSTER.bootstrapServers());
    config.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "kstream-test");
    config.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    config.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        IntegerDeserializer.class.getName());
    config.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        valueDeserializer.getClass().getName());
    final List<String> received = IntegrationTestUtils.waitUntilMinValuesRecordsReceived(
        config,
        topic,
        numMessages,
        60 * 1000);
    Collections.sort(received);

    return received;
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 22, Source: KStreamRepartitionJoinTest.java

Example 9: receiveMessages

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
private <K, V> List<KeyValue<K, V>> receiveMessages(final Deserializer<K>
                                                        keyDeserializer,
                                                    final Deserializer<V>
                                                        valueDeserializer,
                                                    final int numMessages)
    throws InterruptedException {
    final Properties consumerProperties = new Properties();
    consumerProperties
        .setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, CLUSTER.bootstrapServers());
    consumerProperties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "kgroupedstream-test-" +
        testNo);
    consumerProperties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerProperties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        keyDeserializer.getClass().getName());
    consumerProperties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        valueDeserializer.getClass().getName());
    return IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(consumerProperties,
        outputTopic,
        numMessages,
        60 * 1000);

}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 23, Source: KStreamAggregationDedupIntegrationTest.java

Example 10: testWindowedDeserializerNoArgConstructors

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@Test
public void testWindowedDeserializerNoArgConstructors() {
    Map<String, String> props = new HashMap<>();
    // test that key[value].deserializer.inner.class takes precedence over deserializer.inner.class
    WindowedDeserializer<String> windowedDeserializer = new WindowedDeserializer<>();
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "host:1");
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "appId");
    props.put("key.deserializer.inner.class", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("deserializer.inner.class", "org.apache.kafka.common.serialization.StringDeserializer");
    windowedDeserializer.configure(props, true);
    Deserializer<?> inner = windowedDeserializer.innerDeserializer();
    assertNotNull("Inner deserializer should be not null", inner);
    assertTrue("Inner deserializer type should be StringDeserializer", inner instanceof StringDeserializer);
    // test deserializer.inner.class
    props.put("deserializer.inner.class", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    props.remove("key.deserializer.inner.class");
    props.remove("value.deserializer.inner.class");
    WindowedDeserializer<?> windowedDeserializer1 = new WindowedDeserializer<>();
    windowedDeserializer1.configure(props, false);
    Deserializer<?> inner1 = windowedDeserializer1.innerDeserializer();
    assertNotNull("Inner deserializer should be not null", inner1);
    assertTrue("Inner deserializer type should be ByteArrayDeserializer", inner1 instanceof ByteArrayDeserializer);
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 24, Source: WindowedStreamPartitionerTest.java

Example 11: createFetcher

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
private <K, V> Fetcher<K, V> createFetcher(SubscriptionState subscriptions,
                                           Metrics metrics,
                                           Deserializer<K> keyDeserializer,
                                           Deserializer<V> valueDeserializer,
                                           int maxPollRecords,
                                           IsolationLevel isolationLevel) {
    return new Fetcher<>(consumerClient,
            minBytes,
            maxBytes,
            maxWaitMs,
            fetchSize,
            maxPollRecords,
            true, // check crc
            keyDeserializer,
            valueDeserializer,
            metadata,
            subscriptions,
            metrics,
            metricsRegistry,
            time,
            retryBackoffMs,
            isolationLevel);
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 24, Source: FetcherTest.java

Example 12: init

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@SuppressWarnings("unchecked")
void init(ServletContext context) {
  String serializedConfig = context.getInitParameter(ConfigUtils.class.getName() + ".serialized");
  Objects.requireNonNull(serializedConfig);
  this.config = ConfigUtils.deserialize(serializedConfig);
  this.updateTopic = config.getString("oryx.update-topic.message.topic");
  this.maxMessageSize = config.getInt("oryx.update-topic.message.max-size");
  this.updateTopicLockMaster = config.getString("oryx.update-topic.lock.master");
  this.updateTopicBroker = config.getString("oryx.update-topic.broker");
  this.readOnly = config.getBoolean("oryx.serving.api.read-only");
  if (!readOnly) {
    this.inputTopic = config.getString("oryx.input-topic.message.topic");
    this.inputTopicLockMaster = config.getString("oryx.input-topic.lock.master");
    this.inputTopicBroker = config.getString("oryx.input-topic.broker");
  }
  this.modelManagerClassName = config.getString("oryx.serving.model-manager-class");
  this.updateDecoderClass = (Class<? extends Deserializer<U>>) ClassUtils.loadClass(
      config.getString("oryx.update-topic.message.decoder-class"), Deserializer.class);
  Preconditions.checkArgument(maxMessageSize > 0);
}
 
Author: oncewang, Project: oryx2, Lines: 21, Source: ModelManagerListener.java
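A hypothetical follow-up (not shown in the Oryx source): the Class token loaded above would later be instantiated reflectively before use, along these lines:

import java.util.Collections;
import org.apache.kafka.common.serialization.Deserializer;

static <U> Deserializer<U> instantiateDecoder(Class<? extends Deserializer<U>> decoderClass) {
    try {
        // Deserializers are expected to have a public no-arg constructor.
        Deserializer<U> decoder = decoderClass.getDeclaredConstructor().newInstance();
        decoder.configure(Collections.emptyMap(), false); // false = value deserializer
        return decoder;
    } catch (ReflectiveOperationException e) {
        throw new IllegalStateException("Cannot instantiate decoder " + decoderClass, e);
    }
}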

Example 13: testSingleMessageSegment

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@Test
public void testSingleMessageSegment() {
  // Create serializer/deserializers.
  Serializer<LargeMessageSegment> segmentSerializer = new DefaultSegmentSerializer();
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();

  byte[] messageWrappedBytes = wrapMessageBytes(segmentSerializer, "message".getBytes());

  MessageAssembler messageAssembler = new MessageAssemblerImpl(100, 100, true, segmentDeserializer);
  MessageAssembler.AssembleResult assembleResult =
      messageAssembler.assemble(new TopicPartition("topic", 0), 0, messageWrappedBytes);

  assertNotNull(assembleResult.messageBytes());
  assertEquals(assembleResult.messageStartingOffset(), 0, "The message starting offset should be 0");
  assertEquals(assembleResult.messageEndingOffset(), 0, "The message ending offset should be 0");
}
 
Author: becketqin, Project: likafka-clients, Lines: 17, Source: MessageAssemblerTest.java

Example 14: testSerde

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@Test
public void testSerde() {
  Serializer<String> stringSerializer = new StringSerializer();
  Deserializer<String> stringDeserializer = new StringDeserializer();
  Serializer<LargeMessageSegment> segmentSerializer = new DefaultSegmentSerializer();
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();

  String s = LiKafkaClientsTestUtils.getRandomString(100);
  assertEquals(s.length(), 100);
  byte[] stringBytes = stringSerializer.serialize("topic", s);
  assertEquals(stringBytes.length, 100);
  LargeMessageSegment segment =
      new LargeMessageSegment(LiKafkaClientsUtils.randomUUID(), 0, 2, stringBytes.length, ByteBuffer.wrap(stringBytes));
  // String bytes + segment header
  byte[] serializedSegment = segmentSerializer.serialize("topic", segment);
  assertEquals(serializedSegment.length, 1 + stringBytes.length + LargeMessageSegment.SEGMENT_INFO_OVERHEAD + 4);

  LargeMessageSegment deserializedSegment = segmentDeserializer.deserialize("topic", serializedSegment);
  assertEquals(deserializedSegment.messageId, segment.messageId);
  assertEquals(deserializedSegment.messageSizeInBytes, segment.messageSizeInBytes);
  assertEquals(deserializedSegment.numberOfSegments, segment.numberOfSegments);
  assertEquals(deserializedSegment.sequenceNumber, segment.sequenceNumber);
  assertEquals(deserializedSegment.payload.limit(), 100);
  String deserializedString = stringDeserializer.deserialize("topic", deserializedSegment.payloadArray());
  assertEquals(deserializedString.length(), s.length());
}
 
Author: linkedin, Project: li-apache-kafka-clients, Lines: 27, Source: SerializerDeserializerTest.java

Example 15: deserializer

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@Override
public Deserializer<T> deserializer() {
    return new Deserializer<T>() {
        @Override
        public void configure(Map<String, ?> configs, boolean isKey) {

        }

        @Override
        public T deserialize(String topic, byte[] data) {
            T result;
            try {
                result = mapper.readValue(data, cls);
            } catch (Exception e) {
                throw new SerializationException(e);
            }

            return result;
        }

        @Override
        public void close() {

        }
    };
}
 
Author: amient, Project: hello-kafka-streams, Lines: 27, Source: JsonPOJOSerde.java
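For symmetry, a sketch of what the serializer() half of the same serde might look like, assuming the same mapper field and the Kafka 0.11-era Serializer interface (configure and close implemented explicitly); this is an illustration, not the project's actual code:

import java.util.Map;
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Serializer;

@Override
public Serializer<T> serializer() {
    return new Serializer<T>() {
        @Override
        public void configure(Map<String, ?> configs, boolean isKey) {
        }

        @Override
        public byte[] serialize(String topic, T data) {
            try {
                // Jackson writes the POJO back to its JSON byte form.
                return mapper.writeValueAsBytes(data);
            } catch (Exception e) {
                throw new SerializationException(e);
            }
        }

        @Override
        public void close() {
        }
    };
}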


Note: The org.apache.kafka.common.serialization.Deserializer class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective developers, and copyright in the source code remains with the original authors; consult each project's license before distributing or using the code. Please do not repost without permission.