Java Deserializer Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.common.serialization.Deserializer. If you are wondering what the Deserializer class is for, or how to use it in your own code, the curated examples below should help.


The Deserializer class belongs to the org.apache.kafka.common.serialization package. Fifteen code examples of the class are shown below, sorted by popularity by default.
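Before the examples, it may help to see the interface contract itself: a Deserializer&lt;T&gt; implements configure, deserialize, and close. The minimal sketch below is invented for illustration (Kafka already ships an equivalent StringDeserializer) and is not one of the collected examples.

import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;

public class Utf8StringDeserializer implements Deserializer<String> {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // nothing to configure for this sketch
    }

    @Override
    public String deserialize(String topic, byte[] data) {
        // Kafka passes null for null record values; preserve that.
        return data == null ? null : new String(data, StandardCharsets.UTF_8);
    }

    @Override
    public void close() {
        // no resources to release
    }
}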

Example 1: KmqClient

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
public KmqClient(KmqConfig config, KafkaClients clients,
                 Class<? extends Deserializer<K>> keyDeserializer,
                 Class<? extends Deserializer<V>> valueDeserializer,
                 long msgPollTimeout) {

    this.config = config;
    this.msgPollTimeout = msgPollTimeout;

    this.msgConsumer = clients.createConsumer(config.getMsgConsumerGroupId(), keyDeserializer, valueDeserializer);
    // Using the custom partitioner, each offset-partition will contain markers only from a single queue-partition.
    this.markerProducer = clients.createProducer(
            MarkerKey.MarkerKeySerializer.class, MarkerValue.MarkerValueSerializer.class,
            Collections.singletonMap(ProducerConfig.PARTITIONER_CLASS_CONFIG, ParititionFromMarkerKey.class));

    LOG.info(String.format("Subscribing to topic: %s, using group id: %s", config.getMsgTopic(), config.getMsgConsumerGroupId()));
    msgConsumer.subscribe(Collections.singletonList(config.getMsgTopic()));
}
 
Developer: softwaremill, Project: kmq, Lines: 18, Source: KmqClient.java

Example 2: afterPropertiesSet

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@Override
@SuppressWarnings("unchecked")
public void afterPropertiesSet() throws Exception {
    if (topics == null && topicPatternString == null) {
        throw new IllegalArgumentException("topic info must not be null");
    }
    Assert.notEmpty(configs, "configs must not be empty");
    Assert.notNull(payloadListener, "payloadListener must not be null");
    String valueDeserializerKlass = (String) configs.get("value.deserializer");
    configs.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    Consumer<String, byte[]> consumer = new KafkaConsumer<>(configs);

    Deserializer valueDeserializer = createDeserializer(valueDeserializerKlass);
    valueDeserializer.configure(configs, false);

    if (topics != null) {
        listenableConsumer =
                new ListenableTracingConsumer<>(consumer, Arrays.asList(topics), valueDeserializer);
    } else {
        listenableConsumer =
                new ListenableTracingConsumer<>(consumer, Pattern.compile(topicPatternString), valueDeserializer);
    }
    if (payloadListener != null) {
        listenableConsumer.addListener(payloadListener);
    }
    listenableConsumer.start();
}
 
Developer: YanXs, Project: nighthawk, Lines: 28, Source: ListenableConsumerFactoryBean.java
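Example 2 works by forcing value.deserializer to ByteArrayDeserializer so the wrapping consumer receives raw bytes and can apply the original deserializer itself (here, to trace around deserialization). A minimal standalone sketch of that manual-deserialization pattern follows; the topic name and properties are invented for illustration, and this is not code from the nighthawk project.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualDeserializeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "manual-deserialize-demo");
        // Consume raw bytes; deserialization happens in application code below.
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        Deserializer<String> valueDeserializer = new StringDeserializer();
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
            for (ConsumerRecord<byte[], byte[]> record : records) {
                // Apply the real deserializer manually, as the tracing consumer does.
                String value = valueDeserializer.deserialize(record.topic(), record.value());
                System.out.println(value);
            }
        }
    }
}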

Example 3: buildBasicDeserializer

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
/**
 * Builds a {@code Deserializer<T>} from the supplied stateless function, with no-op
 * configure and close implementations.
 */
public static <T> Deserializer<T> buildBasicDeserializer(final DeserializeFunc<T> deserializeFunc) {
    return new Deserializer<T>() {
        @Override
        public void configure(final Map<String, ?> configs, final boolean isKey) {
        }

        @Override
        public T deserialize(final String topic, final byte[] bData) {
            return deserializeFunc.deserialize(topic, bData);
        }

        @Override
        public void close() {
        }
    };
}
 
Developer: gchq, Project: stroom-stats, Lines: 20, Source: SerdeUtils.java
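Assuming DeserializeFunc is a functional interface whose single method matches the deserialize(topic, bData) call above, the helper can be fed a lambda. A hypothetical call site (not from the stroom-stats project):

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.serialization.Deserializer;

public class SerdeUtilsUsageSketch {
    public static void main(String[] args) {
        // Build a stateless UTF-8 string deserializer from a lambda.
        Deserializer<String> stringDeserializer = SerdeUtils.buildBasicDeserializer(
                (topic, bytes) -> new String(bytes, StandardCharsets.UTF_8));
        System.out.println(stringDeserializer.deserialize("demo-topic",
                "hello".getBytes(StandardCharsets.UTF_8)));
    }
}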

Example 4: OldApiTopicConsumer

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
/**
 * @param context consumer context carrying the kafka properties, message handlers and thread limits
 */
@SuppressWarnings("unchecked")
public OldApiTopicConsumer(ConsumerContext context) {

    this.consumerContext = context;
    try {
        Class<?> deserializerClass = Class
            .forName(context.getProperties().getProperty("value.deserializer"));
        deserializer = (Deserializer<Object>) deserializerClass.newInstance();
    } catch (Exception e) {
        // Fail fast instead of silently leaving the deserializer null.
        throw new IllegalStateException("Unable to instantiate value deserializer", e);
    }
    this.connector = kafka.consumer.Consumer
        .createJavaConsumerConnector(new ConsumerConfig(context.getProperties()));

    int poolSize = consumerContext.getMessageHandlers().size();
    this.fetchExecutor = new StandardThreadExecutor(poolSize, poolSize, 0, TimeUnit.SECONDS,
        poolSize, new StandardThreadFactory("KafkaFetcher"));

    this.defaultProcessExecutor = new StandardThreadExecutor(1, context.getMaxProcessThreads(),
        30, TimeUnit.SECONDS, context.getMaxProcessThreads(),
        new StandardThreadFactory("KafkaProcessor"), new PoolFullRunsPolicy());

    logger.info(
        "Kafka consumer thread pools initialized, fetch pool size: {}, default process pool size: {}",
        poolSize, context.getMaxProcessThreads());
}
 
Developer: warlock-china, Project: azeroth, Lines: 32, Source: OldApiTopicConsumer.java

Example 5: configure

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
    if (inner == null) {
        String propertyName = isKey ? "key.deserializer.inner.class" : "value.deserializer.inner.class";
        Object innerDeserializerClass = configs.get(propertyName);
        propertyName = (innerDeserializerClass == null) ? "deserializer.inner.class" : propertyName;
        String value = null;
        try {
            value = (String) configs.get(propertyName);
            inner = Deserializer.class.cast(Utils.newInstance(value, Deserializer.class));
            inner.configure(configs, isKey);
        } catch (ClassNotFoundException e) {
            throw new ConfigException(propertyName, value, "Class " + value + " could not be found.");
        }
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 17, Source: WindowedDeserializer.java

Example 6: init

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public void init(ProcessorContext context) {
    super.init(context);
    this.context = context;

    // if deserializers are null, get the default ones from the context
    if (this.keyDeserializer == null)
        this.keyDeserializer = ensureExtended((Deserializer<K>) context.keySerde().deserializer());
    if (this.valDeserializer == null)
        this.valDeserializer = ensureExtended((Deserializer<V>) context.valueSerde().deserializer());

    // if value deserializers are for {@code Change} values, set the inner deserializer when necessary
    if (this.valDeserializer instanceof ChangedDeserializer &&
            ((ChangedDeserializer) this.valDeserializer).inner() == null)
        ((ChangedDeserializer) this.valDeserializer).setInner(context.valueSerde().deserializer());
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 18, Source: SourceNode.java

Example 7: receiveMessages

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
private <K, V> List<KeyValue<K, V>> receiveMessages(final Deserializer<K>
                                                        keyDeserializer,
                                                    final Deserializer<V>
                                                        valueDeserializer,
                                                    final int numMessages)
    throws InterruptedException {
    final Properties consumerProperties = new Properties();
    consumerProperties
        .setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, CLUSTER.bootstrapServers());
    consumerProperties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "kgroupedstream-test-" + testNo);
    consumerProperties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerProperties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass().getName());
    consumerProperties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass().getName());
    return IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(
        consumerProperties,
        outputTopic,
        numMessages,
        60 * 1000);

}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: KStreamAggregationIntegrationTest.java

Example 8: receiveMessages

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
private List<String> receiveMessages(final Deserializer<?> valueDeserializer,
                                     final int numMessages, final String topic) throws InterruptedException {

    final Properties config = new Properties();

    config.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, CLUSTER.bootstrapServers());
    config.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "kstream-test");
    config.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    config.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        IntegerDeserializer.class.getName());
    config.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        valueDeserializer.getClass().getName());
    final List<String> received = IntegrationTestUtils.waitUntilMinValuesRecordsReceived(
        config,
        topic,
        numMessages,
        60 * 1000);
    Collections.sort(received);

    return received;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 22, Source: KStreamRepartitionJoinTest.java

Example 9: receiveMessages

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
private <K, V> List<KeyValue<K, V>> receiveMessages(final Deserializer<K>
                                                        keyDeserializer,
                                                    final Deserializer<V>
                                                        valueDeserializer,
                                                    final int numMessages)
    throws InterruptedException {
    final Properties consumerProperties = new Properties();
    consumerProperties
        .setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, CLUSTER.bootstrapServers());
    consumerProperties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "kgroupedstream-test-" +
        testNo);
    consumerProperties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerProperties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        keyDeserializer.getClass().getName());
    consumerProperties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        valueDeserializer.getClass().getName());
    return IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(consumerProperties,
        outputTopic,
        numMessages,
        60 * 1000);

}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 23, Source: KStreamAggregationDedupIntegrationTest.java

Example 10: testWindowedDeserializerNoArgConstructors

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@Test
public void testWindowedDeserializerNoArgConstructors() {
    Map<String, String> props = new HashMap<>();
    // test that key[value].deserializer.inner.class takes precedence over deserializer.inner.class
    WindowedDeserializer<StringSerializer> windowedDeserializer = new WindowedDeserializer<>();
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "host:1");
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "appId");
    props.put("key.deserializer.inner.class", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("deserializer.inner.class", "org.apache.kafka.common.serialization.StringDeserializer");
    windowedDeserializer.configure(props, true);
    Deserializer<?> inner = windowedDeserializer.innerDeserializer();
    assertNotNull("Inner deserializer should be not null", inner);
    assertTrue("Inner deserializer type should be StringDeserializer", inner instanceof StringDeserializer);
    // test deserializer.inner.class
    props.put("deserializer.inner.class", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    props.remove("key.deserializer.inner.class");
    props.remove("value.deserializer.inner.class");
    WindowedDeserializer<?> windowedDeserializer1 = new WindowedDeserializer<>();
    windowedDeserializer1.configure(props, false);
    Deserializer<?> inner1 = windowedDeserializer1.innerDeserializer();
    assertNotNull("Inner deserializer should be not null", inner1);
    assertTrue("Inner deserializer type should be ByteArrayDeserializer", inner1 instanceof ByteArrayDeserializer);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 24, Source: WindowedStreamPartitionerTest.java

Example 11: createFetcher

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
private <K, V> Fetcher<K, V> createFetcher(SubscriptionState subscriptions,
                                           Metrics metrics,
                                           Deserializer<K> keyDeserializer,
                                           Deserializer<V> valueDeserializer,
                                           int maxPollRecords,
                                           IsolationLevel isolationLevel) {
    return new Fetcher<>(consumerClient,
            minBytes,
            maxBytes,
            maxWaitMs,
            fetchSize,
            maxPollRecords,
            true, // check crc
            keyDeserializer,
            valueDeserializer,
            metadata,
            subscriptions,
            metrics,
            metricsRegistry,
            time,
            retryBackoffMs,
            isolationLevel);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 24, Source: FetcherTest.java

Example 12: init

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@SuppressWarnings("unchecked")
void init(ServletContext context) {
  String serializedConfig = context.getInitParameter(ConfigUtils.class.getName() + ".serialized");
  Objects.requireNonNull(serializedConfig);
  this.config = ConfigUtils.deserialize(serializedConfig);
  this.updateTopic = config.getString("oryx.update-topic.message.topic");
  this.maxMessageSize = config.getInt("oryx.update-topic.message.max-size");
  this.updateTopicLockMaster = config.getString("oryx.update-topic.lock.master");
  this.updateTopicBroker = config.getString("oryx.update-topic.broker");
  this.readOnly = config.getBoolean("oryx.serving.api.read-only");
  if (!readOnly) {
    this.inputTopic = config.getString("oryx.input-topic.message.topic");
    this.inputTopicLockMaster = config.getString("oryx.input-topic.lock.master");
    this.inputTopicBroker = config.getString("oryx.input-topic.broker");
  }
  this.modelManagerClassName = config.getString("oryx.serving.model-manager-class");
  this.updateDecoderClass = (Class<? extends Deserializer<U>>) ClassUtils.loadClass(
      config.getString("oryx.update-topic.message.decoder-class"), Deserializer.class);
  Preconditions.checkArgument(maxMessageSize > 0);
}
 
Developer: oncewang, Project: oryx2, Lines: 21, Source: ModelManagerListener.java
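The decoder-class lookup in Example 12 resolves a Deserializer implementation from a configuration string. A sketch of the same idea using plain JDK reflection instead of Oryx's ClassUtils helper (the class name is chosen for illustration):

import org.apache.kafka.common.serialization.Deserializer;

public class ReflectiveDeserializerSketch {
    public static void main(String[] args) throws Exception {
        String className = "org.apache.kafka.common.serialization.StringDeserializer";
        // asSubclass fails fast if the configured class is not a Deserializer.
        Class<? extends Deserializer> clazz =
                Class.forName(className).asSubclass(Deserializer.class);
        Deserializer<?> deserializer = clazz.newInstance();
        System.out.println(deserializer.getClass().getName());
    }
}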

Example 13: testSingleMessageSegment

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@Test
public void testSingleMessageSegment() {
  // Create serializer/deserializers.
  Serializer<LargeMessageSegment> segmentSerializer = new DefaultSegmentSerializer();
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();

  byte[] messageWrappedBytes = wrapMessageBytes(segmentSerializer, "message".getBytes());

  MessageAssembler messageAssembler = new MessageAssemblerImpl(100, 100, true, segmentDeserializer);
  MessageAssembler.AssembleResult assembleResult =
      messageAssembler.assemble(new TopicPartition("topic", 0), 0, messageWrappedBytes);

  assertNotNull(assembleResult.messageBytes());
  assertEquals(assembleResult.messageStartingOffset(), 0, "The message starting offset should be 0");
  assertEquals(assembleResult.messageEndingOffset(), 0, "The message ending offset should be 0");
}
 
Developer: becketqin, Project: likafka-clients, Lines: 17, Source: MessageAssemblerTest.java

Example 14: testSerde

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@Test
public void testSerde() {
  Serializer<String> stringSerializer = new StringSerializer();
  Deserializer<String> stringDeserializer = new StringDeserializer();
  Serializer<LargeMessageSegment> segmentSerializer = new DefaultSegmentSerializer();
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();

  String s = LiKafkaClientsTestUtils.getRandomString(100);
  assertEquals(s.length(), 100);
  byte[] stringBytes = stringSerializer.serialize("topic", s);
  assertEquals(stringBytes.length, 100);
  LargeMessageSegment segment =
      new LargeMessageSegment(LiKafkaClientsUtils.randomUUID(), 0, 2, stringBytes.length, ByteBuffer.wrap(stringBytes));
  // String bytes + segment header
  byte[] serializedSegment = segmentSerializer.serialize("topic", segment);
  assertEquals(serializedSegment.length, 1 + stringBytes.length + LargeMessageSegment.SEGMENT_INFO_OVERHEAD + 4);

  LargeMessageSegment deserializedSegment = segmentDeserializer.deserialize("topic", serializedSegment);
  assertEquals(deserializedSegment.messageId, segment.messageId);
  assertEquals(deserializedSegment.messageSizeInBytes, segment.messageSizeInBytes);
  assertEquals(deserializedSegment.numberOfSegments, segment.numberOfSegments);
  assertEquals(deserializedSegment.sequenceNumber, segment.sequenceNumber);
  assertEquals(deserializedSegment.payload.limit(), 100);
  String deserializedString = stringDeserializer.deserialize("topic", deserializedSegment.payloadArray());
  assertEquals(deserializedString.length(), s.length());
}
 
Developer: linkedin, Project: li-apache-kafka-clients, Lines: 27, Source: SerializerDeserializerTest.java

Example 15: deserializer

import org.apache.kafka.common.serialization.Deserializer; // import the required package/class
@Override
public Deserializer<T> deserializer() {
    return new Deserializer<T>() {
        @Override
        public void configure(Map<String, ?> configs, boolean isKey) {

        }

        @Override
        public T deserialize(String topic, byte[] data) {
            T result;
            try {
                result = mapper.readValue(data, cls);
            } catch (Exception e) {
                throw new SerializationException(e);
            }

            return result;
        }

        @Override
        public void close() {

        }
    };
}
 
Developer: amient, Project: hello-kafka-streams, Lines: 27, Source: JsonPOJOSerde.java
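A possible way to use the Serde end to end, assuming JsonPOJOSerde exposes a constructor taking the target class (as the cls field used in deserialize() above suggests); MyEvent is an invented POJO, not part of the project:

public class JsonPOJOSerdeUsageSketch {
    // Invented POJO for illustration; Jackson needs a public no-arg constructor.
    public static class MyEvent {
        public String name = "demo";
    }

    public static void main(String[] args) {
        // Assumption: a JsonPOJOSerde(Class<T>) constructor, per the cls field above.
        JsonPOJOSerde<MyEvent> serde = new JsonPOJOSerde<>(MyEvent.class);
        byte[] bytes = serde.serializer().serialize("demo-topic", new MyEvent());
        MyEvent roundTripped = serde.deserializer().deserialize("demo-topic", bytes);
        System.out.println(roundTripped.name);
    }
}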


Note: The org.apache.kafka.common.serialization.Deserializer class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. Consult the corresponding project's license before using or redistributing the code; do not repost without permission.