

Java Deserializer.configure Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.common.serialization.Deserializer.configure. If you are unsure what Deserializer.configure does, how to call it, or what it looks like in real code, the curated examples below may help. For more context, see other usage examples of org.apache.kafka.common.serialization.Deserializer itself.


The sections below present 8 code examples of the Deserializer.configure method, ordered by popularity.
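Before the examples, a quick note on the contract: Deserializer.configure(Map<String, ?> configs, boolean isKey) is called once after the deserializer is instantiated, receiving the client configuration and a flag indicating whether this instance deserializes record keys (true) or values (false). The sketch below, modeled on Kafka's built-in StringDeserializer, shows a typical implementation that reads a key- or value-specific property from the configs; the class name and the *.encoding property keys are illustrative, not taken from any project on this page.

import java.io.UnsupportedEncodingException;
import java.util.Map;

import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Deserializer;

public class ConfigurableStringDeserializer implements Deserializer<String> {

    private String encoding = "UTF8";

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // Pick the key- or value-specific property depending on which side this instance serves.
        String propertyName = isKey ? "key.deserializer.encoding" : "value.deserializer.encoding";
        Object encodingValue = configs.get(propertyName);
        if (encodingValue instanceof String) {
            encoding = (String) encodingValue;
        }
    }

    @Override
    public String deserialize(String topic, byte[] data) {
        try {
            return data == null ? null : new String(data, encoding);
        } catch (UnsupportedEncodingException e) {
            throw new SerializationException("Unsupported encoding " + encoding, e);
        }
    }

    @Override
    public void close() {
        // Nothing to release.
    }
}

When a consumer instantiates deserializers from the key.deserializer/value.deserializer settings, it calls configure itself; when you construct a deserializer manually, as several examples below do, calling configure before first use is your responsibility.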

Example 1: afterPropertiesSet

import org.apache.kafka.common.serialization.Deserializer; // import the package/class the method depends on
@Override
@SuppressWarnings("unchecked")
public void afterPropertiesSet() throws Exception {
    if (topics == null && topicPatternString == null) {
        throw new IllegalArgumentException("topic info must not be null");
    }
    Assert.notEmpty(configs, "configs must not be empty");
    Assert.notNull(payloadListener, "payloadListener must not be null");
    String valueDeserializerKlass = (String) configs.get("value.deserializer");
    configs.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    Consumer<String, byte[]> consumer = new KafkaConsumer<>(configs);

    Deserializer valueDeserializer = createDeserializer(valueDeserializerKlass);
    valueDeserializer.configure(configs, false);

    if (topics != null) {
        listenableConsumer =
                new ListenableTracingConsumer<>(consumer, Arrays.asList(topics), valueDeserializer);
    } else {
        listenableConsumer =
                new ListenableTracingConsumer<>(consumer, Pattern.compile(topicPatternString), valueDeserializer);
    }
    if (payloadListener != null) {
        listenableConsumer.addListener(payloadListener);
    }
    listenableConsumer.start();
}
 
Developer: YanXs, Project: nighthawk, Lines: 28, Source: ListenableConsumerFactoryBean.java
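A detail worth noting in this example: value.deserializer is rewired to ByteArrayDeserializer so the KafkaConsumer hands back raw bytes, and the originally configured value deserializer (configured with isKey = false, since it handles values) is applied by the tracing consumer itself. A minimal sketch of that manual deserialization step under the same assumptions (the class and method names are illustrative; poll(Duration) is the newer client API, older clients take a long timeout):

import java.time.Duration;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.serialization.Deserializer;

public class ManualDeserializationSketch {

    // Applies a hand-configured value deserializer to the raw byte[] records,
    // mirroring what a tracing wrapper around the consumer would do.
    static <V> void pollOnce(Consumer<String, byte[]> consumer, Deserializer<V> valueDeserializer) {
        ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofMillis(100));
        for (ConsumerRecord<String, byte[]> record : records) {
            V value = valueDeserializer.deserialize(record.topic(), record.value());
            // Hand the deserialized payload to the registered listeners here.
            System.out.println(value);
        }
    }
}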

Example 2: main

import org.apache.kafka.common.serialization.Deserializer; // import the package/class the method depends on
public static void main(String[] args) throws InterruptedException {

    Properties props = new Properties();
    props.put(APPLICATION_ID_CONFIG, "my-stream-processing-application");
    props.put(BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.serializer", JsonPOJOSerializer.class.getName());
    props.put("value.deserializer", JsonPOJODeserializer.class.getName());

    Map<String, Object> serdeProps = new HashMap<>();
    serdeProps.put("JsonPOJOClass", Messung.class);

    final Serializer<Messung> serializer = new JsonPOJOSerializer<>();
    serializer.configure(serdeProps, false);

    final Deserializer<Messung> deserializer = new JsonPOJODeserializer<>();
    deserializer.configure(serdeProps, false);

    final Serde<Messung> serde = Serdes.serdeFrom(serializer, deserializer);

    StreamsConfig config = new StreamsConfig(props);

    KStreamBuilder builder = new KStreamBuilder();

    builder.stream(Serdes.String(), serde, "produktion")
            .filter((k, v) -> v.type.equals("Biogas"))
            .to(Serdes.String(), serde, "produktion2");

    KafkaStreams streams = new KafkaStreams(builder, config);
    streams.start();
}
 
Developer: predic8, Project: apache-kafka-demos, Lines: 33, Source: FilterStream.java

Example 3: getJsonDeserializer

import org.apache.kafka.common.serialization.Deserializer; // import the package/class the method depends on
private static <T> Deserializer<T> getJsonDeserializer(Class<T> classs, boolean isKey) {
  Deserializer<T> result = new KafkaJsonDeserializer<>();
  String typeConfigProperty = isKey
      ? KafkaJsonDeserializerConfig.JSON_KEY_TYPE
      : KafkaJsonDeserializerConfig.JSON_VALUE_TYPE;

  Map<String, ?> props = Collections.singletonMap(
      typeConfigProperty,
      classs
  );
  result.configure(props, isKey);
  return result;
}
 
Developer: confluentinc, Project: ksql, Lines: 14, Source: KsqlRestApplication.java

Example 4: getGenericRowSerde

import org.apache.kafka.common.serialization.Deserializer; // import the package/class the method depends on
@Override
public Serde<GenericRow> getGenericRowSerde(Schema schema, KsqlConfig ksqlConfig,
                                            boolean isInternal,
                                            SchemaRegistryClient schemaRegistryClient) {
  Map<String, Object> serdeProps = new HashMap<>();
  serdeProps.put("JsonPOJOClass", GenericRow.class);

  final Serializer<GenericRow> genericRowSerializer = new KsqlJsonSerializer(schema);
  genericRowSerializer.configure(serdeProps, false);

  final Deserializer<GenericRow> genericRowDeserializer = new KsqlJsonDeserializer(schema);
  genericRowDeserializer.configure(serdeProps, false);

  return Serdes.serdeFrom(genericRowSerializer, genericRowDeserializer);
}
 
Developer: confluentinc, Project: ksql, Lines: 16, Source: KsqlJsonTopicSerDe.java

Example 5: getGenericRowSerde

import org.apache.kafka.common.serialization.Deserializer; // import the package/class the method depends on
@Override
public Serde<GenericRow> getGenericRowSerde(Schema schema, KsqlConfig ksqlConfig,
                                            boolean isInternal,
                                            SchemaRegistryClient schemaRegistryClient) {
  Map<String, Object> serdeProps = new HashMap<>();

  final Serializer<GenericRow> genericRowSerializer = new KsqlDelimitedSerializer(schema);
  genericRowSerializer.configure(serdeProps, false);

  final Deserializer<GenericRow> genericRowDeserializer = new KsqlDelimitedDeserializer(schema);
  genericRowDeserializer.configure(serdeProps, false);

  return Serdes.serdeFrom(genericRowSerializer, genericRowDeserializer);
}
 
Developer: confluentinc, Project: ksql, Lines: 15, Source: KsqlDelimitedTopicSerDe.java

Example 6: getDeserializer

import org.apache.kafka.common.serialization.Deserializer; // import the package/class the method depends on
private <T> Deserializer<T> getDeserializer(Properties properties, String className, boolean isKey) {
    Deserializer<T> deserializer = getConfiguredInstance(className, Deserializer.class);
    if (deserializer == null) {
        throw new PartitionConsumerException(String.format("Can't instantiate deserializer from %s", className));
    }
    Map<String, String> map = new HashMap<>();
    for (final String name: properties.stringPropertyNames()) {
        map.put(name, properties.getProperty(name));
    }
    deserializer.configure(map, isKey);
    return deserializer;
}
 
Developer: researchgate, Project: kafka-metamorph, Lines: 13, Source: PartitionConsumerProvider.java
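The Properties-to-Map copy in this example exists because Deserializer.configure takes a Map<String, ?>, while Properties is typed as Map<Object, Object>; copying the string entries keeps the generics honest. A stream-based equivalent of that loop, as a sketch (the helper name is illustrative):

import java.util.Map;
import java.util.Properties;
import java.util.function.Function;
import java.util.stream.Collectors;

public class PropertiesToMap {

    // Stream-based equivalent of the copy loop in the example above.
    static Map<String, String> toMap(Properties properties) {
        return properties.stringPropertyNames().stream()
                .collect(Collectors.toMap(Function.identity(), properties::getProperty));
    }
}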

Example 7: LiKafkaConsumerImpl

import org.apache.kafka.common.serialization.Deserializer; // import the package/class the method depends on
@SuppressWarnings("unchecked")
private LiKafkaConsumerImpl(LiKafkaConsumerConfig configs,
                            Deserializer<K> keyDeserializer,
                            Deserializer<V> valueDeserializer,
                            Deserializer<LargeMessageSegment> largeMessageSegmentDeserializer,
                            Auditor<K, V> consumerAuditor) {

  _autoCommitEnabled = configs.getBoolean(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
  _autoCommitInterval = configs.getInt(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG);
  _offsetResetStrategy =
      OffsetResetStrategy.valueOf(configs.getString(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG).toUpperCase(Locale.ROOT));
  _lastAutoCommitMs = System.currentTimeMillis();
  // We need to set the auto commit to false in KafkaConsumer because it is not large message aware.
  ByteArrayDeserializer byteArrayDeserializer = new ByteArrayDeserializer();
  _kafkaConsumer = new KafkaConsumer<>(configs.configForVanillaConsumer(),
                                       byteArrayDeserializer,
                                       byteArrayDeserializer);

  // Instantiate segment deserializer if needed.
  Deserializer segmentDeserializer = largeMessageSegmentDeserializer != null ? largeMessageSegmentDeserializer :
      configs.getConfiguredInstance(LiKafkaConsumerConfig.SEGMENT_DESERIALIZER_CLASS_CONFIG, Deserializer.class);
  segmentDeserializer.configure(configs.originals(), false);

  // Instantiate message assembler if needed.
  int messageAssemblerCapacity = configs.getInt(LiKafkaConsumerConfig.MESSAGE_ASSEMBLER_BUFFER_CAPACITY_CONFIG);
  int messageAssemblerExpirationOffsetGap = configs.getInt(LiKafkaConsumerConfig.MESSAGE_ASSEMBLER_EXPIRATION_OFFSET_GAP_CONFIG);
  boolean exceptionOnMessageDropped = configs.getBoolean(LiKafkaConsumerConfig.EXCEPTION_ON_MESSAGE_DROPPED_CONFIG);
  MessageAssembler assembler = new MessageAssemblerImpl(messageAssemblerCapacity, messageAssemblerExpirationOffsetGap,
                                                        exceptionOnMessageDropped, segmentDeserializer);

  // Instantiate delivered message offset tracker if needed.
  int maxTrackedMessagesPerPartition = configs.getInt(LiKafkaConsumerConfig.MAX_TRACKED_MESSAGES_PER_PARTITION_CONFIG);
  DeliveredMessageOffsetTracker messageOffsetTracker = new DeliveredMessageOffsetTracker(maxTrackedMessagesPerPartition);

  // Instantiate auditor if needed.
  Auditor<K, V> auditor = consumerAuditor != null ? consumerAuditor :
      configs.getConfiguredInstance(LiKafkaConsumerConfig.AUDITOR_CLASS_CONFIG, Auditor.class);
  auditor.configure(configs.originals());
  auditor.start();

  // Instantiate key and value deserializer if needed.
  Deserializer<K> kDeserializer = keyDeserializer != null ? keyDeserializer :
      configs.getConfiguredInstance(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, Deserializer.class);
  kDeserializer.configure(configs.originals(), true);
  Deserializer<V> vDeserializer = valueDeserializer != null ? valueDeserializer :
      configs.getConfiguredInstance(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, Deserializer.class);
  vDeserializer.configure(configs.originals(), false);

  // Instantiate consumer record processor
  _consumerRecordsProcessor = new ConsumerRecordsProcessor<>(assembler, kDeserializer, vDeserializer,
                                                             messageOffsetTracker, auditor);

  // Instantiate consumer rebalance listener
  _consumerRebalanceListener = new LiKafkaConsumerRebalanceListener<>(_consumerRecordsProcessor,
                                                                      this, _autoCommitEnabled);

  // Instantiate offset commit callback.
  _offsetCommitCallback = new LiKafkaOffsetCommitCallback();
}
 
Developer: becketqin, Project: likafka-clients, Lines: 60, Source: LiKafkaConsumerImpl.java

Example 8: LiKafkaConsumerImpl

import org.apache.kafka.common.serialization.Deserializer; // import the package/class the method depends on
@SuppressWarnings("unchecked")
private LiKafkaConsumerImpl(LiKafkaConsumerConfig configs,
                            Deserializer<K> keyDeserializer,
                            Deserializer<V> valueDeserializer,
                            Deserializer<LargeMessageSegment> largeMessageSegmentDeserializer,
                            Auditor<K, V> consumerAuditor) {

  _autoCommitEnabled = configs.getBoolean(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
  _autoCommitInterval = configs.getInt(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG);
  _offsetResetStrategy =
      OffsetResetStrategy.valueOf(configs.getString(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG).toUpperCase(Locale.ROOT));
  _lastAutoCommitMs = System.currentTimeMillis();
  // We need to set the auto commit to false in KafkaConsumer because it is not large message aware.
  ByteArrayDeserializer byteArrayDeserializer = new ByteArrayDeserializer();
  _kafkaConsumer = new KafkaConsumer<>(configs.configForVanillaConsumer(),
                                       byteArrayDeserializer,
                                       byteArrayDeserializer);
  try {
    // Instantiate segment deserializer if needed.
    Deserializer segmentDeserializer = largeMessageSegmentDeserializer != null ? largeMessageSegmentDeserializer :
        configs.getConfiguredInstance(LiKafkaConsumerConfig.SEGMENT_DESERIALIZER_CLASS_CONFIG, Deserializer.class);
    segmentDeserializer.configure(configs.originals(), false);

    // Instantiate message assembler if needed.
    int messageAssemblerCapacity = configs.getInt(LiKafkaConsumerConfig.MESSAGE_ASSEMBLER_BUFFER_CAPACITY_CONFIG);
    int messageAssemblerExpirationOffsetGap = configs.getInt(LiKafkaConsumerConfig.MESSAGE_ASSEMBLER_EXPIRATION_OFFSET_GAP_CONFIG);
    boolean exceptionOnMessageDropped = configs.getBoolean(LiKafkaConsumerConfig.EXCEPTION_ON_MESSAGE_DROPPED_CONFIG);
    MessageAssembler assembler = new MessageAssemblerImpl(messageAssemblerCapacity, messageAssemblerExpirationOffsetGap,
                                                          exceptionOnMessageDropped, segmentDeserializer);

    // Instantiate delivered message offset tracker if needed.
    int maxTrackedMessagesPerPartition = configs.getInt(LiKafkaConsumerConfig.MAX_TRACKED_MESSAGES_PER_PARTITION_CONFIG);
    DeliveredMessageOffsetTracker messageOffsetTracker = new DeliveredMessageOffsetTracker(maxTrackedMessagesPerPartition);

    // Instantiate auditor if needed.
    Auditor<K, V> auditor;
    if (consumerAuditor != null) {
      auditor = consumerAuditor;
      auditor.configure(configs.originals());
    } else {
      auditor = configs.getConfiguredInstance(LiKafkaConsumerConfig.AUDITOR_CLASS_CONFIG, Auditor.class);
    }
    auditor.start();

    // Instantiate key and value deserializer if needed.
    Deserializer<K> kDeserializer = keyDeserializer != null ? keyDeserializer :
        configs.getConfiguredInstance(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, Deserializer.class);
    kDeserializer.configure(configs.originals(), true);
    Deserializer<V> vDeserializer = valueDeserializer != null ? valueDeserializer :
        configs.getConfiguredInstance(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, Deserializer.class);
    vDeserializer.configure(configs.originals(), false);

    // Instantiate consumer record processor
    _consumerRecordsProcessor = new ConsumerRecordsProcessor<>(assembler, kDeserializer, vDeserializer, messageOffsetTracker, auditor);

    // Instantiate consumer rebalance listener
    _consumerRebalanceListener = new LiKafkaConsumerRebalanceListener<>(_consumerRecordsProcessor,
                                                                        this, _autoCommitEnabled);

    // Instantiate offset commit callback.
    _offsetCommitCallback = new LiKafkaOffsetCommitCallback();
    _lastProcessedResult = null;
  } catch (Exception e) {
    _kafkaConsumer.close();
    throw e;
  }
}
 
Developer: linkedin, Project: li-apache-kafka-clients, Lines: 69, Source: LiKafkaConsumerImpl.java


Note: the org.apache.kafka.common.serialization.Deserializer.configure examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are drawn from open-source projects contributed by their respective developers, and copyright of the source code remains with the original authors; consult each project's License before distributing or reusing the code. Do not reproduce this article without permission.