

Java Deserializer.configure Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.common.serialization.Deserializer.configure. If you are wondering what Deserializer.configure does, how to call it, or simply want working examples, the curated code samples below should help. You can also explore further usage examples of org.apache.kafka.common.serialization.Deserializer, the class this method belongs to.


The following presents 8 code examples of Deserializer.configure, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Java code examples.
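
Before the examples, note the shape of the call itself: configure(Map<String, ?> configs, boolean isKey) receives the client configuration map and a flag indicating whether this instance deserializes record keys (true) or values (false). Below is a minimal, self-contained sketch of the call pattern, not taken from the examples that follow; the property some.custom.property is purely illustrative:

import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConfigureSketch {
    public static void main(String[] args) {
        // Implementations pick out the config keys they understand and ignore the rest.
        Map<String, Object> configs = new HashMap<>();
        configs.put("some.custom.property", "value"); // illustrative key, not a real Kafka config

        Deserializer<String> deserializer = new StringDeserializer();
        // isKey = false: this instance will deserialize record values, not keys.
        deserializer.configure(configs, false);

        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        System.out.println(deserializer.deserialize("topic", payload)); // prints "hello"
    }
}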

Example 1: afterPropertiesSet
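
In this factory bean, the configured value deserializer class is swapped out for a ByteArrayDeserializer on the raw KafkaConsumer; the original deserializer is then instantiated separately and configured with configure(configs, false) so the tracing consumer can deserialize payloads itself.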

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
@Override
@SuppressWarnings("unchecked")
public void afterPropertiesSet() throws Exception {
    if (topics == null && topicPatternString == null) {
        throw new IllegalArgumentException("topic info must not be null");
    }
    Assert.notEmpty(configs, "configs must not be empty");
    Assert.notNull(payloadListener, "payloadListener must not be null");
    String valueDeserializerKlass = (String) configs.get("value.deserializer");
    configs.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    Consumer<String, byte[]> consumer = new KafkaConsumer<>(configs);

    Deserializer valueDeserializer = createDeserializer(valueDeserializerKlass);
    valueDeserializer.configure(configs, false);

    if (topics != null) {
        listenableConsumer =
                new ListenableTracingConsumer<>(consumer, Arrays.asList(topics), valueDeserializer);
    } else {
        listenableConsumer =
                new ListenableTracingConsumer<>(consumer, Pattern.compile(topicPatternString), valueDeserializer);
    }
    if (payloadListener != null) {
        listenableConsumer.addListener(payloadListener);
    }
    listenableConsumer.start();
}
 
Developer: YanXs, Project: nighthawk, Lines: 28, Source: ListenableConsumerFactoryBean.java

Example 2: main
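
This Kafka Streams demo builds a JSON Serde for the Messung POJO: a serializer/deserializer pair is configured with the target class via the serde properties map, combined with Serdes.serdeFrom, and used in the topology.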

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
public static void main(String[] args) throws InterruptedException {

    Properties props = new Properties();
    props.put(APPLICATION_ID_CONFIG, "my-stream-processing-application");
    props.put(BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.serializer", JsonPOJOSerializer.class.getName());
    props.put("value.deserializer", JsonPOJODeserializer.class.getName());

    Map<String, Object> serdeProps = new HashMap<>();
    serdeProps.put("JsonPOJOClass", Messung.class);

    final Serializer<Messung> serializer = new JsonPOJOSerializer<>();
    serializer.configure(serdeProps, false);

    final Deserializer<Messung> deserializer = new JsonPOJODeserializer<>();
    deserializer.configure(serdeProps, false);

    final Serde<Messung> serde = Serdes.serdeFrom(serializer, deserializer);

    StreamsConfig config = new StreamsConfig(props);

    KStreamBuilder builder = new KStreamBuilder();

    builder.stream(Serdes.String(), serde, "produktion")
            .filter((k, v) -> v.type.equals("Biogas"))
            .to(Serdes.String(), serde, "produktion2");

    KafkaStreams streams = new KafkaStreams(builder, config);
    streams.start();
}
 
Developer: predic8, Project: apache-kafka-demos, Lines: 33, Source: FilterStream.java

Example 3: getJsonDeserializer
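
This helper picks the key- or value-specific type property based on the isKey flag and passes the same flag through to configure.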

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
private static <T> Deserializer<T> getJsonDeserializer(Class<T> classs, boolean isKey) {
  Deserializer<T> result = new KafkaJsonDeserializer<>();
  String typeConfigProperty = isKey
      ? KafkaJsonDeserializerConfig.JSON_KEY_TYPE
      : KafkaJsonDeserializerConfig.JSON_VALUE_TYPE;

  Map<String, ?> props = Collections.singletonMap(
      typeConfigProperty,
      classs
  );
  result.configure(props, isKey);
  return result;
}
 
Developer: confluentinc, Project: ksql, Lines: 14, Source: KsqlRestApplication.java

Example 4: getGenericRowSerde
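
Here a JSON serializer/deserializer pair for GenericRow is configured with the POJO class and combined into a Serde.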

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
@Override
public Serde<GenericRow> getGenericRowSerde(Schema schema, KsqlConfig ksqlConfig,
                                            boolean isInternal,
                                            SchemaRegistryClient schemaRegistryClient) {
  Map<String, Object> serdeProps = new HashMap<>();
  serdeProps.put("JsonPOJOClass", GenericRow.class);

  final Serializer<GenericRow> genericRowSerializer = new KsqlJsonSerializer(schema);
  genericRowSerializer.configure(serdeProps, false);

  final Deserializer<GenericRow> genericRowDeserializer = new KsqlJsonDeserializer(schema);
  genericRowDeserializer.configure(serdeProps, false);

  return Serdes.serdeFrom(genericRowSerializer, genericRowDeserializer);
}
 
Developer: confluentinc, Project: ksql, Lines: 16, Source: KsqlJsonTopicSerDe.java

Example 5: getGenericRowSerde
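
Same pattern as Example 4, but for the delimited format; configure is called with an empty properties map since the delimited serde needs no extra configuration.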

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
@Override
public Serde<GenericRow> getGenericRowSerde(Schema schema, KsqlConfig ksqlConfig,
                                            boolean isInternal,
                                            SchemaRegistryClient schemaRegistryClient) {
  Map<String, Object> serdeProps = new HashMap<>();

  final Serializer<GenericRow> genericRowSerializer = new KsqlDelimitedSerializer(schema);
  genericRowSerializer.configure(serdeProps, false);

  final Deserializer<GenericRow> genericRowDeserializer = new KsqlDelimitedDeserializer(schema);
  genericRowDeserializer.configure(serdeProps, false);

  return Serdes.serdeFrom(genericRowSerializer, genericRowDeserializer);
}
 
Developer: confluentinc, Project: ksql, Lines: 15, Source: KsqlDelimitedTopicSerDe.java

Example 6: getDeserializer
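
This helper instantiates a deserializer by class name, copies the java.util.Properties into a Map (configure expects a Map, not Properties), and configures it with the isKey flag.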

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
private <T> Deserializer<T> getDeserializer(Properties properties, String className, boolean isKey) {
    Deserializer<T> deserializer = getConfiguredInstance(className, Deserializer.class);
    if (deserializer == null) {
        throw new PartitionConsumerException(String.format("Can't instantiate deserializer from %s", className));
    }
    Map<String, String> map = new HashMap<>();
    for (final String name: properties.stringPropertyNames()) {
        map.put(name, properties.getProperty(name));
    }
    deserializer.configure(map, isKey);
    return deserializer;
}
 
Developer: researchgate, Project: kafka-metamorph, Lines: 13, Source: PartitionConsumerProvider.java

Example 7: LiKafkaConsumerImpl
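
This constructor configures several deserializers from the consumer config: the large-message segment deserializer and the value deserializer with isKey = false, and the key deserializer with isKey = true.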

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
@SuppressWarnings("unchecked")
private LiKafkaConsumerImpl(LiKafkaConsumerConfig configs,
                            Deserializer<K> keyDeserializer,
                            Deserializer<V> valueDeserializer,
                            Deserializer<LargeMessageSegment> largeMessageSegmentDeserializer,
                            Auditor<K, V> consumerAuditor) {

  _autoCommitEnabled = configs.getBoolean(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
  _autoCommitInterval = configs.getInt(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG);
  _offsetResetStrategy =
      OffsetResetStrategy.valueOf(configs.getString(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG).toUpperCase(Locale.ROOT));
  _lastAutoCommitMs = System.currentTimeMillis();
  // We need to set the auto commit to false in KafkaConsumer because it is not large message aware.
  ByteArrayDeserializer byteArrayDeserializer = new ByteArrayDeserializer();
  _kafkaConsumer = new KafkaConsumer<>(configs.configForVanillaConsumer(),
                                       byteArrayDeserializer,
                                       byteArrayDeserializer);

  // Instantiate segment deserializer if needed.
  Deserializer segmentDeserializer = largeMessageSegmentDeserializer != null ? largeMessageSegmentDeserializer :
      configs.getConfiguredInstance(LiKafkaConsumerConfig.SEGMENT_DESERIALIZER_CLASS_CONFIG, Deserializer.class);
  segmentDeserializer.configure(configs.originals(), false);

  // Instantiate message assembler if needed.
  int messageAssemblerCapacity = configs.getInt(LiKafkaConsumerConfig.MESSAGE_ASSEMBLER_BUFFER_CAPACITY_CONFIG);
  int messageAssemblerExpirationOffsetGap = configs.getInt(LiKafkaConsumerConfig.MESSAGE_ASSEMBLER_EXPIRATION_OFFSET_GAP_CONFIG);
  boolean exceptionOnMessageDropped = configs.getBoolean(LiKafkaConsumerConfig.EXCEPTION_ON_MESSAGE_DROPPED_CONFIG);
  MessageAssembler assembler = new MessageAssemblerImpl(messageAssemblerCapacity, messageAssemblerExpirationOffsetGap,
                                                        exceptionOnMessageDropped, segmentDeserializer);

  // Instantiate delivered message offset tracker if needed.
  int maxTrackedMessagesPerPartition = configs.getInt(LiKafkaConsumerConfig.MAX_TRACKED_MESSAGES_PER_PARTITION_CONFIG);
  DeliveredMessageOffsetTracker messageOffsetTracker = new DeliveredMessageOffsetTracker(maxTrackedMessagesPerPartition);

  // Instantiate auditor if needed.
  Auditor<K, V> auditor = consumerAuditor != null ? consumerAuditor :
      configs.getConfiguredInstance(LiKafkaConsumerConfig.AUDITOR_CLASS_CONFIG, Auditor.class);
  auditor.configure(configs.originals());
  auditor.start();

  // Instantiate key and value deserializer if needed.
  Deserializer<K> kDeserializer = keyDeserializer != null ? keyDeserializer :
      configs.getConfiguredInstance(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, Deserializer.class);
  kDeserializer.configure(configs.originals(), true);
  Deserializer<V> vDeserializer = valueDeserializer != null ? valueDeserializer :
      configs.getConfiguredInstance(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, Deserializer.class);
  vDeserializer.configure(configs.originals(), false);

  // Instantiate consumer record processor
  _consumerRecordsProcessor = new ConsumerRecordsProcessor<>(assembler, kDeserializer, vDeserializer,
                                                             messageOffsetTracker, auditor);

  // Instantiate consumer rebalance listener
  _consumerRebalanceListener = new LiKafkaConsumerRebalanceListener<>(_consumerRecordsProcessor,
                                                                      this, _autoCommitEnabled);

  // Instantiate offset commit callback.
  _offsetCommitCallback = new LiKafkaOffsetCommitCallback();
}
 
Developer: becketqin, Project: likafka-clients, Lines: 60, Source: LiKafkaConsumerImpl.java

Example 8: LiKafkaConsumerImpl
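
A later revision of Example 7: construction is wrapped in try/catch so the underlying KafkaConsumer is closed instead of leaking if any deserializer or auditor configuration fails.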

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
@SuppressWarnings("unchecked")
private LiKafkaConsumerImpl(LiKafkaConsumerConfig configs,
                            Deserializer<K> keyDeserializer,
                            Deserializer<V> valueDeserializer,
                            Deserializer<LargeMessageSegment> largeMessageSegmentDeserializer,
                            Auditor<K, V> consumerAuditor) {

  _autoCommitEnabled = configs.getBoolean(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
  _autoCommitInterval = configs.getInt(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG);
  _offsetResetStrategy =
      OffsetResetStrategy.valueOf(configs.getString(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG).toUpperCase(Locale.ROOT));
  _lastAutoCommitMs = System.currentTimeMillis();
  // We need to set the auto commit to false in KafkaConsumer because it is not large message aware.
  ByteArrayDeserializer byteArrayDeserializer = new ByteArrayDeserializer();
  _kafkaConsumer = new KafkaConsumer<>(configs.configForVanillaConsumer(),
                                       byteArrayDeserializer,
                                       byteArrayDeserializer);
  try {
    // Instantiate segment deserializer if needed.
    Deserializer segmentDeserializer = largeMessageSegmentDeserializer != null ? largeMessageSegmentDeserializer :
        configs.getConfiguredInstance(LiKafkaConsumerConfig.SEGMENT_DESERIALIZER_CLASS_CONFIG, Deserializer.class);
    segmentDeserializer.configure(configs.originals(), false);

    // Instantiate message assembler if needed.
    int messageAssemblerCapacity = configs.getInt(LiKafkaConsumerConfig.MESSAGE_ASSEMBLER_BUFFER_CAPACITY_CONFIG);
    int messageAssemblerExpirationOffsetGap = configs.getInt(LiKafkaConsumerConfig.MESSAGE_ASSEMBLER_EXPIRATION_OFFSET_GAP_CONFIG);
    boolean exceptionOnMessageDropped = configs.getBoolean(LiKafkaConsumerConfig.EXCEPTION_ON_MESSAGE_DROPPED_CONFIG);
    MessageAssembler assembler = new MessageAssemblerImpl(messageAssemblerCapacity, messageAssemblerExpirationOffsetGap,
                                                          exceptionOnMessageDropped, segmentDeserializer);

    // Instantiate delivered message offset tracker if needed.
    int maxTrackedMessagesPerPartition = configs.getInt(LiKafkaConsumerConfig.MAX_TRACKED_MESSAGES_PER_PARTITION_CONFIG);
    DeliveredMessageOffsetTracker messageOffsetTracker = new DeliveredMessageOffsetTracker(maxTrackedMessagesPerPartition);

    // Instantiate auditor if needed.
    Auditor<K, V> auditor;
    if (consumerAuditor != null) {
      auditor = consumerAuditor;
      auditor.configure(configs.originals());
    } else {
      auditor = configs.getConfiguredInstance(LiKafkaConsumerConfig.AUDITOR_CLASS_CONFIG, Auditor.class);
    }
    auditor.start();

    // Instantiate key and value deserializer if needed.
    Deserializer<K> kDeserializer = keyDeserializer != null ? keyDeserializer :
        configs.getConfiguredInstance(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, Deserializer.class);
    kDeserializer.configure(configs.originals(), true);
    Deserializer<V> vDeserializer = valueDeserializer != null ? valueDeserializer :
        configs.getConfiguredInstance(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, Deserializer.class);
    vDeserializer.configure(configs.originals(), false);

    // Instantiate consumer record processor
    _consumerRecordsProcessor = new ConsumerRecordsProcessor<>(assembler, kDeserializer, vDeserializer, messageOffsetTracker, auditor);

    // Instantiate consumer rebalance listener
    _consumerRebalanceListener = new LiKafkaConsumerRebalanceListener<>(_consumerRecordsProcessor,
                                                                        this, _autoCommitEnabled);

    // Instantiate offset commit callback.
    _offsetCommitCallback = new LiKafkaOffsetCommitCallback();
    _lastProcessedResult = null;
  } catch (Exception e) {
    _kafkaConsumer.close();
    throw e;
  }
}
 
Developer: linkedin, Project: li-apache-kafka-clients, Lines: 69, Source: LiKafkaConsumerImpl.java


Note: The org.apache.kafka.common.serialization.Deserializer.configure examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective developers; copyright of the source code remains with the original authors, and distribution and use must follow the corresponding project's License. Do not reproduce without permission.