Java StringDeserializer Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.common.serialization.StringDeserializer. If you have been wondering what StringDeserializer is for, or how to use it in practice, the curated class examples below should help.


The StringDeserializer class belongs to the org.apache.kafka.common.serialization package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
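Before the full examples, a minimal standalone sketch of what the class itself does: it turns the raw byte[] of a record key or value back into a java.lang.String, decoding as UTF-8 by default. The class name, topic, and payload below are illustrative:

import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.serialization.StringDeserializer;

public class StringDeserializerDemo {
    public static void main(String[] args) {
        StringDeserializer deserializer = new StringDeserializer();
        byte[] payload = "hello kafka".getBytes(StandardCharsets.UTF_8);
        // the topic argument is contextual only; it does not affect the result
        String value = deserializer.deserialize("demo-topic", payload);
        System.out.println(value);  // prints: hello kafka
        deserializer.close();
    }
}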

Example 1: providesKafkaInputStream

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
@Provides
JavaInputDStream<ConsumerRecord<String, RawRating>> providesKafkaInputStream(JavaStreamingContext streamingContext) {
    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put("bootstrap.servers", "localhost:9092");
    kafkaParams.put("key.deserializer", StringDeserializer.class);
    kafkaParams.put("value.deserializer", JsonDeserializer.class);
    kafkaParams.put("serializedClass", RawRating.class);
    kafkaParams.put("group.id", "rating_stream");
    kafkaParams.put("auto.offset.reset", "latest");
    kafkaParams.put("enable.auto.commit", false);
    Collection<String> topics = Arrays.asList("topicA", "topicB");

    return KafkaUtils.createDirectStream(
            streamingContext,
            LocationStrategies.PreferConsistent(),
            ConsumerStrategies.<String, RawRating>Subscribe(topics, kafkaParams)
    );
}
 
Author: cosminseceleanu, Project: movie-recommender, Lines: 19, Source: SparkModule.java
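Note that JsonDeserializer and the "serializedClass" key in this example belong to the movie-recommender project itself; they are not part of Kafka. A hypothetical sketch of what such a deserializer could look like, assuming Jackson is on the classpath and the target type is passed in through the consumer config map (class layout and key name are assumptions, not the project's actual code):

import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.serialization.Deserializer;

// Hypothetical JSON value deserializer driven by a "serializedClass" config entry.
public class JsonDeserializer<T> implements Deserializer<T> {
    private final ObjectMapper mapper = new ObjectMapper();
    private Class<T> targetType;

    @Override
    @SuppressWarnings("unchecked")
    public void configure(Map<String, ?> configs, boolean isKey) {
        // the target type is passed in through the consumer config map
        targetType = (Class<T>) configs.get("serializedClass");
    }

    @Override
    public T deserialize(String topic, byte[] data) {
        try {
            return data == null ? null : mapper.readValue(data, targetType);
        } catch (Exception e) {
            throw new RuntimeException("JSON deserialization failed for topic " + topic, e);
        }
    }

    @Override
    public void close() {
    }
}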

Example 2: getConsumer

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
private KafkaConsumer<String, Serializable> getConsumer(String groupId) {
    KafkaConsumer<String, Serializable> kafkaConsumer = kafkaConsumers.get(groupId);
    if (kafkaConsumer != null)
        return kafkaConsumer;

    Properties properties = new Properties();
    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
    properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        StringDeserializer.class.getName());
    properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        StringDeserializer.class.getName());

    kafkaConsumer = new KafkaConsumer<>(properties);
    kafkaConsumers.put(groupId, kafkaConsumer);
    return kafkaConsumer;
}
 
Author: warlock-china, Project: azeroth, Lines: 22, Source: KafkaConsumerCommand.java

Example 3: ConsumerLoop

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
public ConsumerLoop(int id,
                    String groupId,
                    List<String> topics,
                    DBClient dbClient) {
    this.id = id;
    this.topics = topics;
    Properties props = new Properties();

    String kafkaHostName = System.getenv("KAFKA_HOST_NAME");
    log.info("Kafka host: " + kafkaHostName);
    props.put("bootstrap.servers", kafkaHostName + ":9092");
    props.put("group.id", groupId);
    props.put("key.deserializer", StringDeserializer.class.getName());
    props.put("value.deserializer", StringDeserializer.class.getName());
    this.consumer = new KafkaConsumer<>(props);
    this.dbClient = dbClient;
}
 
Author: sjsucohort6, Project: amigo-chatbot, Lines: 18, Source: ConsumerLoop.java

Example 4: main

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
public static void main(String[] args) throws InterruptedException {
    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put("bootstrap.servers", "localhost:9092");
    kafkaParams.put("key.deserializer", StringDeserializer.class);
    kafkaParams.put("value.deserializer", StringDeserializer.class);
    kafkaParams.put("group.id", "use_a_separate_group_id_for_each_stream");
    kafkaParams.put("auto.offset.reset", "latest");
    kafkaParams.put("enable.auto.commit", false);

    Collection<String> topics = Arrays.asList("data-in");

    SparkConf sparkConf = new SparkConf().setAppName("JavaKafkaSpark");
    JavaStreamingContext streamingContext = new JavaStreamingContext(sparkConf, Durations.seconds(5));

    final JavaInputDStream<ConsumerRecord<String, String>> stream =
            KafkaUtils.createDirectStream(
                    streamingContext,
                    LocationStrategies.PreferConsistent(),
                    ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams)
            );

    JavaPairDStream<String, Integer> countOfMessageKeys = stream
            .map((ConsumerRecord<String, String> record) -> record.key())
            .mapToPair((String s) -> new Tuple2<>(s, 1))
            .reduceByKey((Integer i1, Integer i2)-> i1 + i2);

    countOfMessageKeys.print();

    // Start the computation
    streamingContext.start();
    streamingContext.awaitTermination();
}
 
Author: ebi-wp, Project: kafka-streams-api-websockets, Lines: 33, Source: SparkConsume.java

Example 5: verifyTopicsExist

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
public boolean verifyTopicsExist(String kafkaBrokers, Set<String> requiredTopics,
                                 boolean checkPartitionCounts) {
    Properties props = new Properties();
    props.put("bootstrap.servers", kafkaBrokers);
    props.put("group.id", UUID.randomUUID().toString());
    props.put("key.deserializer", StringDeserializer.class.getName());
    props.put("value.deserializer", StringDeserializer.class.getName());
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    try {
        Map<String, List<PartitionInfo>> topics = consumer.listTopics();

        Set<Integer> partitionCount = new HashSet<>();
        for (String requiredTopic : requiredTopics) {
            List<PartitionInfo> partitions = topics.get(requiredTopic);
            if (partitions == null) {
                logger.info("Required kafka topic {} not present", requiredTopic);
                return false;
            }
            partitionCount.add(partitions.size());
        }
        if (checkPartitionCounts && partitionCount.size() > 1) {
            logger.warn("Partition count mismatch in topics {}",
                    Arrays.toString(requiredTopics.toArray()));
            return false;
        }
        return true;
    } finally {
        consumer.close();
    }
}
 
Author: Sixt, Project: ja-micro, Lines: 32, Source: TopicVerification.java

Example 6: Consumer

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
Consumer(Topic topic, String consumerGroupId, Properties props, PartitionProcessorFactory processorFactory) {
    this.topic = topic;
    this.consumerGroupId = consumerGroupId;

    // Mandatory settings, not changeable
    props.put("group.id", consumerGroupId);
    props.put("key.deserializer", StringDeserializer.class.getName());
    props.put("value.deserializer", ByteArrayDeserializer.class.getName());

    kafka = new KafkaConsumer<>(props);
    partitions = new AssignedPartitions(processorFactory);

    long now = System.currentTimeMillis();

    // start it
    consumerLoopExecutor.execute(new ConsumerLoop());
}
 
Author: Sixt, Project: ja-micro, Lines: 18, Source: Consumer.java

Example 7: consumerConfigs

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
/**
 * Reads the consumer configuration from the yml file.
 *
 * @return the consumer configuration map
 */
@Bean
public Map<String, Object> consumerConfigs() {
	Map<String, Object> propsMap = new HashMap<>();
	propsMap.put("bootstrap.servers", env.getProperty("kafka.broker"));
	propsMap.put("enable.auto.commit", env.getProperty("enable.auto.commit"));
	propsMap.put("auto.commit.interval.ms", env.getProperty("auto.commit.interval.ms"));
	propsMap.put("key.deserializer", StringDeserializer.class);
	propsMap.put("value.deserializer", JsonDeserializer.class);
	propsMap.put("group.id", env.getProperty("group.id"));
	propsMap.put("auto.offset.reset", env.getProperty("kafka.auto.offset.reset"));
	return propsMap;
}
 
Author: sarojrout, Project: spring-tutorial, Lines: 26, Source: StudentConsumerConfig.java
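The env.getProperty calls above imply matching entries in the project's configuration file. A hypothetical sketch of what the application.yml might contain (key names are taken from the code above; values are illustrative, not the spring-tutorial project's actual config):

# Hypothetical application.yml backing the consumerConfigs() bean above
kafka.broker: localhost:9092
enable.auto.commit: "false"
auto.commit.interval.ms: "100"
group.id: student-group
kafka.auto.offset.reset: latest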

Example 8: main

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
public static void main(String[] args) throws Exception {

    Options options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(Options.class);
    Pipeline pipeline = Pipeline.create(options);

    pipeline
    .apply(KafkaIO.<String, String>read()
        .withBootstrapServers(options.getKafkaBootstrapServer())
        .withTopic(options.getTopic())
        .withKeyDeserializer(StringDeserializer.class)
        .withValueDeserializer(StringDeserializer.class)
        .withTimestampFn(new SetTimestampFn()))
    .apply("Values", ParDo.of(new ValuesFn()))

    .apply("FixedWindows", Window.<String>into(FixedWindows.of(FIVE_MINUTES))
        .triggering(AfterWatermark.pastEndOfWindow()
            .withEarlyFirings(AfterProcessingTime.pastFirstElementInPane()
                .plusDelayOf(TWO_MINUTES))
            .withLateFirings(AfterPane.elementCountAtLeast(1)))
        .withAllowedLateness(TEN_MINUTES)
        .accumulatingFiredPanes())

    .apply("TeamScore", new CalculateTeamScores(options.getOutputPrefix()));

    pipeline.run();
}
 
Author: davorbonaci, Project: beam-portability-demo, Lines: 28, Source: LeaderBoard.java

Example 9: createDefaultMessageFormats

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
/**
 * Creates default message formats.
 */
private void createDefaultMessageFormats() {
    final Map<String, String> defaultFormats = new HashMap<>();
    defaultFormats.put("Short", ShortDeserializer.class.getName());
    defaultFormats.put("ByteArray", ByteArrayDeserializer.class.getName());
    defaultFormats.put("Bytes", BytesDeserializer.class.getName());
    defaultFormats.put("Double", DoubleDeserializer.class.getName());
    defaultFormats.put("Float", FloatDeserializer.class.getName());
    defaultFormats.put("Integer", IntegerDeserializer.class.getName());
    defaultFormats.put("Long", LongDeserializer.class.getName());
    defaultFormats.put("String", StringDeserializer.class.getName());

    // Create if needed.
    for (final Map.Entry<String, String> entry : defaultFormats.entrySet()) {
        MessageFormat messageFormat = messageFormatRepository.findByName(entry.getKey());
        if (messageFormat == null) {
            messageFormat = new MessageFormat();
        }
        messageFormat.setName(entry.getKey());
        messageFormat.setClasspath(entry.getValue());
        messageFormat.setJar("n/a");
        messageFormat.setDefaultFormat(true);
        messageFormatRepository.save(messageFormat);
    }
}
 
Author: SourceLabOrg, Project: kafka-webview, Lines: 28, Source: DataLoaderConfig.java
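Downstream, a stored classpath eventually has to be turned back into a live deserializer instance. A minimal reflection sketch of that step (a hypothetical helper, not kafka-webview's actual loading code; it assumes a public no-arg constructor, which every Kafka deserializer provides):

import org.apache.kafka.common.serialization.Deserializer;

public class DeserializerLoader {
    @SuppressWarnings({"unchecked", "rawtypes"})
    public static Deserializer<Object> load(String classpath) throws Exception {
        // resolve the stored class name and verify it really is a Deserializer
        Class<? extends Deserializer> clazz =
                Class.forName(classpath).asSubclass(Deserializer.class);
        return (Deserializer<Object>) clazz.getDeclaredConstructor().newInstance();
    }
}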

Example 10: consumerFactory

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> properties = new HashMap<>();

    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
    properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
    properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
    properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, group);
    properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

    return new DefaultKafkaConsumerFactory<>(properties);
}
 
Author: cwenao, Project: springboot_cwenao, Lines: 17, Source: KafkaConsumerConfig.java

Example 11: consumeRecords

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
private static void consumeRecords(String bootstrapServers) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "avro-consumer");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

    Consumer<String, byte[]> consumer = new KafkaConsumer<>(props);

    consumer.subscribe(Arrays.asList(TOPIC));

    ConsumerRecords<String, byte[]> records = consumer.poll(10000);

    for (ConsumerRecord<String, byte[]> record : records)
        out.printf(
                "key = %s value = %s%n",
                record.key(),
                UserAvroSerdes.deserialize(record.value()).getName().toString());

    consumer.close();
}
 
Author: jeqo, Project: talk-kafka-messaging-logs, Lines: 23, Source: ProduceConsumeStringAvroRecord.java

Example 12: consumeRecords

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
private static void consumeRecords(String bootstrapServers) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "string-consumer");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    Consumer<Integer, String> consumer = new KafkaConsumer<>(props);

    consumer.subscribe(Arrays.asList(TOPIC));

    ConsumerRecords<Integer, String> records = consumer.poll(10000);

    for (ConsumerRecord<Integer, String> record : records)
        out.printf(
                "key = %s value = %s%n",
                record.key(),
                record.value());

    consumer.close();
}
 
Author: jeqo, Project: talk-kafka-messaging-logs, Lines: 23, Source: ProduceConsumeIntegerStringRecord.java

Example 13: consumeRecords

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
private static void consumeRecords(String bootstrapServers) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "metadata-consumer");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    Consumer<Integer, String> consumer = new KafkaConsumer<>(props);

    consumer.subscribe(Arrays.asList(TOPIC));

    ConsumerRecords<Integer, String> records = consumer.poll(10000);

    for (ConsumerRecord<Integer, String> record : records) {
        System.out.printf("key = %s value = %s\t", record.key(), record.value());
        System.out.printf("ProducerRecord: topic=>%s partition=>%s offset=>%s timestamp=>%s checksum=>%s",
                record.topic(),
                record.partition(),
                record.offset(),
                FORMATTER.format(Instant.ofEpochMilli(record.timestamp())),
                record.checksum());
        System.out.println();
    }
    consumer.close();
}
 
Author: jeqo, Project: talk-kafka-messaging-logs, Lines: 27, Source: ProduceConsumeRecordMetadata.java

Example 14: setupAndCreateKafkaBasedLog

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
KafkaBasedLog<String, byte[]> setupAndCreateKafkaBasedLog(String topic, final WorkerConfig config) {
    Map<String, Object> producerProps = new HashMap<>();
    producerProps.putAll(config.originals());
    producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
    producerProps.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);

    Map<String, Object> consumerProps = new HashMap<>();
    consumerProps.putAll(config.originals());
    consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

    Map<String, Object> adminProps = new HashMap<>(config.originals());
    NewTopic topicDescription = TopicAdmin.defineTopic(topic)
            .compacted()
            .partitions(1)
            .replicationFactor(config.getShort(DistributedConfig.CONFIG_STORAGE_REPLICATION_FACTOR_CONFIG))
            .build();

    return createKafkaBasedLog(topic, producerProps, consumerProps, new ConsumeCallback(), topicDescription, adminProps);
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 22, Source: KafkaConfigBackingStore.java

Example 15: commitInvalidOffsets

import org.apache.kafka.common.serialization.StringDeserializer; // import the required package/class
private void commitInvalidOffsets() {
    final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(TestUtils.consumerConfig(
        CLUSTER.bootstrapServers(),
        streamsConfiguration.getProperty(StreamsConfig.APPLICATION_ID_CONFIG),
        StringDeserializer.class,
        StringDeserializer.class));

    final Map<TopicPartition, OffsetAndMetadata> invalidOffsets = new HashMap<>();
    invalidOffsets.put(new TopicPartition(TOPIC_1_2, 0), new OffsetAndMetadata(5, null));
    invalidOffsets.put(new TopicPartition(TOPIC_2_2, 0), new OffsetAndMetadata(5, null));
    invalidOffsets.put(new TopicPartition(TOPIC_A_2, 0), new OffsetAndMetadata(5, null));
    invalidOffsets.put(new TopicPartition(TOPIC_C_2, 0), new OffsetAndMetadata(5, null));
    invalidOffsets.put(new TopicPartition(TOPIC_Y_2, 0), new OffsetAndMetadata(5, null));
    invalidOffsets.put(new TopicPartition(TOPIC_Z_2, 0), new OffsetAndMetadata(5, null));

    consumer.commitSync(invalidOffsets);

    consumer.close();
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: KStreamsFineGrainedAutoResetIntegrationTest.java


Note: The org.apache.kafka.common.serialization.StringDeserializer class examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are selected from open-source projects contributed by their respective authors; copyright remains with the original authors. Refer to each project's license before distributing or using the code, and do not reproduce this article without permission.