

Java KafkaConsumer.subscribe Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.clients.consumer.KafkaConsumer.subscribe. If you are wondering how KafkaConsumer.subscribe is used in practice, the curated examples below should help. You can also explore further usage examples of the enclosing class, org.apache.kafka.clients.consumer.KafkaConsumer.

The following presents 15 code examples of the KafkaConsumer.subscribe method, sorted by popularity.
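
Before the collected examples, a minimal sketch of the usual subscribe-then-poll pattern may be useful as orientation. It is not taken from any of the projects below; the broker address, group id, and topic name are placeholders.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public static void pollForever() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
    props.put("group.id", "example-group");            // placeholder consumer group
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    // subscribe() takes a collection of topic names; partition assignment is
    // handled by the consumer group coordinator after the first poll().
    consumer.subscribe(Collections.singletonList("example-topic"));

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset = %d, key = %s, value = %s%n",
                    record.offset(), record.key(), record.value());
        }
    }
}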

Example 1: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
public static void main(String[] args) {
    KafkaConsumer<String, String> consumer = createConsumer();
    consumer.subscribe(Arrays.asList(TOPIC));

    boolean flag = true;


    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        if (flag) {
            Set<TopicPartition> assignments = consumer.assignment();
            assignments.forEach(topicPartition ->
                    consumer.seek(
                            topicPartition,
                            90));
            flag = false;
        }


        for (ConsumerRecord<String, String> record : records)
            System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
    }


}
 
Developer: jeqo; Project: post-kafka-rewind-consumer-offset; Lines: 26; Source file: KafkaConsumerFromOffset.java

Example 2: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.setProperty("bootstrap.servers", args[0]);
    props.setProperty("group.id", UUID.randomUUID().toString());
    props.setProperty("key.deserializer", LongDeserializer.class.getName());
    props.setProperty("value.deserializer", TradeDeserializer.class.getName());
    props.setProperty("auto.offset.reset", "earliest");
    KafkaConsumer<Long, Trade> consumer = new KafkaConsumer<>(props);
    List<String> topics = Arrays.asList(args[1]);
    consumer.subscribe(topics);
    System.out.println("Subscribed to topics " + topics);
    long count = 0;
    long start = System.nanoTime();
    while (true) {
        ConsumerRecords<Long, Trade> poll = consumer.poll(5000);
        System.out.println("Partitions in batch: " + poll.partitions());
        LongSummaryStatistics stats = StreamSupport.stream(poll.spliterator(), false)
                .mapToLong(r -> r.value().getTime())
                .summaryStatistics();
        System.out.println("Oldest record time: " + stats.getMin() + ", newest record: " + stats.getMax());
        count += poll.count();
        long elapsed = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        long rate = (long) ((double) count / elapsed * 1000);
        System.out.printf("Total count: %,d in %,dms. Average rate: %,d records/s %n", count, elapsed, rate);

    }
}
 
Developer: hazelcast; Project: big-data-benchmark; Lines: 27; Source file: TradeTestConsumer.java

Example 3: loopUntilRecordReceived

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private static void loopUntilRecordReceived(final String kafka, final boolean eosEnabled) {
    final Properties consumerProperties = new Properties();
    consumerProperties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka);
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "broker-compatibility-consumer");
    consumerProperties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    if (eosEnabled) {
        consumerProperties.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, IsolationLevel.READ_COMMITTED.name().toLowerCase(Locale.ROOT));
    }

    final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProperties);
    consumer.subscribe(Collections.singletonList(SINK_TOPIC));

    while (true) {
        final ConsumerRecords<String, String> records = consumer.poll(100);
        for (final ConsumerRecord<String, String> record : records) {
            if (record.key().equals("key") && record.value().equals("value")) {
                consumer.close();
                return;
            }
        }
    }
}
 
Developer: YMCoding; Project: kafka-0.11.0.0-src-with-comment; Lines: 25; Source file: BrokerCompatibilityTest.java

Example 4: readKeyValues

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * Returns up to `maxMessages` key-value pairs from the given topic, read via a consumer
 * created from the provided consumer configuration.
 *
 * @param topic          Kafka topic to read messages from
 * @param consumerConfig Kafka consumer configuration
 * @param maxMessages    Maximum number of messages to read via the consumer
 * @return The KeyValue elements retrieved via the consumer
 */
public static <K, V> List<KeyValue<K, V>> readKeyValues(String topic, Properties consumerConfig, int maxMessages) {
  KafkaConsumer<K, V> consumer = new KafkaConsumer<>(consumerConfig);
  consumer.subscribe(Collections.singletonList(topic));
  int pollIntervalMs = 100;
  int maxTotalPollTimeMs = 2000;
  int totalPollTimeMs = 0;
  List<KeyValue<K, V>> consumedValues = new ArrayList<>();
  while (totalPollTimeMs < maxTotalPollTimeMs && continueConsuming(consumedValues.size(), maxMessages)) {
    totalPollTimeMs += pollIntervalMs;
    ConsumerRecords<K, V> records = consumer.poll(pollIntervalMs);
    for (ConsumerRecord<K, V> record : records) {
      consumedValues.add(new KeyValue<>(record.key(), record.value()));
    }
  }
  consumer.close();
  return consumedValues;
}
 
Developer: kaiwaehner; Project: kafka-streams-machine-learning-examples; Lines: 27; Source file: IntegrationTestUtils.java
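
As a usage note for the helper above, a hypothetical caller might look like the following sketch; the topic name, bootstrap address, group id, and message count are illustrative placeholders, and IntegrationTestUtils is assumed to be the class shown in this example.

import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.streams.KeyValue;

public static List<KeyValue<String, String>> readTenRecords() {
    // All configuration values below are placeholders for illustration.
    Properties consumerConfig = new Properties();
    consumerConfig.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    consumerConfig.put(ConsumerConfig.GROUP_ID_CONFIG, "integration-test-consumer");
    consumerConfig.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

    // Read at most 10 key-value pairs from "output-topic"; the helper gives up
    // after roughly two seconds of polling even if fewer messages have arrived.
    return IntegrationTestUtils.readKeyValues("output-topic", consumerConfig, 10);
}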

Example 5: fetch

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
@SuppressWarnings("unchecked")
@Override
public List<EntityCommand<?>> fetch(String txId) {
	List<EntityCommand<?>> transactionOperations = new ArrayList<EntityCommand<?>>();

	Map<String, Object> consumerConfigs = (Map<String, Object>)configuration.get("kafkaConsumerConfiguration");
	consumerConfigs.put(ConsumerConfig.GROUP_ID_CONFIG, UUID.randomUUID().toString());
	
	KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<String, String>(consumerConfigs);
	kafkaConsumer.subscribe(Arrays.asList(txId));
	
	ConsumerRecords<String, String> records = kafkaConsumer.poll(kafkaConsumerPollTimeout);
	for (ConsumerRecord<String, String> record : records){
		LOG.info("offset = {}, key = {}, value = {}", record.offset(), record.key(), record.value());
		try {
			transactionOperations.add(serializer.readFromString(record.value()));
		} catch (SerializationFailedException e) {
			LOG.error("Unable to deserialize [{}] because of: {}", record.value(), e.getMessage());
		}
	}
	
	kafkaConsumer.close();
		
	return transactionOperations;
}
 
Developer: jotorren; Project: microservices-transactions-tcc; Lines: 26; Source file: CompositeTransactionManagerKafkaImpl.java

Example 6: getSubscriber

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
/**
 * Get a Subscriber that reads from the given partitions. If partitions is null, the Subscriber reads from the topic
 * corresponding to topicName.
 *
 * @param partitions The list of partitions to read from.
 * @param topicName The topic to subscribe to if partitions are not given.
 * @return The Subscriber reading from the appropriate topic/partitions.
 */
private Subscriber getSubscriber(List<TopicPartition> partitions, String topicName) throws PubSubException {
    Map<String, Object> properties = getProperties(CONSUMER_NAMESPACE, KAFKA_CONSUMER_PROPERTIES);

    // Get the PubSub Consumer specific properties
    Number maxUnackedMessages = getRequiredConfig(Number.class, KafkaConfig.MAX_UNCOMMITTED_MESSAGES);

    // Is autocommit on
    String autoCommit = getRequiredConfig(String.class, KafkaConfig.ENABLE_AUTO_COMMIT);
    boolean enableAutoCommit = KafkaConfig.TRUE.equalsIgnoreCase(autoCommit);

    KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(properties);
    // Subscribe to the topic if partitions are not set in the config.
    if (partitions == null) {
        consumer.subscribe(Collections.singleton(topicName));
    } else {
        consumer.assign(partitions);
    }
    return new KafkaSubscriber(consumer, maxUnackedMessages.intValue(), !enableAutoCommit);
}
 
Developer: yahoo; Project: bullet-kafka; Lines: 28; Source file: KafkaPubSub.java

Example 7: buildConsumer

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private KafkaConsumer<StatEventKey, StatAggregate> buildConsumer() {

        try {
            Map<String, Object> props = getConsumerProps();
            LOGGER.debug(() ->
                "Starting aggregation consumer [" + instanceId + "] with properties:\n" + props.entrySet().stream()
                        .map(entry -> "    " + entry.getKey() + ": " + entry.getValue().toString())
                        .collect(Collectors.joining("\n"))
            );
            KafkaConsumer<StatEventKey, StatAggregate> kafkaConsumer = new KafkaConsumer<>(
                    props,
                    statKeySerde.deserializer(),
                    statAggregateSerde.deserializer());

            StatisticsAggregationRebalanceListener rebalanceListener = new StatisticsAggregationRebalanceListener(
                    this,
                    kafkaConsumer);

            kafkaConsumer.subscribe(Collections.singletonList(inputTopic), rebalanceListener);

            //Update our collection of partitions for later health check use
//            assignedPartitions = kafkaConsumer.partitionsFor(inputTopic).stream()
//                    .map(PartitionInfo::partition)
//                    .collect(Collectors.toList());
            setAssignedPartitions(kafkaConsumer.assignment());

            return kafkaConsumer;
        } catch (Exception e) {
            LOGGER.error(String.format("Error building consumer for topic %s on processor %s", inputTopic, this), e);
            throw e;
        }
    }
 
Developer: gchq; Project: stroom-stats; Lines: 33; Source file: StatisticsAggregationProcessor.java

Example 8: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
public static void main(String[] args) {
    Map<String, Object> configs = new HashMap<String, Object>();
    // bootstrap.servers lists one or more brokers; there is no need to list every broker, the client discovers the rest of the cluster automatically.
    configs.put("bootstrap.servers", "192.168.0.107:9092,192.168.0.108:9092,192.168.0.109:9092");
    configs.put("group.id", "kafka-test");
    // whether to auto-commit offsets
    configs.put("enable.auto.commit", "false");
    // interval between automatic offset commits
    configs.put("auto.commit.interval.ms", "1000");
    configs.put("session.timeout.ms", "30000");

    configs.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    configs.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(configs);
    // topics for the consumer to subscribe to; several topics can be subscribed at once
    consumer.subscribe(Arrays.asList("kafka-test"));

    final int minBatchSize = 200;
    List<ConsumerRecord<String, String>> buffer = new ArrayList<ConsumerRecord<String, String>>();

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE);
        for (TopicPartition partition : records.partitions()) {
            List<ConsumerRecord<String, String>> partitionRecords = records.records(partition);
            for (ConsumerRecord<String, String> record : partitionRecords) {
                System.out.println(record.offset() + ": " + record.value());
            }
            /* synchronously commit a specific offset for this partition */
            long lastOffset = partitionRecords.get(partitionRecords.size() - 1).offset();
            consumer.commitSync(Collections.singletonMap(partition, new OffsetAndMetadata(lastOffset + 1)));
        }
    }
}
 
Developer: wngn123; Project: wngn-jms-kafka; Lines: 35; Source file: ComsumerDemo3.java

Example 9: run

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
@Override
public void run() {
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(kafkaProps);
    consumer.subscribe(topics);

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            logger.trace("received message: {} - {}", record.offset(), record.value());
            parseRecordExecutor.execute(new ParseRecord(record));
        }
    }
}
 
Developer: telstra; Project: open-kilda; Lines: 14; Source file: KafkaMessageCollector.java

Example 10: iterator

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
@Override
public CloseableIterator<KeyMessage<String,String>> iterator() {
  KafkaConsumer<String,String> consumer = new KafkaConsumer<>(
      ConfigUtils.keyValueToProperties(
        "group.id", "OryxGroup-ConsumeData",
        "bootstrap.servers", "localhost:" + kafkaPort,
        "key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer",
        "value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer",
        "max.partition.fetch.bytes", maxMessageSize,
        "auto.offset.reset", "earliest" // For tests, always start at the beginning
      ));
  consumer.subscribe(Collections.singletonList(topic));
  return new ConsumeDataIterator<>(consumer);
}
 
Developer: oncewang; Project: oryx2; Lines: 15; Source file: ConsumeData.java

Example 11: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
public static void main(String[] args) throws InterruptedException {

        Properties props = new Properties();
        props.put(BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(GROUP_ID_CONFIG, "a");
        props.put(ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        props.put(SESSION_TIMEOUT_MS_CONFIG, "30000");
        props.put(KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        consumer.subscribe(Arrays.asList("produktion"), new OffsetBeginningRebalanceListener(consumer, "produktion"));

        while(true) {

            ConsumerRecords<String, String> records = consumer.poll(1000);
            if (records.count() == 0)
                continue;

            System.out.println(" Count: " + records.count());

            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset= %d, key= %s, value= %s\n", record.offset(), record.key(), record.value());

        }
    }
 
Developer: predic8; Project: apache-kafka-demos; Lines: 29; Source file: OffsetConsumer.java

Example 12: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
public static void main(String[] args) {

		ArrayList<String> topicsList = new ArrayList<String>();

		HashMap<String, Object> kafkaProperties = new HashMap<String, Object>();

		topicsList.add("proteus-realtime");
		kafkaProperties.put("bootstrap.servers", "192.168.4.246:6667,192.168.4.247:6667,192.168.4.248:6667");
		kafkaProperties.put("key.deserializer", "org.apache.kafka.common.serialization.IntegerDeserializer");
		kafkaProperties.put("value.deserializer", ProteusSerializer.class.getName());
		kafkaProperties.put("group.id", "proteus");

		KafkaConsumer<Integer, Measurement> kafkaConsumer;

		kafkaConsumer = new KafkaConsumer<Integer, Measurement>(kafkaProperties, new IntegerDeserializer(),
				new ProteusSerializer());
		kafkaConsumer.subscribe(topicsList);

		try {
			while (true) {
				ConsumerRecords<Integer, Measurement> records = kafkaConsumer.poll(1);
				for (ConsumerRecord<Integer, Measurement> record : records) {
					System.out.println("record realtime: " + record.toString());
				}

			}
		} finally {
			kafkaConsumer.close();
		}

	}
 
Developer: proteus-h2020; Project: proteus-consumer-couchbase; Lines: 32; Source file: ExampleRealtime.java

Example 13: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
public static void main(String[] args) throws InterruptedException {

        Properties props = new Properties();
        props.put(BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(GROUP_ID_CONFIG, "a");
        props.put(ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(AUTO_COMMIT_INTERVAL_MS_CONFIG, 1000);
        props.put(SESSION_TIMEOUT_MS_CONFIG, 30000);
        props.put(KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        consumer.subscribe(Arrays.asList("produktion"), new SeekToBeginningRebalanceListener(consumer));

        int num = 0;
        int numOld = -1;
        while (num != numOld) {
            ConsumerRecords<String, String> records = consumer.poll(1000);

            numOld = num;
            num += records.count();

            System.out.println("Gelesene Nachrichten: " + num);

        }

        consumer.close();

    }
 
Developer: predic8; Project: apache-kafka-demos; Lines: 31; Source file: RetentionDeleteConsumer.java
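
Examples 11 and 13 pass project-specific rebalance listeners (OffsetBeginningRebalanceListener and SeekToBeginningRebalanceListener) whose source is not shown above. The following is a rough, hypothetical sketch of such a listener built only on the standard ConsumerRebalanceListener callbacks; it is not the actual implementation from the predic8 demos.

import java.util.Collection;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// Hypothetical listener that rewinds to the earliest offset whenever
// partitions are assigned to this consumer.
public class SeekToBeginningListenerSketch implements ConsumerRebalanceListener {

    private final Consumer<?, ?> consumer;

    public SeekToBeginningListenerSketch(Consumer<?, ?> consumer) {
        this.consumer = consumer;
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // nothing to do before giving up the partitions in this sketch
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // reset the fetch position of the newly assigned partitions
        // to the earliest available offset
        consumer.seekToBeginning(partitions);
    }
}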

Example 14: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
public static void main(String[] args) {
  final String tweetsEndpoint = System.getenv("TWEETS_ENDPOINT");

  if (tweetsEndpoint == null || tweetsEndpoint.trim().isEmpty()) {
    throw new RuntimeException("TWEETS_ENDPOINT env variable empty");
  }

  final Properties consumerConfigs = new Properties();
  consumerConfigs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "tweets-source-kafka:9092");
  consumerConfigs.put(ConsumerConfig.GROUP_ID_CONFIG, System.getenv("GROUP_ID"));
  consumerConfigs.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
  consumerConfigs.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

  final KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(consumerConfigs, new StringDeserializer(), new StringDeserializer());

  kafkaConsumer.subscribe(Collections.singletonList("tweets"));

  final HttpClient httpClient = HttpClientBuilder.create().build();

  while (true) {
    final ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(Long.MAX_VALUE);

    for (final ConsumerRecord<String, String> consumerRecord : consumerRecords) {
      final String value = consumerRecord.value();

      try {
        final JsonNode valueNode = objectMapper.readTree(value);
        out.println(valueNode.toString());
        final JsonNode payloadNode = valueNode.get("payload");
        ObjectNode node = (ObjectNode) payloadNode;
        node.remove("lang");
        ((ObjectNode) node.get("entities")).remove("user_mentions");
        ((ObjectNode) node.get("entities")).remove("media");
        ((ObjectNode) node.get("entities")).remove("urls");
        ((ObjectNode) node.get("user")).remove("friends_count");
        ((ObjectNode) node.get("user")).remove("followers_count");
        ((ObjectNode) node.get("user")).remove("statuses_count");
        out.println(node.toString());
        final String payloadValue = node.toString();
        final HttpPost httpPost = new HttpPost(tweetsEndpoint);
        final HttpEntity entity = new NStringEntity(payloadValue, ContentType.APPLICATION_JSON);
        httpPost.setEntity(entity);
        HttpResponse response = httpClient.execute(httpPost);
        out.println("Response: " + response.getStatusLine().getStatusCode());
        out.println("Response: " + IOUtils.toString(response.getEntity().getContent(), "UTF-8"));
      } catch (Exception e) {
        e.printStackTrace();
      }

    }

    kafkaConsumer.commitSync();
  }
}
 
Developer: jeqo; Project: talk-observing-distributed-systems; Lines: 55; Source file: TweetsProducer.java

Example 15: consume

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the package/class the method depends on
private void consume() {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "stroom.kafka:9092");
        consumerProps.put("group.id", "consumerGroup");
        consumerProps.put("enable.auto.commit", "true");
        consumerProps.put("auto.commit.interval.ms", "1000");
        consumerProps.put("session.timeout.ms", "30000");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Serde<String> stringSerde = Serdes.String();
        Serde<Long> longSerde = Serdes.Long();
//        LongAggregatorSerializer longAggregatorSerialiser = new LongAggregatorSerializer();
//        LongAggregatorDeserializer longAggregatorDeserialiser = new LongAggregatorDeserializer();
//        Serde<LongAggregator> longAggregatorSerde = Serdes.serdeFrom(longAggregatorSerialiser, longAggregatorDeserialiser);
        Serde<LongAggregator> longAggregatorSerde = SerdeUtils.buildBasicSerde(
                (topic, data) -> Bytes.toBytes(data.getAggregateVal()),
                (topic, bData) -> new LongAggregator(Bytes.toLong(bData)));

        SerdeUtils.verify(longAggregatorSerde, new LongAggregator(123));

        WindowedSerializer<Long> longWindowedSerializer = new WindowedSerializer<>(longSerde.serializer());
        WindowedDeserializer<Long> longWindowedDeserializer = new WindowedDeserializer<>(longSerde.deserializer());
        Serde<Windowed<Long>> windowedSerde = Serdes.serdeFrom(longWindowedSerializer, longWindowedDeserializer);

        KafkaConsumer<Windowed<Long>, LongAggregator> consumer = new KafkaConsumer<>(
                consumerProps,
                windowedSerde.deserializer(),
//                longSerde.deserializer(),
                longAggregatorSerde.deserializer());

        consumer.subscribe(Collections.singletonList(DEST_TOPIC));

        ExecutorService executorService = Executors.newSingleThreadExecutor();


        @SuppressWarnings("FutureReturnValueIgnored")
        Future future = executorService.submit(() -> {
            LOGGER.info("Consumer about to poll");
            Instant terminationTime = null;
//            while (!isTerminated.get() || Instant.now().isBefore(terminationTime.plusSeconds(10))) {
            while (true) {
                try {
//                    ConsumerRecords<Windowed<Long>, LongAggregator> records = consumer.poll(100);
                    ConsumerRecords<Windowed<Long>, LongAggregator> records = consumer.poll(100);
//                LOGGER.info("Received {} messages in batch", records.count());
                    for (ConsumerRecord<Windowed<Long>, LongAggregator> record : records) {
//                    for (ConsumerRecord<Long, LongAggregator> record : records) {
                        //                    System.out.printf("offset = %d, key = %s, value = %s\n", record.offset(), record.key(), record.value());
                        LOGGER.info("Received message key: {} winStart: {} winEnd {} winDuration: {} val: {}",
                                epochMsToString(record.key().key()),
                                epochMsToString(record.key().window().start()),
                                epochMsToString(record.key().window().end()),
                                record.key().window().end() - record.key().window().start(),
                                record.value().getAggregateVal());
//                        LOGGER.info("Received message key: {} val: {}",
//                                epochMsToString(record.key()),
//                                record.value().getAggregateVal());
//                        outputData.computeIfAbsent(record.key(),aLong -> new AtomicLong()).addAndGet(record.value().getAggregateVal());
                        outputData.computeIfAbsent(record.key().key(), aLong -> new AtomicLong()).addAndGet(record.value().getAggregateVal());
                    }
                } catch (Exception e) {
                    LOGGER.error("Error polling topic {} ", DEST_TOPIC, e);
                }
                if (isTerminated.get()) {
                    terminationTime = Instant.now();
                }
            }
//            consumer.close();
//            LOGGER.info("Consumer closed");

        });
        LOGGER.info("Consumer started");
    }
 
Developer: gchq; Project: stroom-stats; Lines: 75; Source file: KafkaStreamsSandbox.java


Note: The org.apache.kafka.clients.consumer.KafkaConsumer.subscribe examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by their respective authors; copyright of the source code remains with the original authors, and distribution or use should follow the corresponding project's License. Do not reproduce without permission.