

Java KafkaConsumer Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.clients.consumer.KafkaConsumer. If you are wondering what the KafkaConsumer class does, how to use it, or what real-world examples look like, the curated code samples below should help.


The KafkaConsumer class belongs to the org.apache.kafka.clients.consumer package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
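
Before the collected examples, here is a minimal, self-contained sketch of typical KafkaConsumer usage: configure deserializers, subscribe, and poll in a loop. This is a baseline sketch only; the broker address, group id, and topic name are placeholders.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MinimalConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", "example-group");           // placeholder consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("example-topic")); // placeholder topic

        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset = %d, key = %s, value = %s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        } finally {
            consumer.close();
        }
    }
}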

Example 1: receive

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public String receive() {
    KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(properties);
    consumer.subscribe(Arrays.asList(properties.getProperty("topic")));
    final int minBatchSize = 200;
    List<ConsumerRecord<String, String>> buffer = new ArrayList<ConsumerRecord<String, String>>();
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);

        for (ConsumerRecord<String, String> record : records) {
            buffer.add(record);
            System.err.println(buffer.size() + "----->" + record);

        }
        if (buffer.size() >= minBatchSize) {
            writeFileToHadoop(buffer); // write the buffered records out to Hadoop first
            consumer.commitSync();
            buffer.clear();
        }
    }
}
 
Developer: wanghan0501, Project: WiFiProbeAnalysis, Lines: 21, Source: KafkaConsumerForHive.java
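
Example 1 reads its settings from a properties field that is not shown in the excerpt. A plausible configuration sketch follows; the broker address and group id are placeholders, and the "topic" key is the project's own convention (read back via getProperty("topic")), not a Kafka setting. Auto commit is disabled here so that the explicit commitSync() after writing to Hadoop actually governs offset progress.

Properties properties = new Properties();
properties.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
properties.put("group.id", "hive-writer");             // placeholder consumer group
properties.put("enable.auto.commit", "false");         // commit manually once each batch is persisted
properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.put("topic", "probe-data");                 // project-specific key, not a Kafka config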

Example 2: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public static void main(String[] args) {
    KafkaConsumer<String, String> consumer = createConsumer();
    consumer.subscribe(Arrays.asList(TOPIC));

    boolean flag = true;

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        if (flag) {
            // seeking is only possible after the first poll has triggered partition assignment
            Set<TopicPartition> assignments = consumer.assignment();
            assignments.forEach(topicPartition ->
                    consumer.seek(topicPartition, 90));
            flag = false;
        }

        for (ConsumerRecord<String, String> record : records)
            System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
    }
}
 
Developer: jeqo, Project: post-kafka-rewind-consumer-offset, Lines: 26, Source: KafkaConsumerFromOffset.java
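
Example 2 relies on a createConsumer() helper and a TOPIC constant that are not part of the excerpt. A minimal sketch of what they might look like, with placeholder values:

private static final String TOPIC = "example-topic"; // placeholder

private static KafkaConsumer<String, String> createConsumer() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
    props.put("group.id", "rewind-consumer");         // placeholder consumer group
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    return new KafkaConsumer<>(props);
}

Note that the first poll() happens before the seek: seek() only works once partitions have been assigned, which is why the example polls once and then rewinds each assigned partition to offset 90.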

Example 3: onScheduled

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
@OnScheduled
public void onScheduled(final ProcessContext context) {
    try {
        topic = context.getProperty(TOPIC).getValue();
        groupName = context.getProperty(CONSUMER_GROUP_NAME).getValue();
        brokerIP = context.getProperty(BROKERIP).getValue();
        props = new Properties();
        props.put("bootstrap.servers", brokerIP);
        props.put("group.id", groupName);
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "earliest");
        consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList(topic));
    } catch (Exception e) {
        e.printStackTrace();
    }
}
 
Developer: dream-lab, Project: echo, Lines: 22, Source: KafkaFlowFilesConsumer.java

Example 4: KafkaConsumerEvent

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public KafkaConsumerEvent(String topic) {
    super(0L);
    this.topic = topic;
    Properties props = HeartBeatConfigContainer.getInstance().getKafkaConsumerConfig();
    Properties producerProps = HeartBeatConfigContainer.getInstance().getKafkaProducerConfig();
    try {
        dataConsumer = new KafkaConsumer<>(props);
        partition0 = new TopicPartition(this.topic, 0);
        dataConsumer.assign(Arrays.asList(partition0));
        dataConsumer.seekToEnd(Arrays.asList(partition0));
        KafkaConsumerContainer.getInstances().putConsumer(this.topic, dataConsumer);

        statProducer = new KafkaProducer<>(producerProps);
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    startTime = System.currentTimeMillis();
}
 
Developer: BriData, Project: DBus, Lines: 20, Source: KafkaConsumerEvent.java

Example 5: init

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
@Override
public void init(AbstractConfiguration config, String brokerId, BrokerListenerFactory factory) {
    init(config);

    BROKER_TOPIC = BROKER_TOPIC_PREFIX + "." + brokerId;

    logger.trace("Initializing Kafka consumer ...");

    // consumer config
    Properties props = new Properties();
    props.put("bootstrap.servers", config.getString("bootstrap.servers"));
    props.put("group.id", UUIDs.shortUuid());
    props.put("enable.auto.commit", "true");
    props.put("key.serializer", StringSerializer.class.getName());
    props.put("value.serializer", InternalMessageSerializer.class.getName());

    // consumer
    this.consumer = new KafkaConsumer<>(props);

    // consumer worker
    this.worker = new KafkaBrokerWorker(this.consumer, BROKER_TOPIC, factory.newListener());
    this.executor.submit(this.worker);
}
 
Developer: 12315jack, Project: j1st-mqtt, Lines: 24, Source: KafkaBrokerCommunicator.java

Example 6: receive

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public List<String> receive() {
    KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(properties);
    consumer.subscribe(Arrays.asList(properties.getProperty("topic")));
    List<String> buffer = new ArrayList<String>();
    String msg = "";
    while (true) {
        System.err.println("consumer receive------------------");
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            buffer.add(record.value());
        }
        // the method returns on the loop's first pass, so only a single poll is performed
        consumer.close();
        return buffer;
    }
}
 
Developer: wanghan0501, Project: WiFiProbeAnalysis, Lines: 18, Source: KafkaConsumers.java

Example 7: maybeSeekToEnd

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
private void maybeSeekToEnd(final KafkaConsumer<byte[], byte[]> client, final Set<TopicPartition> intermediateTopicPartitions) {
    final String groupId = options.valueOf(applicationIdOption);
    final List<String> intermediateTopics = options.valuesOf(intermediateTopicsOption);

    if (intermediateTopicPartitions.size() > 0) {
        if (!dryRun) {
            client.seekToEnd(intermediateTopicPartitions);
        } else {
            System.out.println("Following intermediate topics offsets will be reset to end (for consumer group " + groupId + ")");
            for (final String topic : intermediateTopics) {
                if (allTopics.contains(topic)) {
                    System.out.println("Topic: " + topic);
                }
            }
        }
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: StreamsResetter.java

Example 8: loopUntilRecordReceived

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
private static void loopUntilRecordReceived(final String kafka, final boolean eosEnabled) {
    final Properties consumerProperties = new Properties();
    consumerProperties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka);
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "broker-compatibility-consumer");
    consumerProperties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    if (eosEnabled) {
        consumerProperties.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, IsolationLevel.READ_COMMITTED.name().toLowerCase(Locale.ROOT));
    }

    final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProperties);
    consumer.subscribe(Collections.singletonList(SINK_TOPIC));

    while (true) {
        final ConsumerRecords<String, String> records = consumer.poll(100);
        for (final ConsumerRecord<String, String> record : records) {
            if (record.key().equals("key") && record.value().equals("value")) {
                consumer.close();
                return;
            }
        }
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: BrokerCompatibilityTest.java

Example 9: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.setProperty("bootstrap.servers", args[0]);
    props.setProperty("group.id", UUID.randomUUID().toString());
    props.setProperty("key.deserializer", LongDeserializer.class.getName());
    props.setProperty("value.deserializer", TradeDeserializer.class.getName());
    props.setProperty("auto.offset.reset", "earliest");
    KafkaConsumer<Long, Trade> consumer = new KafkaConsumer<>(props);
    List<String> topics = Arrays.asList(args[1]);
    consumer.subscribe(topics);
    System.out.println("Subscribed to topics " + topics);
    long count = 0;
    long start = System.nanoTime();
    while (true) {
        ConsumerRecords<Long, Trade> poll = consumer.poll(5000);
        System.out.println("Partitions in batch: " + poll.partitions());
        LongSummaryStatistics stats = StreamSupport.stream(poll.spliterator(), false)
                .mapToLong(r -> r.value().getTime())
                .summaryStatistics();
        System.out.println("Oldest record time: " + stats.getMin() + ", newest record: " + stats.getMax());
        count += poll.count();
        long elapsed = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        long rate = (long) ((double) count / elapsed * 1000);
        System.out.printf("Total count: %,d in %,dms. Average rate: %,d records/s %n", count, elapsed, rate);

    }
}
 
Developer: hazelcast, Project: big-data-benchmark, Lines: 27, Source: TradeTestConsumer.java
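
Example 9 plugs a project-specific TradeDeserializer into the value.deserializer setting, but that class is not shown. The following is a hypothetical sketch of how such a custom Deserializer is typically written; the Trade class and its wire format (a timestamp followed by a price, both longs) are invented here for illustration.

import java.nio.ByteBuffer;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;

public class TradeDeserializer implements Deserializer<Trade> {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // no configuration needed for this sketch
    }

    @Override
    public Trade deserialize(String topic, byte[] data) {
        // hypothetical wire format: 8-byte timestamp followed by 8-byte price
        ByteBuffer buf = ByteBuffer.wrap(data);
        return new Trade(buf.getLong(), buf.getLong());
    }

    @Override
    public void close() {
        // nothing to release
    }
}

// hypothetical value type matching the getTime() call in the example
class Trade {
    private final long time;
    private final long price;

    Trade(long time, long price) {
        this.time = time;
        this.price = price;
    }

    long getTime() {
        return time;
    }
}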

Example 10: readKafkaTopic

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
@GET
@Path("/readKafkaTopic")
public Response readKafkaTopic(Map<String, Object > map) {
    try {
        Properties properties = PropertiesUtils.getProps("consumer.properties");
        properties.setProperty("client.id","readKafkaTopic");
        properties.setProperty("group.id","readKafkaTopic");
        //properties.setProperty("bootstrap.servers", "localhost:9092");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        String topic = map.get("topic").toString();
        //System.out.println("topic="+topic);
        TopicPartition topicPartition = new TopicPartition(topic, 0);
        List<TopicPartition> topics = Arrays.asList(topicPartition);
        consumer.assign(topics);
        consumer.seekToEnd(topics);
        long current = consumer.position(topicPartition);
        long end = current;
        current -= 1000;
        if(current < 0) current = 0;
        consumer.seek(topicPartition, current);
        List<String> result = new ArrayList<>();
        while (current < end) {
            //System.out.println("topic position = "+current);
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                result.add(record.value());
                //System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            }
            current = consumer.position(topicPartition);
        }
        consumer.close();
        return Response.ok().entity(result).build();
    } catch (Exception e) {
        logger.error("Error encountered while readKafkaTopic with parameter:{}", JSON.toJSONString(map), e);
        return Response.status(204).entity(new Result(-1, e.getMessage())).build();
    }
}
 
Developer: BriData, Project: DBus, Lines: 38, Source: DataTableResource.java

Example 11: createConsumer

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
/**
 * createConsumer - create a new consumer
 * @return
 * @throws Exception
 */
private Consumer<String, String> createConsumer() throws Exception {

    // Seek to end automatically
    TopicPartition dataTopicPartition = new TopicPartition(topicName, 0);
    List<TopicPartition> topics = Arrays.asList(dataTopicPartition);

    Properties props = ConfUtils.getProps(CONSUMER_PROPS);
    Consumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.assign(topics);

    if (offset == -1) {
        consumer.seekToEnd(topics);
        logger.info("Consumer seek to end");
    } else {
        consumer.seek(dataTopicPartition, offset);
        logger.info(String.format("read changed as offset: %s", consumer.position(dataTopicPartition)));
    }
    return consumer;
}
 
Developer: BriData, Project: DBus, Lines: 25, Source: KafkaReader.java

Example 12: cleanUp

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
private void cleanUp(KafkaConsumer<StatEventKey, StatAggregate> kafkaConsumer, int unCommittedRecCount) {
    // force a flush of anything in the aggregator
    if (statAggregator != null) {
        LOGGER.debug("Forcing a flush of aggregator {} on processor {}", statAggregator, this);
        flushAggregator();
    }
    if (kafkaConsumer != null) {
        if (unCommittedRecCount > 0) {
            LOGGER.debug("Committing kafka offset on processor {}", this);
            kafkaConsumer.commitSync();
        }
        LOGGER.debug("Closing kafka consumer on processor {}", this);
        kafkaConsumer.close();
    }
}
 
Developer: gchq, Project: stroom-stats, Lines: 17, Source: StatisticsAggregationProcessor.java

Example 13: KafkaMessageConsumer2

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
@SuppressWarnings("unchecked")
public KafkaMessageConsumer2(final Map<String, Object> consumerConfig, final String topic, final ISenderProvider<T> senderProvider,
		final MessagePublisherProvider<T, K, V> publisherProvider) {
	this.consumer = new KafkaConsumer<>(consumerConfig);
	this.pollTimeout = (Long) consumerConfig.get(Constants.KAFKA_POLL_TIMEOUT);
	this.senderProvider = senderProvider;
	this.publisherProvider = publisherProvider;

	final List<PartitionInfo> partitionInfos = consumer.partitionsFor(topic);

	final List<String> subscribedPartitions = (List<String>) consumerConfig.get(Constants.KAFKA_SUBSCRIBED_PARTITIONS);
	// note: contains(Object) is called with an Integer against a list declared as List<String>;
	// because of type erasure this only matches if the configured list actually holds Integers
	final Collection<TopicPartition> partitions = partitionInfos.stream()
			.filter(p -> subscribedPartitions.contains(Integer.valueOf(p.partition())))
			.map(p -> new TopicPartition(p.topic(), p.partition()))
			.collect(Collectors.toList());
	LOG.info("Assigning to topic={}, partitions={}", topic, partitions);
	this.consumer.assign(partitions);
}
 
Developer: dcsolutions, Project: kalinka, Lines: 17, Source: KafkaMessageConsumer2.java

Example 14: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public static void main(String[] args) throws Exception {
    Properties props = new Properties();

    props.put("bootstrap.servers", "192.168.77.7:9092,192.168.77.7:9093,192.168.77.7:9094");
    props.put("group.id", "test-group-id");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

    consumer.subscribe(Collections.singletonList("test"));

    System.out.println("Subscribed to topic test");

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records)
            System.out.println(String.format("offset = %s, key = %s, value = %s", record.offset(), record.key(), record.value()));
    }
}
 
Developer: bpark, Project: kafka-docker-demo, Lines: 21, Source: ConsumerDemo.java

Example 15: verifyTopicsExist

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public boolean verifyTopicsExist(String kafkaBrokers, Set<String> requiredTopics,
                                 boolean checkPartitionCounts) {
    Properties props = new Properties();
    props.put("bootstrap.servers", kafkaBrokers);
    props.put("group.id", UUID.randomUUID().toString());
    props.put("key.deserializer", StringDeserializer.class.getName());
    props.put("value.deserializer", StringDeserializer.class.getName());
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    try {
        Map<String, List<PartitionInfo>> topics = consumer.listTopics();

        Set<Integer> partitionCount = new HashSet<>();
        for (String requiredTopic : requiredTopics) {
            List<PartitionInfo> partitions = topics.get(requiredTopic);
            if (partitions == null) {
                logger.info("Required kafka topic {} not present", requiredTopic);
                return false;
            }
            partitionCount.add(partitions.size());
        }
        if (checkPartitionCounts && partitionCount.size() > 1) {
            logger.warn("Partition count mismatch in topics {}",
                    Arrays.toString(requiredTopics.toArray()));
            return false;
        }
        return true;
    } finally {
        consumer.close();
    }
}
 
Developer: Sixt, Project: ja-micro, Lines: 32, Source: TopicVerification.java
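
Assuming an instance of the enclosing TopicVerification class, Example 15 could be exercised roughly like this (broker address and topic names are placeholders):

boolean ok = verification.verifyTopicsExist("localhost:9092",
        new HashSet<>(Arrays.asList("events", "commands")), true);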


Note: The org.apache.kafka.clients.consumer.KafkaConsumer class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers, and copyright in the source code remains with the original authors. Refer to the corresponding project's license before distributing or using the code; do not republish without permission.