

Java KafkaConsumer Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.clients.consumer.KafkaConsumer. If you are wondering what the KafkaConsumer class does, how to use it, or where to find it in real code, the curated class examples below should help.


The KafkaConsumer class belongs to the org.apache.kafka.clients.consumer package. Fifteen code examples of the class are presented below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
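Before diving into the project snippets, here is a minimal, self-contained sketch of typical KafkaConsumer usage; the broker address, group id, and topic name are placeholders, and most of the examples below follow this same subscribe-and-poll pattern:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MinimalConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", "example-group");           // placeholder group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset = %d, key = %s, value = %s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}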

Example 1: receive

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public String receive() {
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
    consumer.subscribe(Arrays.asList(properties.getProperty("topic")));
    final int minBatchSize = 200;
    List<ConsumerRecord<String, String>> buffer = new ArrayList<>();
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);

        for (ConsumerRecord<String, String> record : records) {
            buffer.add(record);
            System.err.println(buffer.size() + "----->" + record);

        }
        if (buffer.size() >= minBatchSize) {
            writeFileToHadoop(buffer); // write the buffered records to a file first
            consumer.commitSync();
            buffer.clear();
        }
    }
}
 
Developer: wanghan0501, Project: WiFiProbeAnalysis, Lines: 21, Source: KafkaConsumerForHive.java
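The snippet reads a `properties` field and calls a `writeFileToHadoop` helper that are defined elsewhere in KafkaConsumerForHive and are not shown in the excerpt. A plausible configuration for that field (an assumption, not taken from the project) would be:

// Assumed configuration; the original project initializes this field elsewhere.
Properties properties = new Properties();
properties.put("bootstrap.servers", "localhost:9092");  // assumed broker address
properties.put("group.id", "wifi-probe-consumer");      // assumed group id
properties.put("enable.auto.commit", "false");          // the snippet commits manually via commitSync()
properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.put("topic", "wifi-probe-data");             // custom key read by receive(); assumed topic name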

Example 2: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public static void main(String[] args) {
    KafkaConsumer<String, String> consumer = createConsumer();
    consumer.subscribe(Arrays.asList(TOPIC));

    boolean flag = true;


    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        if (flag) {
            // after the first poll the group assignment is known; rewind each partition to offset 90
            Set<TopicPartition> assignments = consumer.assignment();
            assignments.forEach(topicPartition -> consumer.seek(topicPartition, 90));
            flag = false;
        }


        for (ConsumerRecord<String, String> record : records)
            System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
    }


}
 
Developer: jeqo, Project: post-kafka-rewind-consumer-offset, Lines: 26, Source: KafkaConsumerFromOffset.java
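The `createConsumer()` helper is not included in the excerpt; a minimal sketch of what it could look like (all settings assumed) is:

// Hypothetical helper; the project's actual createConsumer() is not shown in the excerpt.
private static KafkaConsumer<String, String> createConsumer() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
    props.put("group.id", "rewind-example-group");    // assumed group id
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    return new KafkaConsumer<>(props);
}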

Example 3: onScheduled

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
@OnScheduled
public void onScheduled(final ProcessContext context) {
    try {
        topic = context.getProperty(TOPIC).getValue();
        groupName = context.getProperty(CONSUMER_GROUP_NAME).getValue();
        brokerIP = context.getProperty(BROKERIP).getValue();
        props = new Properties();
        props.put("bootstrap.servers", brokerIP);
        props.put("group.id", groupName);
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "earliest");
        consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topic));
    } catch (Exception e) {
        e.printStackTrace();
    }
}
 
Developer: dream-lab, Project: echo, Lines: 22, Source: KafkaFlowFilesConsumer.java

Example 4: KafkaConsumerEvent

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public KafkaConsumerEvent(String topic) {
    super(0L);
    this.topic = topic;
    Properties props = HeartBeatConfigContainer.getInstance().getKafkaConsumerConfig();
    Properties producerProps = HeartBeatConfigContainer.getInstance().getKafkaProducerConfig();
    try {
        dataConsumer = new KafkaConsumer<>(props);
        partition0 = new TopicPartition(this.topic, 0);
        dataConsumer.assign(Arrays.asList(partition0));
        dataConsumer.seekToEnd(Arrays.asList(partition0));
        KafkaConsumerContainer.getInstances().putConsumer(this.topic, dataConsumer);

        statProducer = new KafkaProducer<>(producerProps);
    } catch (Exception e) {
        e.printStackTrace();
    }
    startTime = System.currentTimeMillis();
}
 
Developer: BriData, Project: DBus, Lines: 20, Source: KafkaConsumerEvent.java

Example 5: init

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
@Override
public void init(AbstractConfiguration config, String brokerId, BrokerListenerFactory factory) {
    init(config);

    BROKER_TOPIC = BROKER_TOPIC_PREFIX + "." + brokerId;

    logger.trace("Initializing Kafka consumer ...");

    // consumer config
    Properties props = new Properties();
    props.put("bootstrap.servers", config.getString("bootstrap.servers"));
    props.put("group.id", UUIDs.shortUuid());
    props.put("enable.auto.commit", "true");
    props.put("key.serializer", StringSerializer.class.getName());
    props.put("value.serializer", InternalMessageSerializer.class.getName());

    // consumer
    this.consumer = new KafkaConsumer<>(props);

    // consumer worker
    this.worker = new KafkaBrokerWorker(this.consumer, BROKER_TOPIC, factory.newListener());
    this.executor.submit(this.worker);
}
 
Developer: 12315jack, Project: j1st-mqtt, Lines: 24, Source: KafkaBrokerCommunicator.java

Example 6: receive

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public List<String> receive() {
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
    consumer.subscribe(Arrays.asList(properties.getProperty("topic")));
    List<String> buffer = new ArrayList<>();
    System.err.println("consumer receive------------------");
    // poll once, collect the record values, then close the consumer;
    // note that a single short poll right after subscribe may return no
    // records, because the group rebalance may not have completed yet
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        buffer.add(record.value());
    }
    consumer.close();
    return buffer;
}
 
Developer: wanghan0501, Project: WiFiProbeAnalysis, Lines: 18, Source: KafkaConsumers.java

Example 7: maybeSeekToEnd

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
private void maybeSeekToEnd(final KafkaConsumer<byte[], byte[]> client, final Set<TopicPartition> intermediateTopicPartitions) {

    final String groupId = options.valueOf(applicationIdOption);
    final List<String> intermediateTopics = options.valuesOf(intermediateTopicsOption);

    if (intermediateTopicPartitions.size() > 0) {
        if (!dryRun) {
            client.seekToEnd(intermediateTopicPartitions);
        } else {
            System.out.println("Following intermediate topics offsets will be reset to end (for consumer group " + groupId + ")");
            for (final String topic : intermediateTopics) {
                if (allTopics.contains(topic)) {
                    System.out.println("Topic: " + topic);
                }
            }
        }
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: StreamsResetter.java
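Note that seekToEnd evaluates lazily: the end offsets are only resolved on the next poll() or position() call. A sketch of forcing the lookup and persisting the reset positions (an illustration, not the actual StreamsResetter logic):

// Sketch: resolve the lazy seekToEnd, then commit the new positions for the group.
client.seekToEnd(intermediateTopicPartitions);
for (final TopicPartition partition : intermediateTopicPartitions) {
    final long newOffset = client.position(partition); // triggers the actual offset lookup
    System.out.println("Partition " + partition + " reset to offset " + newOffset);
}
client.commitSync(); // persist the reset positions for the consumer group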

Example 8: loopUntilRecordReceived

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
private static void loopUntilRecordReceived(final String kafka, final boolean eosEnabled) {
    final Properties consumerProperties = new Properties();
    consumerProperties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka);
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "broker-compatibility-consumer");
    consumerProperties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    if (eosEnabled) {
        consumerProperties.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, IsolationLevel.READ_COMMITTED.name().toLowerCase(Locale.ROOT));
    }

    final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProperties);
    consumer.subscribe(Collections.singletonList(SINK_TOPIC));

    while (true) {
        final ConsumerRecords<String, String> records = consumer.poll(100);
        for (final ConsumerRecord<String, String> record : records) {
            if (record.key().equals("key") && record.value().equals("value")) {
                consumer.close();
                return;
            }
        }
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: BrokerCompatibilityTest.java

Example 9: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.setProperty("bootstrap.servers", args[0]);
    props.setProperty("group.id", UUID.randomUUID().toString());
    props.setProperty("key.deserializer", LongDeserializer.class.getName());
    props.setProperty("value.deserializer", TradeDeserializer.class.getName());
    props.setProperty("auto.offset.reset", "earliest");
    KafkaConsumer<Long, Trade> consumer = new KafkaConsumer<>(props);
    List<String> topics = Arrays.asList(args[1]);
    consumer.subscribe(topics);
    System.out.println("Subscribed to topics " + topics);
    long count = 0;
    long start = System.nanoTime();
    while (true) {
        ConsumerRecords<Long, Trade> poll = consumer.poll(5000);
        System.out.println("Partitions in batch: " + poll.partitions());
        LongSummaryStatistics stats = StreamSupport.stream(poll.spliterator(), false)
                .mapToLong(r -> r.value().getTime()).summaryStatistics();
        System.out.println("Oldest record time: " + stats.getMin() + ", newest record: " + stats.getMax());
        count += poll.count();
        long elapsed = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        long rate = (long) ((double) count / elapsed * 1000);
        System.out.printf("Total count: %,d in %,dms. Average rate: %,d records/s %n", count, elapsed, rate);

    }
}
 
Developer: hazelcast, Project: big-data-benchmark, Lines: 27, Source: TradeTestConsumer.java
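TradeDeserializer is a class from the benchmark project. A custom value deserializer implements org.apache.kafka.common.serialization.Deserializer; a minimal sketch follows, where the wire format and the Trade constructor are assumptions, not the benchmark's actual code:

import java.nio.ByteBuffer;
import java.util.Map;

import org.apache.kafka.common.serialization.Deserializer;

// Hypothetical implementation; the real TradeDeserializer lives in the benchmark project.
public class TradeDeserializer implements Deserializer<Trade> {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // no configuration needed
    }

    @Override
    public Trade deserialize(String topic, byte[] data) {
        if (data == null) {
            return null;
        }
        // assumed wire format: an 8-byte timestamp at the start of the payload
        ByteBuffer buffer = ByteBuffer.wrap(data);
        long time = buffer.getLong();
        return new Trade(time); // assumed constructor
    }

    @Override
    public void close() {
        // nothing to release
    }
}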

Example 10: readKafkaTopic

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
@GET
@Path("/readKafkaTopic")
public Response readKafkaTopic(Map<String, Object> map) {
    try {
        Properties properties = PropertiesUtils.getProps("consumer.properties");
        properties.setProperty("client.id", "readKafkaTopic");
        properties.setProperty("group.id", "readKafkaTopic");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        String topic = map.get("topic").toString();
        TopicPartition topicPartition = new TopicPartition(topic, 0);
        List<TopicPartition> topics = Arrays.asList(topicPartition);
        consumer.assign(topics);
        // position at the end of partition 0, then step back up to 1000 offsets
        // so the endpoint returns roughly the last 1000 records of the topic
        consumer.seekToEnd(topics);
        long current = consumer.position(topicPartition);
        long end = current;
        current -= 1000;
        if (current < 0) current = 0;
        consumer.seek(topicPartition, current);
        List<String> result = new ArrayList<>();
        while (current < end) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                result.add(record.value());
            }
            current = consumer.position(topicPartition);
        }
        consumer.close();
        return Response.ok().entity(result).build();
    } catch (Exception e) {
        logger.error("Error encountered while readKafkaTopic with parameter:{}", JSON.toJSONString(map), e);
        return Response.status(204).entity(new Result(-1, e.getMessage())).build();
    }
}
 
Developer: BriData, Project: DBus, Lines: 38, Source: DataTableResource.java

Example 11: createConsumer

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
/**
 * createConsumer - create a new consumer assigned to partition 0 of the data topic
 * @return a consumer positioned at the end of the partition, or at the configured offset
 * @throws Exception if the consumer properties cannot be loaded
 */
private Consumer<String, String> createConsumer() throws Exception {

    // assign partition 0 of the data topic manually (no consumer-group rebalancing)
    TopicPartition dataTopicPartition = new TopicPartition(topicName, 0);
    List<TopicPartition> topics = Arrays.asList(dataTopicPartition);

    Properties props = ConfUtils.getProps(CONSUMER_PROPS);
    Consumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.assign(topics);

    if (offset == -1) {
        consumer.seekToEnd(topics);
        logger.info("Consumer seek to end");
    } else {
        consumer.seek(dataTopicPartition, offset);
        logger.info(String.format("read changed as offset: %s", consumer.position(dataTopicPartition)));
    }
    return consumer;
}
 
Developer: BriData, Project: DBus, Lines: 25, Source: KafkaReader.java

Example 12: cleanUp

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
private void cleanUp(KafkaConsumer<StatEventKey, StatAggregate> kafkaConsumer, int unCommittedRecCount) {

    // force a flush of anything in the aggregator
    if (statAggregator != null) {
        LOGGER.debug("Forcing a flush of aggregator {} on processor {}", statAggregator, this);
        flushAggregator();
    }
    if (kafkaConsumer != null) {
        if (unCommittedRecCount > 0) {
            LOGGER.debug("Committing kafka offset on processor {}", this);
            kafkaConsumer.commitSync();
        }
        LOGGER.debug("Closing kafka consumer on processor {}", this);
        kafkaConsumer.close();
    }
}
 
Developer: gchq, Project: stroom-stats, Lines: 17, Source: StatisticsAggregationProcessor.java

Example 13: KafkaMessageConsumer2

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
@SuppressWarnings("unchecked")
public KafkaMessageConsumer2(final Map<String, Object> consumerConfig, final String topic, final ISenderProvider<T> senderProvider,
		final MessagePublisherProvider<T, K, V> publisherProvider) {
	this.consumer = new KafkaConsumer<>(consumerConfig);
	this.pollTimeout = (Long) consumerConfig.get(Constants.KAFKA_POLL_TIMEOUT);
	this.senderProvider = senderProvider;
	this.publisherProvider = publisherProvider;

	final List<PartitionInfo> partitionInfos = consumer.partitionsFor(topic);

	final List<String> subscribedPartitions = (List<String>) consumerConfig.get(Constants.KAFKA_SUBSCRIBED_PARTITIONS);
	// NOTE: contains(Integer.valueOf(...)) on a List<String> only matches if the unchecked
	// cast above actually hides Integer elements; with genuine strings nothing would match
	final Collection<TopicPartition> partitions = partitionInfos.stream()
			.filter(p -> subscribedPartitions.contains(Integer.valueOf(p.partition())))
			.map(p -> new TopicPartition(p.topic(), p.partition()))
			.collect(Collectors.toList());
	LOG.info("Assigning to topic={}, partitions={}", topic, partitions);
	this.consumer.assign(partitions);
}
 
Developer: dcsolutions, Project: kalinka, Lines: 17, Source: KafkaMessageConsumer2.java

Example 14: main

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public static void main(String[] args) throws Exception {
    Properties props = new Properties();

    props.put("bootstrap.servers", "192.168.77.7:9092,192.168.77.7:9093,192.168.77.7:9094");
    props.put("group.id", "test-group-id");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

    consumer.subscribe(Collections.singletonList("test"));

    System.out.println("Subscribed to topic test");

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(String.format("offset = %s, key = %s, value = %s", record.offset(), record.key(), record.value()));
        }
    }
}
 
Developer: bpark, Project: kafka-docker-demo, Lines: 21, Source: ConsumerDemo.java

Example 15: verifyTopicsExist

import org.apache.kafka.clients.consumer.KafkaConsumer; // import the required package/class
public boolean verifyTopicsExist(String kafkaBrokers, Set<String> requiredTopics,
                                 boolean checkPartitionCounts) {
    Properties props = new Properties();
    props.put("bootstrap.servers", kafkaBrokers);
    props.put("group.id", UUID.randomUUID().toString());
    props.put("key.deserializer", StringDeserializer.class.getName());
    props.put("value.deserializer", StringDeserializer.class.getName());
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    try {
        Map<String, List<PartitionInfo>> topics = consumer.listTopics();

        Set<Integer> partitionCount = new HashSet<>();
        for (String requiredTopic : requiredTopics) {
            List<PartitionInfo> partitions = topics.get(requiredTopic);
            if (partitions == null) {
                logger.info("Required kafka topic {} not present", requiredTopic);
                return false;
            }
            partitionCount.add(partitions.size());
        }
        if (checkPartitionCounts && partitionCount.size() > 1) {
            logger.warn("Partition count mismatch in topics {}",
                    Arrays.toString(requiredTopics.toArray()));
            return false;
        }
        return true;
    } finally {
        consumer.close();
    }
}
 
Developer: Sixt, Project: ja-micro, Lines: 32, Source: TopicVerification.java
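A usage sketch for this method; the class instantiation, broker address, and topic names are assumptions:

// Hypothetical usage; broker address and topic names are placeholders.
TopicVerification verification = new TopicVerification();
Set<String> required = new HashSet<>(Arrays.asList("orders", "payments"));
boolean ready = verification.verifyTopicsExist("localhost:9092", required, true);
if (!ready) {
    System.err.println("Required topics are missing or their partition counts differ");
}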


Note: The org.apache.kafka.clients.consumer.KafkaConsumer class examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by many developers; copyright in the source code remains with the original authors. Please consult the corresponding project's license before distributing or using the code. Do not repost without permission.