

Java ConsumerRecords.isEmpty Method Code Examples

This article collects typical code examples of the Java method org.apache.kafka.clients.consumer.ConsumerRecords.isEmpty. If you are wondering what ConsumerRecords.isEmpty does and how to use it, the curated examples below should help. You can also explore further usage examples of the enclosing class, org.apache.kafka.clients.consumer.ConsumerRecords.


The following presents 14 code examples of the ConsumerRecords.isEmpty method, sorted by popularity by default.
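
Before diving into the examples, here is a minimal sketch of the pattern almost all of them share: poll the consumer, use isEmpty() to skip a batch when the poll returned nothing, and otherwise process the records. The broker address, group id, and topic name below are placeholder assumptions for illustration, not taken from any example on this page.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class IsEmptyExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "isempty-demo");            // hypothetical group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000L);
                // isEmpty() lets the loop skip all per-batch work when the poll returned no records
                if (records.isEmpty()) {
                    continue;
                }
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                        record.offset(), record.key(), record.value());
                }
            }
        }
    }
}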

Example 1: pollChangeSet

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
@Override
public ChangeSet pollChangeSet() throws BiremeException {
  ConsumerRecords<String, String> records = consumer.poll(POLL_TIMEOUT);

  if (cxt.stop || records.isEmpty()) {
    return null;
  }

  KafkaCommitCallback callback = new KafkaCommitCallback();

  if (!commitCallbacks.offer(callback)) {
    String message = "Can't add CommitCallback to queue.";
    throw new BiremeException(message);
  }

  stat.recordCount.mark(records.count());

  return packRecords(records, callback);
}
 
Developer: HashDataInc, Project: bireme, Lines: 20, Source: KafkaPipeLine.java

Example 2: consumeAllRecordsFromTopic

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
/**
 * This will consume all records from only the partitions given.
 * @param topic Topic to consume from.
 * @param partitionIds Collection of PartitionIds to consume.
 * @return List of ConsumerRecords consumed.
 */
public List<ConsumerRecord<byte[], byte[]>> consumeAllRecordsFromTopic(final String topic, Collection<Integer> partitionIds) {
    // Create topic Partitions
    List<TopicPartition> topicPartitions = new ArrayList<>();
    for (Integer partitionId: partitionIds) {
        topicPartitions.add(new TopicPartition(topic, partitionId));
    }

    // Connect Consumer
    KafkaConsumer<byte[], byte[]> kafkaConsumer =
        kafkaTestServer.getKafkaConsumer(ByteArrayDeserializer.class, ByteArrayDeserializer.class);

    // Assign topic partitions & seek to head of them
    kafkaConsumer.assign(topicPartitions);
    kafkaConsumer.seekToBeginning(topicPartitions);

    // Pull records from kafka, keep polling until we get nothing back
    final List<ConsumerRecord<byte[], byte[]>> allRecords = new ArrayList<>();
    ConsumerRecords<byte[], byte[]> records;
    do {
        // Grab records from kafka
        records = kafkaConsumer.poll(2000L);
        logger.info("Found {} records in kafka", records.count());

        // Add to our array list
        records.forEach(allRecords::add);

    }
    while (!records.isEmpty());

    // close consumer
    kafkaConsumer.close();

    // return all records
    return allRecords;
}
 
Developer: salesforce, Project: kafka-junit, Lines: 42, Source: KafkaTestUtils.java
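
For context, a hypothetical call site for this helper might look like the following; the kafkaTestUtils instance, topic name, partition ids, and logger are assumptions for illustration, not taken from the project above.

// Hypothetical usage: consume everything written to partitions 0 and 1 of a test topic.
final List<ConsumerRecord<byte[], byte[]>> consumed =
    kafkaTestUtils.consumeAllRecordsFromTopic("test-topic", Arrays.asList(0, 1));
logger.info("Consumed {} records total", consumed.size());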

Example 3: pollCommunicateOnce

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
private void pollCommunicateOnce(Consumer<ByteBuffer, ByteBuffer> consumer) {
    ConsumerRecords<ByteBuffer, ByteBuffer> records = consumer.poll(POLL_TIMEOUT);

    if (records.isEmpty()) {
        if (!stalled && checkStalled(consumer)) {
            LOGGER.info("[I] Loader stalled {} / {}", f(leadId), f(localLoaderId));
            stalled = true;
            lead.notifyLocalLoaderStalled(leadId, localLoaderId);
        }
        // TODO: Consider sending empty messages for heartbeat's sake.
        return;
    }
    if (stalled) {
        stalled = false;
    }
    MutableLongList committedIds = new LongArrayList(records.count());

    for (ConsumerRecord<ByteBuffer, ByteBuffer> record : records) {
        committedIds.add(record.timestamp());
    }
    committedIds.sortThis();
    lead.updateInitialContext(localLoaderId, committedIds);
    consumer.commitSync();
}
 
Developer: epam, Project: Lagerta, Lines: 25, Source: LocalLeadContextLoader.java

Example 4: keepPolling

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
private void keepPolling() throws InterruptedException {
    // keep on polling until shutdown for this thread is called.
    while (!shutdown) {
        ConsumerRecords<K, V> records = consumer.poll(pollingTime);

        // if polling gave no tasks, then sleep this thread for n seconds.
        if (records.isEmpty()) {
            log.debug("NO RECORDS fetched from queue. Putting current THREAD to SLEEP.");
            Thread.sleep(sleepTime);
            continue;
        }

        log.info("Processing a batch of records.");
        if (!processor.process(records)) {
            log.error("ERROR occurred while PROCESSING RECORDS.");
        }
    }
}
 
Developer: dixantmittal, Project: scalable-task-scheduler, Lines: 19, Source: Consumer.java

Example 5: retrieveOneMessage

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
private static ConsumerRecord<byte[], byte[]> retrieveOneMessage(KafkaConsumer<byte[], byte[]> kafkaConsumer,
                                                                 TopicPartition topicPartition,
                                                                 long offset) {
  kafkaConsumer.seek(topicPartition, offset);
  ConsumerRecords<byte[], byte[]> records;
  ConsumerRecord<byte[], byte[]> record = null;
  while (record == null) {
    records = kafkaConsumer.poll(100);
    if (!records.isEmpty()) {
      LOG.debug("records.count() = {}", records.count());
      List<ConsumerRecord<byte[], byte[]>> reclist = records.records(topicPartition);
      if (reclist != null && !reclist.isEmpty()) {
        record = reclist.get(0);
        break;
      } else {
        LOG.info("recList is null or empty");
      }
    }
  }
  return record;
}
 
Developer: pinterest, Project: doctorkafka, Lines: 22, Source: ReplicaStatsManager.java

Example 6: runLoop

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
/**
 * Main event loop for polling, and processing records through topologies.
 */
private void runLoop() {
    long recordsProcessedBeforeCommit = UNLIMITED_RECORDS;
    consumer.subscribe(sourceTopicPattern, rebalanceListener);

    while (stillRunning()) {
        timerStartedMs = time.milliseconds();

        // try to fetch some records if necessary
        final ConsumerRecords<byte[], byte[]> records = pollRequests();
        if (records != null && !records.isEmpty() && !activeTasks.isEmpty()) {
            streamsMetrics.pollTimeSensor.record(computeLatency(), timerStartedMs);
            addRecordsToTasks(records);
            final long totalProcessed = processAndPunctuate(activeTasks, recordsProcessedBeforeCommit);
            if (totalProcessed > 0) {
                final long processLatency = computeLatency();
                streamsMetrics.processTimeSensor.record(processLatency / (double) totalProcessed,
                    timerStartedMs);
                recordsProcessedBeforeCommit = adjustRecordsProcessedBeforeCommit(recordsProcessedBeforeCommit, totalProcessed,
                    processLatency, commitTimeMs);
            }
        }

        maybeCommit(timerStartedMs);
        maybeUpdateStandbyTasks();
        maybeClean(timerStartedMs);
    }
    log.info("{} Shutting down at user request", logPrefix);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 32, Source: StreamThread.java

Example 7: addRecordsToTasks

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
/**
 * Take records and add them to each respective task
 * @param records Records, can be null
 */
private void addRecordsToTasks(final ConsumerRecords<byte[], byte[]> records) {
    if (records != null && !records.isEmpty()) {
        int numAddedRecords = 0;

        for (final TopicPartition partition : records.partitions()) {
            final StreamTask task = activeTasksByPartition.get(partition);
            numAddedRecords += task.addRecords(partition, records.records(partition));
        }
        streamsMetrics.skippedRecordsSensor.record(records.count() - numAddedRecords, timerStartedMs);
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 16, Source: StreamThread.java

Example 8: run

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
public void run() {
  while (true) {
    ConsumerRecords<String, String> records = kafkaConsumer.poll(60000);
    if (records != null && !records.isEmpty()) {
      log.info("records size:{}", records.count());

      boolean success = consume(records);
      if (success) {
        log.info("now commit offset");
        kafkaConsumer.commitSync();
      }
    }
  }
}
 
Developer: osswangxining, Project: iotplatform, Lines: 15, Source: BaseKafkaMsgReceiver.java

Example 9: nextTuple

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
@Override
public void nextTuple() {
    if (!reloadSpout()) return;  // check whether the cache was reloaded; if so, return immediately
    if (flowLimitation()) return; // if read traffic is too heavy, sleep for a while
    // read messages from Kafka
    ConsumerRecords<String, byte[]> records = consumer.getMessages();

    if (records.isEmpty()) {
        bubble();
        return;
    }

    //TODO if the message is an event, wait until the data in the message queue is fully processed before handling it;
    //TODO since MySQL messages reveal whether they are schema-change events only after full parsing, this is not implemented yet
    /*Set<String> set = new HashSet<>();
    // if the message is an event, wait until the data in the message queue is processed before handling it
    Iterator<ConsumerRecord<String, byte[]>> it = records.iterator();
    List<ConsumerRecord<String, byte[]>> list = new ArrayList<>();
    while (it.hasNext()) {
        ConsumerRecord<String, byte[]> record = it.next();

        // messages whose topic+partition is recorded in the set will not be processed this round
        if (set.contains(record.topic() + record.partition())) {
            continue;
        }

        // check whether this is an event / whether the queue still holds pending messages (per topic and partition)
        if (isEvent(record) && !msgQueueMgr.isAllMessageProcessed(record)) {
            // if the queue still has pending messages, the event can only be handled after all of
            // them are processed; waiting is implemented by seeking back to the record's position
            consumer.seek(record);
            logger.info("received an event[{}], seek to [topic:{},partition:{},offset:{}] to " +
                    "wait until messages in the queue are processed.", record.key(), record.topic(), record.partition(), record.offset());

            // record the topic and partition the event came from; messages after this record in
            // the batch (with the same topic and partition) will not be handled by
            // messageHandler.handleMessages(records)
            set.add(record.topic() + record.partition());
        } else {
            list.add(record);
        }

    }
    if (!list.isEmpty()) {
        messageHandler.handleMessages(list);
    }*/

    messageHandler.handleMessages(records);
}
 
Developer: BriData, Project: DBus, Lines: 50, Source: DbusKafkaSpout.java

Example 10: testProducerAndConsumer

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
/**
 * Test that KafkaServer works as expected!
 *
 * This also serves as a decent example of how to use the producer and consumer.
 */
@Test
public void testProducerAndConsumer() throws Exception {
    final int partitionId = 0;

    // Define our message
    final String expectedKey = "my-key";
    final String expectedValue = "my test message";

    // Define the record we want to produce
    ProducerRecord<String, String> producerRecord = new ProducerRecord<>(topicName, partitionId, expectedKey, expectedValue);

    // Create a new producer
    KafkaProducer<String, String> producer = getKafkaTestServer().getKafkaProducer(StringSerializer.class, StringSerializer.class);

    // Produce it & wait for it to complete.
    Future<RecordMetadata> future = producer.send(producerRecord);
    producer.flush();
    while (!future.isDone()) {
        Thread.sleep(500L);
    }
    logger.info("Produce completed");

    // Close producer!
    producer.close();

    KafkaConsumer<String, String> kafkaConsumer =
        getKafkaTestServer().getKafkaConsumer(StringDeserializer.class, StringDeserializer.class);

    final List<TopicPartition> topicPartitionList = Lists.newArrayList();
    for (final PartitionInfo partitionInfo: kafkaConsumer.partitionsFor(topicName)) {
        topicPartitionList.add(new TopicPartition(partitionInfo.topic(), partitionInfo.partition()));
    }
    kafkaConsumer.assign(topicPartitionList);
    kafkaConsumer.seekToBeginning(topicPartitionList);

    // Pull records from kafka, keep polling until we get nothing back
    ConsumerRecords<String, String> records;
    do {
        records = kafkaConsumer.poll(2000L);
        logger.info("Found {} records in kafka", records.count());
        for (ConsumerRecord<String, String> record: records) {
            // Validate
            assertEquals("Key matches expected", expectedKey, record.key());
            assertEquals("value matches expected", expectedValue, record.value());
        }
    }
    while (!records.isEmpty());

    // close consumer
    kafkaConsumer.close();
}
 
Developer: salesforce, Project: kafka-junit, Lines: 57, Source: KafkaTestServerTest.java

Example 11: run

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
@Override
public void run() {
    // Rename thread.
    Thread.currentThread().setName("WebSocket Consumer: " + clientConfig.getConsumerId());
    logger.info("Starting socket consumer for {}", clientConfig.getConsumerId());

    // Determine where to start from.
    initializeStartingPosition(clientConfig.getStartingPosition());

    do {
        // Start trying to consume messages from kafka
        final ConsumerRecords consumerRecords = kafkaConsumer.poll(POLL_TIMEOUT_MS);

        // If no records found
        if (consumerRecords.isEmpty()) {
            // Sleep for a bit
            sleep(POLL_TIMEOUT_MS);

            // Skip to next iteration of loop.
            continue;
        }

        // Push messages onto output queue
        for (final ConsumerRecord consumerRecord : (Iterable<ConsumerRecord>) consumerRecords) {
            // Translate record
            final KafkaResult kafkaResult = new KafkaResult(
                consumerRecord.partition(),
                consumerRecord.offset(),
                consumerRecord.timestamp(),
                consumerRecord.key(),
                consumerRecord.value()
            );

            // Add to the queue. This operation may block, effectively preventing the consumer
            // from consuming without bound.
            try {
                outputQueue.put(kafkaResult);
            } catch (final InterruptedException interruptedException) {
                // InterruptedException means we should shut down.
                requestStop();
            }
        }

        // Sleep for a bit
        sleep(DWELL_TIME_MS);
    }
    while (!requestStop);

    // Stop requested: close the consumer.
    kafkaConsumer.close();

    logger.info("Shutdown consumer {}", clientConfig.getConsumerId());
}
 
Developer: SourceLabOrg, Project: kafka-webview, Lines: 54, Source: SocketKafkaConsumer.java

Example 12: consume

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
public void consume(String topic) throws Exception {
    if (maybeSetupPhase(topic, "simple-benchmark-consumer-load", true)) {
        return;
    }

    Properties props = setProduceConsumeProperties("simple-benchmark-consumer");

    KafkaConsumer<Integer, byte[]> consumer = new KafkaConsumer<>(props);

    List<TopicPartition> partitions = getAllPartitions(consumer, topic);
    consumer.assign(partitions);
    consumer.seekToBeginning(partitions);

    Integer key = null;

    long startTime = System.currentTimeMillis();

    while (true) {
        ConsumerRecords<Integer, byte[]> records = consumer.poll(POLL_MS);
        if (records.isEmpty()) {
            if (processedRecords == numRecords)
                break;
        } else {
            for (ConsumerRecord<Integer, byte[]> record : records) {
                processedRecords++;
                processedBytes += record.value().length + Integer.SIZE;
                Integer recKey = record.key();
                if (key == null || key < recKey)
                    key = recKey;
                if (processedRecords == numRecords)
                    break;
            }
        }
        if (processedRecords == numRecords)
            break;
    }

    long endTime = System.currentTimeMillis();

    consumer.close();
    printResults("Consumer Performance [records/latency/rec-sec/MB-sec read]: ", endTime - startTime);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 43, Source: SimpleBenchmark.java

Example 13: run

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
public void run() {

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config());

        consumer.subscribe(Arrays.asList("articles"), new OffsetBeginningRebalanceListener(consumer, "articles"));

        JsonParser parser = new JsonParser();

        try {

            System.out.println("Starting Listener!");

            while (true) {

                ConsumerRecords<String, String> records = consumer.poll(1000);

                if (records.isEmpty())
                    continue;

                for (ConsumerRecord<String, String> cr : records) {

                    JsonObject json = parser.parse(cr.value()).getAsJsonObject();

                    String action = json.getAsJsonPrimitive("action").getAsString();

                    JsonObject object = json.getAsJsonObject("object");

                    Article article = gson.fromJson(object, Article.class);

                    switch (action) {
                        case "update":
                        case "create":
                            article.setId(cr.key());
                            store.save(article);
                            break;
                        case "delete":
                            store.delete(cr.key());
                            break;

                    }


                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            consumer.close();
        }
    }
 
Developer: predic8, Project: eventsourcing-kafka-sample, Lines: 51, Source: KafkaListenerRunner.java

Example 14: run

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
public void run() {

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config());

        consumer.subscribe(Arrays.asList("articles"), new OffsetBeginningRebalanceListener(consumer, "articles"));

        JsonParser parser = new JsonParser();

        try {

            while (true) {

                ConsumerRecords<String, String> records = consumer.poll(1000);

                if (records.isEmpty())
                    continue;

                for (ConsumerRecord<String, String> cr : records) {

                    //  @Consumer(topic="articles")

                    JsonObject json = parser.parse(cr.value()).getAsJsonObject();

                    String action = json.getAsJsonPrimitive("action").getAsString();

                    JsonObject object = json.getAsJsonObject("object");

                    System.out.println("----------------------------------------------------------------------------------");
                    System.out.println("Offset: " + cr.offset());
                    System.out.println("Key: "+ cr.key());
                    System.out.println("Action: " + action);
                    System.out.println("Object: " + object);

                    Article article = gson.fromJson(object, Article.class);

                    switch (action) {
                        case "update":
                        case "create":
                            article.setId(cr.key());
                            store.save(article);
                            break;
                        case "delete":
                            store.delete(cr.key());
                            break;

                    }


                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            consumer.close();
        }
    }
 
开发者ID:predic8,项目名称:eventsourcing-kafka-sample,代码行数:62,代码来源:KafkaListenerRunner.java


Note: the org.apache.kafka.clients.consumer.ConsumerRecords.isEmpty method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. For distribution and use, refer to the corresponding project's License. Do not reproduce without permission.