

Java ConsumerRecords.isEmpty Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.clients.consumer.ConsumerRecords.isEmpty. If you are wrestling with questions such as: What exactly does ConsumerRecords.isEmpty do? How do I use it? Where can I find examples? — then the curated code examples below may help. You can also explore further usage examples of the enclosing class, org.apache.kafka.clients.consumer.ConsumerRecords.


Below are 14 code examples of the ConsumerRecords.isEmpty method, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Java code examples.
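Before working through the examples, here is a minimal sketch (not taken from any of the projects below) of the pattern nearly all of them share: poll the consumer, use isEmpty() to skip the iteration when the poll returned nothing, and otherwise process the batch. The bootstrap server, group id, and topic name are illustrative assumptions; poll(Duration) is used as in Kafka clients 2.0 and later, while most examples below use the older poll(long) overload.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class IsEmptyExample {
    public static void main(String[] args) {
        // Illustrative configuration; adjust the servers, group id, and topic for your cluster.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                // isEmpty() cheaply detects an empty poll result before iterating or committing.
                if (records.isEmpty()) {
                    continue;
                }
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                        record.offset(), record.key(), record.value());
                }
            }
        }
    }
}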

Example 1: pollChangeSet

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
@Override
public ChangeSet pollChangeSet() throws BiremeException {
  ConsumerRecords<String, String> records = consumer.poll(POLL_TIMEOUT);

  if (cxt.stop || records.isEmpty()) {
    return null;
  }

  KafkaCommitCallback callback = new KafkaCommitCallback();

  if (!commitCallbacks.offer(callback)) {
    String message = "Can't add CommitCallback to queue.";
    throw new BiremeException(message);
  }

  stat.recordCount.mark(records.count());

  return packRecords(records, callback);
}
 
Developer: HashDataInc, Project: bireme, Lines: 20, Source: KafkaPipeLine.java

Example 2: consumeAllRecordsFromTopic

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
/**
 * This will consume all records from only the partitions given.
 * @param topic Topic to consume from.
 * @param partitionIds Collection of PartitionIds to consume.
 * @return List of ConsumerRecords consumed.
 */
public List<ConsumerRecord<byte[], byte[]>> consumeAllRecordsFromTopic(final String topic, Collection<Integer> partitionIds) {
    // Create topic Partitions
    List<TopicPartition> topicPartitions = new ArrayList<>();
    for (Integer partitionId: partitionIds) {
        topicPartitions.add(new TopicPartition(topic, partitionId));
    }

    // Connect Consumer
    KafkaConsumer<byte[], byte[]> kafkaConsumer =
        kafkaTestServer.getKafkaConsumer(ByteArrayDeserializer.class, ByteArrayDeserializer.class);

    // Assign topic partitions & seek to head of them
    kafkaConsumer.assign(topicPartitions);
    kafkaConsumer.seekToBeginning(topicPartitions);

    // Pull records from kafka, keep polling until we get nothing back
    final List<ConsumerRecord<byte[], byte[]>> allRecords = new ArrayList<>();
    ConsumerRecords<byte[], byte[]> records;
    do {
        // Grab records from kafka
        records = kafkaConsumer.poll(2000L);
        logger.info("Found {} records in kafka", records.count());

        // Add to our array list
        records.forEach(allRecords::add);

    }
    while (!records.isEmpty());

    // close consumer
    kafkaConsumer.close();

    // return all records
    return allRecords;
}
 
Developer: salesforce, Project: kafka-junit, Lines: 42, Source: KafkaTestUtils.java

Example 3: pollCommunicateOnce

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
private void pollCommunicateOnce(Consumer<ByteBuffer, ByteBuffer> consumer) {
    ConsumerRecords<ByteBuffer, ByteBuffer> records = consumer.poll(POLL_TIMEOUT);

    if (records.isEmpty()) {
        if (!stalled && checkStalled(consumer)) {
            LOGGER.info("[I] Loader stalled {} / {}", f(leadId), f(localLoaderId));
            stalled = true;
            lead.notifyLocalLoaderStalled(leadId, localLoaderId);
        }
        // ToDo: Consider sending empty messages for heartbeat sake.
        return;
    }
    if (stalled) {
        stalled = false;
    }
    MutableLongList committedIds = new LongArrayList(records.count());

    for (ConsumerRecord<ByteBuffer, ByteBuffer> record : records) {
        committedIds.add(record.timestamp());
    }
    committedIds.sortThis();
    lead.updateInitialContext(localLoaderId, committedIds);
    consumer.commitSync();
}
 
Developer: epam, Project: Lagerta, Lines: 25, Source: LocalLeadContextLoader.java

Example 4: keepPolling

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
private void keepPolling() throws InterruptedException {
    // keep on polling until shutdown for this thread is called.
    while (!shutdown) {
        ConsumerRecords<K, V> records = consumer.poll(pollingTime);

        // if polling gave no tasks, then sleep this thread for n seconds.
        if (records.isEmpty()) {
            log.debug("NO RECORDS fetched from queue. Putting current THREAD to SLEEP.");
            Thread.sleep(sleepTime);
            continue;
        }

        log.info("Processing a batch of records.");
        if (!processor.process(records)) {
            log.error("ERROR occurred while PROCESSING RECORDS.");
        }
    }
}
 
Developer: dixantmittal, Project: scalable-task-scheduler, Lines: 19, Source: Consumer.java

Example 5: retrieveOneMessage

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
private static ConsumerRecord<byte[], byte[]> retrieveOneMessage(KafkaConsumer kafkaConsumer,
                                                                 TopicPartition topicPartition,
                                                                 long offset) {
  kafkaConsumer.seek(topicPartition, offset);
  ConsumerRecords<byte[], byte[]> records;
  ConsumerRecord<byte[], byte[]> record = null;
  while (record == null) {
    records = kafkaConsumer.poll(100);
    if (!records.isEmpty()) {
      LOG.debug("records.count() = {}", records.count());
      List<ConsumerRecord<byte[], byte[]>> reclist = records.records(topicPartition);
      if (reclist != null && !reclist.isEmpty()) {
        record = reclist.get(0);
        break;
      } else {
        LOG.info("recList is null or empty");
      }
    }
  }
  return record;
}
 
Developer: pinterest, Project: doctorkafka, Lines: 22, Source: ReplicaStatsManager.java

Example 6: runLoop

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
/**
 * Main event loop for polling, and processing records through topologies.
 */
private void runLoop() {
    long recordsProcessedBeforeCommit = UNLIMITED_RECORDS;
    consumer.subscribe(sourceTopicPattern, rebalanceListener);

    while (stillRunning()) {
        timerStartedMs = time.milliseconds();

        // try to fetch some records if necessary
        final ConsumerRecords<byte[], byte[]> records = pollRequests();
        if (records != null && !records.isEmpty() && !activeTasks.isEmpty()) {
            streamsMetrics.pollTimeSensor.record(computeLatency(), timerStartedMs);
            addRecordsToTasks(records);
            final long totalProcessed = processAndPunctuate(activeTasks, recordsProcessedBeforeCommit);
            if (totalProcessed > 0) {
                final long processLatency = computeLatency();
                streamsMetrics.processTimeSensor.record(processLatency / (double) totalProcessed,
                    timerStartedMs);
                recordsProcessedBeforeCommit = adjustRecordsProcessedBeforeCommit(recordsProcessedBeforeCommit, totalProcessed,
                    processLatency, commitTimeMs);
            }
        }

        maybeCommit(timerStartedMs);
        maybeUpdateStandbyTasks();
        maybeClean(timerStartedMs);
    }
    log.info("{} Shutting down at user request", logPrefix);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 32, Source: StreamThread.java

Example 7: addRecordsToTasks

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
/**
 * Take records and add them to each respective task
 * @param records Records, can be null
 */
private void addRecordsToTasks(final ConsumerRecords<byte[], byte[]> records) {
    if (records != null && !records.isEmpty()) {
        int numAddedRecords = 0;

        for (final TopicPartition partition : records.partitions()) {
            final StreamTask task = activeTasksByPartition.get(partition);
            numAddedRecords += task.addRecords(partition, records.records(partition));
        }
        streamsMetrics.skippedRecordsSensor.record(records.count() - numAddedRecords, timerStartedMs);
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 16, Source: StreamThread.java

Example 8: run

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
public void run() {
  while (true) {
    ConsumerRecords<String, String> records = kafkaConsumer.poll(60000);
    if (records != null && !records.isEmpty()) {
      log.info("records size:{}", records.count());

      boolean success = consume(records);
      if (success) {
        log.info("now commit offset");
        kafkaConsumer.commitSync();
      }
    }
  }
}
 
Developer: osswangxining, Project: iotplatform, Lines: 15, Source: BaseKafkaMsgReceiver.java

Example 9: nextTuple

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
@Override
public void nextTuple() {
    if (!reloadSpout()) return;  // check whether the cache was reloaded; if so, return immediately
    if (flowLimitation()) return; // if read traffic is too heavy, sleep for a while
    // read messages from Kafka
    ConsumerRecords<String, byte[]> records = consumer.getMessages();

    if (records.isEmpty()) {
        bubble();
        return;
    }

    //TODO determine whether the message is an event; if so, wait until the data in the message queue is fully processed before handling it.
    //TODO Since MySQL can only tell whether a message is a table-schema-change event after fully parsing it, this is not implemented for now.
    /*Set<String> set = new HashSet<>();
    // determine whether the message is an event; if so, wait until the queued data is processed before handling it
    Iterator<ConsumerRecord<String, byte[]>> it = records.iterator();
    List<ConsumerRecord<String, byte[]>> list = new ArrayList<>();
    while (it.hasNext()) {
        ConsumerRecord<String, byte[]> record = it.next();

        // messages whose topic+partition is recorded in the set will not be processed this round
        if (set.contains(record.topic() + record.partition())) {
            continue;
        }

        // check whether it is an event / whether the queue still holds pending messages (per topic and partition)
        if (isEvent(record) && !msgQueueMgr.isAllMessageProcessed(record)) {
            // if the queue still has pending messages, the event can only be handled after all of them are processed;
            // the wait is implemented by seeking back to the record's position
            consumer.seek(record);
            logger.info("received an event[{}], seek to [topic:{},partition:{},offset:{}] to " +
                    "waiting until messages processed in the queue.", record.key(), record.topic(), record.partition(), record.offset());

            // record the topic and partition where the event occurred;
            // messages after this record in the records list (with the same topic and partition)
            // will not be handled by messageHandler.handleMessages(records)
            set.add(record.topic() + record.partition());
        } else {
            list.add(record);
        }

    }
    if (!list.isEmpty()) {
        messageHandler.handleMessages(list);
    }*/

    messageHandler.handleMessages(records);
}
 
Developer: BriData, Project: DBus, Lines: 50, Source: DbusKafkaSpout.java

Example 10: testProducerAndConsumer

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
/**
 * Test that KafkaServer works as expected!
 *
 * This also serves as a decent example of how to use the producer and consumer.
 */
@Test
public void testProducerAndConsumer() throws Exception {
    final int partitionId = 0;

    // Define our message
    final String expectedKey = "my-key";
    final String expectedValue = "my test message";

    // Define the record we want to produce
    ProducerRecord<String, String> producerRecord = new ProducerRecord<>(topicName, partitionId, expectedKey, expectedValue);

    // Create a new producer
    KafkaProducer<String, String> producer = getKafkaTestServer().getKafkaProducer(StringSerializer.class, StringSerializer.class);

    // Produce it & wait for it to complete.
    Future<RecordMetadata> future = producer.send(producerRecord);
    producer.flush();
    while (!future.isDone()) {
        Thread.sleep(500L);
    }
    logger.info("Produce completed");

    // Close producer!
    producer.close();

    KafkaConsumer<String, String> kafkaConsumer =
        getKafkaTestServer().getKafkaConsumer(StringDeserializer.class, StringDeserializer.class);

    final List<TopicPartition> topicPartitionList = Lists.newArrayList();
    for (final PartitionInfo partitionInfo: kafkaConsumer.partitionsFor(topicName)) {
        topicPartitionList.add(new TopicPartition(partitionInfo.topic(), partitionInfo.partition()));
    }
    kafkaConsumer.assign(topicPartitionList);
    kafkaConsumer.seekToBeginning(topicPartitionList);

    // Pull records from kafka, keep polling until we get nothing back
    ConsumerRecords<String, String> records;
    do {
        records = kafkaConsumer.poll(2000L);
        logger.info("Found {} records in kafka", records.count());
        for (ConsumerRecord<String, String> record: records) {
            // Validate
            assertEquals("Key matches expected", expectedKey, record.key());
            assertEquals("value matches expected", expectedValue, record.value());
        }
    }
    while (!records.isEmpty());

    // close consumer
    kafkaConsumer.close();
}
 
Developer: salesforce, Project: kafka-junit, Lines: 57, Source: KafkaTestServerTest.java

Example 11: run

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
@Override
public void run() {
    // Rename thread.
    Thread.currentThread().setName("WebSocket Consumer: " + clientConfig.getConsumerId());
    logger.info("Starting socket consumer for {}", clientConfig.getConsumerId());

    // Determine where to start from.
    initializeStartingPosition(clientConfig.getStartingPosition());

    do {
        // Start trying to consume messages from kafka
        final ConsumerRecords consumerRecords = kafkaConsumer.poll(POLL_TIMEOUT_MS);

        // If no records found
        if (consumerRecords.isEmpty()) {
            // Sleep for a bit
            sleep(POLL_TIMEOUT_MS);

            // Skip to next iteration of loop.
            continue;
        }

        // Push messages onto output queue
        for (final ConsumerRecord consumerRecord : (Iterable<ConsumerRecord>) consumerRecords) {
            // Translate record
            final KafkaResult kafkaResult = new KafkaResult(
                consumerRecord.partition(),
                consumerRecord.offset(),
                consumerRecord.timestamp(),
                consumerRecord.key(),
                consumerRecord.value()
            );

            // Add to the queue, this operation may block, effectively preventing the consumer from
            // consuming unbounded-ly.
            try {
                outputQueue.put(kafkaResult);
            } catch (final InterruptedException interruptedException) {
                // InterruptedException means we should shut down.
                requestStop();
            }
        }

        // Sleep for a bit
        sleep(DWELL_TIME_MS);
    }
    while (!requestStop);

    // requestStop
    kafkaConsumer.close();

    logger.info("Shutdown consumer {}", clientConfig.getConsumerId());
}
 
Developer: SourceLabOrg, Project: kafka-webview, Lines: 54, Source: SocketKafkaConsumer.java

Example 12: consume

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
public void consume(String topic) throws Exception {
    if (maybeSetupPhase(topic, "simple-benchmark-consumer-load", true)) {
        return;
    }

    Properties props = setProduceConsumeProperties("simple-benchmark-consumer");

    KafkaConsumer<Integer, byte[]> consumer = new KafkaConsumer<>(props);

    List<TopicPartition> partitions = getAllPartitions(consumer, topic);
    consumer.assign(partitions);
    consumer.seekToBeginning(partitions);

    Integer key = null;

    long startTime = System.currentTimeMillis();

    while (true) {
        ConsumerRecords<Integer, byte[]> records = consumer.poll(POLL_MS);
        if (records.isEmpty()) {
            if (processedRecords == numRecords)
                break;
        } else {
            for (ConsumerRecord<Integer, byte[]> record : records) {
                processedRecords++;
                processedBytes += record.value().length + Integer.SIZE;
                Integer recKey = record.key();
                if (key == null || key < recKey)
                    key = recKey;
                if (processedRecords == numRecords)
                    break;
            }
        }
        if (processedRecords == numRecords)
            break;
    }

    long endTime = System.currentTimeMillis();

    consumer.close();
    printResults("Consumer Performance [records/latency/rec-sec/MB-sec read]: ", endTime - startTime);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 43, Source: SimpleBenchmark.java

Example 13: run

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
public void run() {

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config());

        consumer.subscribe(Arrays.asList("articles"), new OffsetBeginningRebalanceListener(consumer, "articles"));

        JsonParser parser = new JsonParser();

        try {

            System.out.println("Starting Listener!");

            while (true) {

                ConsumerRecords<String, String> records = consumer.poll(1000);

                if (records.isEmpty())
                    continue;

                for (ConsumerRecord<String, String> cr : records) {

                    JsonObject json = parser.parse(cr.value()).getAsJsonObject();

                    String action = json.getAsJsonPrimitive("action").getAsString();

                    JsonObject object = json.getAsJsonObject("object");

                    Article article = gson.fromJson(object, Article.class);

                    switch (action) {
                        case "update":
                        case "create":
                            article.setId(cr.key());
                            store.save(article);
                            break;
                        case "delete":
                            store.delete(cr.key());
                            break;

                    }


                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            consumer.close();
        }
    }
 
Developer: predic8, Project: eventsourcing-kafka-sample, Lines: 51, Source: KafkaListenerRunner.java

Example 14: run

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
public void run() {

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config());

        consumer.subscribe(Arrays.asList("articles"), new OffsetBeginningRebalanceListener(consumer, "articles"));

        JsonParser parser = new JsonParser();

        try {

            while (true) {

                ConsumerRecords<String, String> records = consumer.poll(1000);

                if (records.isEmpty())
                    continue;



                for (ConsumerRecord<String, String> cr : records) {

                    //  @Consumer(topic="articles")
                    JsonObject json = parser.parse(cr.value()).getAsJsonObject();

                    String action = json.getAsJsonPrimitive("action").getAsString();

                    JsonObject object = json.getAsJsonObject("object");

                    System.out.println("----------------------------------------------------------------------------------");
                    System.out.println("Offset: " + cr.offset());
                    System.out.println("Key: "+ cr.key());
                    System.out.println("Action: " + action);
                    System.out.println("Object: " + object);

                    Article article = gson.fromJson(object, Article.class);

                    switch (action) {
                        case "update":
                        case "create":
                            article.setId(cr.key());
                            store.save(article);
                            break;
                        case "delete":
                            store.delete(cr.key());
                            break;

                    }


                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            consumer.close();
        }
    }
 
Developer: predic8, Project: eventsourcing-kafka-sample, Lines: 62, Source: KafkaListenerRunner.java


Note: The org.apache.kafka.clients.consumer.ConsumerRecords.isEmpty method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code belongs to the original authors. Please follow the corresponding project's license when distributing or using the code, and do not reproduce this article without permission.