This article collects typical usage examples of the Java method org.apache.kafka.clients.consumer.ConsumerRecords.forEach. If you are unsure what ConsumerRecords.forEach does, how to call it, or want to see it in context, the curated samples below may help. You can also read more about the enclosing class, org.apache.kafka.clients.consumer.ConsumerRecords.
Three code examples of ConsumerRecords.forEach are shown below, sorted by popularity by default.
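As background for the examples: ConsumerRecords implements Iterable&lt;ConsumerRecord&lt;K, V&gt;&gt;, so forEach visits every record returned by the last poll, across all assigned partitions. A minimal, broker-free sketch of that behavior, building a ConsumerRecords by hand through its public constructor (the topic name, keys, and values here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.TopicPartition;

public class ForEachDemo {
    public static void main(String[] args) {
        // Build a ConsumerRecords by hand, shaped like a poll() result:
        // a map from partition to the records fetched from it
        TopicPartition tp = new TopicPartition("demo-topic", 0);
        List<ConsumerRecord<String, String>> batch = Arrays.asList(
                new ConsumerRecord<>("demo-topic", 0, 0L, "k1", "v1"),
                new ConsumerRecord<>("demo-topic", 0, 1L, "k2", "v2"));
        ConsumerRecords<String, String> records =
                new ConsumerRecords<>(Collections.singletonMap(tp, batch));

        // forEach comes from Iterable and visits every record in the batch
        List<String> values = new ArrayList<>();
        records.forEach(record -> values.add(record.value()));
        System.out.println(values); // [v1, v2]
    }
}
```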
Example 1: consumeAllRecordsFromTopic
import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
/**
* This will consume all records from only the partitions given.
* @param topic Topic to consume from.
* @param partitionIds Collection of PartitionIds to consume.
* @return List of ConsumerRecords consumed.
*/
public List<ConsumerRecord<byte[], byte[]>> consumeAllRecordsFromTopic(final String topic, Collection<Integer> partitionIds) {
    // Create topic partitions
    List<TopicPartition> topicPartitions = new ArrayList<>();
    for (Integer partitionId : partitionIds) {
        topicPartitions.add(new TopicPartition(topic, partitionId));
    }

    // Connect consumer
    KafkaConsumer<byte[], byte[]> kafkaConsumer =
        kafkaTestServer.getKafkaConsumer(ByteArrayDeserializer.class, ByteArrayDeserializer.class);

    // Assign topic partitions & seek to the beginning of each
    kafkaConsumer.assign(topicPartitions);
    kafkaConsumer.seekToBeginning(topicPartitions);

    // Pull records from kafka; keep polling until we get nothing back
    final List<ConsumerRecord<byte[], byte[]>> allRecords = new ArrayList<>();
    ConsumerRecords<byte[], byte[]> records;
    do {
        // Grab records from kafka
        records = kafkaConsumer.poll(2000L);
        logger.info("Found {} records in kafka", records.count());

        // Add to our result list
        records.forEach(allRecords::add);
    } while (!records.isEmpty());

    // Close consumer
    kafkaConsumer.close();

    // Return all records
    return allRecords;
}
Example 2: enqueue
import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
void enqueue(ConsumerRecords<String, byte[]> records) {
    records.forEach((record) -> {
        TopicPartition partitionKey = new TopicPartition(record.topic(), record.partition());
        PartitionProcessor processor = processors.get(partitionKey);
        if (processor == null) {
            processor = assignNewPartition(partitionKey);
        }
        processor.enqueue(record);
    });
}
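The get-then-create null check in enqueue above can also be written with Map.computeIfAbsent, which creates the per-partition processor lazily on first use. A simplified, self-contained sketch of the same per-partition fan-out, where plain strings stand in for TopicPartition and record lists stand in for PartitionProcessor (neither class is reproduced here):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PartitionDispatchDemo {
    public static void main(String[] args) {
        // One queue per partition key, created lazily on first record
        Map<String, List<String>> processors = new ConcurrentHashMap<>();

        // (topic, partition, payload) triples standing in for ConsumerRecord
        String[][] incoming = {
                {"orders", "0", "r1"},
                {"orders", "0", "r2"},
                {"orders", "1", "r3"},
        };
        for (String[] record : incoming) {
            String partitionKey = record[0] + "-" + record[1];
            // computeIfAbsent replaces the explicit null check + assignNewPartition
            processors.computeIfAbsent(partitionKey, k -> new ArrayList<>())
                      .add(record[2]);
        }

        System.out.println(processors.get("orders-0")); // [r1, r2]
        System.out.println(processors.get("orders-1")); // [r3]
    }
}
```

On ConcurrentHashMap, computeIfAbsent is atomic, so two threads enqueueing records for the same new partition cannot both create a processor; the original get-then-assign version would need external synchronization to guarantee that.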
Example 3: run
import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class this method depends on
@Override
public void run() {
    LOGGER.debug("Kafka consumer started.");
    try {
        subscriptionService = ServiceLocator.findService(SubscriptionService.class);
    } catch (Exception e) {
        e.printStackTrace();
        return;
    }
    while (true) {
        try {
            final ConsumerRecords<UUID, Object> consumerRecords = consumer.poll(1000);
            consumerRecords.forEach(record -> {
                System.out.printf("Consumer Record:(%s, %s, %d, %d)\n",
                    record.key(), record.value(), record.partition(),
                    record.offset());
                processRecord(record);
            });
            consumer.commitAsync();
        } catch (Throwable t) {
            consumer.commitAsync();
            t.printStackTrace();
        }
    }
}