

Java ConsumerRecords.iterator Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.clients.consumer.ConsumerRecords.iterator. If you have been wondering what ConsumerRecords.iterator does, how to call it, or where to find working examples, the curated snippets below should help. You can also explore further usage examples of the enclosing class, org.apache.kafka.clients.consumer.ConsumerRecords.


Six code examples of the ConsumerRecords.iterator method are shown below, ordered by popularity.

Example 1: computeNext

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
@Override
protected KeyMessage<K,V> computeNext() {
  if (iterator == null || !iterator.hasNext()) {
    try {
      long timeout = MIN_POLL_MS;
      ConsumerRecords<K, V> records;

      while ((records = consumer.poll(timeout)).isEmpty()) {
        timeout = Math.min(MAX_POLL_MS, timeout * 2);
      }
      iterator = records.iterator();
    } catch (Exception e) {
      consumer.close();
      return endOfData();
    }
  }
  ConsumerRecord<K,V> mm = iterator.next();
  return new KeyMessageImpl<>(mm.key(), mm.value());
}
 
Developer: oncewang | Project: oryx2 | Lines: 20 | Source: ConsumeDataIterator.java
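The loop in computeNext doubles the poll timeout after every empty poll, capped at MAX_POLL_MS, so an idle consumer backs off instead of spinning. A minimal standalone sketch of that backoff; the MIN_POLL_MS and MAX_POLL_MS values here are assumed for illustration, not taken from the original class:

```java
// Exponential backoff for poll timeouts, as used in computeNext() above:
// start at a small timeout, double it after each empty poll, and clamp it
// to a cap. The constant values below are assumptions for this sketch.
public class PollBackoff {
    static final long MIN_POLL_MS = 2;     // assumed starting timeout
    static final long MAX_POLL_MS = 1000;  // assumed cap

    // Next timeout after an empty poll: doubled, but never above the cap.
    static long next(long timeout) {
        return Math.min(MAX_POLL_MS, timeout * 2);
    }
}
```

Doubling keeps the consumer responsive when messages arrive in bursts while avoiding a tight polling loop on an idle topic.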

Example 2: pollAndDispatchMessage

import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
private synchronized void pollAndDispatchMessage() throws InterruptedException {
	// Consumer settings must not be changed while records are being processed
	// Poll the records that need to be handled
	ConsumerRecords<String, byte[]> allRecords = consumer.poll(10000);

	// Wrap each message as a callable job and dispatch it for processing
	Iterator<ConsumerRecord<String, byte[]>> iterator = allRecords.iterator();
	List<MessageHandler> listJob = new LinkedList<>();
	while (iterator.hasNext()) {
		listJob.add(new MessageHandler(iterator.next()));
	}
	executeJobs(listJob);
	// All jobs succeeded; commit the consumed offsets
	consumer.commitAsync();
}
 
Developer: QNJR-GROUP | Project: EasyTransaction | Lines: 16 | Source: KafkaEasyTransMsgConsumerImpl.java
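Example 2 drains the record iterator into a list of jobs before executing any of them, and only commits offsets once the whole batch has been handled. That drain-then-dispatch pattern can be sketched without a Kafka dependency; the `"job:"` prefix below is a hypothetical stand-in for the real MessageHandler construction:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Drain an iterator into a batch of jobs before executing any of them,
// mirroring how pollAndDispatchMessage() wraps each ConsumerRecord in a
// MessageHandler. The "job:" prefix stands in for the handler wrapping.
public class BatchDispatch {
    static List<String> wrapAsJobs(Iterator<String> records) {
        List<String> jobs = new ArrayList<>();
        while (records.hasNext()) {
            jobs.add("job:" + records.next());
        }
        return jobs;
    }
}
```

Collecting every record first means the commit (commitAsync above) happens once per poll, after the entire batch has been dispatched, rather than per record.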

Example 3: consume

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
private List<KafkaResult> consume() {
    final List<KafkaResult> kafkaResultList = new ArrayList<>();
    final ConsumerRecords consumerRecords = kafkaConsumer.poll(clientConfig.getPollTimeoutMs());

    logger.info("Consumed {} records", consumerRecords.count());
    final Iterator<ConsumerRecord> recordIterator = consumerRecords.iterator();
    while (recordIterator.hasNext()) {
        // Get next record
        final ConsumerRecord consumerRecord = recordIterator.next();

        // Convert to KafkaResult.
        final KafkaResult kafkaResult = new KafkaResult(
            consumerRecord.partition(),
            consumerRecord.offset(),
            consumerRecord.timestamp(),
            consumerRecord.key(),
            consumerRecord.value()
        );

        // Add to list.
        kafkaResultList.add(kafkaResult);
    }

    // Commit offsets
    commit();
    return kafkaResultList;
}
 
Developer: SourceLabOrg | Project: kafka-webview | Lines: 28 | Source: WebKafkaConsumer.java

Example 4: readLine

import java.util.Iterator;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
@Override
public String readLine() {
	String line = "";
	ConsumerRecords<String, String> records = this.consumer.poll(1000);
	Iterator<ConsumerRecord<String, String>> iterator = records.iterator();
	if (iterator.hasNext()) {
		ConsumerRecord<String, String> record = iterator.next();
		logger.info(String.format("offset = %d, key = %s, value = %s\n", record.offset(), record.key(), record.value()));
		line = record.value();
	}
	return line;
}
 
Developer: netkiller | Project: ipo | Lines: 13 | Source: KafkaInput.java

Example 5: onConsume

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
import org.apache.kafka.common.TopicPartition;
@Override
public ConsumerRecords onConsume(final ConsumerRecords records) {

    final Map<TopicPartition, List<ConsumerRecord>> filteredRecords = new HashMap<>();

    // Iterate through the records
    final Iterator<ConsumerRecord> recordIterator = records.iterator();
    while (recordIterator.hasNext()) {
        final ConsumerRecord record = recordIterator.next();

        boolean result = true;

        // Iterate through filters
        for (final RecordFilterDefinition recordFilterDefinition : recordFilterDefinitions) {
            // Pass through filter
            result = recordFilterDefinition.getRecordFilter().includeRecord(
                record.topic(),
                record.partition(),
                record.offset(),
                record.key(),
                record.value()
            );

            // If the filter rejected the record
            if (!result) {
                // break out of loop
                break;
            }
        }

        // If every filter returned true
        if (result) {
            // Include it in the results
            final TopicPartition topicPartition = new TopicPartition(record.topic(), record.partition());
            filteredRecords.putIfAbsent(topicPartition, new ArrayList<>());
            filteredRecords.get(topicPartition).add(record);
        }
    }

    // return filtered results
    return new ConsumerRecords(filteredRecords);
}
 
Developer: SourceLabOrg | Project: kafka-webview | Lines: 43 | Source: RecordFilterInterceptor.java
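The filter loop in onConsume keeps a record only when every configured filter accepts it, short-circuiting at the first rejection so later filters never run for a rejected record. A self-contained sketch of that chain, with java.util.function.Predicate standing in for the RecordFilter interface:

```java
import java.util.List;
import java.util.function.Predicate;

// Short-circuit filter chain: a record is included only if every filter
// accepts it, and evaluation stops at the first rejection, as in
// RecordFilterInterceptor.onConsume above. Predicate<T> is a stand-in
// for the RecordFilter interface.
public class FilterChain {
    static <T> boolean include(List<Predicate<T>> filters, T record) {
        for (Predicate<T> f : filters) {
            if (!f.test(record)) {
                return false; // first rejection wins; remaining filters are skipped
            }
        }
        return true;
    }
}
```

The short-circuit is what the unit test in Example 6 verifies: mockFilter2 is invoked one time fewer than mockFilter1, because the record rejected by mockFilter1 never reaches it.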

Example 6: testFilterMessages

import org.apache.kafka.clients.consumer.ConsumerRecords; // import the package/class the method depends on
/**
 * Test that filters can filter messages.
 */
@Test
public void testFilterMessages() {
    final int totalRecords = 5;

    // Create mock Filters
    final RecordFilter mockFilter1 = mock(RecordFilter.class);
    final RecordFilter mockFilter2 = mock(RecordFilter.class);

    when(mockFilter1.includeRecord(eq("MyTopic"), eq(0), anyLong(), anyObject(), anyObject()))
        .thenReturn(true, false, true, true, true);
    when(mockFilter2.includeRecord(eq("MyTopic"), eq(0), anyLong(), anyObject(), anyObject()))
        .thenReturn(true, true, false, true);

    final RecordFilterDefinition recordFilterDefinition1 = new RecordFilterDefinition(mockFilter1, new HashMap<>());
    final RecordFilterDefinition recordFilterDefinition2 = new RecordFilterDefinition(mockFilter2, new HashMap<>());

    // Create ConsumerConfigs
    final Map<String, Object> consumerConfigs = new HashMap<>();
    consumerConfigs.put(RecordFilterInterceptor.CONFIG_KEY, Lists.newArrayList(recordFilterDefinition1, recordFilterDefinition2));

    // Create interceptor.
    final RecordFilterInterceptor interceptor = new RecordFilterInterceptor();

    // Call configure
    interceptor.configure(consumerConfigs);

    // Create ConsumerRecords
    final ConsumerRecords consumerRecords = createConsumerRecords(totalRecords);

    // Pass through interceptor
    final ConsumerRecords results = interceptor.onConsume(consumerRecords);

    // Validate we got the expected results
    assertEquals("Should have 3 records", totalRecords - 2, results.count());

    for (Iterator<ConsumerRecord> it = results.iterator(); it.hasNext(); ) {
        final ConsumerRecord consumerRecord = it.next();
        assertNotEquals("Should not have offsets 1 and 3", 1, consumerRecord.offset());
        assertNotEquals("Should not have offsets 1 and 3", 3, consumerRecord.offset());
    }

    // Verify mocks
    verify(mockFilter1, times(totalRecords))
        .includeRecord(eq("MyTopic"), eq(0), anyLong(), anyObject(), anyObject());
    verify(mockFilter2, times(totalRecords - 1))
        .includeRecord(eq("MyTopic"), eq(0), anyLong(), anyObject(), anyObject());
}
 
Developer: SourceLabOrg | Project: kafka-webview | Lines: 51 | Source: RecordFilterInterceptorTest.java


Note: The org.apache.kafka.clients.consumer.ConsumerRecords.iterator examples in this article were compiled by 纯净天空 from open-source code and documentation hosted on GitHub, MSDocs, and similar platforms. The snippets are selected from open-source projects, and copyright remains with the original authors; consult each project's license before using or redistributing the code. Do not reproduce without permission.