Java ConsumerRecord.timestamp Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.clients.consumer.ConsumerRecord.timestamp. If you are wondering what ConsumerRecord.timestamp does and how it is used in practice, the hand-picked code examples below may help. You can also explore further usage examples of the containing class, org.apache.kafka.clients.consumer.ConsumerRecord.


A total of 9 code examples of the ConsumerRecord.timestamp method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
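Before the project examples, here is a minimal orientation sketch (added for context, not taken from the projects below). It polls a placeholder topic "demo-topic" with placeholder connection settings and prints each record's timestamp together with its TimestampType (CreateTime vs. LogAppendTime). It uses the Duration-based poll of newer clients, whereas the examples below use the older poll(long) overload.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TimestampPrinter {
    public static void main(String[] args) {
        // Placeholder connection settings; adjust to your environment.
        final Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "timestamp-demo");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            final ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (final ConsumerRecord<String, String> record : records) {
                // timestamp() returns epoch milliseconds; timestampType() tells whether the value
                // is the producer's CreateTime or the broker's LogAppendTime.
                System.out.printf("partition=%d offset=%d ts=%d type=%s%n",
                        record.partition(), record.offset(), record.timestamp(), record.timestampType());
            }
        }
    }
}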

Example 1: consume

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class required by the method
private List<KafkaResult> consume() {
    final List<KafkaResult> kafkaResultList = new ArrayList<>();
    final ConsumerRecords consumerRecords = kafkaConsumer.poll(clientConfig.getPollTimeoutMs());

    logger.info("Consumed {} records", consumerRecords.count());
    final Iterator<ConsumerRecord> recordIterator = consumerRecords.iterator();
    while (recordIterator.hasNext()) {
        // Get next record
        final ConsumerRecord consumerRecord = recordIterator.next();

        // Convert to KafkaResult.
        final KafkaResult kafkaResult = new KafkaResult(
            consumerRecord.partition(),
            consumerRecord.offset(),
            consumerRecord.timestamp(),
            consumerRecord.key(),
            consumerRecord.value()
        );

        // Add to list.
        kafkaResultList.add(kafkaResult);
    }

    // Commit offsets
    commit();
    return kafkaResultList;
}
 
Developer: SourceLabOrg, Project: kafka-webview, Lines of code: 28, Source: WebKafkaConsumer.java

Example 2: processSingleRecord

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class required by the method
private void processSingleRecord(List<Long> txIds, ConsumerRecord<ByteBuffer, ByteBuffer> record) {
    long txId = record.timestamp();
    boolean found = txIds.remove(txId);
    if (found) {
        ProducerRecord<ByteBuffer, ByteBuffer> producerRecord =
                new ProducerRecord<>(clusterConfig.getGapTopic(), record.key(), record.value());
        producer.send(producerRecord);
    }
}
 
Developer: epam, Project: Lagerta, Lines of code: 10, Source: ReconcilerImpl.java

Example 3: start

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class required by the method
public void start() {
    running = true;

    Runnable onStop = new Runnable() {
        @Override
        public void run() {
            //todo add logic
        }
    };

    try (Consumer<ByteBuffer, ByteBuffer> consumer = kafkaFactory.consumer(localConsumerProperties, onStop);
         Producer<ByteBuffer, ByteBuffer> producer = kafkaFactory.producer(replicaProducerProperties)) {
        int partitions = producer.partitionsFor(reconciliationTopic).size();

        consumer.subscribe(Collections.singletonList(localTopic));
        while (running) {
            ConsumerRecords<ByteBuffer, ByteBuffer> records = consumer.poll(POLL_TIMEOUT);

            for (ConsumerRecord<ByteBuffer, ByteBuffer> record : records) {
                long transactionId = record.timestamp();
                int partition = TransactionMessageUtil.partitionFor(transactionId, partitions);

                producer.send(new ProducerRecord<>(reconciliationTopic, partition, record.key(), record.value()));
            }
        }
    }
}
 
Developer: epam, Project: Lagerta, Lines of code: 28, Source: ReconciliationWriter.java

Example 4: update

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class required by the method
@SuppressWarnings("unchecked")
@Override
public void update(final ConsumerRecord<byte[], byte[]> record) {
    final SourceNodeAndDeserializer sourceNodeAndDeserializer = deserializers.get(record.topic());
    final ConsumerRecord<Object, Object> deserialized = sourceNodeAndDeserializer.deserializer.deserialize(record);
    final ProcessorRecordContext recordContext =
            new ProcessorRecordContext(deserialized.timestamp(),
                                       deserialized.offset(),
                                       deserialized.partition(),
                                       deserialized.topic());
    processorContext.setRecordContext(recordContext);
    processorContext.setCurrentNode(sourceNodeAndDeserializer.sourceNode);
    sourceNodeAndDeserializer.sourceNode.process(deserialized.key(), deserialized.value());
    offsets.put(new TopicPartition(record.topic(), record.partition()), deserialized.offset() + 1);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 16, Source: GlobalStateUpdateTask.java

Example 5: extract

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class required by the method
/**
 * Extracts the embedded metadata timestamp from the given {@link ConsumerRecord}.
 *
 * @param record a data record
 * @param previousTimestamp the latest extracted valid timestamp of the current record's partition (could be -1 if unknown)
 * @return the embedded metadata timestamp of the given {@link ConsumerRecord}
 */
@Override
public long extract(final ConsumerRecord<Object, Object> record, final long previousTimestamp) {
    final long timestamp = record.timestamp();

    if (timestamp < 0) {
        return onInvalidTimestamp(record, timestamp, previousTimestamp);
    }

    return timestamp;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 18, Source: ExtractRecordMetadataTimestamp.java

Example 6: extract

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class required by the method
@Override
public long extract(final ConsumerRecord<Object, Object> record, final long previousTimestamp) {
    if (record.value().toString().matches(".*@[0-9]+"))
        return Long.parseLong(record.value().toString().split("@")[1]);

    if (record.timestamp() > 0L)
        return record.timestamp();

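    // 'timestamp' is a fallback default defined as a field on the enclosing test class (not shown in this snippet).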
    return timestamp;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 11, Source: ProcessorTopologyTest.java

Example 7: run

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class required by the method
@Override
public void run() {
    // Rename thread.
    Thread.currentThread().setName("WebSocket Consumer: " + clientConfig.getConsumerId());
    logger.info("Starting socket consumer for {}", clientConfig.getConsumerId());

    // Determine where to start from.
    initializeStartingPosition(clientConfig.getStartingPosition());

    do {
        // Start trying to consume messages from kafka
        final ConsumerRecords consumerRecords = kafkaConsumer.poll(POLL_TIMEOUT_MS);

        // If no records found
        if (consumerRecords.isEmpty()) {
            // Sleep for a bit
            sleep(POLL_TIMEOUT_MS);

            // Skip to next iteration of loop.
            continue;
        }

        // Push messages onto output queue
        for (final ConsumerRecord consumerRecord : (Iterable<ConsumerRecord>) consumerRecords) {
            // Translate record
            final KafkaResult kafkaResult = new KafkaResult(
                consumerRecord.partition(),
                consumerRecord.offset(),
                consumerRecord.timestamp(),
                consumerRecord.key(),
                consumerRecord.value()
            );

            // Add to the queue. This operation may block, effectively preventing the consumer
            // from consuming without bound.
            try {
                outputQueue.put(kafkaResult);
            } catch (final InterruptedException interruptedException) {
                // InterruptedException means we should shut down.
                requestStop();
            }
        }

        // Sleep for a bit
        sleep(DWELL_TIME_MS);
    }
    while (!requestStop);

    // Stop was requested; close the underlying consumer.
    kafkaConsumer.close();

    logger.info("Shutdown consumer {}", clientConfig.getConsumerId());
}
 
Developer: SourceLabOrg, Project: kafka-webview, Lines of code: 54, Source: SocketKafkaConsumer.java

Example 8: extract

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class required by the method
@Override
public long extract(ConsumerRecord<Object, Object> consumerRecord, long l) {
    return consumerRecord.timestamp();
}
 
Developer: gdibernardo, Project: streaming-engines-benchmark, Lines of code: 5, Source: ConsumerTimestampExtractor.java
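Usage note (added for context, not from the project above): a TimestampExtractor such as the ConsumerTimestampExtractor in Example 8 is normally not called directly but registered with Kafka Streams through its configuration. A minimal sketch, assuming a recent Kafka Streams release (older releases use the deprecated StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG instead) and placeholder application id / bootstrap servers:

import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

// Builds placeholder Streams properties that register the custom extractor for all input topics.
private Properties streamsPropertiesWithCustomExtractor() {
    final Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "timestamp-extractor-demo");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG, ConsumerTimestampExtractor.class);
    return props;
}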

Example 9: extract

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class required by the method
@Override
public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {

    if (record.value() instanceof Integer) return Long.valueOf(String.valueOf(record.value()));
    return record.timestamp();
}
 
Developer: carlosmenezes, Project: mockafka, Lines of code: 7, Source: TestTimestampExtractor.java


Note: The org.apache.kafka.clients.consumer.ConsumerRecord.timestamp method examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets are selected from open-source projects contributed by various developers, and the copyright of the source code belongs to the original authors. Please consult the corresponding project's License before distributing or using the code; do not reproduce without permission.