

Java ConsumerRecord.timestamp Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.clients.consumer.ConsumerRecord.timestamp. If you are wondering what ConsumerRecord.timestamp does, how to call it, or what it looks like in real code, the curated examples below should help. You can also explore further usage examples of the containing class, org.apache.kafka.clients.consumer.ConsumerRecord.


The sections below present 9 code examples of the ConsumerRecord.timestamp method, sorted by popularity by default.
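Before the collected examples, here is a minimal standalone sketch of reading ConsumerRecord.timestamp from a poll loop. The topic name, consumer properties, and class name are placeholders rather than code from any of the projects below, and the poll(Duration) overload assumes kafka-clients 2.0 or later.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TimestampPrinter {
    public static void main(String[] args) {
        // Placeholder configuration; adjust bootstrap servers, group id and deserializers to your setup.
        final Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "timestamp-demo");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            final ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (final ConsumerRecord<String, String> record : records) {
                // timestamp() returns the record's CreateTime or LogAppendTime in epoch milliseconds,
                // depending on the topic's message.timestamp.type; a negative value means no timestamp was set.
                System.out.printf("partition=%d offset=%d timestamp=%d%n",
                        record.partition(), record.offset(), record.timestamp());
            }
        }
    }
}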

Example 1: consume

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
private List<KafkaResult> consume() {
    final List<KafkaResult> kafkaResultList = new ArrayList<>();
    final ConsumerRecords consumerRecords = kafkaConsumer.poll(clientConfig.getPollTimeoutMs());

    logger.info("Consumed {} records", consumerRecords.count());
    final Iterator<ConsumerRecord> recordIterator = consumerRecords.iterator();
    while (recordIterator.hasNext()) {
        // Get next record
        final ConsumerRecord consumerRecord = recordIterator.next();

        // Convert to KafkaResult.
        final KafkaResult kafkaResult = new KafkaResult(
            consumerRecord.partition(),
            consumerRecord.offset(),
            consumerRecord.timestamp(),
            consumerRecord.key(),
            consumerRecord.value()
        );

        // Add to list.
        kafkaResultList.add(kafkaResult);
    }

    // Commit offsets
    commit();
    return kafkaResultList;
}
 
Developer: SourceLabOrg, Project: kafka-webview, Lines of code: 28, Source file: WebKafkaConsumer.java

Example 2: processSingleRecord

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
private void processSingleRecord(List<Long> txIds, ConsumerRecord<ByteBuffer, ByteBuffer> record) {
    long txId = record.timestamp();
    boolean found = txIds.remove(txId);
    if (found) {
        ProducerRecord<ByteBuffer, ByteBuffer> producerRecord =
                new ProducerRecord<>(clusterConfig.getGapTopic(), record.key(), record.value());
        producer.send(producerRecord);
    }
}
 
Developer: epam, Project: Lagerta, Lines of code: 10, Source file: ReconcilerImpl.java

Example 3: start

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
public void start() {
    running = true;

    Runnable onStop = new Runnable() {
        @Override
        public void run() {
            //todo add logic
        }
    };

    try (Consumer<ByteBuffer, ByteBuffer> consumer = kafkaFactory.consumer(localConsumerProperties, onStop);
         Producer<ByteBuffer, ByteBuffer> producer = kafkaFactory.producer(replicaProducerProperties)) {
        int partitions = producer.partitionsFor(reconciliationTopic).size();

        consumer.subscribe(Collections.singletonList(localTopic));
        while (running) {
            ConsumerRecords<ByteBuffer, ByteBuffer> records = consumer.poll(POLL_TIMEOUT);

            for (ConsumerRecord<ByteBuffer, ByteBuffer> record : records) {
                long transactionId = record.timestamp();
                int partition = TransactionMessageUtil.partitionFor(transactionId, partitions);

                producer.send(new ProducerRecord<>(reconciliationTopic, partition, record.key(), record.value()));
            }
        }
    }
}
 
Developer: epam, Project: Lagerta, Lines of code: 28, Source file: ReconciliationWriter.java

Example 4: update

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
@SuppressWarnings("unchecked")
@Override
public void update(final ConsumerRecord<byte[], byte[]> record) {
    final SourceNodeAndDeserializer sourceNodeAndDeserializer = deserializers.get(record.topic());
    final ConsumerRecord<Object, Object> deserialized = sourceNodeAndDeserializer.deserializer.deserialize(record);
    final ProcessorRecordContext recordContext =
            new ProcessorRecordContext(deserialized.timestamp(),
                                       deserialized.offset(),
                                       deserialized.partition(),
                                       deserialized.topic());
    processorContext.setRecordContext(recordContext);
    processorContext.setCurrentNode(sourceNodeAndDeserializer.sourceNode);
    sourceNodeAndDeserializer.sourceNode.process(deserialized.key(), deserialized.value());
    offsets.put(new TopicPartition(record.topic(), record.partition()), deserialized.offset() + 1);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 16, Source file: GlobalStateUpdateTask.java

Example 5: extract

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
/**
 * Extracts the embedded metadata timestamp from the given {@link ConsumerRecord}.
 *
 * @param record a data record
 * @param previousTimestamp the latest extracted valid timestamp of the current record's partition (could be -1 if unknown)
 * @return the embedded metadata timestamp of the given {@link ConsumerRecord}
 */
@Override
public long extract(final ConsumerRecord<Object, Object> record, final long previousTimestamp) {
    final long timestamp = record.timestamp();

    if (timestamp < 0) {
        return onInvalidTimestamp(record, timestamp, previousTimestamp);
    }

    return timestamp;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 18, Source file: ExtractRecordMetadataTimestamp.java
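Extractors like the one above implement org.apache.kafka.streams.processor.TimestampExtractor and are wired into a Kafka Streams application through configuration. The sketch below is illustrative only: the application id and bootstrap servers are placeholders, FallbackTimestampExtractor is not taken from any project on this page, and DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG is the constant name used by recent kafka-streams releases.

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class RecordTimestampConfigDemo {

    // A trivial extractor in the spirit of the examples on this page: use the record's
    // embedded timestamp when valid, otherwise fall back to the partition's previous timestamp.
    public static class FallbackTimestampExtractor implements TimestampExtractor {
        @Override
        public long extract(final ConsumerRecord<Object, Object> record, final long previousTimestamp) {
            final long timestamp = record.timestamp();
            return timestamp >= 0 ? timestamp : previousTimestamp;
        }
    }

    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "timestamp-extractor-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Register the custom extractor for all source topics of the Streams application.
        props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG, FallbackTimestampExtractor.class);
        // ... build a Topology and start a KafkaStreams instance with these properties as usual.
    }
}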

Example 6: extract

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
@Override
public long extract(final ConsumerRecord<Object, Object> record, final long previousTimestamp) {
    if (record.value().toString().matches(".*@[0-9]+"))
        return Long.parseLong(record.value().toString().split("@")[1]);

    if (record.timestamp() > 0L)
        return record.timestamp();

    return timestamp;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 11, Source file: ProcessorTopologyTest.java

Example 7: run

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
@Override
public void run() {
    // Rename thread.
    Thread.currentThread().setName("WebSocket Consumer: " + clientConfig.getConsumerId());
    logger.info("Starting socket consumer for {}", clientConfig.getConsumerId());

    // Determine where to start from.
    initializeStartingPosition(clientConfig.getStartingPosition());

    do {
        // Start trying to consume messages from kafka
        final ConsumerRecords consumerRecords = kafkaConsumer.poll(POLL_TIMEOUT_MS);

        // If no records found
        if (consumerRecords.isEmpty()) {
            // Sleep for a bit
            sleep(POLL_TIMEOUT_MS);

            // Skip to next iteration of loop.
            continue;
        }

        // Push messages onto output queue
        for (final ConsumerRecord consumerRecord : (Iterable<ConsumerRecord>) consumerRecords) {
            // Translate record
            final KafkaResult kafkaResult = new KafkaResult(
                consumerRecord.partition(),
                consumerRecord.offset(),
                consumerRecord.timestamp(),
                consumerRecord.key(),
                consumerRecord.value()
            );

            // Add to the queue, this operation may block, effectively preventing the consumer from
            // consuming unbounded-ly.
            try {
                outputQueue.put(kafkaResult);
            } catch (final InterruptedException interruptedException) {
                // InterruptedException means we should shut down.
                requestStop();
            }
        }

        // Sleep for a bit
        sleep(DWELL_TIME_MS);
    }
    while (!requestStop);

    // Stop was requested; close the consumer before exiting.
    kafkaConsumer.close();

    logger.info("Shutdown consumer {}", clientConfig.getConsumerId());
}
 
Developer: SourceLabOrg, Project: kafka-webview, Lines of code: 54, Source file: SocketKafkaConsumer.java

Example 8: extract

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
@Override
public long extract(ConsumerRecord<Object, Object> consumerRecord, long l) {
    return consumerRecord.timestamp();
}
 
Developer: gdibernardo, Project: streaming-engines-benchmark, Lines of code: 5, Source file: ConsumerTimestampExtractor.java

Example 9: extract

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
@Override
public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {

    if (record.value() instanceof Integer) return Long.valueOf(String.valueOf(record.value()));
    return record.timestamp();
}
 
Developer: carlosmenezes, Project: mockafka, Lines of code: 7, Source file: TestTimestampExtractor.java


Note: The org.apache.kafka.clients.consumer.ConsumerRecord.timestamp method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. Please follow the corresponding project's license when redistributing or using the code, and do not republish without permission.