

Java OffsetAndMetadata Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.clients.consumer.OffsetAndMetadata. If you are wondering what OffsetAndMetadata is for and how to use it, the curated examples below should help.


The OffsetAndMetadata class belongs to the org.apache.kafka.clients.consumer package. Fifteen code examples are shown below, sorted by popularity.
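Before the examples, a minimal sketch of what the class is for may help: an OffsetAndMetadata pairs the offset a consumer commits for a partition with an optional metadata string. The sketch below is illustrative and not drawn from the projects that follow; the broker address, topic, and group names are placeholders.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetAndMetadataBasics {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // placeholder group
        props.put("enable.auto.commit", "false");         // we commit manually below
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            ConsumerRecords<String, String> records = consumer.poll(1000L);
            for (ConsumerRecord<String, String> record : records) {
                // commit offset + 1: the committed offset names the next record
                // to read, not the last record processed
                consumer.commitSync(Collections.singletonMap(
                        new TopicPartition(record.topic(), record.partition()),
                        new OffsetAndMetadata(record.offset() + 1, "optional metadata")));
            }
        }
    }
}

The offset + 1 convention recurs throughout the examples below.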

Example 1: doAutoCommitOffsetsAsync

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
private void doAutoCommitOffsetsAsync() {
    Map<TopicPartition, OffsetAndMetadata> allConsumedOffsets = subscriptions.allConsumed();
    log.debug("Sending asynchronous auto-commit of offsets {} for group {}", allConsumedOffsets, groupId);
    // async commit; on a retriable failure the next auto-commit deadline is pulled in so it retries sooner
    commitOffsetsAsync(allConsumedOffsets, new OffsetCommitCallback() {
        @Override
        public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
            if (exception != null) {
                log.warn("Auto-commit of offsets {} failed for group {}: {}", offsets, groupId,
                        exception.getMessage());
                if (exception instanceof RetriableException)
                    nextAutoCommitDeadline = Math.min(time.milliseconds() + retryBackoffMs, nextAutoCommitDeadline);
            } else {
                log.debug("Completed auto-commit of offsets {} for group {}", offsets, groupId);
            }
        }
    });
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 19 | Source: ConsumerCoordinator.java
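Example 1 is the consumer coordinator's internal auto-commit path; application code never calls it directly. The same behaviour is switched on purely through configuration. A fragment sketching the relevant consumer properties (values illustrative):

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // placeholder
props.put("group.id", "my-group");                // placeholder
props.put("enable.auto.commit", "true");          // enables the path shown above
props.put("auto.commit.interval.ms", "5000");     // commit every 5 seconds (the default)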

Example 2: send

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Override
public void send(Long k, byte[] v) {
    KafkaProducer<Long, byte[]> p = getWorker();
    // note: initTransactions() should normally be called once at producer startup,
    // not on every send; kept here as in the original project
    p.initTransactions();
    p.beginTransaction();
    Future<RecordMetadata> res = p.send(new ProducerRecord<Long, byte[]>(topic, k, v));
    RecordMetadata record;
    try {
        record = res.get();
        offsets.clear();
        offsets.put(new TopicPartition(topic, record.partition()), new OffsetAndMetadata(record.offset()));
        p.sendOffsetsToTransaction(offsets, MallConstants.ORDER_GROUP);
        p.commitTransaction();
    } catch (InterruptedException | ExecutionException e) {
        p.abortTransaction();
    }
}
 
Author: jiumao-org | Project: wechat-mall | Lines: 18 | Source: OrderProducer.java
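A caution on Example 2: in the transactional producer API, initTransactions() is meant to be called once per producer instance, after configuring a transactional.id, rather than on every send. A minimal sketch of the usual lifecycle, with placeholder broker, id, and topic names:

import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.ProducerFencedException;

public class TransactionalLifecycle {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("transactional.id", "order-producer-1"); // required for transactions
        props.put("key.serializer", "org.apache.kafka.common.serialization.LongSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        KafkaProducer<Long, byte[]> producer = new KafkaProducer<>(props);
        producer.initTransactions();     // once, at startup
        try {
            producer.beginTransaction(); // then once per batch of work
            producer.send(new ProducerRecord<>("orders", 1L,
                    "payload".getBytes(StandardCharsets.UTF_8)));
            producer.commitTransaction();
        } catch (ProducerFencedException e) {
            producer.close();            // another producer with the same id took over
        } catch (KafkaException e) {
            producer.abortTransaction(); // transient failure; roll back
        }
    }
}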

Example 3: commit

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Override
public void commit() {
  HashMap<TopicPartition, OffsetAndMetadata> offsets =
      new HashMap<TopicPartition, OffsetAndMetadata>();

  partitionOffset.forEach((key, value) -> {
    String[] parts = key.split("\\+");
    String topic = parts[0];
    int partition = Integer.parseInt(parts[1]);
    // commit value + 1: the committed offset names the next record to consume
    offsets.put(new TopicPartition(topic, partition), new OffsetAndMetadata(value + 1));
  });

  consumer.commitSync(offsets);
  committed.set(true);
  partitionOffset.clear();

  // stop the timer tracking how long this commit took
  timerCTX.stop();

  stat.newestCompleted = newestRecord;
  stat.delay = new Date().getTime() - start.getTime();
}
 
Author: HashDataInc | Project: bireme | Lines: 22 | Source: KafkaPipeLine.java
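The value + 1 in Example 3 is the essential convention: the committed offset must name the next record to consume, otherwise the last processed record is redelivered after a restart. A fragment showing the same bookkeeping in a plain poll loop, assuming a configured KafkaConsumer<String, String> named consumer and a hypothetical process() handler:

Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
for (ConsumerRecord<String, String> record : consumer.poll(1000L)) {
    process(record); // hypothetical per-record handler
    offsets.put(new TopicPartition(record.topic(), record.partition()),
            new OffsetAndMetadata(record.offset() + 1)); // next offset to read
}
if (!offsets.isEmpty())
    consumer.commitSync(offsets);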

Example 4: recommitOffsets

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
public void recommitOffsets() {
    LocalDateTime now = LocalDateTime.now(clock);
    if (now.isAfter(lastUpdateTime.plus(IDLE_DURATION))) {
        for (TopicPartition tp : offsetData.keySet()) {
            OffsetAndTime offsetAndTime = offsetData.get(tp);
            if (now.isAfter(offsetAndTime.time.plus(IDLE_DURATION))) {
                try {
                    consumer.commitSync(Collections.singletonMap(tp,
                            new OffsetAndMetadata(offsetAndTime.offset)));
                } catch (CommitFailedException covfefe) {
                    logger.info("Caught CommitFailedException attempting to commit {} {}",
                            tp, offsetAndTime.offset);
                }
                offsetAndTime.time = now;
            }
        }
        lastUpdateTime = now;
    }
}
 
Author: Sixt | Project: ja-micro | Lines: 20 | Source: OffsetCommitter.java

Example 5: getZookeeperOffsets

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
private Map<TopicPartition, OffsetAndMetadata> getZookeeperOffsets(ZkUtils client,
                                                                   String topicStr) {
  Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
  ZKGroupTopicDirs topicDirs = new ZKGroupTopicDirs(groupId, topicStr);
  List<String> partitions = asJavaListConverter(
      client.getChildrenParentMayNotExist(topicDirs.consumerOffsetDir())).asJava();
  for (String partition : partitions) {
    TopicPartition key = new TopicPartition(topicStr, Integer.valueOf(partition));
    Option<String> data = client.readDataMaybeNull(
        topicDirs.consumerOffsetDir() + "/" + partition)._1();
    if (data.isDefined()) {
      Long offset = Long.valueOf(data.get());
      offsets.put(key, new OffsetAndMetadata(offset));
    }
  }
  return offsets;
}
 
Author: moueimei | Project: flume-release-1.7.0 | Lines: 18 | Source: KafkaSource.java
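Flume uses this when migrating consumer offsets out of Zookeeper. A hypothetical caller would commit the returned map straight into Kafka's own offset storage (zkClient and consumer assumed configured):

Map<TopicPartition, OffsetAndMetadata> zkOffsets = getZookeeperOffsets(zkClient, topic);
if (!zkOffsets.isEmpty())
    consumer.commitSync(zkOffsets); // offsets now live in Kafka rather than Zookeeper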

Example 6: sendOffsetFetchRequest

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
/**
 * Fetch the committed offsets for a set of partitions. This is a non-blocking call. The
 * returned future can be polled to get the actual offsets returned from the broker.
 *
 * @param partitions The set of partitions to get offsets for.
 * @return A request future containing the committed offsets.
 */
// build and cache the OffsetFetchRequest
private RequestFuture<Map<TopicPartition, OffsetAndMetadata>> sendOffsetFetchRequest(Set<TopicPartition> partitions) {
    Node coordinator = coordinator();
    if (coordinator == null)
        return RequestFuture.coordinatorNotAvailable();

    log.debug("Group {} fetching committed offsets for partitions: {}", groupId, partitions);
    // construct the request
    OffsetFetchRequest.Builder requestBuilder =
            new OffsetFetchRequest.Builder(this.groupId, new ArrayList<>(partitions));

    // send the request with a callback
    // the response is handled by an OffsetFetchResponseHandler
    return client.send(coordinator, requestBuilder)
            .compose(new OffsetFetchResponseHandler());
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 24 | Source: ConsumerCoordinator.java
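Example 6 is internal coordinator code. From application code, the same committed offsets come back through the public KafkaConsumer.committed(TopicPartition) call, which blocks and returns an OffsetAndMetadata, or null if the group has never committed for that partition. A fragment, assuming a configured consumer:

TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder
OffsetAndMetadata committed = consumer.committed(tp);
if (committed != null)
    consumer.seek(tp, committed.offset()); // resume from the committed position
else
    consumer.seekToBeginning(Collections.singletonList(tp));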

Example 7: run

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
public void run() {
    try {
        printJson(new StartupComplete());
        consumer.subscribe(Collections.singletonList(topic), this);

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE);
            Map<TopicPartition, OffsetAndMetadata> offsets = onRecordsReceived(records);

            if (!useAutoCommit) {
                if (useAsyncCommit)
                    consumer.commitAsync(offsets, this);
                else
                    commitSync(offsets);
            }
        }
    } catch (WakeupException e) {
        // ignore, we are closing
    } finally {
        consumer.close();
        printJson(new ShutdownComplete());
        shutdownLatch.countDown();
    }
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 25 | Source: VerifiableConsumer.java
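The WakeupException handling in Example 7 implies a second thread triggering shutdown: wakeup() is the one KafkaConsumer method that is safe to call from another thread, and it makes a blocked poll() throw WakeupException. A sketch of that counterpart, reusing the consumer and shutdownLatch fields from the example:

Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    consumer.wakeup();         // makes the blocked poll() above throw WakeupException
    try {
        shutdownLatch.await(); // wait until the poll loop has closed the consumer
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}));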

Example 8: testPutFlush

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Test
public void testPutFlush() {
    HashMap<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
    final String newLine = System.getProperty("line.separator"); 

    // We do not call task.start() since it would override the output stream

    task.put(Arrays.asList(
            new SinkRecord("topic1", 0, null, null, Schema.STRING_SCHEMA, "line1", 1)
    ));
    offsets.put(new TopicPartition("topic1", 0), new OffsetAndMetadata(1L));
    task.flush(offsets);
    assertEquals("line1" + newLine, os.toString());

    task.put(Arrays.asList(
            new SinkRecord("topic1", 0, null, null, Schema.STRING_SCHEMA, "line2", 2),
            new SinkRecord("topic2", 0, null, null, Schema.STRING_SCHEMA, "line3", 1)
    ));
    offsets.put(new TopicPartition("topic1", 0), new OffsetAndMetadata(2L));
    offsets.put(new TopicPartition("topic2", 0), new OffsetAndMetadata(1L));
    task.flush(offsets);
    assertEquals("line1" + newLine + "line2" + newLine + "line3" + newLine, os.toString());
}
 
Author: wngn123 | Project: wngn-jms-kafka | Lines: 24 | Source: FileStreamSinkTaskTest.java

Example 9: flush

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Override
public void flush(Map<TopicPartition, OffsetAndMetadata> offsets) {
	if (singleKinesisProducerPerPartition) {
		producerMap.values().forEach(producer -> {
			if (flushSync)
				producer.flushSync();
			else
				producer.flush();
		});
	} else {
		if (flushSync)
			kinesisProducer.flushSync();
		else
			kinesisProducer.flush();
	}
}
 
Author: awslabs | Project: kinesis-kafka-connector | Lines: 18 | Source: AmazonKinesisSinkTask.java
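For context on Example 9: flush(Map<TopicPartition, OffsetAndMetadata>) is the Kafka Connect SinkTask hook. The framework passes the offsets it is about to commit, and the task must finish writing everything buffered for those partitions before returning. A minimal sketch of a task that merely logs what it is asked to flush (the class is illustrative, not part of the connector above):

import java.util.Collection;
import java.util.Map;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class LoggingSinkTask extends SinkTask {
    @Override public String version() { return "0.0.1"; }
    @Override public void start(Map<String, String> props) { }
    @Override public void put(Collection<SinkRecord> records) {
        // buffer or write records here
    }
    @Override
    public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
        currentOffsets.forEach((tp, oam) ->
                System.out.printf("flushing %s up to offset %d%n", tp, oam.offset()));
    }
    @Override public void stop() { }
}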

Example 10: testOnCommitChain

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Test
public void testOnCommitChain() {
    List<ConsumerInterceptor<Integer, Integer>> interceptorList = new ArrayList<>();
    // we are testing two different interceptors by configuring the same interceptor differently, which is not
    // how it would be done in KafkaConsumer, but ok for testing interceptor callbacks
    FilterConsumerInterceptor<Integer, Integer> interceptor1 = new FilterConsumerInterceptor<>(filterPartition1);
    FilterConsumerInterceptor<Integer, Integer> interceptor2 = new FilterConsumerInterceptor<>(filterPartition2);
    interceptorList.add(interceptor1);
    interceptorList.add(interceptor2);
    ConsumerInterceptors<Integer, Integer> interceptors = new ConsumerInterceptors<>(interceptorList);

    // verify that onCommit is called for all interceptors in the chain
    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
    offsets.put(tp, new OffsetAndMetadata(0));
    interceptors.onCommit(offsets);
    assertEquals(2, onCommitCount);

    // verify that even if one of the interceptors throws an exception, all interceptors' onCommit are called
    interceptor1.injectOnCommitError(true);
    interceptors.onCommit(offsets);
    assertEquals(4, onCommitCount);

    interceptors.close();
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 25 | Source: ConsumerInterceptorsTest.java
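Outside of tests, interceptor chains are not assembled by hand as above; a real KafkaConsumer builds them from the interceptor.classes setting. A fragment (the class names are hypothetical):

Properties props = new Properties();
props.put("interceptor.classes",
        "com.example.FilterConsumerInterceptor,com.example.AuditInterceptor");
// the consumer instantiates these classes itself and invokes onConsume() before
// poll() returns and onCommit() after offsets are committed, as the test exercises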

Example 11: seekToMissingTransactions

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
private void seekToMissingTransactions(Map<TopicPartition, List<Long>> txByPartition) {
    Map<TopicPartition, Long> timestamps = txByPartition.entrySet().stream()
            .collect(Collectors.toMap(
                    Map.Entry::getKey,
                    entry -> Collections.min(entry.getValue())
            ));
    Map<TopicPartition, OffsetAndTimestamp> foundOffsets = consumer.offsetsForTimes(timestamps);
    Map<TopicPartition, OffsetAndMetadata> toCommit = foundOffsets.entrySet().stream()
            .collect(Collectors.toMap(
                    Map.Entry::getKey,
                    entry -> {
                        long offset = entry.getValue() != null ? entry.getValue().offset() : 0;
                        return new OffsetAndMetadata(offset);
                    }
            ));
    consumer.commitSync(toCommit);
}
 
Author: epam | Project: Lagerta | Lines: 18 | Source: ReconcilerImpl.java
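Example 11 resolves offsets from timestamps. The general public-API pattern: build a partition-to-timestamp map, call offsetsForTimes(), then seek to (or commit) the resolved offsets; a null value means the partition has no record at or after that timestamp. A fragment that rewinds a consumer to a point in time, assuming a configured consumer with assigned partitions:

long targetTime = System.currentTimeMillis() - 3600 * 1000L; // one hour ago (illustrative)
Map<TopicPartition, Long> query = new HashMap<>();
for (TopicPartition tp : consumer.assignment())
    query.put(tp, targetTime);

Map<TopicPartition, OffsetAndTimestamp> found = consumer.offsetsForTimes(query);
for (Map.Entry<TopicPartition, OffsetAndTimestamp> entry : found.entrySet()) {
    if (entry.getValue() != null)
        consumer.seek(entry.getKey(), entry.getValue().offset());
}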

Example 12: seekToTransaction

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
public void seekToTransaction(DataRecoveryConfig config, long transactionId, KafkaFactory kafkaFactory,
    String groupId) {
    String topic = config.getLocalTopic();
    Properties consumerProperties = PropertiesUtil.propertiesForGroup(config.getConsumerConfig(), groupId);

    try (Consumer<ByteBuffer, ByteBuffer> consumer = kafkaFactory.consumer(consumerProperties)) {
        List<PartitionInfo> partitionInfos = consumer.partitionsFor(topic);
        Map<TopicPartition, Long> seekMap = new HashMap<>(partitionInfos.size());

        for (PartitionInfo partitionInfo : partitionInfos) {
            seekMap.put(new TopicPartition(topic, partitionInfo.partition()), transactionId);
        }
        consumer.assign(seekMap.keySet());
        Map<TopicPartition, OffsetAndTimestamp> foundOffsets = consumer.offsetsForTimes(seekMap);
        Map<TopicPartition, OffsetAndMetadata> offsetsToCommit = new HashMap<>();

        for (Map.Entry<TopicPartition, OffsetAndTimestamp> entry : foundOffsets.entrySet()) {
            if (entry.getValue() != null) {
                offsetsToCommit.put(entry.getKey(), new OffsetAndMetadata(entry.getValue().offset()));
            }
        }
        consumer.commitSync(offsetsToCommit);
    }
}
 
Author: epam | Project: Lagerta | Lines: 25 | Source: PublisherKafkaService.java

Example 13: calculateChangedOffsets

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
public Map<TopicPartition, OffsetAndMetadata> calculateChangedOffsets(List<List<TransactionWrapper>> txToCommit) {
    if (txToCommit.isEmpty()) {
        return Collections.emptyMap();
    }
    Lazy<TopicPartition, MutableLongList> offsetsFromTransactions = calculateOffsetsFromTransactions(txToCommit);
    Collection<TopicPartition> allTopics = new HashSet<>(offsets.keySet());
    allTopics.addAll(offsetsFromTransactions.keySet());
    Map<TopicPartition, OffsetAndMetadata> result = new HashMap<>();
    for (TopicPartition topic : allTopics) {
        OffsetHolder offsetHolder = offsets.get(topic);
        long currentOffset = offsetHolder.getLastDenseOffset();
        long updatedOffset = MergeHelper.mergeWithDenseCompaction(offsetsFromTransactions.get(topic),
            offsetHolder.getSparseCommittedOffsets(), currentOffset);
        if (updatedOffset != INITIAL_SYNC_POINT && updatedOffset != currentOffset) {
            offsetHolder.setLastDenseOffset(updatedOffset);
            result.put(topic, new OffsetAndMetadata(updatedOffset));
        }
    }
    return result;
}
 
Author: epam | Project: Lagerta | Lines: 21 | Source: OffsetCalculator.java

Example 14: commitAsync

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Override
public void commitAsync(final OffsetCommitCallback callback) {
    Retries.tryMe(new IgniteInClosure<RetryRunnableAsyncOnCallback>() {
        @Override
        public void apply(final RetryRunnableAsyncOnCallback retryRunnableAsyncOnCallback) {
            inner.commitAsync(new OffsetCommitCallback() {
                @Override
                public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
                    callback.onComplete(offsets, exception);
                    if (exception != null) {
                        retryRunnableAsyncOnCallback.retry(exception);
                    }
                }
            });
        }
    }, strategy());
}
 
Author: epam | Project: Lagerta | Lines: 18 | Source: ConsumerProxyRetry.java

Example 15: topicSubscription

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Test
public void topicSubscription() {
    state.subscribe(singleton(topic), rebalanceListener);
    assertEquals(1, state.subscription().size());
    assertTrue(state.assignedPartitions().isEmpty());
    assertTrue(state.partitionsAutoAssigned());
    state.assignFromSubscribed(singleton(tp0));
    state.seek(tp0, 1);
    state.committed(tp0, new OffsetAndMetadata(1));
    assertAllPositions(tp0, 1L);
    state.assignFromSubscribed(singleton(tp1));
    assertTrue(state.isAssigned(tp1));
    assertFalse(state.isAssigned(tp0));
    assertFalse(state.isFetchable(tp1));
    assertEquals(singleton(tp1), state.assignedPartitions());
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 17 | Source: SubscriptionStateTest.java


Note: The org.apache.kafka.clients.consumer.OffsetAndMetadata examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are selected from open-source projects contributed by their respective developers, and copyright in the source code remains with the original authors. Consult each project's license before redistributing or reusing the code; do not republish without permission.