

Java OffsetAndMetadata Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.clients.consumer.OffsetAndMetadata. If you are wondering what OffsetAndMetadata is for, how to use it, or want to see it in real code, the curated class examples below should help.


The OffsetAndMetadata class belongs to the org.apache.kafka.clients.consumer package. Fifteen code examples of the class are presented below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
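Before the examples, a minimal sketch of the class itself may help. OffsetAndMetadata pairs the offset to commit with an optional application-defined metadata string, and is the value type of the maps passed to commitSync/commitAsync. The broker address, topic, and group below are placeholders.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetAndMetadataBasics {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "demo-group");              // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("demo-topic", 0); // placeholder topic
            consumer.assign(Collections.singleton(tp));

            // OffsetAndMetadata wraps the offset to commit plus an optional
            // metadata string that is stored alongside it on the broker
            OffsetAndMetadata oam = new OffsetAndMetadata(42L, "checkpointed by demo");
            consumer.commitSync(Collections.singletonMap(tp, oam));
        }
    }
}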

Example 1: doAutoCommitOffsetsAsync

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
private void doAutoCommitOffsetsAsync() {
    Map<TopicPartition, OffsetAndMetadata> allConsumedOffsets = subscriptions.allConsumed();
    log.debug("Sending asynchronous auto-commit of offsets {} for group {}", allConsumedOffsets, groupId);
    commitOffsetsAsync(allConsumedOffsets, new OffsetCommitCallback() {
        @Override
        public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
            if (exception != null) {
                log.warn("Auto-commit of offsets {} failed for group {}: {}", offsets, groupId,
                        exception.getMessage());
                if (exception instanceof RetriableException)
                    nextAutoCommitDeadline = Math.min(time.milliseconds() + retryBackoffMs, nextAutoCommitDeadline);
            } else {
                log.debug("Completed auto-commit of offsets {} for group {}", offsets, groupId);
            }
        }
    });
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 19, Source: ConsumerCoordinator.java

Example 2: send

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Override
public void send(Long k, byte[] v) {
    KafkaProducer<Long, byte[]> p = getWorker();
    // initTransactions() is normally called once per producer instance, not per send
    p.initTransactions();
    p.beginTransaction();
    Future<RecordMetadata> res = p.send(new ProducerRecord<Long, byte[]>(topic, k, v));
    RecordMetadata record;
    try {
        record = res.get();
        offsets.clear();
        offsets.put(new TopicPartition(topic, record.partition()), new OffsetAndMetadata(record.offset()));
        p.sendOffsetsToTransaction(offsets, MallConstants.ORDER_GROUP);
        p.commitTransaction();
    } catch (InterruptedException | ExecutionException e) {
        p.abortTransaction();
    }
}
 
Developer: jiumao-org, Project: wechat-mall, Lines: 18, Source: OrderProducer.java
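Note that the snippet above re-runs initTransactions() on every send and commits the produced record's own offset, which differs from the usual consume-transform-produce pattern. For contrast, here is a hedged sketch of the canonical pattern; the class, topic, and group names are hypothetical, and the producer is assumed to be configured with a transactional.id. initTransactions() runs once per producer, and the offsets handed to sendOffsetsToTransaction are the next offsets to consume, i.e. consumed offset + 1.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

public class TransactionalForwarder {
    private final KafkaConsumer<String, String> consumer; // pre-subscribed to the input topic
    private final KafkaProducer<String, String> producer; // configured with a transactional.id
    private final String outputTopic = "output-topic";    // hypothetical
    private final String groupId = "forwarder-group";     // hypothetical

    TransactionalForwarder(KafkaConsumer<String, String> consumer,
                           KafkaProducer<String, String> producer) {
        this.consumer = consumer;
        this.producer = producer;
        producer.initTransactions(); // once per producer instance, before the first transaction
    }

    void forwardOnce() {
        ConsumerRecords<String, String> records = consumer.poll(100);
        if (records.isEmpty())
            return;
        producer.beginTransaction();
        try {
            Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
            for (ConsumerRecord<String, String> record : records) {
                producer.send(new ProducerRecord<>(outputTopic, record.key(), record.value()));
                // commit the *next* offset to consume, hence offset + 1
                offsets.put(new TopicPartition(record.topic(), record.partition()),
                        new OffsetAndMetadata(record.offset() + 1));
            }
            // atomically commits the produced records and the consumed offsets
            producer.sendOffsetsToTransaction(offsets, groupId);
            producer.commitTransaction();
        } catch (Exception e) {
            producer.abortTransaction();
        }
    }
}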

Example 3: commit

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Override
public void commit() {
  HashMap<TopicPartition, OffsetAndMetadata> offsets =
      new HashMap<TopicPartition, OffsetAndMetadata>();

  partitionOffset.forEach((key, value) -> {
    String[] topicAndPartition = key.split("\\+");
    String topic = topicAndPartition[0];
    int partition = Integer.parseInt(topicAndPartition[1]);
    // commit the next offset to consume, hence value + 1
    offsets.put(new TopicPartition(topic, partition), new OffsetAndMetadata(value + 1));
  });

  consumer.commitSync(offsets);
  committed.set(true);
  partitionOffset.clear();

  // record the time being committed
  timerCTX.stop();

  stat.newestCompleted = newestRecord;
  stat.delay = new Date().getTime() - start.getTime();
}
 
Developer: HashDataInc, Project: bireme, Lines: 22, Source: KafkaPipeLine.java

Example 4: recommitOffsets

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
public void recommitOffsets() {
    LocalDateTime now = LocalDateTime.now(clock);
    if (now.isAfter(lastUpdateTime.plus(IDLE_DURATION))) {
        for (TopicPartition tp : offsetData.keySet()) {
            OffsetAndTime offsetAndTime = offsetData.get(tp);
            if (now.isAfter(offsetAndTime.time.plus(IDLE_DURATION))) {
                try {
                    consumer.commitSync(Collections.singletonMap(tp,
                            new OffsetAndMetadata(offsetAndTime.offset)));
                } catch (CommitFailedException e) {
                    logger.info("Caught CommitFailedException attempting to commit {} {}",
                            tp, offsetAndTime.offset);
                }
                offsetAndTime.time = now;
            }
        }
        lastUpdateTime = now;
    }
}
 
Developer: Sixt, Project: ja-micro, Lines: 20, Source: OffsetCommitter.java

Example 5: getZookeeperOffsets

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
private Map<TopicPartition, OffsetAndMetadata> getZookeeperOffsets(ZkUtils client,
                                                                   String topicStr) {
  Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
  ZKGroupTopicDirs topicDirs = new ZKGroupTopicDirs(groupId, topicStr);
  List<String> partitions = asJavaListConverter(
      client.getChildrenParentMayNotExist(topicDirs.consumerOffsetDir())).asJava();
  for (String partition : partitions) {
    TopicPartition key = new TopicPartition(topicStr, Integer.valueOf(partition));
    Option<String> data = client.readDataMaybeNull(
        topicDirs.consumerOffsetDir() + "/" + partition)._1();
    if (data.isDefined()) {
      Long offset = Long.valueOf(data.get());
      offsets.put(key, new OffsetAndMetadata(offset));
    }
  }
  return offsets;
}
 
Developer: moueimei, Project: flume-release-1.7.0, Lines: 18, Source: KafkaSource.java

Example 6: sendOffsetFetchRequest

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
/**
 * Fetch the committed offsets for a set of partitions. This is a non-blocking call. The
 * returned future can be polled to get the actual offsets returned from the broker.
 *
 * @param partitions The set of partitions to get offsets for.
 * @return A request future containing the committed offsets.
 */
// Create and cache the OffsetFetchRequest
private RequestFuture<Map<TopicPartition, OffsetAndMetadata>> sendOffsetFetchRequest(Set<TopicPartition> partitions) {
    Node coordinator = coordinator();
    if (coordinator == null)
        return RequestFuture.coordinatorNotAvailable();

    log.debug("Group {} fetching committed offsets for partitions: {}", groupId, partitions);
    // construct the request
    OffsetFetchRequest.Builder requestBuilder =
            new OffsetFetchRequest.Builder(this.groupId, new ArrayList<>(partitions));

    // send the request with a callback
    // handle the response with an OffsetFetchResponseHandler
    return client.send(coordinator, requestBuilder)
            .compose(new OffsetFetchResponseHandler());
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 24, Source: ConsumerCoordinator.java

Example 7: run

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
public void run() {
    try {
        printJson(new StartupComplete());
        consumer.subscribe(Collections.singletonList(topic), this);

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE);
            Map<TopicPartition, OffsetAndMetadata> offsets = onRecordsReceived(records);

            if (!useAutoCommit) {
                if (useAsyncCommit)
                    consumer.commitAsync(offsets, this);
                else
                    commitSync(offsets);
            }
        }
    } catch (WakeupException e) {
        // ignore, we are closing
    } finally {
        consumer.close();
        printJson(new ShutdownComplete());
        shutdownLatch.countDown();
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: VerifiableConsumer.java
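Example 7 depends on an onRecordsReceived(records) helper that is not shown above. A plausible reconstruction (hypothetical, not the actual VerifiableConsumer source) computes the next offset to consume for each partition, i.e. the last record's offset plus one:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// hypothetical sketch of the helper used in Example 7
private Map<TopicPartition, OffsetAndMetadata> onRecordsReceived(ConsumerRecords<String, String> records) {
    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
    for (TopicPartition tp : records.partitions()) {
        List<ConsumerRecord<String, String>> partitionRecords = records.records(tp);
        // ... process each record here ...
        // report the next offset to consume: last consumed offset + 1
        long nextOffset = partitionRecords.get(partitionRecords.size() - 1).offset() + 1;
        offsets.put(tp, new OffsetAndMetadata(nextOffset));
    }
    return offsets;
}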

Example 8: testPutFlush

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Test
public void testPutFlush() {
    HashMap<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
    final String newLine = System.getProperty("line.separator"); 

    // We do not call task.start() since it would override the output stream

    task.put(Arrays.asList(
            new SinkRecord("topic1", 0, null, null, Schema.STRING_SCHEMA, "line1", 1)
    ));
    offsets.put(new TopicPartition("topic1", 0), new OffsetAndMetadata(1L));
    task.flush(offsets);
    assertEquals("line1" + newLine, os.toString());

    task.put(Arrays.asList(
            new SinkRecord("topic1", 0, null, null, Schema.STRING_SCHEMA, "line2", 2),
            new SinkRecord("topic2", 0, null, null, Schema.STRING_SCHEMA, "line3", 1)
    ));
    offsets.put(new TopicPartition("topic1", 0), new OffsetAndMetadata(2L));
    offsets.put(new TopicPartition("topic2", 0), new OffsetAndMetadata(1L));
    task.flush(offsets);
    assertEquals("line1" + newLine + "line2" + newLine + "line3" + newLine, os.toString());
}
 
Developer: wngn123, Project: wngn-jms-kafka, Lines: 24, Source: FileStreamSinkTaskTest.java

Example 9: flush

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Override
public void flush(Map<TopicPartition, OffsetAndMetadata> offsets) {
	if (singleKinesisProducerPerPartition) {
		producerMap.values().forEach(producer -> {
			if (flushSync)
				producer.flushSync();
			else
				producer.flush();
		});
	} else {
		if (flushSync)
			kinesisProducer.flushSync();
		else
			kinesisProducer.flush();
	}
}
 
Developer: awslabs, Project: kinesis-kafka-connector, Lines: 18, Source: AmazonKinesisSinkTask.java

Example 10: testOnCommitChain

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Test
public void testOnCommitChain() {
    List<ConsumerInterceptor<Integer, Integer>> interceptorList = new ArrayList<>();
    // we are testing two different interceptors by configuring the same interceptor differently, which is not
    // how it would be done in KafkaConsumer, but ok for testing interceptor callbacks
    FilterConsumerInterceptor<Integer, Integer> interceptor1 = new FilterConsumerInterceptor<>(filterPartition1);
    FilterConsumerInterceptor<Integer, Integer> interceptor2 = new FilterConsumerInterceptor<>(filterPartition2);
    interceptorList.add(interceptor1);
    interceptorList.add(interceptor2);
    ConsumerInterceptors<Integer, Integer> interceptors = new ConsumerInterceptors<>(interceptorList);

    // verify that onCommit is called for all interceptors in the chain
    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
    offsets.put(tp, new OffsetAndMetadata(0));
    interceptors.onCommit(offsets);
    assertEquals(2, onCommitCount);

    // verify that even if one of the interceptors throws an exception, all interceptors' onCommit are called
    interceptor1.injectOnCommitError(true);
    interceptors.onCommit(offsets);
    assertEquals(4, onCommitCount);

    interceptors.close();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: ConsumerInterceptorsTest.java

Example 11: seekToMissingTransactions

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
private void seekToMissingTransactions(Map<TopicPartition, List<Long>> txByPartition) {
    Map<TopicPartition, Long> timestamps = txByPartition.entrySet().stream()
            .collect(Collectors.toMap(
                    Map.Entry::getKey,
                    entry -> Collections.min(entry.getValue())
            ));
    Map<TopicPartition, OffsetAndTimestamp> foundOffsets = consumer.offsetsForTimes(timestamps);
    Map<TopicPartition, OffsetAndMetadata> toCommit = foundOffsets.entrySet().stream()
            .collect(Collectors.toMap(
                    Map.Entry::getKey,
                    entry -> {
                        // no offset found at/after the timestamp: fall back to offset 0
                        long offset = entry.getValue() != null ? entry.getValue().offset() : 0;
                        return new OffsetAndMetadata(offset);
                    }
            ));
    consumer.commitSync(toCommit);
}
 
Developer: epam, Project: Lagerta, Lines: 18, Source: ReconcilerImpl.java

Example 12: seekToTransaction

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
public void seekToTransaction(DataRecoveryConfig config, long transactionId, KafkaFactory kafkaFactory,
    String groupId) {
    String topic = config.getLocalTopic();
    Properties consumerProperties = PropertiesUtil.propertiesForGroup(config.getConsumerConfig(), groupId);

    try (Consumer<ByteBuffer, ByteBuffer> consumer = kafkaFactory.consumer(consumerProperties)) {
        List<PartitionInfo> partitionInfos = consumer.partitionsFor(topic);
        Map<TopicPartition, Long> seekMap = new HashMap<>(partitionInfos.size());

        for (PartitionInfo partitionInfo : partitionInfos) {
            seekMap.put(new TopicPartition(topic, partitionInfo.partition()), transactionId);
        }
        consumer.assign(seekMap.keySet());
        Map<TopicPartition, OffsetAndTimestamp> foundOffsets = consumer.offsetsForTimes(seekMap);
        Map<TopicPartition, OffsetAndMetadata> offsetsToCommit = new HashMap<>();

        for (Map.Entry<TopicPartition, OffsetAndTimestamp> entry : foundOffsets.entrySet()) {
            if (entry.getValue() != null) {
                offsetsToCommit.put(entry.getKey(), new OffsetAndMetadata(entry.getValue().offset()));
            }
        }
        consumer.commitSync(offsetsToCommit);
    }
}
 
Developer: epam, Project: Lagerta, Lines: 25, Source: PublisherKafkaService.java

Example 13: calculateChangedOffsets

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
public Map<TopicPartition, OffsetAndMetadata> calculateChangedOffsets(List<List<TransactionWrapper>> txToCommit) {
    if (txToCommit.isEmpty()) {
        return Collections.emptyMap();
    }
    Lazy<TopicPartition, MutableLongList> offsetsFromTransactions = calculateOffsetsFromTransactions(txToCommit);
    Collection<TopicPartition> allTopics = new HashSet<>(offsets.keySet());
    allTopics.addAll(offsetsFromTransactions.keySet());
    Map<TopicPartition, OffsetAndMetadata> result = new HashMap<>();
    for (TopicPartition topic : allTopics) {
        OffsetHolder offsetHolder = offsets.get(topic);
        long currentOffset = offsetHolder.getLastDenseOffset();
        long updatedOffset = MergeHelper.mergeWithDenseCompaction(offsetsFromTransactions.get(topic),
            offsetHolder.getSparseCommittedOffsets(), currentOffset);
        if (updatedOffset != INITIAL_SYNC_POINT && updatedOffset != currentOffset) {
            offsetHolder.setLastDenseOffset(updatedOffset);
            result.put(topic, new OffsetAndMetadata(updatedOffset));
        }
    }
    return result;
}
 
Developer: epam, Project: Lagerta, Lines: 21, Source: OffsetCalculator.java

Example 14: commitAsync

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Override
public void commitAsync(final OffsetCommitCallback callback) {
    Retries.tryMe(new IgniteInClosure<RetryRunnableAsyncOnCallback>() {
        @Override
        public void apply(final RetryRunnableAsyncOnCallback retryRunnableAsyncOnCallback) {
            inner.commitAsync(new OffsetCommitCallback() {
                @Override
                public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
                    callback.onComplete(offsets, exception);
                    if (exception != null) {
                        retryRunnableAsyncOnCallback.retry(exception);
                    }
                }
            });
        }
    }, strategy());
}
 
Developer: epam, Project: Lagerta, Lines: 18, Source: ConsumerProxyRetry.java

Example 15: topicSubscription

import org.apache.kafka.clients.consumer.OffsetAndMetadata; // import the required package/class
@Test
public void topicSubscription() {
    state.subscribe(singleton(topic), rebalanceListener);
    assertEquals(1, state.subscription().size());
    assertTrue(state.assignedPartitions().isEmpty());
    assertTrue(state.partitionsAutoAssigned());
    state.assignFromSubscribed(singleton(tp0));
    state.seek(tp0, 1);
    state.committed(tp0, new OffsetAndMetadata(1));
    assertAllPositions(tp0, 1L);
    state.assignFromSubscribed(singleton(tp1));
    assertTrue(state.isAssigned(tp1));
    assertFalse(state.isAssigned(tp0));
    assertFalse(state.isFetchable(tp1));
    assertEquals(singleton(tp1), state.assignedPartitions());
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 17, Source: SubscriptionStateTest.java


Note: The org.apache.kafka.clients.consumer.OffsetAndMetadata class examples in this article were compiled by 纯净天空 from GitHub/MSDocs and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by many developers; copyright of the source code remains with the original authors, and any distribution or use should comply with the corresponding project's license. Please do not reproduce without permission.