This article compiles typical usage examples of the Java class io.vertx.kafka.client.common.TopicPartition. If you are wondering what the TopicPartition class does and how to use it, the selected code examples below may help.
The TopicPartition class belongs to the io.vertx.kafka.client.common package. Fifteen code examples are shown below, ordered by popularity by default.
Example 1: exampleConsumerAssignPartition
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
/**
* Example showing how a Kafka consumer receives messages
* from a topic by requesting a specific partition of it
* @param consumer the consumer to use
*/
public void exampleConsumerAssignPartition(KafkaConsumer<String, String> consumer) {
// register the handler for incoming messages
consumer.handler(record -> {
System.out.println("key=" + record.key() + ",value=" + record.value() +
",partition=" + record.partition() + ",offset=" + record.offset());
});
// build the set of partitions to request
Set<TopicPartition> topicPartitions = new HashSet<>();
topicPartitions.add(new TopicPartition()
.setTopic("test")
.setPartition(0));
// requesting to be assigned the specific partition
consumer.assign(topicPartitions, done -> {
if (done.succeeded()) {
System.out.println("Partition assigned");
// requesting the assigned partitions
consumer.assignment(done1 -> {
if (done1.succeeded()) {
for (TopicPartition topicPartition : done1.result()) {
System.out.println(topicPartition.getTopic() + " " + topicPartition.getPartition());
}
}
});
}
});
}
Example 2: exampleSeek
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
public void exampleSeek(KafkaConsumer<String, String> consumer) {
TopicPartition topicPartition = new TopicPartition()
.setTopic("test")
.setPartition(0);
// seek to a specific offset
consumer.seek(topicPartition, 10, done -> {
if (done.succeeded()) {
System.out.println("Seeking done");
}
});
}
Example 3: exampleSeekToBeginning
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
public void exampleSeekToBeginning(KafkaConsumer<String, String> consumer) {
TopicPartition topicPartition = new TopicPartition()
.setTopic("test")
.setPartition(0);
// seek to the beginning of the partition
consumer.seekToBeginning(Collections.singleton(topicPartition), done -> {
if (done.succeeded()) {
System.out.println("Seeking done");
}
});
}
Example 4: exampleSeekToEnd
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
/**
* Example showing how a Kafka consumer can seek within a partition,
* changing the offset from which it starts reading messages
* @param consumer the consumer to use
*/
public void exampleSeekToEnd(KafkaConsumer<String, String> consumer) {
TopicPartition topicPartition = new TopicPartition()
.setTopic("test")
.setPartition(0);
// seek to the end of the partition
consumer.seekToEnd(Collections.singleton(topicPartition), done -> {
if (done.succeeded()) {
System.out.println("Seeking done");
}
});
}
Example 5: exampleConsumerOffsetsForTimes
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
/**
* Example to demonstrate how one can use the new offsetsForTimes API (introduced with Kafka
* 0.10.1.1) to look up an offset by timestamp, i.e. search parameter is an epoch timestamp and
* the call returns the lowest offset with ingestion timestamp >= given timestamp.
* @param consumer Consumer to be used
*/
public void exampleConsumerOffsetsForTimes(KafkaConsumer<String, String> consumer) {
Map<TopicPartition, Long> topicPartitionsWithTimestamps = new HashMap<>();
TopicPartition topicPartition = new TopicPartition().setTopic("test").setPartition(0);
// We are interested in the offset for data ingested 60 seconds ago
long timestamp = (System.currentTimeMillis() - 60000);
topicPartitionsWithTimestamps.put(topicPartition, timestamp);
consumer.offsetsForTimes(topicPartitionsWithTimestamps, done -> {
if(done.succeeded()) {
Map<TopicPartition, OffsetAndTimestamp> results = done.result();
results.forEach((topic, offset) ->
System.out.println("Offset for topic="+topic.getTopic()+
", partition="+topic.getPartition()+"\n"+
", timestamp="+timestamp+", offset="+offset.getOffset()+
", offsetTimestamp="+offset.getTimestamp()));
}
});
// Convenience method for single-partition lookup
consumer.offsetsForTimes(topicPartition, timestamp, done -> {
if(done.succeeded()) {
OffsetAndTimestamp offsetAndTimestamp = done.result();
System.out.println("Offset for topic="+topicPartition.getTopic()+
", partition="+topicPartition.getPartition()+"\n"+
", timestamp="+timestamp+", offset="+offsetAndTimestamp.getOffset()+
", offsetTimestamp="+offsetAndTimestamp.getTimestamp());
}
});
}
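The lookup contract described above ("the lowest offset with ingestion timestamp >= the given timestamp") can be sketched with a plain-Java, in-memory model. This is only an illustration of the semantics, not part of the Vert.x or Kafka API; the class and method names here are hypothetical:

```java
// In-memory sketch of the offsetsForTimes contract: timestamps[i] holds the
// (non-decreasing) ingestion timestamp of the record at offset i.
public class OffsetsForTimesSketch {

    // Returns the lowest offset whose timestamp is >= target,
    // or -1 when no such record exists (the "null result" case in Example 11).
    public static long lowestOffsetFor(long[] timestamps, long target) {
        for (int offset = 0; offset < timestamps.length; offset++) {
            if (timestamps[offset] >= target) {
                return offset;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        long[] ts = {100L, 200L, 300L, 400L};
        System.out.println(lowestOffsetFor(ts, 250L)); // prints 2
        System.out.println(lowestOffsetFor(ts, 500L)); // prints -1
    }
}
```

Note that a timestamp falling between two records resolves to the next record, which is why the real API returns both the offset and that record's own timestamp.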
Example 6: exampleConsumerFlowControl
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
/**
* Example showing how a Kafka consumer can pause reading from a topic partition
* and then resume in order to keep receiving messages
* @param vertx the Vert.x instance
* @param consumer the consumer to use
*/
public void exampleConsumerFlowControl(Vertx vertx, KafkaConsumer<String, String> consumer) {
TopicPartition topicPartition = new TopicPartition()
.setTopic("test")
.setPartition(0);
// registering the handler for incoming messages
consumer.handler(record -> {
System.out.println("key=" + record.key() + ",value=" + record.value() +
",partition=" + record.partition() + ",offset=" + record.offset());
// e.g. pause/resume on partition 0, after reading messages up to offset 5
if ((record.partition() == 0) && (record.offset() == 5)) {
// pause the read operations
consumer.pause(topicPartition, ar -> {
if (ar.succeeded()) {
System.out.println("Paused");
// resume read operation after a specific time
vertx.setTimer(5000, timeId -> {
// resume read operations
consumer.resume(topicPartition);
});
}
});
}
});
}
Example 7: paused
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
@Override
public void paused(Handler<AsyncResult<Set<TopicPartition>>> handler) {
this.stream.paused(done -> {
if (done.succeeded()) {
handler.handle(Future.succeededFuture(Helper.from(done.result())));
} else {
handler.handle(Future.failedFuture(done.cause()));
}
});
}
Example 8: assignment
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
@Override
public KafkaConsumer<K, V> assignment(Handler<AsyncResult<Set<TopicPartition>>> handler) {
this.stream.assignment(done -> {
if (done.succeeded()) {
handler.handle(Future.succeededFuture(Helper.from(done.result())));
} else {
handler.handle(Future.failedFuture(done.cause()));
}
});
return this;
}
Example 9: commit
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
@Override
public void commit(Map<TopicPartition, OffsetAndMetadata> offsets, Handler<AsyncResult<Map<TopicPartition, OffsetAndMetadata>>> completionHandler) {
this.stream.commit(Helper.to(offsets), done -> {
if (done.succeeded()) {
completionHandler.handle(Future.succeededFuture(Helper.from(done.result())));
} else {
completionHandler.handle(Future.failedFuture(done.cause()));
}
});
}
Example 10: committed
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
@Override
public void committed(TopicPartition topicPartition, Handler<AsyncResult<OffsetAndMetadata>> handler) {
this.stream.committed(Helper.to(topicPartition), done -> {
if (done.succeeded()) {
handler.handle(Future.succeededFuture(Helper.from(done.result())));
} else {
handler.handle(Future.failedFuture(done.cause()));
}
});
}
Example 11: offsetsForTimes
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
@Override
public void offsetsForTimes(TopicPartition topicPartition, Long timestamp, Handler<AsyncResult<OffsetAndTimestamp>> handler) {
Map<TopicPartition, Long> topicPartitions = new HashMap<>();
topicPartitions.put(topicPartition, timestamp);
this.stream.offsetsForTimes(Helper.toTopicPartitionTimes(topicPartitions), done -> {
if(done.succeeded()) {
if (done.result().values().size() == 1) {
org.apache.kafka.common.TopicPartition kTopicPartition = new org.apache.kafka.common.TopicPartition(topicPartition.getTopic(), topicPartition.getPartition());
org.apache.kafka.clients.consumer.OffsetAndTimestamp offsetAndTimestamp = done.result().get(kTopicPartition);
if(offsetAndTimestamp != null) {
OffsetAndTimestamp resultOffsetAndTimestamp = new OffsetAndTimestamp(offsetAndTimestamp.offset(), offsetAndTimestamp.timestamp());
handler.handle(Future.succeededFuture(resultOffsetAndTimestamp));
}
else {
// offsetAndTimestamp is null, i.e. the search by timestamp did not lead to a result
handler.handle(Future.succeededFuture());
}
} else if (done.result().values().size() == 0) {
handler.handle(Future.succeededFuture());
} else {
handler.handle(Future.failedFuture("offsetsForTimes should return exactly one OffsetAndTimestamp"));
}
} else {
handler.handle(Future.failedFuture(done.cause()));
}
});
}
Example 12: beginningOffsets
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
@Override
public void beginningOffsets(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Map<TopicPartition, Long>>> handler) {
this.stream.beginningOffsets(Helper.to(topicPartitions), done -> {
if(done.succeeded()) {
handler.handle(Future.succeededFuture(Helper.fromTopicPartitionOffsets(done.result())));
} else {
handler.handle(Future.failedFuture(done.cause()));
}
});
}
Example 13: endOffsets
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
@Override
public void endOffsets(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Map<TopicPartition, Long>>> handler) {
this.stream.endOffsets(Helper.to(topicPartitions), done -> {
if(done.succeeded()) {
handler.handle(Future.succeededFuture(Helper.fromTopicPartitionOffsets(done.result())));
} else {
handler.handle(Future.failedFuture(done.cause()));
}
});
}
Example 14: adaptHandler
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
public static Handler<Set<org.apache.kafka.common.TopicPartition>> adaptHandler(Handler<Set<TopicPartition>> handler) {
if (handler != null) {
return topicPartitions -> handler.handle(Helper.from(topicPartitions));
} else {
return null;
}
}
Example 15: fromTopicPartitionOffsetAndTimestamp
import io.vertx.kafka.client.common.TopicPartition; // import the required package/class
public static Map<TopicPartition, OffsetAndTimestamp> fromTopicPartitionOffsetAndTimestamp(Map<org.apache.kafka.common.TopicPartition, org.apache.kafka.clients.consumer.OffsetAndTimestamp> topicPartitionOffsetAndTimestamps) {
return topicPartitionOffsetAndTimestamps.entrySet().stream()
.filter(e -> e.getValue() != null)
.collect(Collectors.toMap(
e -> new TopicPartition(e.getKey().topic(), e.getKey().partition()),
e -> new OffsetAndTimestamp(e.getValue().offset(), e.getValue().timestamp()))
);
}