This article collects typical usage examples of the Python method kafka.coordinator.consumer.ConsumerCoordinator.commit_offsets_async. If you are unsure what ConsumerCoordinator.commit_offsets_async does, how to call it, or what its usage looks like in practice, the selected code samples below may help. You can also explore further usage examples of the containing class, kafka.coordinator.consumer.ConsumerCoordinator.
The following presents one code example of the ConsumerCoordinator.commit_offsets_async method.
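Before the full class listing, a minimal usage sketch may help show how the method is normally reached from application code: in kafka-python, callers use KafkaConsumer.commit_async, which delegates to ConsumerCoordinator.commit_offsets_async internally. The broker address, topic name, group id, and the process() helper below are assumptions made for illustration, not values taken from the example.

# Minimal usage sketch; broker address, topic, group id, and process() are assumed.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers="localhost:9092",
    group_id="demo-group",
    enable_auto_commit=False,  # commit manually via commit_async()/commit()
)

def on_commit(offsets, response):
    # response is either an OffsetCommitResponse struct or an Exception
    if isinstance(response, Exception):
        print("Async commit failed:", response)

for message in consumer:
    process(message)  # hypothetical application-level handler
    # KafkaConsumer.commit_async delegates to ConsumerCoordinator.commit_offsets_async
    consumer.commit_async(callback=on_commit)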
Example 1: KafkaConsumer
# Required import: from kafka.coordinator.consumer import ConsumerCoordinator [as alias]
# Alternatively: from kafka.coordinator.consumer.ConsumerCoordinator import commit_offsets_async [as alias]
#.........part of the code omitted here.........
        try:
            self.config["value_deserializer"].close()
        except AttributeError:
            pass
        log.debug("The KafkaConsumer has closed.")
    def commit_async(self, offsets=None, callback=None):
        """Commit offsets to kafka asynchronously, optionally firing callback

        This commits offsets only to Kafka. The offsets committed using this API
        will be used on the first fetch after every rebalance and also on
        startup. As such, if you need to store offsets in anything other than
        Kafka, this API should not be used.

        This is an asynchronous call and will not block. Any errors encountered
        are either passed to the callback (if provided) or discarded.

        Arguments:
            offsets (dict, optional): {TopicPartition: OffsetAndMetadata} dict
                to commit with the configured group_id. Defaults to current
                consumed offsets for all subscribed partitions.
            callback (callable, optional): called as callback(offsets, response)
                with response as either an Exception or an OffsetCommitResponse
                struct. This callback can be used to trigger custom actions when
                a commit request completes.

        Returns:
            kafka.future.Future
        """
        assert self.config["api_version"] >= (0, 8, 1)
        if offsets is None:
            offsets = self._subscription.all_consumed_offsets()
        log.debug("Committing offsets: %s", offsets)
        future = self._coordinator.commit_offsets_async(offsets, callback=callback)
        return future
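    # Usage sketch (not part of the original kafka-python source): committing an
    # explicit {TopicPartition: OffsetAndMetadata} dict, as the docstring above
    # describes. Topic name, partition, and offset value are assumptions, and the
    # OffsetAndMetadata field layout can differ between kafka-python versions.
    #
    #     from kafka import TopicPartition
    #     from kafka.structs import OffsetAndMetadata
    #
    #     tp = TopicPartition("my-topic", 0)
    #     consumer.commit_async({tp: OffsetAndMetadata(42, None)})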
    def commit(self, offsets=None):
        """Commit offsets to kafka, blocking until success or error

        This commits offsets only to Kafka. The offsets committed using this API
        will be used on the first fetch after every rebalance and also on
        startup. As such, if you need to store offsets in anything other than
        Kafka, this API should not be used.

        Blocks until either the commit succeeds or an unrecoverable error is
        encountered (in which case it is thrown to the caller).

        Currently only supports kafka-topic offset storage (not zookeeper)

        Arguments:
            offsets (dict, optional): {TopicPartition: OffsetAndMetadata} dict
                to commit with the configured group_id. Defaults to current
                consumed offsets for all subscribed partitions.
        """
        assert self.config["api_version"] >= (0, 8, 1)
        if offsets is None:
            offsets = self._subscription.all_consumed_offsets()
        self._coordinator.commit_offsets_sync(offsets)
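    # Usage sketch (not part of the original kafka-python source): a blocking commit
    # after one poll() iteration; commit() raises unrecoverable errors to the caller
    # instead of passing them to a callback. The handle() helper is an assumption.
    #
    #     records = consumer.poll(timeout_ms=1000)
    #     for tp, messages in records.items():
    #         handle(messages)
    #     consumer.commit()  # blocks until the broker acknowledges the commit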
    def committed(self, partition):
        """Get the last committed offset for the given partition

        This offset will be used as the position for the consumer
        in the event of a failure.

        This call may block to do a remote call if the partition in question