

Python ConsumerCoordinator.ensure_coordinator_known Method Code Examples

This article collects typical usage examples of the Python method kafka.coordinator.consumer.ConsumerCoordinator.ensure_coordinator_known. If you are unsure how ConsumerCoordinator.ensure_coordinator_known is used in practice, the selected code example below may help. You can also explore further usage examples of kafka.coordinator.consumer.ConsumerCoordinator, the class this method belongs to.


The following shows 1 code example of the ConsumerCoordinator.ensure_coordinator_known method, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code examples.

Example 1: KafkaConsumer

# Required import: from kafka.coordinator.consumer import ConsumerCoordinator [as alias]
# Or: from kafka.coordinator.consumer.ConsumerCoordinator import ensure_coordinator_known [as alias]

#......... part of the code omitted here .........

        # poll for new data until the timeout expires
        start = time.time()
        remaining = timeout_ms
        while True:
            records = self._poll_once(remaining)
            if records:
                # before returning the fetched records, we can send off the
                # next round of fetches and avoid blocking on their responses,
                # enabling pipelining while the user is handling the fetched
                # records.
                self._fetcher.init_fetches()
                return records

            elapsed_ms = (time.time() - start) * 1000
            remaining = timeout_ms - elapsed_ms

            if remaining <= 0:
                return {}

    def _poll_once(self, timeout_ms):
        """
        Do one round of polling. In addition to checking for new data, this does
        any needed heart-beating, auto-commits, and offset updates.

        Arguments:
            timeout_ms (int): The maximum time in milliseconds to block

        Returns:
            dict: map of topic to list of records (may be empty)
        """
        if self.config["api_version"] >= (0, 8, 2):
            # TODO: Sub-requests should take into account the poll timeout (KAFKA-1894)
            self._coordinator.ensure_coordinator_known()

        if self.config["api_version"] >= (0, 9):
            # ensure we have partitions assigned if we expect to
            if self._subscription.partitions_auto_assigned():
                self._coordinator.ensure_active_group()

        # fetch positions if we have partitions we're subscribed to that we
        # don't know the offset for
        if not self._subscription.has_all_fetch_positions():
            self._update_fetch_positions(self._subscription.missing_fetch_positions())

        # init any new fetches (won't resend pending fetches)
        records = self._fetcher.fetched_records()

        # if data is available already, e.g. from a previous network client
        # poll() call to commit, then just return it immediately
        if records:
            return records

        self._fetcher.init_fetches()
        self._client.poll(timeout_ms)
        return self._fetcher.fetched_records()

    def position(self, partition):
        """Get the offset of the next record that will be fetched

        Arguments:
            partition (TopicPartition): partition to check
        """
        assert self._subscription.is_assigned(partition)

        offset = self._subscription.assignment[partition].position
Developer ID: sounos, Project: kafka-python, Lines of code: 70, Source file: group.py
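
The poll loop in the snippet above keeps retrying until data arrives or the deadline passes, recomputing the remaining budget on each pass. As a standalone illustration of that elapsed/remaining bookkeeping, here is a minimal sketch (the `poll_until` helper is hypothetical, not part of kafka-python):

```python
import time

def poll_until(fetch_once, timeout_ms):
    """Retry fetch_once until it returns data or timeout_ms elapses.

    Hypothetical helper mirroring the deadline bookkeeping in the
    KafkaConsumer.poll loop above; not part of kafka-python itself.
    """
    start = time.time()
    remaining = timeout_ms
    while True:
        records = fetch_once(remaining)
        if records:
            return records
        # recompute how much of the budget is left before retrying
        elapsed_ms = (time.time() - start) * 1000
        remaining = timeout_ms - elapsed_ms
        if remaining <= 0:
            return {}
```

Note that `time.time()` is used here only to mirror the snippet; for new deadline code, `time.monotonic()` is preferable because it is unaffected by wall-clock adjustments.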

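`_poll_once` gates the coordinator calls on the broker API version using plain tuple comparison: `ensure_coordinator_known` requires brokers speaking 0.8.2+, while `ensure_active_group` (dynamic group membership) requires 0.9+. A minimal sketch of that gating logic (`coordinator_steps` is a hypothetical helper for illustration only):

```python
def coordinator_steps(api_version):
    """Return which coordinator calls _poll_once would make for a
    given broker api_version tuple (illustrative helper only)."""
    steps = []
    if api_version >= (0, 8, 2):
        # group coordinator lookup is available from 0.8.2 on
        steps.append("ensure_coordinator_known")
    if api_version >= (0, 9):
        # dynamic group membership / auto partition assignment needs 0.9+
        steps.append("ensure_active_group")
    return steps
```

Tuple comparison makes the version checks read naturally: `(0, 9) >= (0, 8, 2)` is true because the second element already decides the ordering.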

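Another pattern worth noting in `_poll_once` is the "serve from cache first" shape: it returns any records already buffered by the fetcher (for example, delivered during an earlier network-client poll) before issuing new fetch requests. That shape can be sketched generically as follows (`CachedFetcher` and `poll_once` are hypothetical stand-ins, not kafka-python classes):

```python
class CachedFetcher:
    """Illustrative stand-in for the fetcher used by _poll_once:
    buffers records from a source and hands them out once."""

    def __init__(self, source):
        self._buffer = {}
        self._source = source  # callable producing a records dict

    def fetched_records(self):
        # drain and return whatever is currently buffered
        records, self._buffer = self._buffer, {}
        return records

    def init_fetches(self):
        # simulate issuing fetch requests that fill the buffer
        self._buffer.update(self._source())

def poll_once(fetcher):
    records = fetcher.fetched_records()
    if records:  # data left over from a previous poll: return it at once
        return records
    fetcher.init_fetches()  # otherwise trigger new fetches, then retry
    return fetcher.fetched_records()
```

The payoff of this ordering is latency: data that is already in hand is never delayed behind a fresh network round trip.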
Note: The kafka.coordinator.consumer.ConsumerCoordinator.ensure_coordinator_known examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as Github/MSDocs. The snippets were selected from open-source projects contributed by their authors; copyright of the source code remains with the original authors. Please refer to each project's License before distributing or using the code, and do not republish without permission.