

Python Queue.get_nowait Method Code Examples

This article collects typical usage examples of the Python method eventlet.queue.Queue.get_nowait. If you are unsure what Queue.get_nowait does or how to call it, the examples selected below should help; they are also a good starting point for exploring other uses of eventlet.queue.Queue.


Two code examples of the Queue.get_nowait method are shown below, ordered by popularity.
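
As a quick illustration before the full examples: get_nowait returns an item immediately if one is available and raises eventlet.queue.Empty otherwise. The following standalone sketch (not taken from either project below) shows both paths:

from eventlet.queue import Queue, Empty

q = Queue()
q.put("hello")
print(q.get_nowait())    # -> "hello", returned without blocking
try:
    q.get_nowait()       # queue is now empty
except Empty:
    print("no item ready; get_nowait never blocks")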

Example 1: EventletInbox

# Required import: from eventlet.queue import Queue [as an alias]
# Method used: eventlet.queue.Queue.get_nowait [as an alias]
from logging import getLogger

from eventlet.queue import Empty as EventletEmpty
from eventlet.queue import Queue as EventletQueue

# EmptyInboxException is defined elsewhere in the pyactors project;
# the import path below is assumed.
from pyactors.exceptions import EmptyInboxException


class EventletInbox(object):
    def __init__(self, logger=None):
        ''' Create an inbox backed by an eventlet queue. '''

        self.__inbox = EventletQueue()

        if logger is None:
            self._logger = getLogger('%s.EventletInbox' % __name__)
        else:
            self._logger = logger

    def get(self):
        ''' get the next message; raise EmptyInboxException when empty '''
        try:
            result = self.__inbox.get_nowait()
        except EventletEmpty:
            raise EmptyInboxException
        return result

    def put(self, message):
        ''' put a message into the inbox '''

        self.__inbox.put(message)

    def __len__(self):
        ''' return length of inbox '''

        return self.__inbox.qsize()
Author: snakeego | Project: pyactors | Source: event.py
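
A brief usage sketch for the inbox above (hypothetical; it assumes EmptyInboxException is importable from pyactors as in the example's imports):

inbox = EventletInbox()
inbox.put({"cmd": "ping"})
print(len(inbox))        # -> 1, via Queue.qsize()
print(inbox.get())       # -> {'cmd': 'ping'}
try:
    inbox.get()          # inbox is now empty
except EmptyInboxException:
    print("inbox drained")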

Example 2: Crawler

# Required import: from eventlet.queue import Queue [as an alias]
# Method used: eventlet.queue.Queue.get_nowait [as an alias]
import json
import sys

import eventlet
from eventlet import GreenPool, greenthread, sleep, spawn
from eventlet.queue import Empty, Queue

# `io`, `PoolMap`, `reraise_errors`, and `log` are helpers from the heroshi
# codebase; their imports are omitted because this snippet is partial.


class Crawler(object):
    def __init__(self, max_connections, input_is_plain):
        self.max_connections = max_connections
        self.input_is_plain = input_is_plain

        self.queue = Queue(1)
        self.closed = False
        self._handler_pool = GreenPool(self.max_connections)
        self._robots_cache = PoolMap(self.get_robots_checker, pool_max_size=1, timeout=600)

        # Start the IO worker; if it dies, the linked exception kills us too.
        self.io_worker = io.Worker(lambda: self.closed)
        t = spawn(self.io_worker.run_loop)
        t.link(reraise_errors, greenthread.getcurrent())

        log.debug(u"Crawler started. Max connections: %d.", self.max_connections)

    def crawl(self, forever=True):
        # TODO: do something special about signals?

        if forever:
            self.start_queue_updater()

        while not self.closed:
            # `get_nowait` only works together with sleep(0) here because we
            # need a greenlet switch to reraise exceptions from `do_process`.
            sleep()
            try:
                item = self.queue.get_nowait()
            except Empty:
                if not forever:
                    self.graceful_stop()
                sleep(0.01)
                continue
            t = self._handler_pool.spawn(self.do_process, item)
            t.link(reraise_errors, greenthread.getcurrent())

    def stop(self):
        self.closed = True

    def graceful_stop(self, timeout=None):
        """Stop the crawler and wait for already started crawling requests to finish.

        If `timeout` is supplied, wait at most `timeout` seconds;
        return True if that time was enough to finish, False otherwise.
        """
        self.closed = True
        if timeout is not None:
            with eventlet.Timeout(timeout, False):
                if hasattr(self, "_queue_updater_thread"):
                    self._queue_updater_thread.kill()
                self._handler_pool.waitall()
                return True
            return False
        else:
            if hasattr(self, "_queue_updater_thread"):
                self._queue_updater_thread.kill()
            self._handler_pool.waitall()

    def start_queue_updater(self):
        self._queue_updater_thread = spawn(self.queue_updater)
        self._queue_updater_thread.link(reraise_errors, greenthread.getcurrent())

    def queue_updater(self):
        log.debug("Waiting for crawl jobs on stdin.")
        for line in sys.stdin:
            if self.closed:
                break

            line = line.strip()

            if self.input_is_plain:
                job = {"url": line}
            else:
                try:
                    job = json.loads(line)
                except ValueError:
                    log.error(u"Decoding input line: %s", line)
                    continue

            # extend worker queue
            # 1. skip duplicate URLs
            for queue_item in self.queue.queue:
                if queue_item["url"] == job["url"]:  # compare URLs
                    break
            else:
                # 2. extend queue with new items
                # May block here, when queue is full. This is a feature.
                self.queue.put(job)

        # Stdin exhausted -> stop.
        while not self.queue.empty():
            sleep(0.01)

        sleep(2)  # FIXME: Crutch to prevent stopping too early.

        self.graceful_stop()

    def get_robots_checker(self, scheme, authority):
#......... (part of the code is omitted here) .........
Author: GunioRobot | Project: heroshi | Source: Crawler.py
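
The key pattern in Crawler.crawl above is pairing sleep() with get_nowait: yielding to the eventlet hub before each poll gives linked greenthreads a chance to reraise exceptions into this loop. A minimal standalone sketch of that pattern (names here are illustrative, not from heroshi):

import eventlet
from eventlet.queue import Queue, Empty

queue = Queue(1)
queue.put({"url": "http://example.com/"})

while True:
    eventlet.sleep()          # yield to the hub, like sleep(0) in crawl()
    try:
        item = queue.get_nowait()
    except Empty:
        break                 # a long-running crawler would sleep and retry
    print("processing", item["url"])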


Note: the eventlet.queue.Queue.get_nowait examples in this article were collected from open-source projects hosted on platforms such as GitHub. The code snippets remain the copyright of their original authors; distribution and use should follow the corresponding project licenses.