This page collects typical usage examples of the Python method Crawler.Crawler.process_q. If you are wondering what Crawler.process_q does, how to call it, or want to see it used in real code, the curated examples below should help. You can also read more about the containing class, Crawler.Crawler.
Shown below is 1 code example of Crawler.process_q, sorted by popularity by default. You can upvote examples you like or find useful; your feedback helps surface better Python code samples.
Example 1: test_crawl_limit
# Required import: from Crawler import Crawler [as alias]
# Or: from Crawler.Crawler import process_q [as alias]
import unittest
from unittest import mock

from Crawler import Crawler


class CrawlerTest(unittest.TestCase):  # illustrative TestCase wrapper; the original class name is not shown

    def test_crawl_limit(self):
        c = Crawler("http://a.com")
        c.SLEEP_TIME = 0  # avoid real sleeping between URLs

        # Each call to the mocked _process_next_url consumes one queued URL.
        def side_effect():
            c.process_q.pop(0)

        c._process_next_url = mock.Mock(side_effect=side_effect)
        c.render_sitemap = mock.Mock()

        # Queue shorter than the limit: every queued URL is processed.
        c.URL_LIMIT = 10
        c.process_q = ["test"] * 5
        c.crawl()
        self.assertEqual(c._process_next_url.call_count, 5)

        # Queue longer than the limit: processing stops at URL_LIMIT.
        c._process_next_url.call_count = 0
        c.process_q = ["test"] * 10
        c.URL_LIMIT = 5
        c.crawl()
        self.assertEqual(c._process_next_url.call_count, 5)

        # No limit: the whole queue is processed.
        c._process_next_url.call_count = 0
        c.process_q = ["test"] * 10
        c.URL_LIMIT = float("inf")
        c.crawl()
        self.assertEqual(c._process_next_url.call_count, 10)
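
The test above only checks the contract of crawl(): keep calling _process_next_url while process_q is non-empty and fewer than URL_LIMIT URLs have been processed, then render the sitemap. For reference, here is a minimal sketch of a Crawler skeleton that would satisfy that contract; it is inferred from the test alone and is not the actual implementation of the Crawler class.

import time


class Crawler:
    """Hypothetical Crawler skeleton inferred from the test above."""

    SLEEP_TIME = 1
    URL_LIMIT = float("inf")

    def __init__(self, start_url):
        self.start_url = start_url
        self.process_q = [start_url]  # URLs still waiting to be processed

    def _process_next_url(self):
        # Placeholder: the real method would fetch the URL, extract links,
        # and append newly discovered URLs to process_q. Here we just
        # consume the queue so the loop terminates.
        self.process_q.pop(0)

    def render_sitemap(self):
        # Placeholder: the real method would write the collected URLs out.
        pass

    def crawl(self):
        processed = 0
        while self.process_q and processed < self.URL_LIMIT:
            self._process_next_url()
            processed += 1
            time.sleep(self.SLEEP_TIME)
        self.render_sitemap()

With this skeleton, the loop condition is exactly what the test asserts: crawl() stops after min(len(process_q), URL_LIMIT) calls to _process_next_url when each call removes one queued URL.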