This article collects typical usage examples of Response.meta['user_agent'] from Python's scrapy.http module. If you have been wondering what Response.meta['user_agent'] does and how it is used in practice, the curated example below may help. Strictly speaking, meta['user_agent'] is not a method of its own: it is a key lookup on the meta attribute of the scrapy.http.Response class, whose other usage examples you can explore as well.
One code example of Response.meta['user_agent'] is shown below.
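Before the example, a quick orientation on what is actually being exercised. In Scrapy, Response.meta is a property that returns the meta dict of the Request attached to the response, so response.meta['user_agent'] reads and writes the same dict as request.meta['user_agent']. The following minimal sketch (not taken from the example below; the URL and user-agent strings are made up) illustrates this:

# Minimal sketch: Response.meta proxies the attached Request's meta dict.
from scrapy.http import Request, Response

request = Request('http://example.com', meta={'user_agent': 'Firefox/11.0'})
response = Response('http://example.com', request=request)

# Both names refer to the same dict object.
assert response.meta is request.meta
print(response.meta['user_agent'])   # -> 'Firefox/11.0'

# Writing through response.meta is visible on the request as well.
response.meta['user_agent'] = 'Chrome/99.0'
assert request.meta['user_agent'] == 'Chrome/99.0'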
Example 1: test_spider_crawls_links
# Required import: from scrapy.http import Response
# The example accesses meta['user_agent'] on that Response class
def test_spider_crawls_links(spider, scrape_request, html_headers,
                             mock_html_twolinks):
    """Ensure spider always picks up relevant links to HTML pages"""
    # Use only 1 user agent for easier counting
    ua = factories.BatchUserAgentFactory.build(ua_string='Firefox / 11.0')
    spider.batch_user_agents = [ua]
    # Generate a mock response based on html containing two links
    mock_response = Response('http://test:12345',
                             body=mock_html_twolinks)
    mock_response.request = scrape_request
    mock_response.headers = html_headers
    mock_response.meta['user_agent'] = ua
    mock_response.status = 200
    mock_response.encoding = u'utf-8'
    mock_response.flags = []
    # Call spider on the mock response
    pipeline_generator = spider.parse(mock_response)
    # Assert that we got the expected set of new requests generated in the
    # spider and nothing else
    sites_expected = set([
        mock_response.url + '/link1.html',
        mock_response.url + '/link2.html',
    ])
    sites_collected = []
    for new_request in pipeline_generator:
        if isinstance(new_request, Request):
            sites_collected.append(new_request.url)
        else:
            pass
    assert sites_expected == set(sites_collected)
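One caveat worth noting about the example: scrapy.http.Response is the base class and has no HTML parsing support (css()/xpath() and real encoding handling live on TextResponse/HtmlResponse), so mock_response.encoding = u'utf-8' above merely sets a plain attribute. If the spider's parse() relied on selectors, a variant of the mock built on HtmlResponse might look like the sketch below (a hypothetical helper; the fixtures scrape_request, html_headers and mock_html_twolinks are assumed to be the same as in the example above):

# Hypothetical variant using HtmlResponse so that selectors work in parse().
from scrapy.http import HtmlResponse

def make_mock_response(scrape_request, html_headers, mock_html_twolinks, ua):
    # Response.meta proxies request.meta, so store the user agent there.
    scrape_request.meta['user_agent'] = ua
    return HtmlResponse(
        'http://test:12345',
        status=200,
        headers=html_headers,
        body=mock_html_twolinks,
        encoding='utf-8',
        request=scrape_request,
    )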