This article collects typical usage examples of the scraper.Scraper.scrap_offers_list_max_page method in Python. If you are wondering how Scraper.scrap_offers_list_max_page is actually used, the curated example below may help. You can also explore further usage examples of the containing class, scraper.Scraper.
One code example of the Scraper.scrap_offers_list_max_page method is shown below.
Example 1: geocode_city_country
# Required import: from scraper import Scraper [as alias]
# Or: from scraper.Scraper import scrap_offers_list_max_page [as alias]
from multiprocessing import cpu_count
from multiprocessing.pool import ThreadPool

import geocoder

from scraper import Scraper

# memory, settings, session and Offer come from the surrounding project
# and are not shown in this example.

def fetch_and_process_offer(offer_id):
    # The original snippet starts mid-function; this definition and the
    # scrap_offer() call are reconstructed assumptions.
    offer = Scraper.scrap_offer(offer_id)
    if not offer:
        return
    if not offer.get('location'):
        # Lowercase city/country so repeated lookups hit the geocode cache
        location = geocode_city_country(offer['city'].lower(), offer['country'].lower())
        offer['location'] = location or None
    Offer.upsert(session, offer)

@memory.cache()
def geocode_city_country(city, country):
    location = '{city}, {country}'.format(city=city, country=country)
    return geocoder.google(location, key=settings.API_KEY_GOOGLE_GEOCODER).wkt

# 1) Fetch the number of pages of offers
print("Fetching total info...")
max_page = Scraper.scrap_offers_list_max_page()
print("Total %d pages to fetch" % max_page)

# 2) Fetch the list of offer ids, one listing page per worker thread
thread_pool = ThreadPool(processes=cpu_count())
offer_ids = thread_pool.map(Scraper.scrap_offers_list_page, range(1, max_page + 1))
thread_pool.close()
thread_pool.join()
offer_ids = list(sum(offer_ids, []))  # Flatten the per-page lists of ids
print("OFFER IDS", len(offer_ids), offer_ids)

# 3) Fetch all the offers
thread_pool = ThreadPool(processes=cpu_count())
offers = thread_pool.map(fetch_and_process_offer, offer_ids)
thread_pool.close()
thread_pool.join()
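The Scraper class itself is not shown on this page. As a rough illustration only, scrap_offers_list_max_page presumably downloads the first listing page and reads the highest page number out of the pagination links. Below is a minimal sketch assuming a requests/BeautifulSoup based scraper; OFFERS_LIST_URL and the '.pagination a' selector are hypothetical placeholders, not the real project's code.

import requests
from bs4 import BeautifulSoup

OFFERS_LIST_URL = 'https://example.com/offers?page={page}'  # hypothetical URL

class Scraper(object):
    @staticmethod
    def scrap_offers_list_max_page():
        # Fetch the first listing page and return the largest page number
        # found among the pagination links (1 if there is no pagination).
        html = requests.get(OFFERS_LIST_URL.format(page=1)).text
        soup = BeautifulSoup(html, 'html.parser')
        pages = [int(a.get_text()) for a in soup.select('.pagination a')
                 if a.get_text().strip().isdigit()]
        return max(pages) if pages else 1

Returning an int here is what the calling code above expects: it drives range(1, max_page + 1), which hands one listing page to each worker in the thread pool.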