This article collects typical usage examples of Person.extras['_scraped_name'] from the Python pupa.scrape module. If you are unsure what Person.extras['_scraped_name'] does, how to use it, or what working code looks like, the selected example below may help. Note that extras is a plain dict attribute rather than a method, so '_scraped_name' is simply a key stored on it. You can also browse further usage examples of the containing class, pupa.scrape.Person.
One code example of Person.extras['_scraped_name'] is shown below.
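Before the full scraper method in Example 1, here is a minimal, self-contained sketch of the attribute in isolation. Only the Person constructor arguments, the extras dict and the add_source call mirror the pupa.scrape API as used below; the concrete name, district and party values are made-up placeholders.

from pupa.scrape import Person

# Create a Person the same way Example 1 does (the values here are illustrative).
person = Person(primary_org='lower',
                district='1',
                name='Jane Doe',
                party='Republican')

# extras is an ordinary dict for arbitrary metadata; '_scraped_name' is simply a
# key used to preserve the name string exactly as it appeared on the source page.
person.extras['_scraped_name'] = 'Doe, Jane'

person.add_source('http://www.okhouse.gov/Members/Default.aspx')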
Example 1: scrape_lower_chamber
# Required imports:
import re
from pupa.scrape import Person
# Note: this is a method of a pupa Scraper subclass for the Oklahoma House;
# self.lxmlize, self.get_node(s), self._parties, self.scrape_lower_offices and
# proxy_house_url are helpers defined elsewhere in that scraper.
def scrape_lower_chamber(self, term):
    url = "http://www.okhouse.gov/Members/Default.aspx"
    page = self.lxmlize(proxy_house_url(url))

    legislator_nodes = self.get_nodes(
        page,
        '//table[@id="ctl00_ContentPlaceHolder1_RadGrid1_ctl00"]/tbody/tr')

    for legislator_node in legislator_nodes:
        name_node = self.get_node(
            legislator_node,
            './/td[1]/a')

        if name_node is not None:
            name_text = name_node.text.strip()

            # Handle seats with no current representative.
            if re.search(r'District \d+', name_text):
                continue

            last_name, delimiter, first_name = name_text.partition(',')

            # str.partition never returns None, so check the delimiter to see
            # whether the cell really contained a "Last, First" name.
            if delimiter:
                first_name = first_name.strip()
                last_name = last_name.strip()
                name = ' '.join([first_name, last_name])
            else:
                raise ValueError('Unable to parse name: {}'.format(name_text))

            if name.startswith('House District'):
                continue

        district_node = self.get_node(
            legislator_node,
            './/td[3]')

        if district_node is not None:
            district = district_node.text.strip()

        party_node = self.get_node(
            legislator_node,
            './/td[4]')

        if party_node is not None:
            party_text = party_node.text.strip()

        party = self._parties[party_text]

        legislator_url = 'http://www.okhouse.gov/District.aspx?District=' + district
        legislator_page = self.lxmlize(proxy_house_url(legislator_url))

        photo_url = self.get_node(
            legislator_page,
            '//a[@id="ctl00_ContentPlaceHolder1_imgHiRes"]/@href')

        person = Person(primary_org='lower',
                        district=district,
                        name=name,
                        party=party,
                        image=photo_url)

        # Keep the raw "Last, First" cell text alongside the normalized name.
        person.extras['_scraped_name'] = name_text

        person.add_link(legislator_url)
        person.add_source(url)
        person.add_source(legislator_url)

        # Scrape offices.
        self.scrape_lower_offices(legislator_page, person)

        yield person
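Note how the normalized "First Last" string goes into the Person constructor while the raw "Last, First" cell text is preserved in extras['_scraped_name']; keeping the original string around in this way presumably lets later steps cross-check the imported record against the page it was scraped from.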