

Python Crawler.get_page Method Code Examples

This article collects typical usage examples of the Python method Crawler.Crawler.get_page. If you are unsure what Crawler.get_page does or how to call it, the curated examples below should help. You can also explore the other usage examples available for the Crawler.Crawler class.


Two code examples of the Crawler.get_page method are shown below, sorted by popularity by default.
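Both examples rely on a Crawler class whose get_page(url) method returns the raw HTML of a page. Its definition is not included on this page; the following is a minimal sketch of what such a class might look like, built on urllib.request (everything beyond the get_page name is an assumption, not the original project's code):

# Hypothetical sketch of the Crawler class assumed by the examples below;
# the real naverNewsCrawler implementation may differ.
import urllib.request

class Crawler:
    def get_page(self, url):
        # Fetch the URL and return the response body as text. A browser-like
        # User-Agent is set because some news sites reject Python's default.
        request = urllib.request.Request(
            url, headers={'User-Agent': 'Mozilla/5.0'})
        with urllib.request.urlopen(request) as response:
            return response.read().decode('utf-8', errors='replace')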

Example 1: get_article

# Required import: from Crawler import Crawler [as alias]
# Or: from Crawler.Crawler import get_page [as alias]
# Also needed here: import re; from bs4 import BeautifulSoup
    def get_article(self, url):
        crawler = Crawler()
        # get html data from url
        web_data = crawler.get_page(url)
        soup = BeautifulSoup(web_data, 'html.parser')

        # remove the related-news ("link_news") blocks before extracting text
        for e in soup('div', {'class': 'link_news'}):
            e.extract()

        # article title
        self.title = soup('h3', {'id':'articleTitle'})[0].text

        # create date and time of article
        date_time = soup('span', {'class':'t11'})[0].text.split()
        self.date = date_time[0]
        self.time = date_time[1]

        # press name, taken from the alt text of the press logo image
        press_logo = soup('div', {'class': 'press_logo'})[0]
        self.press = press_logo.find('img')['alt']

        # article contents
        self.contents = soup('div', {'id':'articleBodyContents'})[0].text
        self.contents = re.sub('[\n\r]', '', self.contents)
Developer: ByeongkiJeong, Project: naverNewsCrawler, Lines: 27, Source: navernews.py
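For context, a hypothetical call site might look like the following; the owning class (called NaverNews here) and the sample URL are illustrative assumptions, not part of the original project:

# Hypothetical usage; the class that owns get_article is not shown in the
# snippet above, so both the class name and the URL are placeholders.
article = NaverNews()
article.get_article('http://news.naver.com/main/read.nhn?oid=001&aid=0000000001')
print(article.press, article.date, article.time, article.title)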

Example 2: get_list

# Required import: from Crawler import Crawler [as alias]
# Or: from Crawler.Crawler import get_page [as alias]
# Also needed here: from urllib.parse import urlencode; from bs4 import BeautifulSoup
def get_list(section, date):

    # NAVER news url
    naver_news_url = 'http://news.naver.com/main/list.nhn'
    naver_news_parameter = {'mode':'LSD', 'mid':'sec', 'sid1':'', 'date':date, 'page':''}
    naver_news_parameter['sid1'] = section

    # a single Crawler instance can be reused across pages
    crawler = Crawler()

    page = 1
    url_list = []
    while True:
        naver_news_parameter['page'] = page
        url = naver_news_url + '?' + urlencode(naver_news_parameter)

        # get html data
        web_data = crawler.get_page(url)

        # html parsing
        soup = BeautifulSoup(web_data, 'html.parser')
        list_body = soup('div', {'class': 'list_body newsflash_body'})[0]

        # get each article's url
        current_list = [e.find('a')['href'] for e in list_body.findAll('li')]

        # stop when the page repeats (past the last page, Naver serves the
        # final page again) or when no articles were found at all
        if not current_list or current_list[0] in url_list:
            break

        # add to url list
        url_list += current_list

        # advance to the next list page
        page += 1

    return url_list
Developer: ByeongkiJeong, Project: naverNewsCrawler, Lines: 41, Source: navernews.py
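A hypothetical invocation follows; the section code and date are illustrative (sid1 values are Naver's own convention, and the YYYYMMDD date format is inferred from the query parameters):

# Hypothetical usage: collect every article URL listed under section
# sid1='100' on 2016-07-01 (both values are illustrative).
urls = get_list('100', '20160701')
print(len(urls), 'article URLs collected')
for url in urls[:5]:
    print(url)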


Note: The Crawler.Crawler.get_page examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by their respective authors, who retain copyright over the source code; consult each project's license before redistributing or reusing the code. Do not reproduce this article without permission.