

Python Crawler.startCrawling Method Code Examples

This article collects typical usage examples of the Python method Crawler.Crawler.startCrawling. If you are wondering what Crawler.startCrawling does or how to use it, the curated example below may help. You can also explore further usage examples of the containing class, Crawler.Crawler.


One code example of the Crawler.startCrawling method is shown below.

Example 1: SearchEngine

# Required import: from Crawler import Crawler [as alias]
# Or: from Crawler.Crawler import startCrawling [as alias]

#......... part of the code is omitted here .........
        
        if not documents:
            print("No documents match your search terms (\""
                  + ', '.join(str(term) for term in terms) + "\").")
            return
        
        print("Results:")
        
        # Rank matching documents by the product of PageRank and term score.
        for document in sorted(documents,
                               key=lambda url: self._page_ranks[url] * scores[url],
                               reverse=True):
            print("  - " + document)
            print("      (Score: " + str(scores[document])
                  + ", PageRank: " + str(self._page_ranks[document])
                  + ", Combined: " + str(self._page_ranks[document] * scores[document]) + ")")
    
    def _do_crawling(self):
        """
        Initializes the crawler with the seed urls and starts crawling, then stores the resulting
        webgraph and the extracted terms in the attributes.
        
        Also counts the extracted words in every website and stores each website's length in the 
        document_lengths attribute.
        
        """
        
        print "Starting crawler ..."
        print "  Seed URLs: "
        
        for url in self._seed_urls:
            print "   - " + url
        
        self._crawler = Crawler(self._seed_urls)

        # startCrawling() returns the web graph and the terms extracted
        # from each crawled page as a tuple.
        self._webgraph, self._extracted_terms = self._crawler.startCrawling()
        
        print "  Web graph: "
        for url in self._webgraph.keys():
            print "   - " + url
            for outlink in self._webgraph[url]:
                print "     -> " + outlink
        
        #print "  Extracted terms: "
        #for website in self._extracted_terms:
        #    print "   - " + website[0] + ": "
        #    print ', '.join(str(token) for token in website[1])
        
        print "Crawler finished."
        print
        
    def _compute_page_ranks(self):
        """
        Initializes the page rank computer with the webgraph and computes the page ranks.
        
        """
        print "Computing page ranks ..."

        self._page_rank_computer = Computer(self._webgraph)
        self._page_rank_computer.dampening_factor = 0.95
        self._page_rank_computer.compute()
        self._page_ranks = self._page_rank_computer.page_ranks
        
        print "  Page ranks:"
Author: davidgreisler | Project: webcrawler | Source file: SearchEngine.py
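
For orientation, here is a minimal standalone sketch of the same call pattern, outside the SearchEngine class. It assumes only what the example above shows: Crawler takes a list of seed URLs, and startCrawling() returns a (webgraph, extracted_terms) tuple, where the web graph maps each URL to its outlinks. The seed URLs are placeholders for illustration.

from Crawler import Crawler

# Hypothetical seed URLs; replace with real starting points.
seed_urls = ["http://example.com/", "http://example.org/"]

crawler = Crawler(seed_urls)

# As in SearchEngine._do_crawling: startCrawling() returns the web graph
# and the terms extracted from each crawled page.
webgraph, extracted_terms = crawler.startCrawling()

for url, outlinks in webgraph.items():
    print(url + " -> " + ", ".join(outlinks))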


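The Computer class that _compute_page_ranks relies on is not included in the snippet above. As a rough sketch of what such a PageRank computation could look like over the same {url: [outlinks]} web graph, here is a simple power iteration. This is an illustration only, not the project's actual Computer implementation; the 0.95 dampening factor is taken from the example, and dangling pages without outlinks are ignored for brevity.

def compute_page_ranks(webgraph, dampening_factor=0.95, iterations=50):
    """Plain power-iteration PageRank over a {url: [outlinks]} dict.
    A sketch only; the project's Computer class may differ."""
    urls = list(webgraph)
    n = len(urls)
    ranks = {url: 1.0 / n for url in urls}

    for _ in range(iterations):
        new_ranks = {}
        for url in urls:
            # Sum the rank each linking page passes on to this one.
            incoming = sum(ranks[source] / len(outlinks)
                           for source, outlinks in webgraph.items()
                           if url in outlinks)
            new_ranks[url] = (1 - dampening_factor) / n + dampening_factor * incoming
        ranks = new_ranks

    return ranks

# Tiny self-contained demo on a two-page cycle.
demo_graph = {
    "http://example.com/": ["http://example.org/"],
    "http://example.org/": ["http://example.com/"],
}
print(compute_page_ranks(demo_graph))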