

Python RobotFileParser.disallow_all Method Code Examples

This article collects typical usage examples of Python's robotparser.RobotFileParser.disallow_all. If you have been wondering what RobotFileParser.disallow_all does, how to use it, or what real-world examples look like, the hand-picked code examples below may help. You can also explore further usage examples of robotparser.RobotFileParser, the class this member belongs to.


Two code examples of RobotFileParser.disallow_all are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code examples.
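Although this page lists disallow_all as a method, in the standard library it is a plain boolean attribute on the parser: when it is set to True, can_fetch() refuses every URL, which is why the examples below set it after a failed robots.txt download. A minimal sketch of that behaviour, assuming Python 2's robotparser module (in Python 3 the same class lives in urllib.robotparser):

from robotparser import RobotFileParser  # Python 3: from urllib.robotparser import RobotFileParser

rp = RobotFileParser('http://example.com/robots.txt')
rp.disallow_all = True   # e.g. set after rp.read() failed, as in the examples below
print(rp.can_fetch('MyCrawler', 'http://example.com/index.html'))  # prints False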

Example 1: check_robots

# Required import: from robotparser import RobotFileParser [as alias]
# Alternatively: from robotparser.RobotFileParser import disallow_all [as alias]
def check_robots(self, url):
    '''Check the robots.txt for this URL's domain.'''
    hostname = urlparse(url).netloc
    if hostname not in self.domain_list:             # no record for this domain yet
        rp = RobotFileParser('http://%s/robots.txt' % hostname)
        print("%s: fetching %s" % (url, rp.url))
        try:
            rp.read()                                # fetch the new robots.txt
        except IOError as e:                         # server unavailable (e.g. connection timeout)
            log.error(str(e))
            rp.disallow_all = True                   # reject all requests for this domain
        self.domain_list[hostname] = rp              # cache the parser for this domain
Developer: YvesChan | Project: OpenSP | Lines: 14 | Source file: spider.py
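The excerpt stops after caching the parser; the natural follow-up (not shown in the project excerpt) is to consult that cache before fetching a page. The helper below is a hypothetical sketch of that step: the function name, the 'OpenSP-spider' user-agent string, and the default-allow fallback are illustrative assumptions, not code from the OpenSP project.

from urlparse import urlparse            # Python 3: from urllib.parse import urlparse
from robotparser import RobotFileParser  # Python 3: urllib.robotparser

def can_crawl(domain_list, url, user_agent='OpenSP-spider'):
    '''Hypothetical follow-up to Example 1: consult the cached parser.

    domain_list is the {hostname: RobotFileParser} cache that check_robots()
    fills in; the user_agent string is an illustrative placeholder.
    '''
    hostname = urlparse(url).netloc
    rp = domain_list.get(hostname)
    if rp is None:
        return True                       # no robots.txt record yet; fetch it first
    return rp.can_fetch(user_agent, url)  # always False once disallow_all was set to True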

Example 2: print

# Required import: from robotparser import RobotFileParser [as alias]
# Alternatively: from robotparser.RobotFileParser import disallow_all [as alias]
            # Excerpt begins inside the branch for an already-cached hostname,
            # where rp was just looked up from self.robotstxt.
            # If the cached copy is stale, refresh it
            if rp.mtime() < (time.time() - settings.ROBOTS_TXT_CACHE):
                self.logger.info("Refresh %s/robots.txt cache" % hostname)
                try:
                    rp.read()
                except Exception as e:
                    print(e)
                    self.logger.info("Unable to get or parse %s/robots.txt" % hostname)
                    rp.disallow_all = False
                    rp.allow_all = True    # on failure, allow everything rather than blocking the crawl
            else:
                self.logger.debug("Retrieve cached %s/robots.txt" % hostname)
        else:
            # First time (or after a very long time) we see this domain: create a
            # new RobotFileParser and read it once
            self.logger.info("First hit on %s/robots.txt" % hostname)
            rp = RobotFileParser(url="%s://%s/robots.txt" % (scheme, hostname))
            try:
                rp.read()
            except Exception as e:
                print(e)
                self.logger.info("Unable to get or parse %s/robots.txt" % hostname)
                rp.disallow_all = False
                rp.allow_all = True

        # In any case, record the time robots.txt was last fetched
        rp.modified()
        self.robotstxt[hostname] = rp

        return rp
Developer: mlorant | Project: webcrawler | Lines: 32 | Source file: crawler.py
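Example 2 is excerpted from the middle of a method, so the opening branch that looks up an already-cached parser is missing. Below is a self-contained sketch of the same caching pattern; the class name RobotsCache, the get() signature, and the 3600-second cache lifetime are hypothetical stand-ins for the project's own settings, not code from the webcrawler project.

import time
from robotparser import RobotFileParser  # Python 3: urllib.robotparser

ROBOTS_TXT_CACHE = 3600  # seconds; illustrative value, the project reads it from settings

class RobotsCache(object):
    '''Hypothetical standalone version of the caching logic in Example 2.'''

    def __init__(self):
        self.robotstxt = {}   # hostname -> RobotFileParser

    def get(self, scheme, hostname):
        rp = self.robotstxt.get(hostname)
        if rp is None or rp.mtime() < time.time() - ROBOTS_TXT_CACHE:
            # First hit, or the cached copy is stale: (re)download robots.txt
            if rp is None:
                rp = RobotFileParser(url="%s://%s/robots.txt" % (scheme, hostname))
            try:
                rp.read()
            except Exception as e:
                print(e)
                rp.disallow_all = False   # on failure, fall back to allowing everything
                rp.allow_all = True
        rp.modified()                     # record when we last refreshed it
        self.robotstxt[hostname] = rp
        return rp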


Note: The robotparser.RobotFileParser.disallow_all examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub/MSDocs. The code snippets are taken from open-source projects contributed by their respective authors; copyright remains with the original authors, and any use or redistribution should follow the corresponding project's license. Please do not reproduce without permission.