

Python URLGrabber.urlread Method Code Examples

This article collects typical usage examples of the Python method urlgrabber.grabber.URLGrabber.urlread. If you are asking how URLGrabber.urlread works, or looking for concrete examples of how to call it, the curated snippets below should help. You can also explore other usage examples of the urlgrabber.grabber.URLGrabber class.


Below are 4 code examples of URLGrabber.urlread, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Python code examples.

Example 1: Fetcher

# Required import: from urlgrabber.grabber import URLGrabber [as alias]
# Alternatively: from urlgrabber.grabber.URLGrabber import urlread [as alias]
import os

from urlgrabber.grabber import URLGrabber, URLGrabError

class Fetcher(object):
	def __init__(self, remote):
		self.remote = remote
		self.g = URLGrabber(prefix=self.remote)

	def fetch_to_file(self, src, dest):
		# Download to a temporary file, then rename into place.
		tmp = dest + '.part'
		try:
			self.g.urlgrab(src, filename=tmp, copy_local=1, user_agent='lsd-fetch/1.0')
		except URLGrabError as e:
			raise IOError(str(e))
		os.rename(tmp, dest)

	def fetch(self, src='/'):
		try:
			contents = self.g.urlread(src).strip()
		except URLGrabError as e:
			raise IOError(str(e))
		return contents

	def listdir(self, dir='/'):
		lfn = os.path.join(dir, '.listing')

		contents = self.fetch(lfn)

		return [s.strip() for s in contents.split() if s.strip() != '']

	# Pickling support -- only pickle the remote URL
	def __getstate__(self):
		return self.remote
	def __setstate__(self, remote):
		self.__init__(remote)
Author: banados, Project: lsd, Lines of code: 34, Source file: fetcher.py
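The `__getstate__`/`__setstate__` pair in Example 1 pickles only the remote URL and re-runs `__init__` on unpickling, so the unpicklable grabber object is rebuilt rather than serialized. A minimal, library-free sketch of the same pattern (the `Fetcher` name and `remote` attribute mirror the example; a plain dict stands in for the real `URLGrabber`, which is an assumption for illustration only):

```python
import pickle

class Fetcher:
    """Sketch of the pickling pattern from Example 1: persist only the
    remote URL, rebuild the (unpicklable) connection object on load."""

    def __init__(self, remote):
        self.remote = remote
        # Stand-in for URLGrabber(prefix=remote), which would hold
        # unpicklable state such as open sockets.
        self.g = {"prefix": remote}

    def __getstate__(self):
        # Only the URL goes into the pickle.
        return self.remote

    def __setstate__(self, remote):
        # Re-run __init__ to rebuild the grabber from the URL.
        self.__init__(remote)

f = Fetcher("http://example.com/data/")
clone = pickle.loads(pickle.dumps(f))
print(clone.remote)       # http://example.com/data/
print(clone.g["prefix"])  # http://example.com/data/
```

Note that `__getstate__` must return a truthy value here: if it returned an empty string, `pickle` would skip calling `__setstate__` on load.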

Example 2: _retrievePublicKey

# Required import: from urlgrabber.grabber import URLGrabber [as alias]
# Alternatively: from urlgrabber.grabber.URLGrabber import urlread [as alias]
    def _retrievePublicKey(self, keyurl, repo=None):
        """
        Retrieve a key file
        @param keyurl: url to the key to retrieve
        Returns a list of dicts with all the keyinfo
        """
        key_installed = False

        # Go get the GPG key from the given URL
        try:
            url = yum.misc.to_utf8(keyurl)
            if repo is None:
                rawkey = urlgrabber.urlread(url, limit=9999)
            else:
                #  If we have a repo. use the proxy etc. configuration for it.
                # In theory we have a global proxy config. too, but meh...
                # external callers should just update.
                ug = URLGrabber(bandwidth = repo.bandwidth,
                                retry = repo.retries,
                                throttle = repo.throttle,
                                progress_obj = repo.callback,
                                proxies=repo.proxy_dict)
                ug.opts.user_agent = default_grabber.opts.user_agent
                rawkey = ug.urlread(url, text=repo.id + "/gpgkey")

        except urlgrabber.grabber.URLGrabError as e:
            raise ChannelException('GPG key retrieval failed: ' +
                                    yum.i18n.to_unicode(str(e)))
Author: m47ik, Project: uyuni, Lines of code: 30, Source file: yum_src.py
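Example 2 passes `retry=repo.retries` so that urlgrabber transparently retries failed key downloads. A simplified, stdlib-only sketch of what such a retry option does internally (the function name, `read_fn` callable, and delay parameter are illustrative assumptions, not urlgrabber's actual internals, which also honor retry codes and callbacks):

```python
import time

def urlread_with_retry(read_fn, url, retries=3, delay=0.1):
    """Call read_fn(url) up to `retries` times, sleeping `delay`
    seconds between attempts; re-raise the last error on failure."""
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            return read_fn(url)
        except IOError as e:
            last_err = e
            if attempt < retries:
                time.sleep(delay)  # back off before the next attempt
    raise last_err

# Usage: a fake reader that fails twice, then succeeds.
calls = {"n": 0}
def flaky(url):
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient failure")
    return b"key data"

print(urlread_with_retry(flaky, "http://example.com/gpgkey"))  # b'key data'
```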

Example 3: urlread

# Required import: from urlgrabber.grabber import URLGrabber [as alias]
# Alternatively: from urlgrabber.grabber.URLGrabber import urlread [as alias]
	def urlread(self, filename, *args, **kwargs):
		self.check_offline_mode()

		# This is for older versions of urlgrabber which are packaged in Debian
		# and Ubuntu and cannot handle filenames as a normal Python string but need
		# a unicode string.
		return URLGrabber.urlread(self, filename.encode("utf-8"), *args, **kwargs)
Author: ipfire, Project: pakfire, Lines of code: 9, Source file: downloader.py

Example 4: moosWeb2dict

# Required import: from urlgrabber.grabber import URLGrabber [as alias]
# Alternatively: from urlgrabber.grabber.URLGrabber import urlread [as alias]
import re

from BeautifulSoup import BeautifulSoup
from urlgrabber.grabber import URLGrabber

def moosWeb2dict(vehicle_host, vehicle_port):

    def moosHTML2dict(data):
        soup = BeautifulSoup(data)
        istrtd = (lambda tag: tag.name == "tr" and len(tag.findAll("td")) > 0)
        ret = {}
        for tr in soup.table.table.findAll(istrtd):
            tds = tr.findAll("td")
            vartag = tds[0].a
            if 0 < len(vartag) and "pending" != tds[2].contents[0]:
                key = vartag.contents[0]
                val = tds[6].contents[0]
                ret[str(key)] = str(val)
        return ret

    UG = URLGrabber()

    # Fetch a fresh copy of the page
    data = UG.urlread("http://" + vehicle_host + ":" + str(vehicle_port))

    # The MOOS web server emits unquoted href attributes; quote them
    # so the HTML parses cleanly.
    p = re.compile('<A href = ([^>]*)>')
    fixed_data = p.sub(r'<A href="\1">', data)

    return moosHTML2dict(fixed_data)
Author: Hoffman408, Project: MOOS-python-utils, Lines of code: 28, Source file: MOOSDBparser.py
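The substitution in Example 4 repairs the page before parsing: it wraps bare `href` values in quotes (`<A href = foo>` becomes `<A href="foo">`). The same pattern can be checked standalone with the standard library (the sample HTML string is invented for illustration):

```python
import re

# The same pattern used in Example 4: quote unquoted href values
# so an HTML parser can handle the attribute.
p = re.compile('<A href = ([^>]*)>')

raw = '<table><A href = MOOS_STATUS>status</A></table>'
fixed = p.sub(r'<A href="\1">', raw)
print(fixed)  # <table><A href="MOOS_STATUS">status</A></table>
```

A regex fix like this only works because the malformed markup follows one fixed shape; for anything less predictable, a lenient parser such as BeautifulSoup (already used in the example) is the safer tool.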


Note: The urlgrabber.grabber.URLGrabber.urlread examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers, and copyright of the source code remains with the original authors. Consult the corresponding project's license before distributing or reusing the code; do not republish without permission.