

Python URLGrabber.urlread Method Code Examples

This article collects typical usage examples of the Python method urlgrabber.grabber.URLGrabber.urlread. If you are wondering what URLGrabber.urlread does or how to use it in practice, the selected code examples below may help. You can also explore further usage examples for the containing class, urlgrabber.grabber.URLGrabber.


The following presents 4 code examples of URLGrabber.urlread, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code examples.

Example 1: Fetcher

# Required import: from urlgrabber.grabber import URLGrabber [as alias]
# Or: from urlgrabber.grabber.URLGrabber import urlread [as alias]
	import os
	from urlgrabber.grabber import URLGrabber, URLGrabError

	class Fetcher(object):
		def __init__(self, remote):
			self.remote = remote
			self.g = URLGrabber(prefix=self.remote)

		def fetch_to_file(self, src, dest):
			# Download to a temporary ".part" file, then atomically rename
			tmp = dest + '.part'
			try:
				self.g.urlgrab(src, filename=tmp, copy_local=1, user_agent='lsd-fetch/1.0')
			except URLGrabError as e:
				raise IOError(str(e))
			os.rename(tmp, dest)

		def fetch(self, src='/'):
			try:
				contents = self.g.urlread(src).strip()
			except URLGrabError as e:
				raise IOError(str(e))
			return contents

		def listdir(self, dir='/'):
			lfn = os.path.join(dir, '.listing')

			contents = self.fetch(lfn)

			return [ s.strip() for s in contents.split() if s.strip() != '' ]

		# Pickling support -- only pickle the remote URL
		def __getstate__(self):
			return self.remote
		def __setstate__(self, remote):
			self.__init__(remote)
Developer: banados, Project: lsd, Lines of code: 34, Source: fetcher.py
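The `__getstate__`/`__setstate__` pair above pickles only the remote URL and rebuilds the grabber from scratch on unpickling. The same pattern can be shown with a hypothetical stand-in class that needs no urlgrabber:

```python
import pickle

class Remote(object):
    """Stand-in for Fetcher: holds a resource that should not be pickled."""
    def __init__(self, url):
        self.url = url
        self.conn = object()  # pretend this is a live connection

    # Pickle only the URL; everything else is rebuilt on load
    def __getstate__(self):
        return self.url

    def __setstate__(self, url):
        self.__init__(url)

r = pickle.loads(pickle.dumps(Remote("http://example.com/")))
```

After the round trip, `r.url` is preserved and `r.conn` is a fresh object created by `__init__`, which is exactly why Fetcher can be shipped across processes safely.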

Example 2: _retrievePublicKey

# Required import: from urlgrabber.grabber import URLGrabber [as alias]
# Or: from urlgrabber.grabber.URLGrabber import urlread [as alias]
    def _retrievePublicKey(self, keyurl, repo=None):
        """
        Retrieve a key file.
        @param keyurl: URL of the key to retrieve
        Returns a list of dicts with all the key info
        """
        key_installed = False

        # Go get the GPG key from the given URL
        try:
            url = yum.misc.to_utf8(keyurl)
            if repo is None:
                rawkey = urlgrabber.urlread(url, limit=9999)
            else:
                #  If we have a repo, use its proxy etc. configuration.
                # In theory we have a global proxy config too, but
                # external callers should just update.
                ug = URLGrabber(bandwidth = repo.bandwidth,
                                retry = repo.retries,
                                throttle = repo.throttle,
                                progress_obj = repo.callback,
                                proxies=repo.proxy_dict)
                ug.opts.user_agent = default_grabber.opts.user_agent
                rawkey = ug.urlread(url, text=repo.id + "/gpgkey")

        except urlgrabber.grabber.URLGrabError as e:
            raise ChannelException('GPG key retrieval failed: ' +
                                    yum.i18n.to_unicode(str(e)))
Developer: m47ik, Project: uyuni, Lines of code: 30, Source: yum_src.py
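Example 2 feeds per-repository settings (`retry`, `throttle`, `proxies`) into the grabber. The retry behaviour it relies on can be sketched as a generic wrapper; `with_retries` and `flaky` below are illustrative helpers, not part of the urlgrabber API:

```python
def with_retries(fn, retries=3, exceptions=(IOError,)):
    # Call fn(); on a listed exception, retry up to `retries` attempts total,
    # then re-raise the last failure.
    last = None
    for attempt in range(retries):
        try:
            return fn()
        except exceptions as e:
            last = e
    raise last

calls = []
def flaky():
    # Fails twice with a transient error, then succeeds
    calls.append(1)
    if len(calls) < 3:
        raise IOError("transient")
    return "key-data"

result = with_retries(flaky)
```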

Example 3: urlread

# Required import: from urlgrabber.grabber import URLGrabber [as alias]
# Or: from urlgrabber.grabber.URLGrabber import urlread [as alias]
	def urlread(self, filename, *args, **kwargs):
		self.check_offline_mode()

		# This is for older versions of urlgrabber which are packaged in Debian
		# and Ubuntu and cannot handle filenames as a normal Python string but need
		# a unicode string.
		return URLGrabber.urlread(self, filename.encode("utf-8"), *args, **kwargs)
Developer: ipfire, Project: pakfire, Lines of code: 9, Source: downloader.py
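The wrapper in Example 3 works around older urlgrabber builds by passing the filename as UTF-8 bytes rather than a text string. The conversion itself, shown with a made-up URL containing a non-ASCII character:

```python
# A text URL containing non-ASCII, encoded to the byte string that
# older urlgrabber versions expected
url = "http://example.com/paquet\u00e9"
encoded = url.encode("utf-8")
```

The round trip is lossless: decoding `encoded` with UTF-8 yields the original string again.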

Example 4: moosWeb2dict

# Required import: from urlgrabber.grabber import URLGrabber [as alias]
# Or: from urlgrabber.grabber.URLGrabber import urlread [as alias]
import re
from BeautifulSoup import BeautifulSoup  # legacy BeautifulSoup 3, as used upstream
from urlgrabber.grabber import URLGrabber

def moosWeb2dict(vehicle_host, vehicle_port):

    def moosHTML2dict(data):
        soup = BeautifulSoup(data)
        istrtd = (lambda tag : tag.name == "tr" and len(tag.findAll("td")) > 0)
        ret = {}
        for tr in soup.table.table.findAll(istrtd):
            tds = tr.findAll("td")
            vartag = tds[0].a
            if 0 < len(vartag) and "pending" != tds[2].contents[0]:
                key = vartag.contents[0]
                val = tds[6].contents[0]
                ret[str(key)] = str(val)
        return ret


    UG = URLGrabber()

    # fetch the vehicle's status page
    data = UG.urlread("http://" + vehicle_host + ":" + str(vehicle_port))

    # the served HTML leaves href attribute values unquoted; quote them
    # so the parser can handle the links
    p = re.compile('<A href = ([^>]*)>')
    fixed_data = p.sub(r'<A href="\1">', data)
                
    return moosHTML2dict(fixed_data)
Developer: Hoffman408, Project: MOOS-python-utils, Lines of code: 28, Source: MOOSDBparser.py
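The regex substitution in Example 4 repairs unquoted `href` attribute values before parsing. In isolation it behaves like this (the sample HTML is made up):

```python
import re

# Quote the value of `<A href = ...>` attributes so an HTML parser accepts them
pattern = re.compile('<A href = ([^>]*)>')
html = '<A href = index.moos>status</A>'
fixed = pattern.sub(r'<A href="\1">', html)
```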


Note: the urlgrabber.grabber.URLGrabber.urlread method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright in the source code remains with the original authors. Refer to each project's license before redistributing or using the code; do not republish without permission.