

Python UrlParser.unparse Method Code Examples

This article collects typical usage examples of the UrlParser.unparse method from the Python module r2.lib.utils. If you have been wondering what UrlParser.unparse does, how to call it, or how others use it, the curated examples below should help. You can also explore other usage examples of r2.lib.utils.UrlParser.


The following 15 code examples of UrlParser.unparse are sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Python code samples.
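Before diving into the examples: since r2 is reddit's internal library and not installable from PyPI, here is a rough stand-in for the parse/modify/unparse round trip using the standard library's urllib.parse. This is only a sketch of the same idea, not UrlParser itself:

```python
# A stdlib analog of UrlParser(...).unparse(): split a URL into
# components, change one of them, and reassemble the string.
# urllib.parse.urlunparse plays the role of unparse() here.
from urllib.parse import urlparse, urlunparse

parts = urlparse('http://i.reddit.com/r/redditdev')
# Swapping the subdomain, roughly what switch_subdomain_by_extension() does.
parts = parts._replace(netloc='www.reddit.com')
print(urlunparse(parts))  # http://www.reddit.com/r/redditdev
```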

Example 1: test_default_prefix

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
    def test_default_prefix(self):
        u = UrlParser('http://i.reddit.com/r/redditdev')
        u.switch_subdomain_by_extension()
        self.assertEquals('http://www.reddit.com/r/redditdev', u.unparse())

        u = UrlParser('http://i.reddit.com/r/redditdev')
        u.switch_subdomain_by_extension('does-not-exist')
        self.assertEquals('http://www.reddit.com/r/redditdev', u.unparse())
Author: APerson241, Project: reddit, Lines: 10, Source: urlparser_test.py

Example 2: test_normal_urls

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
    def test_normal_urls(self):
        u = UrlParser('http://www.reddit.com/r/redditdev')
        u.switch_subdomain_by_extension('compact')
        result = u.unparse()
        self.assertEquals('http://i.reddit.com/r/redditdev', result)

        u = UrlParser(result)
        u.switch_subdomain_by_extension('mobile')
        result = u.unparse()
        self.assertEquals('http://m.reddit.com/r/redditdev', result)
Author: APerson241, Project: reddit, Lines: 12, Source: urlparser_test.py

Example 3: POST_bpoptions

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
    def POST_bpoptions(self, all_langs, **prefs):
        u = UrlParser(c.site.path + "prefs")
        bpfilter_prefs(prefs, c.user)
        if c.errors.errors:
            for error in c.errors.errors:
                if error[1] == 'stylesheet_override':
                    u.update_query(error_style_override=error[0])
                else:
                    u.update_query(generic_error=error[0])
            return self.redirect(u.unparse())

        set_prefs(c.user, prefs)
        c.user._commit()
        u.update_query(done='true')
        return self.redirect(u.unparse())
Author: mewald55, Project: BlockPath, Lines: 17, Source: post.py

Example 4: POST_request_promo

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
    def POST_request_promo(self, srnames):
        if not srnames:
            return

        srnames = srnames.split('+')

        # request multiple ads in case some are hidden by the builder due
        # to the user's hides/preferences
        response = adzerk_request(srnames)

        if not response:
            g.stats.simple_event('adzerk.request.no_promo')
            return

        res_by_campaign = {r.campaign: r for r in response}
        tuples = [promote.PromoTuple(r.link, 1., r.campaign) for r in response]
        builder = CampaignBuilder(tuples, wrap=default_thing_wrapper(),
                                  keep_fn=promote.promo_keep_fn,
                                  num=1,
                                  skip=True)
        listing = LinkListing(builder, nextprev=False).listing()
        if listing.things:
            g.stats.simple_event('adzerk.request.valid_promo')
            w = listing.things[0]
            r = res_by_campaign[w.campaign]

            up = UrlParser(r.imp_pixel)
            up.hostname = "pixel.redditmedia.com"
            w.adserver_imp_pixel = up.unparse()
            w.adserver_click_url = r.click_url
            w.num = ""
            return spaceCompress(w.render())
        else:
            g.stats.simple_event('adzerk.request.skip_promo')
Author: JordanMilne, Project: reddit-plugin-adzerk, Lines: 36, Source: adzerkpromote.py

Example 5: POST_options

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
    def POST_options(self, all_langs, pref_lang, **kw):
        #temporary. eventually we'll change pref_clickgadget to an
        #integer preference
        kw['pref_clickgadget'] = kw['pref_clickgadget'] and 5 or 0
        if c.user.pref_show_promote is None:
            kw['pref_show_promote'] = None
        elif not kw.get('pref_show_promote'):
            kw['pref_show_promote'] = False

        if not kw.get("pref_over_18") or not c.user.pref_over_18:
            kw['pref_no_profanity'] = True

        if kw.get("pref_no_profanity") or c.user.pref_no_profanity:
            kw['pref_label_nsfw'] = True

        if kw.get("avatar_img"):
            kw["pref_avatar_img"]= kw.get("avatar_img")


        # default all the gold options to on if they don't have gold
        if not c.user.gold:
            for pref in ('pref_show_adbox',
                         'pref_show_sponsors',
                         'pref_show_sponsorships',
                         'pref_highlight_new_comments',
                         'pref_monitor_mentions'):
                kw[pref] = True

        self.set_options(all_langs, pref_lang, **kw)
        u = UrlParser(c.site.path + "prefs")
        u.update_query(done = 'true')
        if c.cname:
            u.put_in_frame()
        return self.redirect(u.unparse())
Author: aldarund, Project: reddit, Lines: 36, Source: post.py

Example 6: format_output_url

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
    def format_output_url(cls, url, **kw):
        """
        Helper method used during redirect to ensure that the redirect
        url (assisted by frame busting code or javascript) will point
        to the correct domain and not have any extra dangling get
        parameters.  The extensions are also made to match and the
        resulting url is utf8 encoded.

        Note: for development purposes, also checks that the port
        matches the request port
        """
        preserve_extension = kw.pop("preserve_extension", True)
        u = UrlParser(url)

        if u.is_reddit_url():
            # make sure to pass the port along if not 80
            if not kw.has_key('port'):
                kw['port'] = request.port

            # disentangle the cname (for urls that would have
            # cnameframe=1 in them)
            u.mk_cname(**kw)

            # make sure the extensions agree with the current page
            if preserve_extension and c.extension:
                u.set_extension(c.extension)

        # unparse and encode it in utf8
        rv = _force_unicode(u.unparse()).encode('utf8')
        if "\n" in rv or "\r" in rv:
            abort(400)
        return rv
Author: GodOfConquest, Project: reddit, Lines: 34, Source: base.py

Example 7: purge_url

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
    def purge_url(self, url):
        """Purge an image (by url) from imgix.

        Reference: http://www.imgix.com/docs/tutorials/purging-images

        Note that as mentioned in the imgix docs, in order to remove
        an image, this function should be used *after* already
        removing the image from our source, or imgix will just re-fetch
        and replace the image with a new copy even after purging.
        """

        p = UrlParser(url)

        if p.hostname == g.imgix_domain:
            p.hostname = g.imgix_purge_domain
        elif p.hostname == g.imgix_gif_domain:
            p.hostname = g.imgix_gif_purge_domain

        url = p.unparse()

        requests.post(
            "https://api.imgix.com/v2/image/purger",
            auth=(g.secrets["imgix_api_key"], ""),
            data={"url": url},
        )
Author: zeantsoi, Project: reddit, Lines: 27, Source: imgix.py

Example 8: POST_options

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
    def POST_options(self, all_langs, pref_lang, **kw):
        #temporary. eventually we'll change pref_clickgadget to an
        #integer preference
        kw['pref_clickgadget'] = kw['pref_clickgadget'] and 5 or 0
        if c.user.pref_show_promote is None:
            kw['pref_show_promote'] = None
        elif not kw.get('pref_show_promote'):
            kw['pref_show_promote'] = False

        if not kw.get("pref_over_18") or not c.user.pref_over_18:
            kw['pref_no_profanity'] = True

        if kw.get("pref_no_profanity") or c.user.pref_no_profanity:
            kw['pref_label_nsfw'] = True

        if not c.user.gold:
            kw['pref_show_adbox'] = True
            kw['pref_show_sponsors'] = True
            kw['pref_show_sponsorships'] = True

        self.set_options(all_langs, pref_lang, **kw)
        u = UrlParser(c.site.path + "prefs")
        u.update_query(done = 'true')
        if c.cname:
            u.put_in_frame()
        return self.redirect(u.unparse())
Author: Krenair, Project: reddit, Lines: 28, Source: post.py

Example 9: _update_redirect_uri

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
def _update_redirect_uri(base_redirect_uri, params, as_fragment=False):
    parsed = UrlParser(base_redirect_uri)
    if as_fragment:
        parsed.fragment = urlencode(params)
    else:
        parsed.update_query(**params)
    return parsed.unparse()
Author: AHAMED750, Project: reddit, Lines: 9, Source: oauth2.py
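The pattern in _update_redirect_uri (append OAuth2 parameters either as a URL fragment, for the implicit grant, or as query parameters, for the authorization-code grant) can be sketched with plain urllib.parse. This is an analog under stdlib assumptions, not r2's implementation:

```python
# Append params to a redirect URI as either fragment or query string,
# mirroring _update_redirect_uri's two modes with urllib.parse.
from urllib.parse import urlparse, urlunparse, urlencode, parse_qsl

def update_redirect_uri(base, params, as_fragment=False):
    parts = urlparse(base)
    if as_fragment:
        # Implicit grant: credentials go in the fragment, never sent to servers.
        parts = parts._replace(fragment=urlencode(params))
    else:
        # Code grant: merge into any existing query parameters.
        query = dict(parse_qsl(parts.query))
        query.update(params)
        parts = parts._replace(query=urlencode(query))
    return urlunparse(parts)

print(update_redirect_uri('https://app.example/cb', {'code': 'abc'}))
# https://app.example/cb?code=abc
print(update_redirect_uri('https://app.example/cb', {'token': 'xyz'}, as_fragment=True))
# https://app.example/cb#token=xyz
```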

Example 10: url_for_title

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
  def url_for_title(self, title):
      """Uses the MediaWiki API to get the URL for a wiki page
      with the given title"""
      if title is None:
          return None

      from pylons import g
      cache_key = ('wiki_url_%s' % title).encode('ascii', 'ignore')
      wiki_url = g.cache.get(cache_key)
      if wiki_url is None:
          # http://www.mediawiki.org/wiki/API:Query_-_Properties#info_.2F_in
          api = UrlParser(g.wiki_api_url)
          api.update_query(
              action = 'query',
              titles= title,
              prop = 'info',
              format = 'yaml',
              inprop = 'url'
          )

          try:
              response = urlopen(api.unparse()).read()
              parsed_response = yaml.load(response, Loader=yaml.CLoader)
              page = parsed_response['query']['pages'][0]
          except:
              return None

          wiki_url = page.get('fullurl').strip()

          # Things are created every couple of days so 12 hours seems
          # to be a reasonable cache time
          g.permacache.set(cache_key, wiki_url, time=3600 * 12)

      return wiki_url
Author: Kenneth-Chen, Project: lesswrong, Lines: 36, Source: wiki.py

Example 11: add_sr

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
def add_sr(path, sr_path = True, nocname=False, force_hostname = False):
    """
    Given a path (which may be a full-fledged url or a relative path),
    parses the path and updates it to include the subreddit path
    according to the rules set by its arguments:

     * force_hostname: if True, force the url's hostname to be updated
       even if it is already set in the path, and subject to the
       c.cname/nocname combination.  If false, the path will still
       have its domain updated if no hostname is specified in the url.
    
     * nocname: when updating the hostname, overrides the value of
       c.cname to set the hostname to g.domain.  The default behavior
       is to set the hostname consistent with c.cname.

     * sr_path: if a cname is not used for the domain, updates the
       path to include c.site.path.
    """
    u = UrlParser(path)
    if sr_path and (nocname or not c.cname):
        u.path_add_subreddit(c.site)

    if not u.hostname or force_hostname:
        u.hostname = get_domain(cname = (c.cname and not nocname),
                                subreddit = False)

    if c.render_style == 'mobile':
        u.set_extension('mobile')

    return u.unparse()
Author: AndrewHay, Project: lesswrong, Lines: 32, Source: template_helpers.py
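The hostname-filling logic in add_sr (only set a hostname when the path is relative, unless forced) can be sketched without r2's cname machinery. A simplified stand-in using urllib.parse; 'reddit.example' is a placeholder domain, not the real configuration:

```python
# Fill in a default hostname for relative paths, leave absolute URLs
# alone unless force_hostname is set -- the core of add_sr's behavior,
# minus the subreddit-path and cname handling.
from urllib.parse import urlparse, urlunparse

def add_host(path, default_host='reddit.example', force_hostname=False):
    parts = urlparse(path)
    if not parts.netloc or force_hostname:
        parts = parts._replace(scheme=parts.scheme or 'http',
                               netloc=default_host)
    return urlunparse(parts)

print(add_host('/r/python'))               # http://reddit.example/r/python
print(add_host('http://other.example/x'))  # http://other.example/x
```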

Example 12: GET_framebuster

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
    def GET_framebuster(self, what = None, blah = None):
        """
        renders the contents of the iframe which, on a cname, checks
        if the user is currently logged into reddit.
        
        if this page is hit from the primary domain, redirects to the
        cnamed domain version of the site.  If the user is logged in,
        this cnamed version will drop a boolean session cookie on that
        domain so that subsequent page reloads will be caught in
        middleware and a frame will be inserted around the content.

        If the user is not logged in, previous session cookies will be
        emptied so that subsequent refreshes will not be rendered in
        that pesky frame.
        """
        if not c.site.domain:
            return ""
        elif c.cname:
            return FrameBuster(login = (what == "login")).render()
        else:
            path = "/framebuster/"
            if c.user_is_loggedin:
                path += "login/"
            u = UrlParser(path + str(random.random()))
            u.mk_cname(require_frame = False, subreddit = c.site,
                       port = request.port)
            return self.redirect(u.unparse())
        # the user is not logged in or there is no cname.
        return FrameBuster(login = False).render()
Author: JediWatchman, Project: reddit, Lines: 31, Source: front.py

Example 13: _get_scrape_url

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
def _get_scrape_url(link):
    if not link.is_self:
        sr_name = link.subreddit_slow.name
        if not feature.is_enabled("imgur_gif_conversion", subreddit=sr_name):
            return link.url
        p = UrlParser(link.url)
        # If it's a gif link on imgur, replacing it with gifv should
        # give us the embedly friendly video url
        if is_subdomain(p.hostname, "imgur.com"):
            if p.path_extension().lower() == "gif":
                p.set_extension("gifv")
                return p.unparse()
        return link.url

    urls = extract_urls_from_markdown(link.selftext)
    second_choice = None
    for url in urls:
        p = UrlParser(url)
        if p.is_reddit_url():
            continue
        # If we don't find anything we like better, use the first image.
        if not second_choice:
            second_choice = url
        # This is an optimization for "proof images" in AMAs.
        if is_subdomain(p.netloc, 'imgur.com') or p.has_image_extension():
            return url

    return second_choice
Author: AppleBetas, Project: reddit, Lines: 30, Source: media.py
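The imgur gif-to-gifv rewrite above can be reproduced with urllib.parse alone. A sketch under stdlib assumptions; is_subdomain here is a simplified stand-in for r2's helper:

```python
# Swap a .gif extension for .gifv on imgur-hosted URLs so embeds get
# the video-friendly variant, mirroring the branch in _get_scrape_url.
from urllib.parse import urlparse, urlunparse

def is_subdomain(hostname, domain):
    # Simplified stand-in for r2's is_subdomain helper.
    return hostname == domain or (hostname or '').endswith('.' + domain)

def gif_to_gifv(url):
    parts = urlparse(url)
    if (is_subdomain(parts.hostname, 'imgur.com')
            and parts.path.lower().endswith('.gif')):
        parts = parts._replace(path=parts.path[:-4] + '.gifv')
        return urlunparse(parts)
    return url

print(gif_to_gifv('http://i.imgur.com/abc123.gif'))
# http://i.imgur.com/abc123.gifv
```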

Example 14: resize_image

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
    def resize_image(self, image, width=None, censor_nsfw=False, max_ratio=None):
        url = UrlParser(image['url'])
        url.hostname = g.imgix_domain
        # Let's encourage HTTPS; it's cool, works just fine on HTTP pages, and
        # will prevent insecure content warnings on HTTPS pages.
        url.scheme = 'https'

        if max_ratio:
            url.update_query(fit='crop')
            # http://www.imgix.com/docs/reference/size#param-crop
            url.update_query(crop='faces,entropy')
            url.update_query(arh=max_ratio)

        if width:
            if width > image['width']:
                raise NotLargeEnough()
            # http://www.imgix.com/docs/reference/size#param-w
            url.update_query(w=width)
        if censor_nsfw:
            # Since we aren't concerned with inhibiting a user's ability to
            # reverse the censoring for privacy reasons, pixellation is better
            # than a Gaussian blur because it compresses well.  The specific
            # value is just "what looks about right".
            #
            # http://www.imgix.com/docs/reference/stylize#param-px
            url.update_query(px=20)
        if g.imgix_signing:
            url = self._sign_url(url, g.secrets['imgix_signing_token'])
        return url.unparse()
Author: ActivateServices, Project: reddit, Lines: 31, Source: imgix.py
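The query-building side of resize_image (minus the signing step) can be sketched with urllib.parse. The parameter names (w, fit, crop, px) come from the imgix URL API as used in the example above; the imgix domain here is a placeholder:

```python
# Build an imgix-style resize URL: point the host at the imgix domain,
# force HTTPS, and encode the transform parameters as a query string.
from urllib.parse import urlparse, urlunparse, urlencode

def resize_image_url(url, width=None, censor_nsfw=False, max_ratio=None,
                     imgix_domain='example.imgix.net'):
    parts = urlparse(url)._replace(scheme='https', netloc=imgix_domain)
    query = {}
    if max_ratio:
        query.update(fit='crop', crop='faces,entropy', arh=max_ratio)
    if width:
        query['w'] = width
    if censor_nsfw:
        query['px'] = 20  # pixellation compresses better than a blur
    return urlunparse(parts._replace(query=urlencode(query)))

print(resize_image_url('http://example.com/img.jpg', width=320))
# https://example.imgix.net/img.jpg?w=320
```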

Example 15: format_output_url

# Required module: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import unparse [as alias]
    def format_output_url(cls, url, **kw):
        """
        Helper method used during redirect to ensure that the redirect
        url (assisted by frame busting code or javascript) will point
        to the correct domain and not have any extra dangling get
        parameters.  The extensions are also made to match and the
        resulting url is utf8 encoded.

        Note: for development purposes, also checks that the port
        matches the request port
        """
        u = UrlParser(url)

        if u.is_reddit_url():
            # make sure to pass the port along if not 80
            if not kw.has_key("port"):
                kw["port"] = request.port

            # disentangle the cname (for urls that would have
            # cnameframe=1 in them)
            u.mk_cname(**kw)

            # make sure the extensions agree with the current page
            if c.extension:
                u.set_extension(c.extension)

        # unparse and encode it in utf8
        rv = _force_unicode(u.unparse()).encode("utf8")
        if any(ch.isspace() for ch in rv):
            raise ValueError("Space characters in redirect URL: [%r]" % rv)
        return rv
Author: ketralnis, Project: reddit, Lines: 33, Source: base.py


Note: the r2.lib.utils.UrlParser.unparse examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their authors; copyright remains with the original authors, and distribution and use should follow each project's license. Do not republish without permission.