

Python UrlParser.domain_permutations Method Code Examples

This article collects typical usage examples of the Python method r2.lib.utils.UrlParser.domain_permutations. If you have been wondering what UrlParser.domain_permutations does, or how to use it in practice, the curated code examples below should help. You can also explore further usage examples of r2.lib.utils.UrlParser, the class this method belongs to.


The following shows 4 code examples of UrlParser.domain_permutations, ordered by popularity by default.
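Before diving into the examples, it may help to see what a domain-permutation helper does: it expands a URL's hostname into the hostname itself plus every parent domain, so a link can be indexed under all of them. The sketch below is an illustrative approximation in Python 3; the real r2.lib.utils.UrlParser targets Python 2 and may filter or order results differently.

```python
from urllib.parse import urlparse

def domain_permutations(url):
    """Return the URL's hostname plus each successive parent domain.

    Simplified approximation of the real method: it does not filter
    bare TLDs or handle the edge cases the reddit implementation does.
    e.g. "http://www.example.com/page" yields
    ["www.example.com", "example.com", "com"].
    """
    hostname = urlparse(url).hostname
    if not hostname:
        # No parseable hostname (e.g. a self post or malformed URL)
        return []
    parts = hostname.split(".")
    return [".".join(parts[i:]) for i in range(len(parts))]
```

This is why a single submitted link can appear on both the `example.com` and `www.example.com` domain listings.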

Example 1: process_message

# Required import: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import domain_permutations [as alias]
    def process_message(msgs, chan):
        """Update get_domain_links(), the Links by domain precomputed query.

        get_domain_links() is a CachedResult which is stored in permacache. To
        update these objects we need to do a read-modify-write which requires
        obtaining a lock. Sharding these updates by domain allows us to run
        multiple consumers (but ideally just one per shard) to avoid lock
        contention.

        """

        from r2.lib.db.queries import add_queries, get_domain_links

        link_names = {msg.body for msg in msgs}
        links = Link._by_fullname(link_names, return_dict=False)
        print 'Processing %r' % (links,)

        links_by_domain = defaultdict(list)
        for link in links:
            parsed = UrlParser(link.url)

            # update the listings for all permutations of the link's domain
            for domain in parsed.domain_permutations():
                links_by_domain[domain].append(link)

        for d, links in links_by_domain.iteritems():
            with g.stats.get_timer("link_vote_processor.domain_queries"):
                add_queries(
                    queries=[
                        get_domain_links(d, sort, "all") for sort in SORTS],
                    insert_items=links,
                )
Author: 13steinj, Project: reddit, Lines: 34, Source: voting.py
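The core pattern in process_message is grouping each link under every permutation of its domain before updating the precomputed queries. That grouping step can be isolated into a small standalone sketch (Python 3; `permute` is a hypothetical stand-in for `UrlParser(url).domain_permutations()`):

```python
from collections import defaultdict, namedtuple

# Minimal stand-in for a reddit Link object; only `url` matters here.
Link = namedtuple("Link", "url")

def group_by_domain(links, permute):
    """Group link objects under every domain permutation of their URL.

    `links` is any iterable of objects with a `url` attribute;
    `permute(url)` returns the list of domain permutations for a URL.
    """
    by_domain = defaultdict(list)
    for link in links:
        for domain in permute(link.url):
            by_domain[domain].append(link)
    return by_domain
```

With the links grouped this way, each domain's listing can be updated in one locked read-modify-write, which is exactly why the consumer shards its work by domain.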

Example 2: add_to_domain_query_q

# Required import: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import domain_permutations [as alias]
def add_to_domain_query_q(link):
    parsed = UrlParser(link.url)
    if not parsed.domain_permutations():
        # no valid domains found
        return

    if g.shard_domain_query_queues:
        domain_shard = hash(parsed.hostname) % 10
        queue_name = "domain_query_%s_q" % domain_shard
    else:
        queue_name = "domain_query_q"
    amqp.add_item(queue_name, link._fullname)
Author: 13steinj, Project: reddit, Lines: 14, Source: voting.py
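The shard selection above uses Python 2's built-in `hash()`, which is deterministic for strings. A standalone Python 3 sketch of the same idea needs a stable digest instead, because Python 3 randomizes `str` hashing per process; the queue-name format mirrors the example, but the function itself is illustrative:

```python
import hashlib

def shard_queue_name(hostname, num_shards=10, sharded=True):
    """Pick an AMQP queue name, spreading hostnames over num_shards queues.

    Illustrative sketch of the sharding in add_to_domain_query_q; the
    real code uses Python 2's hash(parsed.hostname) % 10.
    """
    if not sharded:
        return "domain_query_q"
    # md5 gives a hash that is stable across processes and machines,
    # so every producer routes a given hostname to the same shard.
    digest = int(hashlib.md5(hostname.encode("utf-8")).hexdigest(), 16)
    return "domain_query_%s_q" % (digest % num_shards)
```

Stable routing matters here: the docstring in Example 1 notes there should ideally be one consumer per shard, which only avoids lock contention if all producers agree on the shard for each domain.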

Example 3: process

# Required import: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import domain_permutations [as alias]
    def process(thing):
        if thing.deleted:
            return

        thing_cls = thingcls_by_name[thing.thing_type]
        fname = make_fullname(thing_cls, thing.thing_id)
        thing_score = score(thing.ups, thing.downs)
        thing_controversy = controversy(thing.ups, thing.downs)

        for interval, cutoff in cutoff_by_interval.iteritems():
            if thing.timestamp < cutoff:
                continue

            yield ("user/%s/top/%s/%d" % (thing.thing_type, interval, thing.author_id),
                   thing_score, thing.timestamp, fname)
            yield ("user/%s/controversial/%s/%d" % (thing.thing_type, interval, thing.author_id),
                   thing_controversy, thing.timestamp, fname)

            if thing.spam:
                continue

            if thing.thing_type == "link":
                yield ("sr/link/top/%s/%d" % (interval, thing.sr_id),
                       thing_score, thing.timestamp, fname)
                yield ("sr/link/controversial/%s/%d" % (interval, thing.sr_id),
                       thing_controversy, thing.timestamp, fname)

                if thing.url:
                    try:
                        parsed = UrlParser(thing.url)
                    except ValueError:
                        continue

                    for domain in parsed.domain_permutations():
                        yield ("domain/link/top/%s/%s" % (interval, domain),
                               thing_score, thing.timestamp, fname)
                        yield ("domain/link/controversial/%s/%s" % (interval, domain),
                               thing_controversy, thing.timestamp, fname)
Author: Sheesha1992, Project: reddit, Lines: 40, Source: mr_top.py

Example 4: time_listing_iter

# Required import: from r2.lib.utils import UrlParser [as alias]
# Or: from r2.lib.utils.UrlParser import domain_permutations [as alias]
    def time_listing_iter(self, thing, cutoff_by_interval):
        if thing.deleted:
            return

        thing_cls = self.thing_cls
        fname = make_fullname(thing_cls, thing.thing_id)
        scores = {k: func(thing) for k, func in self.LISTING_SORTS.iteritems()}

        for interval, cutoff in cutoff_by_interval.iteritems():
            if thing.timestamp < cutoff:
                continue

            for sort, value in scores.iteritems():
                aid = thing.author_id
                key = self.make_key("user", sort, interval, aid)
                yield (key, value, thing.timestamp, fname)

            if thing.spam:
                continue

            if thing.thing_type == "link":
                for sort, value in scores.iteritems():
                    sr_id = thing.sr_id
                    key = self.make_key("sr", sort, interval, sr_id)
                    yield (key, value, thing.timestamp, fname)

                if not thing.url:
                    continue
                try:
                    parsed = UrlParser(thing.url)
                except ValueError:
                    continue

                for d in parsed.domain_permutations():
                    for sort, value in scores.iteritems():
                        key = self.make_key("domain", sort, interval, d)
                        yield (key, value, thing.timestamp, fname)
Author: zeantsoi, Project: reddit, Lines: 39, Source: mr_top.py
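Both mr_top examples share the same skeleton: skip intervals whose cutoff the thing's timestamp does not pass, then emit one listing key per surviving interval. That filtering can be sketched as a small generic helper (Python 3; `make_key` is a hypothetical callable standing in for the key formatting done by `self.make_key` or the `"%s/%s"` templates above):

```python
def time_listing_keys(timestamp, cutoff_by_interval, make_key):
    """Yield a listing key for each interval whose cutoff has passed.

    `cutoff_by_interval` maps interval names (e.g. "day", "week", "all")
    to the oldest timestamp still eligible for that interval's listing.
    """
    for interval, cutoff in cutoff_by_interval.items():
        if timestamp < cutoff:
            # Too old for this interval's listing; skip it.
            continue
        yield make_key(interval)
```

A thing that is recent enough for "week" but not for "day" would thus land only on the "week" and "all" listings, which is the behaviour both map-reduce jobs rely on.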


Note: The r2.lib.utils.UrlParser.domain_permutations examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs, with the snippets selected from community-contributed open-source projects. Copyright in the source code remains with the original authors; consult each project's License before redistributing or using it. Do not reproduce this article without permission.