

Python Analyzer.get_userinfo Method Code Examples

This article collects typical usage examples of the analyzer.Analyzer.get_userinfo method in Python. If you are wondering what Analyzer.get_userinfo does or how to call it, the curated examples below may help. You can also explore other usage examples of analyzer.Analyzer, the class this method belongs to.


Four code examples of the Analyzer.get_userinfo method are shown below, sorted by popularity by default.
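Before the full examples, here is a minimal sketch of the call pattern they share. This is a sketch only: the Analyzer class, its get_html/get_userinfo signatures, and the CSS selectors are assumptions taken from the snippets below, not from a documented public API.

# Minimal usage sketch assembled from the examples below (assumed API, not official).
from analyzer import Analyzer

def extract_userinfo(response_body):
    analyzer = Analyzer()
    # get_html() appears to select a <script> block from the raw page body via a CSS/pyquery selector
    profile_pq = analyzer.get_html(response_body, 'script:contains("PCD_text_b")')
    counter_pq = analyzer.get_html(response_body, 'script:contains("PCD_counter")')
    # Some projects pass only the profile block; others also pass the counter block
    return analyzer.get_userinfo(profile_pq, counter_pq)

Examples 1 and 3 pass both the PCD_text_b and PCD_counter blocks (the latter carries follow/follower counts), while Examples 2 and 4 call get_userinfo with the profile block alone.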

Example 1: get_user_info

# Required import: from analyzer import Analyzer [as alias]
# Alternatively: from analyzer.Analyzer import get_userinfo [as alias]
    def get_user_info(self, response):
        item = WeibospiderItem()
        item['uid'] = response.meta['user_id']
        analyzer = Analyzer()
        keyword_analyzer = keyword_info_analyzer()
        # Extract the profile-photo script block and build the avatar URL
        total_pq1 = analyzer.get_html(response.body, 'script:contains("pf_photo")')
        item['image_urls'] = analyzer.get_userphoto_url(total_pq1) + "?uid=" + str(response.meta['user_id'])
        # Extract the profile-text and counter script blocks, then parse the user info
        total_pq2 = analyzer.get_html(response.body, 'script:contains("PCD_text_b")')
        total_pq3 = analyzer.get_html(response.body, 'script:contains("PCD_counter")')
        item['userinfo'] = analyzer.get_userinfo(total_pq2, total_pq3)
#        logger.info(item)
        yield item
Developer: commonfire, Project: scrapy-weibospider-mysql, Lines: 14, Source: cauc_keyword_info.py

Example 2: parse_userinfo

# Required import: from analyzer import Analyzer [as alias]
# Alternatively: from analyzer.Analyzer import get_userinfo [as alias]
    def parse_userinfo(self, response):
        '''Parse profile information for a regular (non-official) account.'''
        item = WeibospiderItem()
        analyzer = Analyzer()
        try:
            # Profile-photo script block -> avatar URL
            total_pq1 = analyzer.get_html(response.body, 'script:contains("pf_photo")')
            item['image_urls'] = analyzer.get_userphoto_url(total_pq1)

            # Profile-text script block -> user-info dict
            total_pq2 = analyzer.get_html(response.body, 'script:contains("PCD_text_b")')
            item['userinfo'] = analyzer.get_userinfo(total_pq2)
        except Exception as e:
            # Fall back to an empty user-info dict; the keys are the Chinese field labels used on the page
            item['userinfo'] = {}.fromkeys(('昵称:'.decode('utf-8'), '所在地:'.decode('utf-8'), '性别:'.decode('utf-8'), '博客:'.decode('utf-8'), '个性域名:'.decode('utf-8'), '简介:'.decode('utf-8'), '生日:'.decode('utf-8'), '注册时间:'.decode('utf-8')), '')
            item['image_urls'] = None
Developer: commonfire, Project: scrapy-weibospider-mysql, Lines: 15, Source: weibocontent_userinfo.py

Example 3: parse_userinfo

# Required import: from analyzer import Analyzer [as alias]
# Alternatively: from analyzer.Analyzer import get_userinfo [as alias]
    def parse_userinfo(self, response):
        '''Parse profile information for a regular (non-official) account.'''
        item = WeibospiderItem()
        analyzer = Analyzer()
        try:
            total_pq1 = analyzer.get_html(response.body, 'script:contains("pf_photo")')
            item['image_urls'] = analyzer.get_userphoto_url(total_pq1) + "?uid=" + str(response.meta['uid'])
            #item['image_urls'] = None

            total_pq2 = analyzer.get_html(response.body, 'script:contains("PCD_text_b")')
            total_pq3 = analyzer.get_html(response.body, 'script:contains("PCD_counter")')

            if response.meta['is_friend'] == 0:    # basic info of the main user, not a friend-circle user
                item['userinfo'] = analyzer.get_userinfo(total_pq2, total_pq3)
            elif response.meta['is_friend'] == 1:  # basic info of an @-mentioned user
                item['atuser_userinfo'] = analyzer.get_userinfo(total_pq2, total_pq3)
            else:                                  # basic info of a reposting user
                item['repostuser_userinfo'] = analyzer.get_userinfo(total_pq2, total_pq3)

        except Exception as e:
            # Fall back to empty user-info dicts; the keys are the Chinese field labels plus follow/follower counters
            item['userinfo'] = {}.fromkeys(('昵称:'.decode('utf-8'), '所在地:'.decode('utf-8'), '性别:'.decode('utf-8'), '博客:'.decode('utf-8'), '个性域名:'.decode('utf-8'), '简介:'.decode('utf-8'), '生日:'.decode('utf-8'), '注册时间:'.decode('utf-8'), 'follow_num', 'follower_num'), '')
            item['atuser_userinfo'] = item['userinfo']
            item['repostuser_userinfo'] = item['userinfo']
            item['image_urls'] = None
Developer: commonfire, Project: scrapy-weibospider-mysql-tianjin, Lines: 26, Source: cauc_friendcircle_userinfo.py

Example 4: parse_userinfo

# Required import: from analyzer import Analyzer [as alias]
# Alternatively: from analyzer.Analyzer import get_userinfo [as alias]
    def parse_userinfo(self, response):
        item = response.meta['item']
        #f=open('./text2.html','w')
        #f.write(response.body)
        analyzer = Analyzer()
        # Parse the profile-text script block into the user-info dict
        total_pq = analyzer.get_html(response.body, 'script:contains("PCD_text_b")')
        #userinfo_dict = analyzer.get_userinfo(total_pq)
        item['userinfo'] = analyzer.get_userinfo(total_pq)
        #uid = item['uid']
        # Build the next page-load URL for this user's main page and continue the crawl
        mainpageurl = 'http://weibo.com/u/' + str(response.meta['uid']) + '?from=otherprofile&wvr=3.6&loc=tagweibo'
        GetWeibopage.data['uid'] = response.meta['uid']     #uid
        getweibopage = GetWeibopage()
        GetWeibopage.data['page'] = WeiboSpider.page_num - 1
        thirdloadurl = mainpageurl + getweibopage.get_thirdloadurl()
        yield Request(url=thirdloadurl, meta={'cookiejar': response.meta['cookiejar'], 'item': item, 'uid': response.meta['uid'], 'followlist': response.meta['followlist']}, callback=self.parse_thirdload)
Developer: commonfire, Project: scrapy-weibospider, Lines: 17, Source: weibo.py


Note: The analyzer.Analyzer.get_userinfo examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub/MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. For distribution and use, please refer to the corresponding project licenses; do not reproduce without permission.