

Python Login.do_login Method Code Examples

This article collects typical usage examples of the login.Login.do_login method in Python. If you are wondering what Login.do_login does, how to call it, or what it looks like in real code, the curated example here may help. You can also explore further usage examples of the enclosing class, login.Login.


The following section shows 1 code example of the Login.do_login method, ordered by popularity.
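Before the full example, here is a minimal sketch of the call pattern assumed throughout this article: the Login constructor takes a site URL plus a username and password, and do_login() returns a logged-in browser session on success or False on failure. This mirrors exactly how Example 1 below uses the method; the store URL and credentials here are placeholders.

from login import Login

# Placeholder store URL and credentials -- substitute your own values.
login_form = Login("http://example-store.com", "user", "secret")
br = login_form.do_login()  # logged-in browser session, or False on bad credentials
if br is False:
    raise Exception("Invalid username/password for this store.")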

Example 1: main

# Required import: from login import Login [as alias]
# Or: from login.Login import do_login [as alias]
# The excerpt below also relies on the project's other imports (sys, csv, datetime,
# pprint, threading, Queue, and helpers such as RuleGenerator, ProductParser, Color
# and logger), which are not shown in this snippet.
def main(site_url):
    last_processed = None
    try:
        definitions = RuleGenerator.generate(site_url)
        logger.info("Definition received for : {}".format(site_url))
        # print(Color.green(
        #     "> Definitions Received for {}:".format(site_url)))
        pprint.pprint(definitions)

        site_id = get_site_id(site_url)

        logger.info("Extracted Site ID: {}".format(site_id))
        # print("> Site URL: {}".format("www.classichome.com"))

        username, password = get_username(site_id), get_password(site_id)

        logger.info("Username : {}, Password: {}".format(username, password))

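        # Log in to the storefront; do_login() returns a logged-in browser
        # session on success, or False if the credentials are rejected.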
        login_form = Login("http://{}".format(site_url), username, password)
        br = login_form.do_login()

        if br is False:
            logger.info("Error username/password invalid")
            print(Color.red("Error-Please check your username and password."))
            raise Exception(
                "Invalid username/password found for {} store.".format(site_url))

        product_parser = ProductParser(br)

        output_filename = "classichome-output-{}.csv".format(
            datetime.date.today())
        # input_filename = os.path.join("filtered_product_urls.csv")

        # input_csv_file = open(input_filename, "r")
        final_csv_file = open(output_filename, "a+")
        writer = csv.writer(final_csv_file)
        queue = Queue()

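        # Queue each product URL together with the scrape definitions and the CSV writer.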
        for row in get_urls_to_scrape(site_id):
            queue.put((row[0], definitions, writer))

        # for rows in csv.reader(input_csv_file):
        #     queue.put((rows.pop(),definitions, writer))

        count = 0
        THREAD_COUNT = 12
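        # Drain the queue in batches: spawn up to THREAD_COUNT worker threads,
        # then join the whole batch before dequeuing the next set of URLs.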
        while queue.qsize():
            for i in xrange(THREAD_COUNT):
                if queue.empty():
                    logger.info("Queue Empty. Nothing to process")
                    # print(Color.red("> Queue Empty. Nothing to process.\n"))
                    sys.exit(0)
                else:
                    url, definitions, writer = queue.get()
                    last_processed = url
                    t = threading.Thread(group=None,
                                         target=product_parser.parse, name=i,
                                         args=(definitions, url, writer))
                    t.start()

                    lock.acquire()
                    count += 1
                    lock.release()

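            # Block until every worker thread in this batch has finished.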
            for t in threading.enumerate():
                if t is not threading.currentThread():
                    t.join()

        logger.info("Total products parsed : {}".format(count))
        print(Color.green("\nTotal products parsed : {}".format(count)))
        # print("Total failed rows: {}".format(len(failed)))
    except KeyboardInterrupt as e:
        print(Color.red("Ctrl+C User Interruped."))
        sys.exit(0)
    except Exception as e:
        logger.error("Exception : {}".format(e.message), exc_info=True)
        print(Color.red("Exceptions: {}".format(e.message)))
Developer ID: beekal, Project: WebInfoScraper, Lines of code: 79, Source file: URLBuildFactory.py


Note: The login.Login.do_login examples in this article were compiled by 纯净天空 from open-source code and documentation hosted on platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by their respective authors, and copyright remains with the original authors; consult the corresponding project's license before redistributing or using the code. Do not reproduce without permission.