

Python Crawler.crawl_multithread Method Code Examples

This article collects typical usage examples of the Python method Crawler.Crawler.crawl_multithread. If you are wondering how Crawler.crawl_multithread is used in practice, the curated examples below may help. You can also explore further usage examples of the containing class, Crawler.Crawler.


Below are 2 code examples of Crawler.crawl_multithread, ordered by popularity.

Example 1: crawler

# Required module: from Crawler import Crawler
# Method alias:    from Crawler.Crawler import crawl_multithread
import argparse
import os.path

import FileOperations as FO  # assumed alias for the project's FileOperations module (cf. Example 2)
from Crawler import Crawler

parser = argparse.ArgumentParser(description='Crawl files and execute regex rules on them')
parser.add_argument('-p', metavar='ParameterFilePath', type=argparse.FileType('r'), required=True,
                    help="path to a parameter JSON file. The parameter file should contain 'crawling', 'rules' and 'result' keys")
parser.add_argument('-o', metavar='OutputFilePath', type=argparse.FileType('w+'), help='output file. Required if no output is specified in the parameter file. The file must be either a .csv or .json')
parser.add_argument('-mt', metavar='ThreadNumber', type=int, help='run a multi-threaded crawler (1 thread per file) with the given number of concurrent threads')
parser.add_argument('-s', metavar='StartDirectory', type=str, help='directory in which the crawling will start. Required if there is no "crawling" dictionary in the parameter file')

args = parser.parse_args()
if "p" not in args or args.p is None:
    parser.error(parser.format_usage())
param = FO.get_from_JSON_file(args.p.name)
if "rules" not in param or ("o" not in args and "output" not in param):
    print("rules error")
    parser.error(parser.format_usage())
if "crawling" not in param and ("s" not in args or args.s is None):
    parser.error(parser.format_usage())
elif "s" in args and args.s is not None:
    param["crawling"] = { "start": args.s}
if "o" in args and args.o is not None:
    output_name, output_extension = os.path.splitext(args.o.name)
    param["output"] = {
        "path": args.o.name,
        "type": "csv" if ".csv" in output_extension else "json"
    }
if "mt" in args and args.mt is not None:
    Crawler.crawl_multithread(param.get("crawling"), param.get("rules"), param.get("result"), param["output"], args.mt)
else:
    Crawler.crawl(param.get("crawling"), param.get("rules"),  param.get("result"), param["output"])
Author: glebedel | Project: FileCrawler | Lines: 33 | Source file: filecrawler.py
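The output-type selection in the script above hinges on `os.path.splitext`. A minimal standalone sketch of that check (file names here are made up for illustration):

```python
import os.path

def detect_output_type(path):
    """Return 'csv' for .csv paths, otherwise default to 'json',
    mirroring the extension check in the script above."""
    _, extension = os.path.splitext(path)
    return "csv" if ".csv" in extension else "json"

print(detect_output_type("results.csv"))   # csv
print(detect_output_type("results.json"))  # json
```

Note that any extension other than `.csv` (including no extension at all) falls through to `json`, which matches the script's default behavior.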

示例2: test_crawl_native_minimalParameterFile_multithreaded_native

# Required module: from Crawler import Crawler
# Method alias:    from Crawler.Crawler import crawl_multithread
import FileOperations
from Crawler import Crawler

# Method of a unittest.TestCase subclass in the project's test suite
def test_crawl_native_minimalParameterFile_multithreaded_native(self):
    parameters = FileOperations.get_from_JSON_file("./test/minimal_parameters.json")
    data = Crawler.crawl_multithread(parameters["crawling"], parameters["rules"], parameters.get("result"))
    self.assertEqual(data['./test/test_inputs/minimalist_data.txt']['matches']['HasName']['city'][0], 'London')
Author: glebedel | Project: FileCrawler | Lines: 6 | Source file: test_crawler.py


Note: the Crawler.Crawler.crawl_multithread examples in this article were collected from open-source code hosted on platforms such as GitHub/MSDocs; the code snippets were selected from open-source projects and their copyright belongs to the original authors. Refer to each project's license before distributing or reusing the code; do not reproduce without permission.