This article collects typical usage examples of the Python method nose.plugins.xunit.Xunit.enabled. If you are wondering what exactly Xunit.enabled does, how to call it, or what real code that uses it looks like, the curated examples below should help. You can also explore further usage examples of the containing class, nose.plugins.xunit.Xunit.
One code example of the Xunit.enabled method is shown below; examples are sorted by popularity by default. You can upvote the examples you like or find useful, and your feedback helps the system recommend better Python code samples.
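Before the full example, here is a minimal sketch of what setting Xunit.enabled programmatically (instead of passing --with-xunit on the command line) typically involves. The _Options class is only a stand-in for nose's parsed command-line options and is not part of nose itself; the output file name is arbitrary:

from nose.config import Config
from nose.plugins.xunit import Xunit

xunit = Xunit()
xunit.enabled = True  # normally set by nose itself when --with-xunit is passed

class _Options(object):
    # configure() reads the output path from the options object;
    # newer nose releases also look up xunit_testsuite_name
    xunit_file = 'nosetests.xml'
    xunit_testsuite_name = 'nosetests'

xunit.configure(_Options(), Config())
# after the tests have run, xunit.report(stream) writes the XUnit XML file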
Example 1: run_tests
# Required import: from nose.plugins.xunit import Xunit [as alias]
# Or: from nose.plugins.xunit.Xunit import enabled [as alias]

# The imports and the AttributeDict stand-in below are not part of the original
# snippet; they assume an older (0.x-era) Scrapy release in which these module
# paths and APIs (settings.overrides, crawler.install, scrapy.log, pydispatch)
# still exist.
from nose.config import Config
from nose.plugins.xunit import Xunit
from scrapy import log, signals
from scrapy.commands.check import Command as CheckCommand
from scrapy.contracts import ContractsManager
from scrapy.crawler import CrawlerProcess
from scrapy.utils.conf import build_component_list
from scrapy.utils.misc import load_object
from scrapy.xlib.pydispatch import dispatcher


class AttributeDict(dict):
    # Assumed stand-in for the project-local helper used below: a dict whose keys
    # can also be read as attributes (Xunit.configure() reads options.xunit_file).
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)


def run_tests(spider, output_file, settings):
"""
Helper for running test contractors for a spider and output an
XUnit file (for CI)
For using offline input the HTTP cache is enabled
"""
settings.overrides.update({
"HTTPCACHE_ENABLED": True,
"HTTPCACHE_EXPIRATION_SECS": 0,
})
crawler = CrawlerProcess(settings)
contracts = build_component_list(
crawler.settings['SPIDER_CONTRACTS_BASE'],
crawler.settings['SPIDER_CONTRACTS'],
)
xunit = Xunit()
xunit.enabled = True
xunit.configure(AttributeDict(xunit_file=output_file), Config())
xunit.stopTest = lambda *x: None
check = CheckCommand()
check.set_crawler(crawler)
check.settings = settings
check.conman = ContractsManager([load_object(c) for c in contracts])
check.results = xunit
# this are specially crafted requests that run tests as callbacks
requests = check.get_requests(spider)
crawler.install()
crawler.configure()
crawler.crawl(spider, requests)
log.start(loglevel='DEBUG')
# report is called when the crawler finishes, it creates the XUnit file
report = lambda: check.results.report(check.results.error_report_file)
dispatcher.connect(report, signals.engine_stopped)
crawler.start()
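A call to this helper would look roughly like the sketch below; ExampleSpider and the myproject package are hypothetical names, while get_project_settings() is the usual way to load the Scrapy project settings:

from scrapy.utils.project import get_project_settings
from myproject.spiders.example import ExampleSpider  # hypothetical spider

settings = get_project_settings()
run_tests(ExampleSpider(), 'contract-results.xml', settings)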