

Python Configuration.show_skipped Method Code Examples

This article collects typical usage examples of the Python method behave.configuration.Configuration.show_skipped. If you are wondering how Configuration.show_skipped is used in practice, the curated example below may help. You can also explore further usage examples of the containing class, behave.configuration.Configuration.


The following presents 1 code example of the Configuration.show_skipped method.

Example 1: goh_behave

# Required module import: from behave.configuration import Configuration [as alias]
# Or: from behave.configuration.Configuration import show_skipped [as alias]
def goh_behave(feature_dirs=None, step_dirs=None, test_artifact_dir=None,
               listeners=None, dry_run=False, webdriver_url=None,
               webdriver_processor=None, tags=None, show_skipped=True, config_file=None,
               test_config=None):
    """
    runs behave

    :param feature_dirs: list of paths to feature files to run, or a single
        path. Defaults to ["features"]
    :type feature_dirs: list
    :param step_dirs: list of paths to load steps from. The step paths will be
        searched recursively
    :type step_dirs: list
    :param test_artifact_dir: path where test artifacts are stored. If None,
        no test artifacts will be written; note that this also prevents
        screenshots from automatically being taken after each step
    :param listeners: list of Listener objects to call at different points of
        the test
    :type listeners: list
    :param dry_run: if True, behave will only check that all steps are defined
    :param webdriver_url: optional WebDriver node/grid URL to run the tests
        against
    :param webdriver_processor: provides the ability to process things like
        capabilities before they are actually used
    :param tags: optional list of tag expressions used to select which
        features and scenarios to run
    :param show_skipped: whether skipped features appear in behave's output.
        Defaults to True
    :param config_file: path to a JSON configuration file that may contain any
        of the other parameters listed here (except listeners and
        webdriver_processor)
    :param test_config: a dictionary of the same parameters. If a config_file
        is also passed, its values are merged into test_config, overwriting
        any existing keys

    :return: True if the tests passed, else False
    :rtype: bool

    :raise ParserError: when a feature file could not be parsed
    :raise FileNotFoundError: when a feature file could not be found
    :raise InvalidFileLocationError: when a feature path was bad
    :raise InvalidFilenameError: when a feature file name was bad
    :raise UndefinedStepsError: if some steps were undefined
    """

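    # For illustration only (hypothetical keys and values; see the project's
    # sample.config for the real format), a JSON config file might look like:
    #   {
    #       "feature_dirs": ["features"],
    #       "step_dirs": ["steps"],
    #       "tags": ["@smoke"],
    #       "show_skipped": false
    #   }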
    if config_file:
        try:
            with open(config_file) as config:
                try:
                    json_config = json.load(config)
                except ValueError as e:
                    raise ValueError(u'Could not parse {} config file as JSON. See sample.config for an example config file. {}'.format(config_file, e))
                if test_config:
                    test_config.update(json_config)
                else:
                    test_config = json_config
        except EnvironmentError as e:
            raise IOError(u'Could not open the {} config file. See sample.config for an example config file. {}'.format(config_file, e))

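    # Note the precedence below: directly passed parameters (feature_dirs,
    # step_dirs, test_artifact_dir, webdriver_url, tags) win over test_config
    # values, while dry_run and show_skipped are always taken from test_config
    # whenever the key is present.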
    if test_config:
        log.debug(u'Using test_config:')
        for key in test_config:
            log.debug(u'    {}: {}'.format(key, test_config[key]))
        if u'feature_dirs' in test_config and not feature_dirs:
            feature_dirs = test_config[u'feature_dirs']
        if u'step_dirs' in test_config and not step_dirs:
            step_dirs = test_config[u'step_dirs']
        if u'test_artifact_dir' in test_config and not test_artifact_dir:
            test_artifact_dir = test_config[u'test_artifact_dir']
        if u'dry_run' in test_config:
            dry_run = test_config[u'dry_run']
        if u'webdriver_url' in test_config and not webdriver_url:
            webdriver_url = test_config[u'webdriver_url']
        if u'tags' in test_config and not tags:
            tags = test_config[u'tags']
        if u'show_skipped' in test_config:
            show_skipped = test_config[u'show_skipped']

    if not feature_dirs:
        feature_dirs = ["features"]
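    # basestring exists only on Python 2; on Python 3 this check would be
    # isinstance(feature_dirs, str)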
    if isinstance(feature_dirs, basestring):
        feature_dirs = [feature_dirs]

    args = [u'']

    # First, do a silent dry run to catch any undefined steps. When the user
    # explicitly requested a dry run, skip this pre-check, since it will be
    # done below anyway with the user's full configuration (listeners, etc.);
    # this automatic pre-check is meant to happen silently (config.format = []
    # suppresses formatter output).
    if not dry_run:
        config = Configuration([u'--dry-run'])
        config.format = []
        _run_behave(feature_dirs, config, step_dirs=step_dirs)

    # output test artifacts
    if test_artifact_dir:
        args.append(u'--junit')
        args.append(u'--junit-directory')
        args.append(test_artifact_dir)

    if dry_run:
        args.append(u'--dry-run')

    # setup config for behave's runner
    config = Configuration(args)
#......... part of this code omitted .........
Developer: PhoenixWright, Project: MobileBDDCore, Lines: 103, Source: runner.py
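
For context, here is a minimal usage sketch of goh_behave (not part of the original source; the paths, tags, and artifact directory below are illustrative assumptions):

# Hypothetical invocation; all argument values are placeholders, not values
# taken from the MobileBDDCore project.
passed = goh_behave(
    feature_dirs=[u'features'],
    step_dirs=[u'steps'],
    test_artifact_dir=u'test-artifacts',
    tags=[u'@smoke'],
    show_skipped=False,
)
if not passed:
    raise SystemExit(u'behave run failed')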

