

Python flaky.flaky method code examples

This article collects typical usage examples of the flaky.flaky method in Python. If you are wondering what flaky.flaky does, how to call it, or what real-world usage looks like, the curated examples below should help. You can also explore further usage examples from the flaky package itself.


The following presents 14 code examples of the flaky.flaky method, sorted by popularity by default.
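
As a quick orientation before the project examples, here is a minimal, hypothetical sketch (not taken from any of the projects below) of how the flaky.flaky decorator is typically applied to a single test:

import random

from flaky import flaky


@flaky(max_runs=3, min_passes=1)
def test_sometimes_fails():
    # Hypothetical flaky test: flaky reruns it up to 3 times and reports
    # success as soon as one run passes.
    assert random.random() > 0.3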

Example 1: test_failure_due_to_timeout

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_failure_due_to_timeout(err, *args):
    """
    check if we should rerun a test with the flaky plugin or not.
    for now, only rerun if we failed the test for one of the following
    three exceptions: cassandra.OperationTimedOut, ccm.node.ToolError,
    and ccm.node.TimeoutError.

    - cassandra.OperationTimedOut will be thrown when a cql query made through
    the python-driver times out.
    - ccm.node.ToolError will be thrown when an invocation of a "tool" fails
    (in the case of dtests this is almost always an invocation of stress).
    - ccm.node.TimeoutError will be thrown when a blocking ccm operation
    on an individual node times out. In most cases this tends to be something
    like watch_log_for hitting the timeout before the desired pattern is seen
    in the node's logs.

    if we failed for one of these reasons - and we're running in docker - run
    the same "cleanup" logic we run before test execution and test setup begins
    and for good measure introduce a 2 second sleep. why 2 seconds? because it's
    magic :) - ideally this gets the environment back into a good state and makes
    the rerun of flaky tests likely to succeed if they failed in the first place
    due to environmental issues.
    """
    if issubclass(err[0], OperationTimedOut) or issubclass(err[0], ToolError) or issubclass(err[0], TimeoutError):
        if running_in_docker():
            cleanup_docker_environment_before_test_execution()
            time.sleep(2)
        return True
    else:
        return False 
Developer: apache, Project: cassandra-dtest, Lines: 32, Source: dtest.py
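
A rerun predicate like this one is normally attached through flaky's rerun_filter hook; a hedged sketch of that wiring (the decorated test is hypothetical, not part of cassandra-dtest):

from flaky import flaky


@flaky(max_runs=2, rerun_filter=test_failure_due_to_timeout)
def test_read_path_under_load():
    # Hypothetical dtest: if the first run raises one of the timeout-related
    # exceptions above, the filter returns True and flaky reruns the test once.
    ...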

Example 2: test_function

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_function():
    """
    Nose will import this function and wrap it in a :class:`FunctionTestCase`.
    It's included in the example to make sure flaky handles it correctly.
    """ 
Developer: box, Project: flaky, Lines: 7, Source: test_nose_example.py

Example 3: test_something_flaky

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_something_flaky(self):
        """
        Flaky will run this test twice.
        It will fail once and then succeed once.
        This ensures that we mark tests as flaky even if they don't have a
        decorator when we use the command-line options.
        """
        self._threshold += 1
        if self._threshold < 0:
            raise Exception("Threshold is not high enough.") 
Developer: box, Project: flaky, Lines: 12, Source: test_nose_options_example.py
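
This test carries no @flaky decorator; the retries come entirely from flaky's nose plugin options. A hedged sketch of an equivalent programmatic invocation (the test path and exact flags are assumptions based on flaky's nose plugin options):

import nose

# Treat every collected test as flaky and allow up to two runs per test,
# mirroring "nosetests --with-flaky --force-flaky --max-runs 2".
nose.run(argv=[
    "nosetests",
    "--with-flaky",
    "--force-flaky",
    "--max-runs", "2",
    "test/test_nose/test_nose_options_example.py",
])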

Example 4: test_flaky_thing_that_fails_then_succeeds

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_flaky_thing_that_fails_then_succeeds(self):
        """
        Flaky will run this test 3 times.
        It will fail twice and then succeed once.
        This ensures that the flaky decorator overrides any command-line
        options we specify.
        """
        self._threshold += 1
        if self._threshold < 1:
            raise Exception("Threshold is not high enough.") 
Developer: box, Project: flaky, Lines: 12, Source: test_nose_options_example.py
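
The decorator itself is cropped out of this excerpt; based on the docstring, it would be applied roughly as follows (a hedged reconstruction, not a verbatim copy of the upstream class; the class name and starting threshold are assumptions):

from unittest import TestCase

from flaky import flaky


class TestNoseOptionsExample(TestCase):
    _threshold = -2  # assumed so the test fails twice, then passes on the third run

    @flaky(max_runs=3, min_passes=1)  # overrides --max-runs from the command line
    def test_flaky_thing_that_fails_then_succeeds(self):
        self._threshold += 1
        if self._threshold < 1:
            raise Exception("Threshold is not high enough.")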

Example 5: test_flaky_thing_that_fails_then_succeeds

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_flaky_thing_that_fails_then_succeeds():
        """
        Flaky will run this test 3 times.
        It will fail twice and then succeed once.
        This ensures that the flaky decorator overrides any command-line
        options we specify.
        """
        TestExample._threshold += 1
        assert TestExample._threshold >= 1 
Developer: box, Project: flaky, Lines: 11, Source: test_pytest_options_example.py
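
The excerpt omits the @flaky decorator whose settings the docstring says take precedence over the command line; the command-line side of that interaction looks roughly like this (hedged; the flag names are assumptions based on flaky's pytest plugin options, and the file name comes from the source listed above):

import pytest

# Mirrors "pytest --force-flaky --max-runs=2 test_pytest_options_example.py":
# every undecorated test is treated as flaky with up to two runs, while a
# test's own @flaky decorator still overrides these options.
pytest.main([
    "--force-flaky",
    "--max-runs=2",
    "test_pytest_options_example.py",
])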

Example 6: flaky

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def flaky(f=None, max_runs=5, filter=None):
    """Makes a test retry on remote service errors."""
    if not f:
        return functools.partial(flaky, max_runs=max_runs, filter=filter)

    return _flaky(max_runs=3, rerun_filter=filter)(
        pytest.mark.flaky(pytest.mark.slow(f))) 
Developer: GoogleCloudPlatform, Project: python-repo-tools, Lines: 9, Source: flaky.py
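
Hypothetical usage of this wrapper (the test names are illustrative; note that, as excerpted, the wrapper always passes max_runs=3 to the underlying flaky decorator, so its own max_runs argument only survives into the functools.partial call):

# Assumes the wrapper defined above is importable in the current scope.
@flaky
def test_calls_remote_service():
    ...


@flaky(max_runs=5, filter=lambda err, *args: True)
def test_calls_other_remote_service():
    # The lambda reruns on any error; max_runs=5 is forwarded to the partial,
    # but the inner _flaky call above still uses max_runs=3.
    ...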

Example 7: test_forward_pass_runs_correctly

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_forward_pass_runs_correctly(self):
        batch = Batch(self.instances)
        batch.index_instances(self.vocab)
        training_tensors = batch.as_tensor_dict()
        output_dict = self.model(**training_tensors)

        metrics = self.model.get_metrics(reset=True)
        # We've set up the data such that there's a fake answer that consists of the whole
        # paragraph.  _Any_ valid prediction for that question should produce an F1 of greater than
        # zero, while if we somehow haven't been able to load the evaluation data, or there was an
        # error with using the evaluation script, this will fail.  This makes sure that we've
        # loaded the evaluation data correctly and have hooked things up to the official evaluation
        # script.
        assert metrics[u'f1'] > 0

        span_start_probs = output_dict[u'span_start_probs'][0].data.numpy()
        span_end_probs = output_dict[u'span_end_probs'][0].data.numpy()
        assert_almost_equal(numpy.sum(span_start_probs, -1), 1, decimal=6)
        assert_almost_equal(numpy.sum(span_end_probs, -1), 1, decimal=6)
        span_start, span_end = tuple(output_dict[u'best_span'][0].data.numpy())
        assert span_start >= 0
        assert span_start <= span_end
        assert span_end < self.instances[0].fields[u'passage'].sequence_length()
        assert isinstance(output_dict[u'best_span_str'][0], unicode)

    # Some recent efficiency changes (using bmm for `weighted_sum`, the more efficient
    # `masked_softmax`...) have made this _very_ flaky... 
Developer: plasticityai, Project: magnitude, Lines: 29, Source: bidaf_test.py

Example 8: increase_wait_time

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def increase_wait_time(err, func_name, func, plugin):
    """
    This function is used as a "rerun_filter" for "flaky". It increases an offset
    time in TIME_STORE that can be used later inside tests, and is actually used
    in TestSupervisor.wait() to wait more and more in case we run tests on a slow
    machine.
    """
    # offset time starts at 0 on first invocation
    TIME_STORE[func_name] = TIME_STORE.get(func_name, -1) + 1
    return True 
Developer: botify-labs, Project: simpleflow, Lines: 12, Source: test_supervisor.py
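
A hedged sketch of how a filter with this signature gets attached (the test name and run count are illustrative, not taken from simpleflow); flaky calls the rerun filter with (err, name, test, plugin), which matches the parameters above:

from flaky import flaky


@flaky(max_runs=5, rerun_filter=increase_wait_time)
def test_supervisor_eventually_settles():
    # Hypothetical test: each rerun bumps its entry in TIME_STORE, so the
    # waiting helpers it relies on can back off a little more every attempt.
    ...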

Example 9: client_with_credentials

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def client_with_credentials(app):
    """This fixture provides a Flask app test client that has a session
    pre-configured with user credentials."""
    credentials = OAuth2Credentials(
        'access_token',
        'client_id',
        'client_secret',
        'refresh_token',
        '3600',
        None,
        'Test',
        id_token={'sub': '123', 'email': 'user@example.com'},
        scopes=('email', 'profile'))

    @contextlib.contextmanager
    def inner():
        with app.test_client() as client:
            with client.session_transaction() as session:
                session['profile'] = {
                    'email': 'abc@example.com',
                    'name': 'Test User'
                }
                session['google_oauth2_credentials'] = credentials.to_json()
            yield client

    return inner


# Mark all test cases in this class as flaky, so that if errors occur they
# can be retried. This is useful when databases are temporarily unavailable. 
Developer: GoogleCloudPlatform, Project: getting-started-python, Lines: 32, Source: test_auth.py
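
The comment above refers to class-level decoration, which flaky supports: decorating a test class applies the retry behaviour to every test method in it. A hedged sketch (the class and test names are hypothetical):

from unittest import TestCase

from flaky import flaky


@flaky(max_runs=3)
class TestAuthViews(TestCase):
    # Every test method inherits the flaky behaviour, so a transient
    # database or OAuth outage triggers a rerun instead of a hard failure.
    def test_login_page_renders(self):
        ...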

Example 10: test_cached_runtime

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_cached_runtime(self):
        """
        Test the runtime caching by manually running with it off
        and then running with it on and comparing invocation times. 
        Note that due to AWS Lambda internals this might not
        do the right thing, so we mark it as flaky.
        """

        def test_add(x):
            return x + 7

        t1 = time.time()
        fut = self.wrenexec.map(test_add, [10], use_cached_runtime=False)[0]
        res = fut.result() 
        t2 = time.time()
        non_cached_latency = t2-t1

        assert fut.run_status['runtime_cached'] == False
        assert res == 17

        t1 = time.time()
        fut = self.wrenexec.map(test_add, [10], use_cached_runtime=True)[0]
        res = fut.result() 
        t2 = time.time()
        cached_latency = t2-t1

        assert res == 17
        assert fut.run_status['runtime_cached'] == True

        assert cached_latency < non_cached_latency 
Developer: pywren, Project: pywren, Lines: 32, Source: test_simple.py

Example 11: setupClass

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def setupClass(cls):
        # Travis runs multiple tests concurrently on fake machines that might
        # collide on pid and hostid, so use a uuid1, which should be fairly random
        # thanks to clock_seq.
        cls.run_id = '%s-%s' % (uuid.uuid1().hex, os.getpid())

        try:
            global S3Storage
            from depot.io.awss3 import S3Storage
        except ImportError:
            raise SkipTest('Boto not installed')

        env = os.environ
        access_key_id = env.get('AWS_ACCESS_KEY_ID')
        secret_access_key = env.get('AWS_SECRET_ACCESS_KEY')
        if access_key_id is None or secret_access_key is None:
            raise SkipTest('Amazon S3 credentials not available')

        cls.default_bucket_name = 'filedepot-%s' % (access_key_id.lower(), )
        cls.cred = (access_key_id, secret_access_key)

        bucket_name = 'filedepot-testfs-%s' % cls.run_id
        cls.fs = S3Storage(access_key_id, secret_access_key, bucket_name)
        while not cls.fs._conn.lookup(bucket_name):
            # Wait for bucket to exist, to avoid flaky tests...
            time.sleep(0.5) 
Developer: amol-, Project: depot, Lines: 28, Source: test_awss3_storage.py

Example 12: teardownClass

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def teardownClass(cls):
        if not cls.fs._conn.lookup(cls.fs._bucket_driver.bucket.name):
            return
        
        keys = [key.name for key in cls.fs._bucket_driver.bucket]
        if keys:
            cls.fs._bucket_driver.bucket.delete_keys(keys)

        try:
            cls.fs._conn.delete_bucket(cls.fs._bucket_driver.bucket.name)
            while cls.fs._conn.lookup(cls.fs._bucket_driver.bucket.name):
                # Wait for bucket to be deleted, to avoid flaky tests...
                time.sleep(0.5)
        except:
            pass 
Developer: amol-, Project: depot, Lines: 17, Source: test_awss3_storage.py

Example 13: test_6924_dropping_ks

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_6924_dropping_ks(self):
        """
        @jira_ticket CASSANDRA-6924
        @jira_ticket CASSANDRA-11729

        Data inserted immediately after dropping and recreating a
        keyspace with an indexed column family is not included
        in the index.

        This test can be flaky due to concurrency issues during
        schema updates. See CASSANDRA-11729 for an explanation.
        """
        # Reproducing requires at least 3 nodes:
        cluster = self.cluster
        cluster.populate(3).start()
        node1, node2, node3 = cluster.nodelist()
        session = self.patient_cql_connection(node1)

        # We have to wait up to RING_DELAY + 1 seconds for the MV Builder task
        # to complete, to prevent schema concurrency issues with the drop
        # keyspace calls that come later. See CASSANDRA-11729.
        if self.cluster.version() > '3.0':
            self.cluster.wait_for_any_log('Completed submission of build tasks for any materialized views',
                                          timeout=35, filename='debug.log')

        # This only occurs when dropping and recreating with
        # the same name, so loop through this test a few times:
        for i in range(10):
            logger.debug("round %s" % i)
            try:
                session.execute("DROP KEYSPACE ks")
            except (ConfigurationException, InvalidRequest):
                pass

            create_ks(session, 'ks', 1)
            session.execute("CREATE TABLE ks.cf (key text PRIMARY KEY, col1 text);")
            session.execute("CREATE INDEX on ks.cf (col1);")

            for r in range(10):
                stmt = "INSERT INTO ks.cf (key, col1) VALUES ('%s','asdf');" % r
                session.execute(stmt)

            self.wait_for_schema_agreement(session)

            rows = session.execute("select count(*) from ks.cf WHERE col1='asdf'")
            count = rows[0][0]
            assert count == 10 
Developer: apache, Project: cassandra-dtest, Lines: 49, Source: secondary_indexes_test.py

Example 14: test_too_big_runtime

# Required import: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_too_big_runtime():
    """
    Sometimes we accidentally build a runtime that's too big.
    When this happened, the runtime left behind crap
    and we could never test the runtime again.
    This tests that we now return a sane exception and can re-run code.

    There are problems with this test:
    1. It is Lambda-only.
    2. It depends on Lambda having a 512 MB limit. When that is raised someday,
    this test will always pass.
    3. It is flaky, because it might be the case that we get _new_
    workers on the next invocation of map that don't have the left-behind
    crap.
    """


    too_big_config = pywren.wrenconfig.default()
    too_big_config['runtime']['s3_bucket'] = 'pywren-runtimes-public-us-west-2'
    ver_str = "{}.{}".format(sys.version_info[0], sys.version_info[1])
    too_big_config['runtime']['s3_key'] = "pywren.runtimes/too_big_do_not_use_{}.tar.gz".format(ver_str)


    default_config = pywren.wrenconfig.default()


    wrenexec_toobig = pywren.default_executor(config=too_big_config)
    wrenexec = pywren.default_executor(config=default_config)


    def simple_foo(x):
        return x
    MAP_N = 10

    futures = wrenexec_toobig.map(simple_foo, range(MAP_N))
    for f in futures:
        with pytest.raises(Exception) as excinfo:
            f.result()
        assert excinfo.value.args[1] == 'RUNTIME_TOO_BIG'

    # these ones should work
    futures = wrenexec.map(simple_foo, range(MAP_N))
    for f in futures:
        f.result() 
Developer: pywren, Project: pywren, Lines: 46, Source: test_lambda.py


Note: The flaky.flaky method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by their respective authors, and copyright in the code remains with those authors. When redistributing or using the code, please follow the license of the corresponding project. Do not reproduce this article without permission.