

Python flaky.flaky method code examples

This article collects typical usage examples of the flaky.flaky method in Python. If you are wondering how the flaky.flaky method is used in practice, what it does, or what real-world examples look like, the curated code examples below may help. You can also explore other usage examples from the flaky package, where this method lives.


Below are 14 code examples of the flaky.flaky method, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
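Most of the snippets below show rerun filters or test bodies rather than the decorator line itself, so as a point of reference here is a minimal sketch of the usual way flaky.flaky is applied. The test is illustrative only and not taken from any of the projects cited below.

# Minimal usage sketch (illustrative only; not from the projects below).
import random

from flaky import flaky

@flaky(max_runs=3, min_passes=1)  # rerun up to 3 times; a single pass is enough
def test_unreliable_operation():
    # Simulates an operation that only succeeds some of the time.
    assert random.random() < 0.5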

Example 1: test_failure_due_to_timeout

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_failure_due_to_timeout(err, *args):
    """
    Check whether we should rerun a test with the flaky plugin or not.
    For now, only rerun if the test failed with one of the following
    three exceptions: cassandra.OperationTimedOut, ccm.node.ToolError,
    and ccm.node.TimeoutError.

    - cassandra.OperationTimedOut will be thrown when a CQL query made through
    the python-driver times out.
    - ccm.node.ToolError will be thrown when an invocation of a "tool" fails
    (in the case of dtests this will almost always be an invocation of stress).
    - ccm.node.TimeoutError will be thrown when a blocking ccm operation
    on an individual node times out. In most cases this tends to be something
    like watch_log_for hitting the timeout before the desired pattern is seen
    in the node's logs.

    If we failed for one of these reasons - and we're running in docker - run
    the same "cleanup" logic we run before test execution and test setup begins,
    and for good measure introduce a 2 second sleep. Why 2 seconds? Because it's
    magic :) - ideally this gets the environment back into a good state and makes
    the rerun of flaky tests likely to succeed if they failed in the first place
    due to environmental issues.
    """
    if issubclass(err[0], (OperationTimedOut, ToolError, TimeoutError)):
        if running_in_docker():
            cleanup_docker_environment_before_test_execution()
            time.sleep(2)
        return True
    else:
        return False 
Developer: apache, Project: cassandra-dtest, Lines: 32, Source: dtest.py
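The function above is a rerun filter: flaky calls it with the sys.exc_info() tuple of the failure and reruns the test only if it returns True. A hedged sketch of how such a filter is typically attached follows; the test name and body are hypothetical, not part of cassandra-dtest.

# Hypothetical wiring of the rerun filter above; not taken from cassandra-dtest.
from flaky import flaky

@flaky(max_runs=2, rerun_filter=test_failure_due_to_timeout)
def test_bulk_read_under_load():
    # A test that may raise OperationTimedOut on an overloaded machine; flaky
    # reruns it once, after the filter has performed the docker cleanup above.
    run_stress_and_validate()  # hypothetical helper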

Example 2: test_function

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_function():
    """
    Nose will import this function and wrap it in a :class:`FunctionTestCase`.
    It's included in the example to make sure flaky handles it correctly.
    """ 
Developer: box, Project: flaky, Lines: 7, Source: test_nose_example.py

Example 3: test_something_flaky

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_something_flaky(self):
        """
        Flaky will run this test twice.
        It will fail once and then succeed once.
        This ensures that we mark tests as flaky even if they don't have a
        decorator when we use the command-line options.
        """
        self._threshold += 1
        if self._threshold < 0:
            raise Exception("Threshold is not high enough.") 
Developer: box, Project: flaky, Lines: 12, Source: test_nose_options_example.py

Example 4: test_flaky_thing_that_fails_then_succeeds

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_flaky_thing_that_fails_then_succeeds(self):
        """
        Flaky will run this test 3 times.
        It will fail twice and then succeed once.
        This ensures that the flaky decorator overrides any command-line
        options we specify.
        """
        self._threshold += 1
        if self._threshold < 1:
            raise Exception("Threshold is not high enough.") 
Developer: box, Project: flaky, Lines: 12, Source: test_nose_options_example.py
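The docstring says this test runs 3 times because the decorator overrides the plugin's command-line switches (such as --force-flaky and --max-runs). The decorator itself is not shown in the snippet; consistent with the docstring, it would presumably look roughly like this:

# Presumed decorator for the test above, consistent with its docstring; not shown in the snippet.
from flaky import flaky

@flaky(max_runs=3, min_passes=1)
def test_flaky_thing_that_fails_then_succeeds(self):
    ...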

Example 5: test_flaky_thing_that_fails_then_succeeds

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_flaky_thing_that_fails_then_succeeds():
        """
        Flaky will run this test 3 times.
        It will fail twice and then succeed once.
        This ensures that the flaky decorator overrides any command-line
        options we specify.
        """
        TestExample._threshold += 1
        assert TestExample._threshold >= 1 
Developer: box, Project: flaky, Lines: 11, Source: test_pytest_options_example.py

Example 6: flaky

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def flaky(f=None, max_runs=5, filter=None):
    """Makes a test retry on remote service errors."""
    if not f:
        return functools.partial(flaky, max_runs=max_runs, filter=filter)

    return _flaky(max_runs=max_runs, rerun_filter=filter)(
        pytest.mark.flaky(pytest.mark.slow(f)))
Developer: GoogleCloudPlatform, Project: python-repo-tools, Lines: 9, Source: flaky.py
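This wrapper lets a project apply a single decorator, with or without arguments, that combines flaky retries with the slow marker. A hedged usage sketch, assuming the wrapper above is in scope; the test names, the fixture, and the filter are hypothetical:

# Hypothetical usage of the wrapper defined above.
@flaky
def test_list_buckets(cloud_client):
    ...

@flaky(max_runs=5, filter=retry_on_service_error)  # retry_on_service_error is a hypothetical rerun filter
def test_create_topic(cloud_client):
    ...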

Example 7: test_forward_pass_runs_correctly

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_forward_pass_runs_correctly(self):
        batch = Batch(self.instances)
        batch.index_instances(self.vocab)
        training_tensors = batch.as_tensor_dict()
        output_dict = self.model(**training_tensors)

        metrics = self.model.get_metrics(reset=True)
        # We've set up the data such that there's a fake answer that consists of the whole
        # paragraph.  _Any_ valid prediction for that question should produce an F1 of greater than
        # zero, while if we somehow haven't been able to load the evaluation data, or there was an
        # error with using the evaluation script, this will fail.  This makes sure that we've
        # loaded the evaluation data correctly and have hooked things up to the official evaluation
        # script.
        assert metrics[u'f1'] > 0

        span_start_probs = output_dict[u'span_start_probs'][0].data.numpy()
        span_end_probs = output_dict[u'span_end_probs'][0].data.numpy()
        assert_almost_equal(numpy.sum(span_start_probs, -1), 1, decimal=6)
        assert_almost_equal(numpy.sum(span_end_probs, -1), 1, decimal=6)
        span_start, span_end = tuple(output_dict[u'best_span'][0].data.numpy())
        assert span_start >= 0
        assert span_start <= span_end
        assert span_end < self.instances[0].fields[u'passage'].sequence_length()
        assert isinstance(output_dict[u'best_span_str'][0], unicode)

    # Some recent efficiency changes (using bmm for `weighted_sum`, the more efficient
    # `masked_softmax`...) have made this _very_ flaky... 
Developer: plasticityai, Project: magnitude, Lines: 29, Source: bidaf_test.py

Example 8: increase_wait_time

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def increase_wait_time(err, func_name, func, plugin):
    """
    This function is used as a "rerun_filter" for "flaky". It increases an offset
    time in TIME_STORE that can be used later inside tests, and is actually used
    in TestSupervisor.wait() to wait more and more in case we run tests on a slow
    machine.
    """
    # offset time starts at 0 on first invocation
    TIME_STORE[func_name] = TIME_STORE.get(func_name, -1) + 1
    return True 
Developer: botify-labs, Project: simpleflow, Lines: 12, Source: test_supervisor.py
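A sketch of how this filter and TIME_STORE are meant to interact; the decorated test, the wait helper, and the base timeout shown here are hypothetical stand-ins, not the actual simpleflow test code.

# Hypothetical illustration of the rerun filter above; not the actual simpleflow tests.
from flaky import flaky

@flaky(max_runs=4, rerun_filter=increase_wait_time)
def test_supervisor_restarts_worker():
    # Each failed run bumps TIME_STORE for this test name (0, 1, 2, ...),
    # so the effective wait grows across reruns on slow machines.
    extra = TIME_STORE.get('test_supervisor_restarts_worker', 0)
    wait_for_worker(timeout=BASE_TIMEOUT + extra)  # hypothetical helper and constant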

Example 9: client_with_credentials

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def client_with_credentials(app):
    """This fixture provides a Flask app test client that has a session
    pre-configured with user credentials."""
    credentials = OAuth2Credentials(
        'access_token',
        'client_id',
        'client_secret',
        'refresh_token',
        '3600',
        None,
        'Test',
        id_token={'sub': '123', 'email': 'user@example.com'},
        scopes=('email', 'profile'))

    @contextlib.contextmanager
    def inner():
        with app.test_client() as client:
            with client.session_transaction() as session:
                session['profile'] = {
                    'email': 'abc@example.com',
                    'name': 'Test User'
                }
                session['google_oauth2_credentials'] = credentials.to_json()
            yield client

    return inner


# Mark all test cases in this class as flaky, so that if errors occur they
# can be retried. This is useful when databases are temporarily unavailable. 
Developer: GoogleCloudPlatform, Project: getting-started-python, Lines: 32, Source: test_auth.py
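The trailing comment refers to a class-level decoration in the original file: applying flaky to a test class marks every test in it as flaky. A hedged sketch of what that can look like; the class name, test name, and route below are hypothetical.

# Hypothetical class-level use hinted at by the comment above.
from flaky import flaky

@flaky(max_runs=3)
class TestAuthViews(object):
    def test_login_page(self, client_with_credentials):
        with client_with_credentials() as client:
            assert client.get('/auth/login').status_code in (200, 302)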

Example 10: test_cached_runtime

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_cached_runtime(self):
        """
        Test the runtime caching by manually running with it off
        and then running with it on and comparing invocation times.
        Note that due to AWS Lambda internals this might not
        do the right thing, so we mark the test as flaky.
        """

        def test_add(x):
            return x + 7

        t1 = time.time()
        fut = self.wrenexec.map(test_add, [10], use_cached_runtime=False)[0]
        res = fut.result() 
        t2 = time.time()
        non_cached_latency = t2-t1

        assert fut.run_status['runtime_cached'] == False
        assert res == 17

        t1 = time.time()
        fut = self.wrenexec.map(test_add, [10], use_cached_runtime=True)[0]
        res = fut.result() 
        t2 = time.time()
        cached_latency = t2-t1

        assert res == 17
        assert fut.run_status['runtime_cached'] == True

        assert cached_latency < non_cached_latency 
Developer: pywren, Project: pywren, Lines: 32, Source: test_simple.py
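The docstring notes that the test is marked as flaky because Lambda may not reuse a warm runtime. The marking itself is not shown in the snippet; a minimal sketch, assuming a class-level decorator and a retry count of 3 (both assumptions):

# Presumed marking referred to in the docstring above; class name and retry count are assumptions.
import unittest

from flaky import flaky

@flaky(max_runs=3)
class TestCachedRuntime(unittest.TestCase):
    ...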

Example 11: setupClass

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def setupClass(cls):
        # Travis runs multiple tests concurrently on fake machines that might
        # collide on pid and hostid, so use an uuid1 which should be fairly random
        # thanks to clock_seq
        cls.run_id = '%s-%s' % (uuid.uuid1().hex, os.getpid())

        try:
            global S3Storage
            from depot.io.awss3 import S3Storage
        except ImportError:
            raise SkipTest('Boto not installed')

        env = os.environ
        access_key_id = env.get('AWS_ACCESS_KEY_ID')
        secret_access_key = env.get('AWS_SECRET_ACCESS_KEY')
        if access_key_id is None or secret_access_key is None:
            raise SkipTest('Amazon S3 credentials not available')

        cls.default_bucket_name = 'filedepot-%s' % (access_key_id.lower(), )
        cls.cred = (access_key_id, secret_access_key)

        bucket_name = 'filedepot-testfs-%s' % cls.run_id
        cls.fs = S3Storage(access_key_id, secret_access_key, bucket_name)
        while not cls.fs._conn.lookup(bucket_name):
            # Wait for bucket to exist, to avoid flaky tests...
            time.sleep(0.5) 
Developer: amol-, Project: depot, Lines: 28, Source: test_awss3_storage.py

Example 12: teardownClass

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def teardownClass(cls):
        if not cls.fs._conn.lookup(cls.fs._bucket_driver.bucket.name):
            return
        
        keys = [key.name for key in cls.fs._bucket_driver.bucket]
        if keys:
            cls.fs._bucket_driver.bucket.delete_keys(keys)

        try:
            cls.fs._conn.delete_bucket(cls.fs._bucket_driver.bucket.name)
            while cls.fs._conn.lookup(cls.fs._bucket_driver.bucket.name):
                # Wait for bucket to be deleted, to avoid flaky tests...
                time.sleep(0.5)
        except:
            pass 
Developer: amol-, Project: depot, Lines: 17, Source: test_awss3_storage.py

Example 13: test_6924_dropping_ks

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_6924_dropping_ks(self):
        """
        @jira_ticket CASSANDRA-6924
        @jira_ticket CASSANDRA-11729

        Data inserted immediately after dropping and recreating a
        keyspace with an indexed column family is not included
        in the index.

        This test can be flaky due to concurrency issues during
        schema updates. See CASSANDRA-11729 for an explanation.
        """
        # Reproducing requires at least 3 nodes:
        cluster = self.cluster
        cluster.populate(3).start()
        node1, node2, node3 = cluster.nodelist()
        session = self.patient_cql_connection(node1)

        # We have to wait up to RING_DELAY + 1 seconds for the MV Builder task
        # to complete, to prevent schema concurrency issues with the drop
        # keyspace calls that come later. See CASSANDRA-11729.
        if self.cluster.version() > '3.0':
            self.cluster.wait_for_any_log('Completed submission of build tasks for any materialized views',
                                          timeout=35, filename='debug.log')

        # This only occurs when dropping and recreating with
        # the same name, so loop through this test a few times:
        for i in range(10):
            logger.debug("round %s" % i)
            try:
                session.execute("DROP KEYSPACE ks")
            except (ConfigurationException, InvalidRequest):
                pass

            create_ks(session, 'ks', 1)
            session.execute("CREATE TABLE ks.cf (key text PRIMARY KEY, col1 text);")
            session.execute("CREATE INDEX on ks.cf (col1);")

            for r in range(10):
                stmt = "INSERT INTO ks.cf (key, col1) VALUES ('%s','asdf');" % r
                session.execute(stmt)

            self.wait_for_schema_agreement(session)

            rows = session.execute("select count(*) from ks.cf WHERE col1='asdf'")
            count = rows[0][0]
            assert count == 10 
Developer: apache, Project: cassandra-dtest, Lines: 49, Source: secondary_indexes_test.py

Example 14: test_too_big_runtime

# Required module: import flaky [as alias]
# Or: from flaky import flaky [as alias]
def test_too_big_runtime():
    """
    Sometimes we accidentally build a runtime that's too big.
    When this happened, the runtime would leave behind crap
    and we could never test the runtime again.
    This tests that we now return a sane exception and can re-run code.

    There are problems with this test. It is:
    1. Lambda only.
    2. Dependent on Lambda having a 512 MB limit. When that limit is raised someday,
    this test will always pass.
    3. Flaky, because it might be the case that we get _new_
    workers on the next invocation of map that don't have the left-behind
    crap.
    """


    too_big_config = pywren.wrenconfig.default()
    too_big_config['runtime']['s3_bucket'] = 'pywren-runtimes-public-us-west-2'
    ver_str = "{}.{}".format(sys.version_info[0], sys.version_info[1])
    too_big_config['runtime']['s3_key'] = "pywren.runtimes/too_big_do_not_use_{}.tar.gz".format(ver_str)


    default_config = pywren.wrenconfig.default()


    wrenexec_toobig = pywren.default_executor(config=too_big_config)
    wrenexec = pywren.default_executor(config=default_config)


    def simple_foo(x):
        return x
    MAP_N = 10

    futures = wrenexec_toobig.map(simple_foo, range(MAP_N))
    for f in futures:
        with pytest.raises(Exception) as excinfo:
            f.result()
        assert excinfo.value.args[1] == 'RUNTIME_TOO_BIG'

    # these ones should work
    futures = wrenexec.map(simple_foo, range(MAP_N))
    for f in futures:
        f.result() 
Developer: pywren, Project: pywren, Lines: 46, Source: test_lambda.py


Note: The flaky.flaky method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by many developers; copyright of the source code belongs to the original authors. Please follow the corresponding project's license when distributing or using the code, and do not republish without permission.