

Java TimeoutException Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.common.errors.TimeoutException. If you are unsure what the TimeoutException class does or how to use it, the curated code examples below may help.


The TimeoutException class belongs to the org.apache.kafka.common.errors package. A total of 15 code examples of the class are shown below, sorted by popularity by default.
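Before the examples, here is a minimal sketch of where this exception typically surfaces when producing records synchronously. The broker address, topic name, and timeout values below are assumed purely for illustration: send() can throw TimeoutException directly once max.block.ms elapses while waiting for metadata or buffer space, and a produce request the broker never acknowledges surfaces the exception as the cause of an ExecutionException on the returned future.

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.TimeoutException;

public class TimeoutExceptionDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("max.block.ms", "3000"); // fail fast while waiting for metadata or buffer space

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            try {
                producer.send(new ProducerRecord<>("demo-topic", "key", "value")).get();
            } catch (TimeoutException e) {
                // thrown synchronously by send() when max.block.ms is exceeded
                System.err.println("send() blocked too long: " + e.getMessage());
            } catch (ExecutionException e) {
                if (e.getCause() instanceof TimeoutException) {
                    // the produce request itself timed out waiting for the broker
                    System.err.println("Produce request timed out: " + e.getCause().getMessage());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}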

Example 1: shouldThrowStreamsExceptionIfTimeoutOccursDuringPartitionsFor

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
@SuppressWarnings("unchecked")
@Test
public void shouldThrowStreamsExceptionIfTimeoutOccursDuringPartitionsFor() throws Exception {
    final MockConsumer<byte[], byte[]> consumer = new MockConsumer(OffsetResetStrategy.EARLIEST) {
        @Override
        public List<PartitionInfo> partitionsFor(final String topic) {
            throw new TimeoutException("KABOOM!");
        }
    };
    final StoreChangelogReader changelogReader = new StoreChangelogReader(consumer, new MockTime(), 5);
    try {
        changelogReader.validatePartitionExists(topicPartition, "store");
        fail("Should have thrown streams exception");
    } catch (final StreamsException e) {
        // pass
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 18, Source: StoreChangelogReaderTest.java

Example 2: awaitUpdate

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
/**
 * Wait for metadata update until the current version is larger than the last version we know of
 */
public synchronized void awaitUpdate(final int lastVersion, final long maxWaitMs) throws InterruptedException {
    if (maxWaitMs < 0) {
        throw new IllegalArgumentException("Max time to wait for metadata updates should not be < 0 milli seconds");
    }
    long begin = System.currentTimeMillis();
    long remainingWaitMs = maxWaitMs;
// compare version numbers to check whether the update has completed
    while (this.version <= lastVersion) {
        if (remainingWaitMs != 0)
            wait(remainingWaitMs);
        long elapsed = System.currentTimeMillis() - begin;
        if (elapsed >= maxWaitMs)
            throw new TimeoutException("Failed to update metadata after " + maxWaitMs + " ms.");
        remainingWaitMs = maxWaitMs - elapsed;
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: Metadata.java

Example 3: handleTimeouts

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
/**
 * Check for calls which have timed out.
 * Timed out calls will be removed and failed.
 * The remaining milliseconds until the next timeout will be updated.
 *
 * @param calls         The collection of calls.
 * @param msg           The error message to use when failing a timed-out call.
 *
 * @return              The number of calls which were timed out.
 */
int handleTimeouts(Collection<Call> calls, String msg) {
    int numTimedOut = 0;
    for (Iterator<Call> iter = calls.iterator(); iter.hasNext(); ) {
        Call call = iter.next();
        int remainingMs = calcTimeoutMsRemainingAsInt(now, call.deadlineMs);
        if (remainingMs < 0) {
            call.fail(now, new TimeoutException(msg));
            iter.remove();
            numTimedOut++;
        } else {
            nextTimeoutMs = Math.min(nextTimeoutMs, remainingMs);
        }
    }
    return numTimedOut;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: KafkaAdminClient.java

Example 4: enqueue

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
/**
 * Queue a call for sending.
 *
 * If the AdminClient thread has exited, this will fail.  Otherwise, it will succeed (even
 * if the AdminClient is shutting down).  This function should be called when retrying an
 * existing call.
 *
 * @param call      The new call object.
 * @param now       The current time in milliseconds.
 */
void enqueue(Call call, long now) {
    if (log.isDebugEnabled()) {
        log.debug("{}: queueing {} with a timeout {} ms from now.",
            clientId, call, call.deadlineMs - now);
    }
    boolean accepted = false;
    synchronized (this) {
        if (newCalls != null) {
            newCalls.add(call);
            accepted = true;
        }
    }
    if (accepted) {
        client.wakeup(); // wake the thread if it is in poll()
    } else {
        log.debug("{}: the AdminClient thread has exited.  Timing out {}.", clientId, call);
        call.fail(Long.MAX_VALUE, new TimeoutException("The AdminClient thread has exited."));
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 30, Source: KafkaAdminClient.java

Example 5: testBlockTimeout

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
/**
 * Test that a TimeoutException is thrown when there is not enough memory to allocate and the elapsed time exceeds the maximum specified block time.
 * Also verify that the allocation finishes soon after maxBlockTimeMs.
 */
@Test
public void testBlockTimeout() throws Exception {
    BufferPool pool = new BufferPool(10, 1, metrics, Time.SYSTEM, metricGroup);
    ByteBuffer buffer1 = pool.allocate(1, maxBlockTimeMs);
    ByteBuffer buffer2 = pool.allocate(1, maxBlockTimeMs);
    ByteBuffer buffer3 = pool.allocate(1, maxBlockTimeMs);
    // First two buffers will be de-allocated within maxBlockTimeMs since the most recent de-allocation
    delayedDeallocate(pool, buffer1, maxBlockTimeMs / 2);
    delayedDeallocate(pool, buffer2, maxBlockTimeMs);
    // The third buffer will be de-allocated after maxBlockTimeMs since the most recent de-allocation
    delayedDeallocate(pool, buffer3, maxBlockTimeMs / 2 * 5);

    long beginTimeMs = Time.SYSTEM.milliseconds();
    try {
        pool.allocate(10, maxBlockTimeMs);
        fail("The buffer allocated more memory than its maximum value 10");
    } catch (TimeoutException e) {
        // this is good
    }
    assertTrue("available memory" + pool.availableMemory(), pool.availableMemory() >= 9 && pool.availableMemory() <= 10);
    long endTimeMs = Time.SYSTEM.milliseconds();
    assertTrue("Allocation should finish not much later than maxBlockTimeMs", endTimeMs - beginTimeMs < maxBlockTimeMs + 1000);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 28, Source: BufferPoolTest.java

Example 6: testTimeoutWithoutMetadata

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
/**
 * Test that the client properly times out when we don't receive any metadata.
 */
@Test
public void testTimeoutWithoutMetadata() throws Exception {
    try (MockKafkaAdminClientEnv env = mockClientEnv(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "10")) {
        env.kafkaClient().setNodeApiVersions(NodeApiVersions.create());
        env.kafkaClient().setNode(new Node(0, "localhost", 8121));
        env.kafkaClient().prepareResponse(new CreateTopicsResponse(Collections.singletonMap("myTopic", new ApiError(Errors.NONE, ""))));
        KafkaFuture<Void> future = env.adminClient().createTopics(
                Collections.singleton(new NewTopic("myTopic", Collections.singletonMap(Integer.valueOf(0), asList(new Integer[]{0, 1, 2})))),
                new CreateTopicsOptions().timeoutMs(1000)).all();
        assertFutureError(future, TimeoutException.class);
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 16, Source: KafkaAdminClientTest.java

Example 7: awaitUpdate

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
/**
 * Wait for metadata update until the current version is larger than the last version we know of
 */
public synchronized void awaitUpdate(final int lastVersion, final long maxWaitMs) throws InterruptedException {
    if (maxWaitMs < 0) {
        throw new IllegalArgumentException("Max time to wait for metadata updates should not be < 0 milli seconds");
    }
    long begin = System.currentTimeMillis();
    long remainingWaitMs = maxWaitMs;
    while (this.version <= lastVersion) {
        if (remainingWaitMs != 0)
            wait(remainingWaitMs);
        long elapsed = System.currentTimeMillis() - begin;
        if (elapsed >= maxWaitMs)
            throw new TimeoutException("Failed to update metadata after " + maxWaitMs + " ms.");
        remainingWaitMs = maxWaitMs - elapsed;
    }
}
 
Developer: txazo, Project: kafka, Lines: 19, Source: Metadata.java

Example 8: maybeExpire

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
/**
 * A batch whose metadata is not available should be expired if one of the following is true:
 * <ol>
 *     <li> the batch is not in retry AND request timeout has elapsed after it is ready (full or linger.ms has reached).
 *     <li> the batch is in retry AND request timeout has elapsed after the backoff period ended.
 * </ol>
 */
public boolean maybeExpire(int requestTimeoutMs, long retryBackoffMs, long now, long lingerMs, boolean isFull) {
    boolean expire = false;

    if (!this.inRetry() && isFull && requestTimeoutMs < (now - this.lastAppendTime))
        expire = true;
    else if (!this.inRetry() && requestTimeoutMs < (now - (this.createdMs + lingerMs)))
        expire = true;
    else if (this.inRetry() && requestTimeoutMs < (now - (this.lastAttemptMs + retryBackoffMs)))
        expire = true;

    if (expire) {
        this.records.close();
        this.done(-1L, Record.NO_TIMESTAMP, new TimeoutException("Batch containing " + recordCount + " record(s) expired due to timeout while requesting metadata from brokers for " + topicPartition));
    }

    return expire;
}
 
Developer: txazo, Project: kafka, Lines: 25, Source: RecordBatch.java

Example 9: waitOnMetadata

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
/**
 * Wait for cluster metadata including partitions for the given topic to be available.
 * @param topic The topic we want metadata for
 * @param maxWaitMs The maximum time in ms for waiting on the metadata
 * @return The amount of time we waited in ms
 */
private long waitOnMetadata(String topic, long maxWaitMs) throws InterruptedException {
    // add topic to metadata topic list if it is not there already.
    if (!this.metadata.containsTopic(topic))
        this.metadata.add(topic);

    if (metadata.fetch().partitionsForTopic(topic) != null)
        return 0;

    long begin = time.milliseconds();
    long remainingWaitMs = maxWaitMs;
    while (metadata.fetch().partitionsForTopic(topic) == null) {
        log.trace("Requesting metadata update for topic {}.", topic);
        int version = metadata.requestUpdate();
        sender.wakeup();
        metadata.awaitUpdate(version, remainingWaitMs);
        long elapsed = time.milliseconds() - begin;
        if (elapsed >= maxWaitMs)
            throw new TimeoutException("Failed to update metadata after " + maxWaitMs + " ms.");
        if (metadata.fetch().unauthorizedTopics().contains(topic))
            throw new TopicAuthorizationException(topic);
        remainingWaitMs = maxWaitMs - elapsed;
    }
    return time.milliseconds() - begin;
}
 
Developer: txazo, Project: kafka, Lines: 31, Source: KafkaProducer.java

Example 10: failExpiredRequests

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
private void failExpiredRequests(long now) {
    // clear all expired unsent requests and fail their corresponding futures
    Iterator<Map.Entry<Node, List<ClientRequest>>> iterator = unsent.entrySet().iterator();
    while (iterator.hasNext()) {
        Map.Entry<Node, List<ClientRequest>> requestEntry = iterator.next();
        Iterator<ClientRequest> requestIterator = requestEntry.getValue().iterator();
        while (requestIterator.hasNext()) {
            ClientRequest request = requestIterator.next();
            if (request.createdTimeMs() < now - unsentExpiryMs) {
                RequestFutureCompletionHandler handler =
                        (RequestFutureCompletionHandler) request.callback();
                handler.raise(new TimeoutException("Failed to send request after " + unsentExpiryMs + " ms."));
                requestIterator.remove();
            } else
                break;
        }
        if (requestEntry.getValue().isEmpty())
            iterator.remove();
    }
}
 
Developer: txazo, Project: kafka, Lines: 21, Source: ConsumerNetworkClient.java

Example 11: testBlockTimeout

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
/**
 * Test that a TimeoutException is thrown when there is not enough memory to allocate and the elapsed time exceeds the maximum specified block time.
 * Also verify that the allocation finishes soon after maxBlockTimeMs.
 */
@Test
public void testBlockTimeout() throws Exception {
    BufferPool pool = new BufferPool(10, 1, metrics, systemTime, metricGroup);
    ByteBuffer buffer1 = pool.allocate(1, maxBlockTimeMs);
    ByteBuffer buffer2 = pool.allocate(1, maxBlockTimeMs);
    ByteBuffer buffer3 = pool.allocate(1, maxBlockTimeMs);
    // First two buffers will be de-allocated within maxBlockTimeMs since the most recent de-allocation
    delayedDeallocate(pool, buffer1, maxBlockTimeMs / 2);
    delayedDeallocate(pool, buffer2, maxBlockTimeMs);
    // The third buffer will be de-allocated after maxBlockTimeMs since the most recent de-allocation
    delayedDeallocate(pool, buffer3, maxBlockTimeMs / 2 * 5);

    long beginTimeMs = systemTime.milliseconds();
    try {
        pool.allocate(10, maxBlockTimeMs);
        fail("The buffer allocated more memory than its maximum value 10");
    } catch (TimeoutException e) {
        // this is good
    }
    long endTimeMs = systemTime.milliseconds();
    assertTrue("Allocation should finish not much later than maxBlockTimeMs", endTimeMs - beginTimeMs < maxBlockTimeMs + 1000);
}
 
Developer: txazo, Project: kafka, Lines: 27, Source: BufferPoolTest.java

Example 12: deleteTopics

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
/**
 * Given their names, deletes topics on the Kafka broker.
 *
 * @param BOOTSTRAP_SERVERS_HOST_PORT The bootstrap servers host:port string.
 * @param topicsName                  Comma-separated names of the topics to delete.
 */
public static void deleteTopics (String BOOTSTRAP_SERVERS_HOST_PORT, String topicsName) {
    AdminClient adminClient = createAdminClient(BOOTSTRAP_SERVERS_HOST_PORT);
// delete the topics (they may not exist)
    DeleteTopicsResult deleteTopicsResult = adminClient.deleteTopics(Arrays.asList(topicsName.split(",")));
    try {
        deleteTopicsResult.all().get();
        // real failure cause is wrapped inside the raised ExecutionException
    } catch (ExecutionException | InterruptedException e) {
        if (e.getCause() instanceof UnknownTopicOrPartitionException) {
            System.err.println("Topic not exists !!");
        } else if (e.getCause() instanceof TimeoutException) {
            System.err.println("Timeout !!");
        }
        e.printStackTrace();
    } finally {
        adminClient.close();
    }
}
 
Developer: datafibers-community, Project: df_data_service, Lines: 24, Source: KafkaAdminClient.java

Example 13: describeTopics

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
/**
 * Given their names, describes topics on the Kafka broker.
 *
 * @param BOOTSTRAP_SERVERS_HOST_PORT The bootstrap servers host:port string.
 * @param topicsName                  Comma-separated names of the topics to describe.
 */
public static void describeTopics (String BOOTSTRAP_SERVERS_HOST_PORT, String topicsName) {
    AdminClient adminClient = createAdminClient(BOOTSTRAP_SERVERS_HOST_PORT);
// describe the requested topics (they may not exist)
    DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(Arrays.asList(topicsName.split(",")));
    try {
        describeTopicsResult.all().get().forEach((key, value) -> {
            System.out.println("Key : " + key + " Value : " + value);
        });
        // real failure cause is wrapped inside the raised ExecutionException
    } catch (ExecutionException | InterruptedException e) {
        if (e.getCause() instanceof UnknownTopicOrPartitionException) {
            System.err.println("Topic not exists !!");
        } else if (e.getCause() instanceof TimeoutException) {
            System.err.println("Timeout !!");
        }
        e.printStackTrace();
    } finally {
        adminClient.close();
    }
}
 
Developer: datafibers-community, Project: df_data_service, Lines: 26, Source: KafkaAdminClient.java
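For reference, a hypothetical call site for the two helpers above might look like the following; the class name KafkaAdminClient comes from this project's source file, while the broker address and topic names are placeholders.

// Hypothetical usage of the helpers shown above; "localhost:9092" and the
// topic names are placeholder values for illustration only.
public static void main(String[] args) {
    KafkaAdminClient.describeTopics("localhost:9092", "orders,payments");
    KafkaAdminClient.deleteTopics("localhost:9092", "orders,payments");
}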

Example 14: sendRecordAction

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
public PrecipiceFuture<ProduceStatus, RecordMetadata> sendRecordAction(ProducerRecord<K, V> record) {
    final PrecipicePromise<ProduceStatus, RecordMetadata> promise = Asynchronous.acquirePermitsAndPromise(guardRail, 1L);

    producer.send(record, new Callback() {
        @Override
        public void onCompletion(RecordMetadata metadata, Exception exception) {
            if (exception == null) {
                promise.complete(ProduceStatus.SUCCESS, metadata);
            } else {
                if (exception instanceof TimeoutException) {
                    promise.completeExceptionally(ProduceStatus.TIMEOUT, exception);
                } else if (exception instanceof NetworkException) {
                    promise.completeExceptionally(ProduceStatus.NETWORK_EXCEPTION, exception);
                } else {
                    promise.completeExceptionally(ProduceStatus.OTHER_ERROR, exception);
                }
            }
        }
    });

    return promise.future();
}
 
Developer: tbrooks8, Project: Precipice, Lines: 23, Source: KafkaService.java

Example 15: sendFailsReturnsFalse

import org.apache.kafka.common.errors.TimeoutException; // import the required package/class
@Test
public void sendFailsReturnsFalse() {
    KafkaProducer producer = mock(KafkaProducer.class);
    publisher.realProducer = producer;
    RecordMetadata metadata = new RecordMetadata(null, 0, 0,
            0, Long.valueOf(0), 0, 0);
    ArgumentCaptor<Callback> captor = ArgumentCaptor.forClass(Callback.class);
    when(producer.send(any(), captor.capture())).then(
        invocation -> {
            captor.getValue().onCompletion(metadata, new TimeoutException("error"));
            return new CompletableFuture();
        });
    String[] events = { "test" };
    assertThat(publisher.publishEvents(false, null, events)).isFalse();
}
 
Developer: Sixt, Project: ja-micro, Lines: 16, Source: KafkaPublisherTest.java


Note: The org.apache.kafka.common.errors.TimeoutException class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers, and copyright of the source code remains with the original authors. Please consult each project's license before using or redistributing the code; do not repost without permission.