Java InterruptException Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.common.errors.InterruptException. If you are wondering what InterruptException is, how to use it, or want to see it in real-world code, the curated class examples below should help.


InterruptException belongs to the org.apache.kafka.common.errors package. Thirteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code samples.
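
Before the examples, a minimal sketch of the most common pattern (the broker address, group id, and topic name are placeholders): InterruptException is an unchecked wrapper around InterruptedException that Kafka's clients throw from blocking calls, and its constructor restores the thread's interrupt flag before the exception propagates.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.InterruptException;

public class InterruptAwareConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "demo");                    // placeholder group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // placeholder topic
            while (!Thread.currentThread().isInterrupted()) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records)
                    System.out.println(record.value());
            }
        } catch (InterruptException e) {
            // The interrupt flag was already restored by InterruptException's
            // constructor, so simply exiting is a clean shutdown.
        }
    }
}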

Example 1: send

import org.apache.kafka.common.errors.InterruptException; // import the required package/class
import org.apache.kafka.common.errors.SerializationException; // needed for the catch below
public Status send(String key, Task data) {
    try {
        log.debug("Publishing Request data to Kafka. Topic:{}, Key: {}, value: {}", topic, key, data);
        kafkaProducer.send(topic, key, data).get(); // send data right now.
        log.info("Publishing SUCCESSFUL");
    } catch (InterruptException ie) {
        log.error("Publisher thread interrupted. Exception: {}. Value: {}", ie, data);
        return Status.FAILURE;
    } catch (SerializationException se) {
        log.error("Supplied object could not be published due to serialization issues. Exception: {}", se);
        return Status.FAILURE;
    } catch (Exception e) {
        log.error("Error occurred while publishing task on Kafka. Exception: {}. Key: {}. Value{}", e, key, data);
        return Status.FAILURE;
    }
    return Status.SUCCESS;
}
 
Developer: dixantmittal, Project: scalable-task-scheduler, Lines: 18, Source: RequestProducer.java
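
A detail worth noting for callers of this method: because InterruptException's constructor re-sets the thread's interrupt flag, and the catch block above does not clear it, a retry loop around send can detect the interruption through the flag. A hedged sketch (the retry helper is hypothetical; RequestProducer, Status, and Task come from the example above):

// Hypothetical caller-side retry; not part of the original project.
Status publishWithRetry(RequestProducer producer, String key, Task task) {
    Status status = producer.send(key, task);
    // The interrupt flag survives the catch block in send(), so an
    // interrupted publisher thread falls out of this loop instead of
    // retrying forever.
    while (status == Status.FAILURE && !Thread.currentThread().isInterrupted()) {
        status = producer.send(key, task);
    }
    return status;
}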

Example 2: poll

import org.apache.kafka.common.errors.InterruptException; // import the required package/class
/**
 * Poll this MongodbSourceTask for new records.
 *
 * @return a list of source records
 * @throws InterruptException
 */
@Override
public List<SourceRecord> poll() throws InterruptException {
    List<SourceRecord> records = new ArrayList<>();
    while (!reader.isEmpty()) {
        Document message = reader.pool();
        Struct messageStruct = getStruct(message);
        String topic = getTopic(message);
        String db = getDB(message);
        String timestamp = getTimestamp(message);
        records.add(new SourceRecord(Collections.singletonMap("mongodb", db), Collections.singletonMap(db, timestamp), topic, messageStruct.schema(), messageStruct));
        log.trace(message.toString());
    }


    return records;
}
 
Developer: DataReply, Project: kafka-connect-mongodb, Lines: 23, Source: MongodbSourceTask.java
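
The first two maps passed to SourceRecord above (source partition and source offset) are what Kafka Connect persists on behalf of the task. A hedged sketch of how such a task might read the stored offset back when it starts; offsetStorageReader() is part of the standard SourceTaskContext API, while the "mongodb.db" config key is hypothetical:

import java.util.Collections;
import java.util.Map;

// Inside the same SourceTask subclass: recover the last committed timestamp
// so the reader can resume where the previous run stopped.
@Override
public void start(Map<String, String> props) {
    String db = props.get("mongodb.db"); // hypothetical config key
    Map<String, Object> offset = context.offsetStorageReader()
            .offset(Collections.singletonMap("mongodb", db));
    String lastTimestamp = (offset != null) ? (String) offset.get(db) : null;
    // ... initialize the MongoDB reader from lastTimestamp ...
}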

Example 3: activate

import org.apache.kafka.common.errors.InterruptException; // import the required package/class
@Override
public void activate() {
    try {
        subscribeKafkaConsumer();
    } catch (InterruptException e) {
        throwKafkaConsumerInterruptedException();
    }
}
 
Developer: Paleozoic, Project: storm_spring_boot_demo, Lines: 9, Source: KafkaSpout.java

Example 4: deactivate

import org.apache.kafka.common.errors.InterruptException; // import the required package/class
@Override
public void deactivate() {
    try {
        shutdown();
    } catch (InterruptException e) {
        throwKafkaConsumerInterruptedException();
    }
}
 
Developer: Paleozoic, Project: storm_spring_boot_demo, Lines: 9, Source: KafkaSpout.java

Example 5: close

import org.apache.kafka.common.errors.InterruptException; // import the required package/class
@Override
public void close() {
    try {
        shutdown();
    } catch (InterruptException e) {
        throwKafkaConsumerInterruptedException();
    }
}
 
Developer: Paleozoic, Project: storm_spring_boot_demo, Lines: 9, Source: KafkaSpout.java
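
Examples 3-5 all funnel InterruptException into throwKafkaConsumerInterruptedException(), which the snippets do not show. A plausible implementation, assumed rather than taken from the project, converts the condition into an unchecked exception Storm can surface:

// Assumed helper; the actual implementation lives in the KafkaSpout source.
private static void throwKafkaConsumerInterruptedException() {
    // Wrap the interrupt so Storm's lifecycle callbacks, which declare no
    // checked exceptions, can still propagate it to the worker.
    throw new RuntimeException(new InterruptedException("Kafka consumer was interrupted"));
}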

Example 6: partitionsFor

import org.apache.kafka.common.errors.InterruptException; // import the required package/class
/**
 * Get the partition metadata for the given topic. This can be used for custom partitioning.
 *
 * @throws InterruptException If the thread is interrupted while blocked
 */
@Override
// Fetch the partition info for the specified topic from Metadata
public List<PartitionInfo> partitionsFor(String topic) {
    try {
        return waitOnMetadata(topic, null, maxBlockTimeMs).cluster.partitionsForTopic(topic);
    } catch (InterruptedException e) {
        throw new InterruptException(e);
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 15, Source: KafkaProducer.java
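
As the Javadoc says, partitionsFor supports custom partitioning. A minimal hedged sketch: pick the partition from the key's hash and address it explicitly in the ProducerRecord (the helper method and its arguments are illustrative):

import java.util.List;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.errors.InterruptException;

// Route a record to a caller-chosen partition instead of relying on the
// producer's default partitioner.
static void sendToHashedPartition(KafkaProducer<String, String> producer,
                                  String topic, String key, String value) {
    try {
        List<PartitionInfo> partitions = producer.partitionsFor(topic); // may block on metadata
        int partition = (key.hashCode() & 0x7fffffff) % partitions.size();
        producer.send(new ProducerRecord<>(topic, partition, key, value));
    } catch (InterruptException e) {
        // Thrown if this thread is interrupted while partitionsFor blocks;
        // the interrupt flag has already been restored.
    }
}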

Example 7: close

import org.apache.kafka.common.errors.InterruptException; // import the required package/class
private void close(long timeoutMs, boolean swallowException) {
    log.trace("Closing the Kafka consumer.");
    AtomicReference<Throwable> firstException = new AtomicReference<>();
    this.closed = true;
    try {
        if (coordinator != null)
            coordinator.close(Math.min(timeoutMs, requestTimeoutMs));
    } catch (Throwable t) {
        firstException.compareAndSet(null, t);
        log.error("Failed to close coordinator", t);
    }
    ClientUtils.closeQuietly(fetcher, "fetcher", firstException);
    ClientUtils.closeQuietly(interceptors, "consumer interceptors", firstException);
    ClientUtils.closeQuietly(metrics, "consumer metrics", firstException);
    ClientUtils.closeQuietly(client, "consumer network client", firstException);
    ClientUtils.closeQuietly(keyDeserializer, "consumer key deserializer", firstException);
    ClientUtils.closeQuietly(valueDeserializer, "consumer value deserializer", firstException);
    AppInfoParser.unregisterAppInfo(JMX_PREFIX, clientId);
    log.debug("The Kafka consumer has closed.");
    Throwable exception = firstException.get();
    if (exception != null && !swallowException) {
        if (exception instanceof InterruptException) {
            throw (InterruptException) exception;
        }
        throw new KafkaException("Failed to close kafka consumer", exception);
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 28, Source: KafkaConsumer.java
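
The close method above relies on a small idiom: remember only the first failure, but keep closing the remaining resources. A generic hedged sketch of the same pattern, independent of Kafka's internal ClientUtils:

import java.util.concurrent.atomic.AtomicReference;

// Close every resource, keeping only the first Throwable, then rethrow it,
// mirroring what ClientUtils.closeQuietly does for the consumer above.
static void closeAll(AutoCloseable... resources) {
    AtomicReference<Throwable> firstException = new AtomicReference<>();
    for (AutoCloseable resource : resources) {
        try {
            if (resource != null)
                resource.close();
        } catch (Throwable t) {
            firstException.compareAndSet(null, t); // later failures are dropped
        }
    }
    if (firstException.get() != null)
        throw new RuntimeException("Failed to close resources", firstException.get());
}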

Example 8: testPollThrowsInterruptExceptionIfInterrupted

import org.apache.kafka.common.errors.InterruptException; // import the required package/class
@Test
public void testPollThrowsInterruptExceptionIfInterrupted() throws Exception {
    int rebalanceTimeoutMs = 60000;
    int sessionTimeoutMs = 30000;
    int heartbeatIntervalMs = 3000;

    final Time time = new MockTime();
    Cluster cluster = TestUtils.singletonCluster(topic, 1);
    final Node node = cluster.nodes().get(0);

    Metadata metadata = createMetadata();
    metadata.update(cluster, Collections.<String>emptySet(), time.milliseconds());

    final MockClient client = new MockClient(time, metadata);
    client.setNode(node);
    final PartitionAssignor assignor = new RoundRobinAssignor();

    final KafkaConsumer<String, String> consumer = newConsumer(time, client, metadata, assignor,
            rebalanceTimeoutMs, sessionTimeoutMs, heartbeatIntervalMs, false, 0);

    consumer.subscribe(Arrays.asList(topic), getConsumerRebalanceListener(consumer));
    prepareRebalance(client, node, assignor, Arrays.asList(tp0), null);

    consumer.poll(0);

    // interrupt the thread and call poll
    try {
        Thread.currentThread().interrupt();
        expectedException.expect(InterruptException.class);
        consumer.poll(0);
    } finally {
        // clear interrupted state again since this thread may be reused by JUnit
        Thread.interrupted();
    }
    consumer.close(0, TimeUnit.MILLISECONDS);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 37, Source: KafkaConsumerTest.java

Example 9: close

import org.apache.kafka.common.errors.InterruptException; // import the required package/class
public void close() {
    if (this.producer != null) {
        synchronized (this) {
            if (this.producer != null) {
                try {
                    localLogger.info("Start to stop producer....");
                    producer.close();
                } catch (InterruptException e) {
                    localLogger.info("producer.close() error due to ", e);
                }
                offsetLogger.printOffsetMap("KafkaPublisher");
                this.producer = null;
            }
        }
    }
}
 
Developer: jretty-org, Project: kafka-xclient, Lines: 17, Source: ProducerTemplate.java

Example 10: partitionsFor

import org.apache.kafka.common.errors.InterruptException; // import the required package/class
/**
 * Get the partition metadata for the given topic. This can be used for custom partitioning.
 * @throws InterruptException If the thread is interrupted while blocked
 */
@Override
public List<PartitionInfo> partitionsFor(String topic) {
    try {
        waitOnMetadata(topic, this.maxBlockTimeMs);
    } catch (InterruptedException e) {
        throw new InterruptException(e);
    }
    return this.metadata.fetch().partitionsForTopic(topic);
}
 
Developer: txazo, Project: kafka, Lines: 14, Source: KafkaProducer.java

Example 11: maybeThrowInterruptException

import org.apache.kafka.common.errors.InterruptException; // import the required package/class
private void maybeThrowInterruptException() {
    if (Thread.interrupted()) {
        throw new InterruptException(new InterruptedException());
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 6, Source: ConsumerNetworkClient.java
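
This helper is the bridge between Java's cooperative interruption and Kafka's unchecked InterruptException: Thread.interrupted() tests and clears the flag, and the InterruptException constructor restores it before the exception propagates. A hedged sketch of the same idiom guarding a work loop (awaitCondition is illustrative):

import java.util.function.BooleanSupplier;

import org.apache.kafka.common.errors.InterruptException;

// Periodically surface a pending interrupt as an unchecked exception, the
// way ConsumerNetworkClient does around its blocking I/O.
static void awaitCondition(BooleanSupplier done) {
    while (!done.getAsBoolean()) {
        if (Thread.interrupted()) {
            // Thread.interrupted() cleared the flag; the constructor below
            // re-sets it so callers still observe the interruption.
            throw new InterruptException(new InterruptedException());
        }
        // ... perform one unit of non-blocking work ...
    }
}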

Example 12: flush

import org.apache.kafka.common.errors.InterruptException; // import the required package/class
/**
 * Invoking this method makes all buffered records immediately available to send (even if <code>linger.ms</code> is
 * greater than 0) and blocks on the completion of the requests associated with these records. The post-condition
 * of <code>flush()</code> is that any previously sent record will have completed (e.g. <code>Future.isDone() == true</code>).
 * A request is considered completed when it is successfully acknowledged
 * according to the <code>acks</code> configuration you have specified or else it results in an error.
 * <p>
 * Other threads can continue sending records while one thread is blocked waiting for a flush call to complete,
 * however no guarantee is made about the completion of records sent after the flush call begins.
 * <p>
 * This method can be useful when consuming from some input system and producing into Kafka. The <code>flush()</code> call
 * gives a convenient way to ensure all previously sent messages have actually completed.
 * <p>
 * This example shows how to consume from one Kafka topic and produce to another Kafka topic:
 * <pre>
 * {@code
 * for (ConsumerRecord<String, String> record : consumer.poll(100))
 *     producer.send(new ProducerRecord("my-topic", record.key(), record.value()));
 * producer.flush();
 * consumer.commitSync();
 * }
 * </pre>
 * <p>
 * Note that the above example may drop records if the produce request fails. If we want to ensure that this does not occur
 * we need to set <code>retries=&lt;large_number&gt;</code> in our config.
 * </p>
 * <p>
 * Applications don't need to call this method for transactional producers, since the {@link #commitTransaction()} will
 * flush all buffered records before performing the commit. This ensures that all the {@link #send(ProducerRecord)}
 * calls made since the previous {@link #beginTransaction()} are completed before the commit.
 * </p>
 *
 * @throws InterruptException If the thread is interrupted while blocked
 */
// Wait until all records buffered in the RecordAccumulator have been sent
@Override
public void flush() {
    log.trace("Flushing accumulated records in producer.");
    this.accumulator.beginFlush();
    this.sender.wakeup();
    try {
        this.accumulator.awaitFlushCompletion();
    } catch (InterruptedException e) {
        throw new InterruptException("Flush interrupted.", e);
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 47, Source: KafkaProducer.java
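
Because flush() blocks until the accumulator drains, it is a natural place to observe InterruptException. A hedged variant of the Javadoc's copy loop with the interruption handled explicitly (clients and topic name are placeholders):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.InterruptException;

// Copy one batch from an input topic, committing offsets only after flush()
// confirms every send has completed.
static void copyOnce(KafkaConsumer<String, String> consumer,
                     KafkaProducer<String, String> producer) {
    try {
        for (ConsumerRecord<String, String> record : consumer.poll(100))
            producer.send(new ProducerRecord<>("my-topic", record.key(), record.value()));
        producer.flush();      // blocks until all sends above have completed
        consumer.commitSync(); // offsets advance only after the copies are durable
    } catch (InterruptException e) {
        // flush() (or poll) was interrupted before the commit, so this batch
        // will be re-consumed after a restart: at-least-once behavior.
    }
}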

Example 13: flush

import org.apache.kafka.common.errors.InterruptException; // import the required package/class
/**
 * Invoking this method makes all buffered records immediately available to send (even if <code>linger.ms</code> is
 * greater than 0) and blocks on the completion of the requests associated with these records. The post-condition
 * of <code>flush()</code> is that any previously sent record will have completed (e.g. <code>Future.isDone() == true</code>).
 * A request is considered completed when it is successfully acknowledged
 * according to the <code>acks</code> configuration you have specified or else it results in an error.
 * <p>
 * Other threads can continue sending records while one thread is blocked waiting for a flush call to complete,
 * however no guarantee is made about the completion of records sent after the flush call begins.
 * <p>
 * This method can be useful when consuming from some input system and producing into Kafka. The <code>flush()</code> call
 * gives a convenient way to ensure all previously sent messages have actually completed.
 * <p>
 * This example shows how to consume from one Kafka topic and produce to another Kafka topic:
 * <pre>
 * {@code
 * for (ConsumerRecord<String, String> record : consumer.poll(100))
 *     producer.send(new ProducerRecord("my-topic", record.key(), record.value()));
 * producer.flush();
 * consumer.commitSync();
 * }
 * </pre>
 *
 * Note that the above example may drop records if the produce request fails. If we want to ensure that this does not occur
 * we need to set <code>retries=&lt;large_number&gt;</code> in our config.
 *
 * @throws InterruptException If the thread is interrupted while blocked
 */
@Override
public void flush() {
    log.trace("Flushing accumulated records in producer.");
    this.accumulator.beginFlush();
    this.sender.wakeup();
    try {
        this.accumulator.awaitFlushCompletion();
    } catch (InterruptedException e) {
        throw new InterruptException("Flush interrupted.", e);
    }
}
 
Developer: txazo, Project: kafka, Lines: 40, Source: KafkaProducer.java


Note: The org.apache.kafka.common.errors.InterruptException class examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects; copyright remains with the original authors, and you should consult each project's license before redistributing or using the code. Do not reproduce this article without permission.