Java KafkaException Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.common.KafkaException. If you are wondering what the KafkaException class is for, how to use it, or where to find real-world examples, the curated class examples below should help.


The KafkaException class belongs to the org.apache.kafka.common package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
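
Before working through the examples, note that KafkaException extends RuntimeException, so it is unchecked: the Kafka client libraries throw it (or one of its subclasses) for most failures, and callers opt in to catching it. A minimal sketch of throwing and catching it (the class and message text below are hypothetical):

import org.apache.kafka.common.KafkaException;

public class KafkaExceptionDemo {
    public static void main(String[] args) {
        try {
            doKafkaWork();
        } catch (KafkaException e) {
            // Unchecked, so catching is optional; the cause chain is preserved.
            System.err.println("Kafka operation failed: " + e.getMessage());
        }
    }

    private static void doKafkaWork() {
        // KafkaException provides (), (String), (Throwable) and (String, Throwable) constructors.
        throw new KafkaException("something went wrong", new IllegalStateException("root cause"));
    }
}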

Example 1: testConstructorFailureCloseResource

import org.apache.kafka.common.KafkaException; // import the required package/class
@Test
public void testConstructorFailureCloseResource() {
    Properties props = new Properties();
    props.setProperty(ProducerConfig.CLIENT_ID_CONFIG, "testConstructorClose");
    props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "some.invalid.hostname.foo.bar.local:9999");
    props.setProperty(ProducerConfig.METRIC_REPORTER_CLASSES_CONFIG, MockMetricsReporter.class.getName());

    final int oldInitCount = MockMetricsReporter.INIT_COUNT.get();
    final int oldCloseCount = MockMetricsReporter.CLOSE_COUNT.get();
    try {
        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(
                props, new ByteArraySerializer(), new ByteArraySerializer());
    } catch (KafkaException e) {
        assertEquals(oldInitCount + 1, MockMetricsReporter.INIT_COUNT.get());
        assertEquals(oldCloseCount + 1, MockMetricsReporter.CLOSE_COUNT.get());
        assertEquals("Failed to construct kafka producer", e.getMessage());
        return;
    }
    fail("should have caught an exception and returned");
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: KafkaProducerTest.java
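
The same behavior matters in application code: if the KafkaProducer constructor fails, it releases the resources it had already allocated and surfaces the failure as a KafkaException. A hedged application-side sketch (the class name and broker address are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class ProducerConstructionDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        try (KafkaProducer<byte[], byte[]> producer =
                     new KafkaProducer<>(props, new ByteArraySerializer(), new ByteArraySerializer())) {
            // send records here
        } catch (KafkaException e) {
            // Construction failures arrive as KafkaException("Failed to construct kafka producer", cause).
            System.err.println("Producer could not be created: " + e.getMessage());
        }
    }
}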

Example 2: createTopic

import org.apache.kafka.common.KafkaException; // import the required package/class
/**
 * Create a topic
 *
 * @param connection        Connection
 * @param topicName         Topic name
 * @param partitions        The number of partitions for the topic being created
 * @param replicationFactor The replication factor for each partition in the topic being created
 * @param topicProperties   Topic configuration overrides to apply to the topic being created
 * @throws TopicOperationException if the topic could not be created
 */
public void createTopic(final ZkUtils connection, final String topicName,
                        final int partitions,
                        final int replicationFactor,
                        final Properties topicProperties) {

    try {
        AdminUtils.createTopic(connection,
                topicName,
                partitions,
                replicationFactor,
                topicProperties);

    } catch (IllegalArgumentException | KafkaException | AdminOperationException e) {
        throw new TopicOperationException(topicName, e.getMessage(), e, this.getClass());
    }
}
 
Developer: mcafee, Project: management-sdk-for-kafka, Lines: 27, Source: ClusterTools.java
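
As a usage sketch, a caller might obtain a ZkUtils handle and invoke the wrapper above. This assumes the deprecated pre-AdminClient ZkUtils/AdminUtils API; the ZooKeeper endpoint, topic name, and clusterTools instance are placeholders:

ZkUtils zkUtils = ZkUtils.apply("localhost:2181", 30000, 30000, false); // placeholder endpoint
try {
    clusterTools.createTopic(zkUtils, "demo-topic", 3, 1, new Properties());
} catch (TopicOperationException e) {
    // creation failed; the underlying KafkaException is preserved as the cause
} finally {
    zkUtils.close();
}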

Example 3: getMessages

import org.apache.kafka.common.KafkaException; // import the required package/class
@Override
public List<PubSubMessage> getMessages() throws PubSubException {
    ConsumerRecords<String, byte[]> buffer;
    try {
        buffer = consumer.poll(0);
    } catch (KafkaException e) {
        throw new PubSubException("Consumer poll failed", e);
    }
    List<PubSubMessage> messages = new ArrayList<>();
    for (ConsumerRecord<String, byte[]> record : buffer) {
        Object message = SerializerDeserializer.fromBytes(record.value());
        if (message == null || !(message instanceof PubSubMessage)) {
            log.warn("Invalid message received: {}", message);
            continue;
        }
        messages.add((PubSubMessage) message);
    }
    if (manualCommit) {
        consumer.commitAsync();
    }
    return messages;
}
 
Developer: yahoo, Project: bullet-kafka, Lines: 23, Source: KafkaSubscriber.java

Example 4: onAcknowledgement

import org.apache.kafka.common.KafkaException; // import the required package/class
@Override
public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
    onAckCount++;
    if (exception != null) {
        onErrorAckCount++;
        // the length check exists only to invoke topic() and let it throw
        // if RecordMetadata's TopicPartition is null
        if (metadata != null && metadata.topic().length() >= 0) {
            onErrorAckWithTopicSetCount++;
            if (metadata.partition() >= 0)
                onErrorAckWithTopicPartitionSetCount++;
        }
    }
    if (throwExceptionOnAck)
        throw new KafkaException("Injected exception in AppendProducerInterceptor.onAcknowledgement");
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 17, Source: ProducerInterceptorsTest.java
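
For context, onAcknowledgement is one of the ProducerInterceptor callbacks, and the producer catches and logs exceptions thrown from interceptor callbacks rather than propagating them; the injected KafkaException above exists to verify exactly that. A minimal interceptor skeleton (the class name is hypothetical):

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class LoggingProducerInterceptor implements ProducerInterceptor<String, String> {
    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        return record; // pass the record through unchanged
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // exception is non-null when the send failed; anything thrown here
        // is caught and logged by the producer, not propagated to the caller.
    }

    @Override
    public void close() {
    }

    @Override
    public void configure(Map<String, ?> configs) {
    }
}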

Example 5: handleResponse

import org.apache.kafka.common.KafkaException; // import the required package/class
@Override
public void handleResponse(AbstractResponse response) {
    InitProducerIdResponse initProducerIdResponse = (InitProducerIdResponse) response;
    Errors error = initProducerIdResponse.error();

    if (error == Errors.NONE) {
        ProducerIdAndEpoch producerIdAndEpoch = new ProducerIdAndEpoch(initProducerIdResponse.producerId(), initProducerIdResponse.epoch());
        setProducerIdAndEpoch(producerIdAndEpoch);
        transitionTo(State.READY);
        lastError = null;
        result.done();
    } else if (error == Errors.NOT_COORDINATOR || error == Errors.COORDINATOR_NOT_AVAILABLE) {
        lookupCoordinator(FindCoordinatorRequest.CoordinatorType.TRANSACTION, transactionalId);
        reenqueue();
    } else if (error == Errors.COORDINATOR_LOAD_IN_PROGRESS || error == Errors.CONCURRENT_TRANSACTIONS) {
        reenqueue();
    } else if (error == Errors.TRANSACTIONAL_ID_AUTHORIZATION_FAILED) {
        fatalError(error.exception());
    } else {
        fatalError(new KafkaException("Unexpected error in InitProducerIdResponse; " + error.message()));
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 23, Source: TransactionManager.java
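
The Errors enum used above is the standard bridge between wire-protocol error codes and exceptions: each constant carries an ApiException, which is a KafkaException subclass. A short sketch of the mapping (errorCode is a placeholder variable):

Errors error = Errors.forCode(errorCode); // wire-protocol short -> Errors constant
if (error != Errors.NONE)
    throw error.exception();              // ApiException extends KafkaException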

Example 6: resendFailedProduceRequestAfterAbortableError

import org.apache.kafka.common.KafkaException; // import the required package/class
@Test
public void resendFailedProduceRequestAfterAbortableError() throws Exception {
    final long pid = 13131L;
    final short epoch = 1;
    doInitTransactions(pid, epoch);
    transactionManager.beginTransaction();

    transactionManager.maybeAddPartitionToTransaction(tp0);

    Future<RecordMetadata> responseFuture = accumulator.append(tp0, time.milliseconds(), "key".getBytes(),
            "value".getBytes(), Record.EMPTY_HEADERS, null, MAX_BLOCK_TIMEOUT).future;

    prepareAddPartitionsToTxnResponse(Errors.NONE, tp0, epoch, pid);
    prepareProduceResponse(Errors.NOT_LEADER_FOR_PARTITION, pid, epoch);
    sender.run(time.milliseconds()); // AddPartitions
    sender.run(time.milliseconds()); // Produce

    assertFalse(responseFuture.isDone());

    transactionManager.transitionToAbortableError(new KafkaException());
    prepareProduceResponse(Errors.NONE, pid, epoch);

    sender.run(time.milliseconds());
    assertTrue(responseFuture.isDone());
    assertNotNull(responseFuture.get());
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 27, Source: TransactionManagerTest.java
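
The abortable-error state this test exercises corresponds to the transactional-producer pattern documented on KafkaProducer, shown here in simplified form: a ProducerFencedException is fatal, while other KafkaExceptions can be handled by aborting the transaction (producer and record are placeholders):

producer.initTransactions();
try {
    producer.beginTransaction();
    producer.send(record);
    producer.commitTransaction();
} catch (ProducerFencedException e) {
    producer.close();            // fatal: another producer with the same transactional.id took over
} catch (KafkaException e) {
    producer.abortTransaction(); // abortable: roll back, then retry the batch if desired
}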

Example 7: configure

import org.apache.kafka.common.KafkaException; // import the required package/class
public void configure(TransportLayer transportLayer, PrincipalBuilder principalBuilder, Map<String, ?> configs) throws KafkaException {
    try {
        this.transportLayer = transportLayer;
        this.configs = configs;

        // Initialize the saslState field to SEND_HANDSHAKE_REQUEST (or INITIAL if the handshake is disabled)
        setSaslState(handshakeRequestEnable ? SaslState.SEND_HANDSHAKE_REQUEST : SaslState.INITIAL);

        // determine client principal from subject for Kerberos to use as authorization id for the SaslClient.
        // For other mechanisms, the authenticated principal (username for PLAIN and SCRAM) is used as
        // authorization id. Hence the principal is not specified for creating the SaslClient.
        if (mechanism.equals(SaslConfigs.GSSAPI_MECHANISM))
            this.clientPrincipalName = firstPrincipal(subject);
        else
            this.clientPrincipalName = null;
        // SaslClientCallbackHandler used to collect authentication credentials
        callbackHandler = new SaslClientCallbackHandler();
        callbackHandler.configure(configs, Mode.CLIENT, subject, mechanism);

        // Create the SaslClient instance
        saslClient = createSaslClient();
    } catch (Exception e) {
        throw new KafkaException("Failed to configure SaslClientAuthenticator", e);
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 26, Source: SaslClientAuthenticator.java
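
SaslClientAuthenticator is internal plumbing; application code enables SASL purely through configuration. A hedged client-configuration sketch for the PLAIN mechanism (endpoint and credentials are placeholders; the sasl.jaas.config property requires Kafka 0.10.2 or newer):

props.put("security.protocol", "SASL_PLAINTEXT");
props.put("sasl.mechanism", "PLAIN");
props.put("sasl.jaas.config",
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"alice\" password=\"alice-secret\";");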

Example 8: testConstructorClose

import org.apache.kafka.common.KafkaException; // import the required package/class
@Test
public void testConstructorClose() throws Exception {
    Properties props = new Properties();
    props.setProperty(ConsumerConfig.CLIENT_ID_CONFIG, "testConstructorClose");
    props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "some.invalid.hostname.foo.bar.local:9999");
    props.setProperty(ConsumerConfig.METRIC_REPORTER_CLASSES_CONFIG, MockMetricsReporter.class.getName());

    final int oldInitCount = MockMetricsReporter.INIT_COUNT.get();
    final int oldCloseCount = MockMetricsReporter.CLOSE_COUNT.get();
    try {
        new KafkaConsumer<>(props, new ByteArrayDeserializer(), new ByteArrayDeserializer());
    } catch (KafkaException e) {
        assertEquals(oldInitCount + 1, MockMetricsReporter.INIT_COUNT.get());
        assertEquals(oldCloseCount + 1, MockMetricsReporter.CLOSE_COUNT.get());
        assertEquals("Failed to construct kafka consumer", e.getMessage());
        return;
    }
    Assert.fail("should have caught an exception and returned");
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: KafkaConsumerTest.java

Example 9: JaasConfig

import org.apache.kafka.common.KafkaException; // import the required package/class
public JaasConfig(String loginContextName, String jaasConfigParams) {
    StreamTokenizer tokenizer = new StreamTokenizer(new StringReader(jaasConfigParams));
    tokenizer.slashSlashComments(true);
    tokenizer.slashStarComments(true);
    tokenizer.wordChars('-', '-');
    tokenizer.wordChars('_', '_');
    tokenizer.wordChars('$', '$');

    try {
        configEntries = new ArrayList<>();
        while (tokenizer.nextToken() != StreamTokenizer.TT_EOF) {
            configEntries.add(parseAppConfigurationEntry(tokenizer));
        }
        if (configEntries.isEmpty())
            throw new IllegalArgumentException("Login module not specified in JAAS config");

        this.loginContextName = loginContextName;

    } catch (IOException e) {
        throw new KafkaException("Unexpected exception while parsing JAAS config");
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 23, Source: JaasConfig.java

Example 10: buildChannel

import org.apache.kafka.common.KafkaException; // import the required package/class
public KafkaChannel buildChannel(String id, SelectionKey key, int maxReceiveSize) throws KafkaException {
    try {
        SocketChannel socketChannel = (SocketChannel) key.channel();
        TransportLayer transportLayer = buildTransportLayer(id, key, socketChannel);
        Authenticator authenticator;
        if (mode == Mode.SERVER)
            authenticator = new SaslServerAuthenticator(id, jaasContext, loginManager.subject(),
                    kerberosShortNamer, socketChannel.socket().getLocalAddress().getHostName(), maxReceiveSize,
                    credentialCache);
        else
            authenticator = new SaslClientAuthenticator(id, loginManager.subject(), loginManager.serviceName(),
                    socketChannel.socket().getInetAddress().getHostName(), clientSaslMechanism, handshakeRequestEnable);
        // Both authenticators don't use `PrincipalBuilder`, so we pass `null` for now. Reconsider if this changes.
        authenticator.configure(transportLayer, null, this.configs);
        return new KafkaChannel(id, transportLayer, authenticator, maxReceiveSize);
    } catch (Exception e) {
        log.info("Failed to create channel due to ", e);
        throw new KafkaException(e);
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: SaslChannelBuilder.java

Example 11: writeTo

import org.apache.kafka.common.KafkaException; // import the required package/class
@Override
public long writeTo(GatheringByteChannel channel) throws IOException {
    if (completed())
        throw new KafkaException("This operation cannot be invoked on a complete request.");

    int totalWrittenPerCall = 0;
    boolean sendComplete;
    do {
        long written = current.writeTo(channel);
        totalWrittenPerCall += written;
        sendComplete = current.completed();
        if (sendComplete)
            nextSendOrDone();
    } while (!completed() && sendComplete);

    totalWritten += totalWrittenPerCall;

    if (completed() && totalWritten != size)
        log.error("mismatch in sending bytes over socket; expected: " + size + " actual: " + totalWritten);

    log.trace("Bytes written as part of multi-send call: {}, total bytes written so far: {}, expected bytes to write: {}",
            totalWrittenPerCall, totalWritten, size);

    return totalWrittenPerCall;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 26, Source: MultiSend.java

Example 12: truncateTo

import org.apache.kafka.common.KafkaException; // import the required package/class
/**
 * Truncate this file message set to the given size in bytes. Note that this API does no checking that the
 * given size falls on a valid message boundary.
 * In some versions of the JDK truncating to the same size as the file message set will cause an
 * update of the file's mtime, so truncate is only performed if the targetSize is smaller than the
 * size of the underlying FileChannel.
 * It is expected that no other threads will do writes to the log when this function is called.
 * @param targetSize The size to truncate to. Must be between 0 and sizeInBytes.
 * @return The number of bytes truncated off
 */
// Truncate the log file to targetSize
public int truncateTo(int targetSize) throws IOException {
    int originalSize = sizeInBytes();
    // Validate the target size
    if (targetSize > originalSize || targetSize < 0)
        throw new KafkaException("Attempt to truncate log segment to " + targetSize + " bytes failed, " +
                " size of this log segment is " + originalSize + " bytes.");
    if (targetSize < (int) channel.size()) {
        // Truncate the file
        channel.truncate(targetSize);
        // Update the in-memory size
        size.set(targetSize);
    }
    // Return the number of bytes truncated off
    return originalSize - targetSize;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 27, Source: FileRecords.java

Example 13: writeTo

import org.apache.kafka.common.KafkaException; // import the required package/class
@Override
public long writeTo(GatheringByteChannel destChannel, long offset, int length) throws IOException {
    long newSize = Math.min(channel.size(), end) - start;
    int oldSize = sizeInBytes();
    if (newSize < oldSize)
        throw new KafkaException(String.format(
                "Size of FileRecords %s has been truncated during write: old size %d, new size %d",
                file.getAbsolutePath(), oldSize, newSize));

    long position = start + offset;
    int count = Math.min(length, oldSize);
    final long bytesTransferred;
    if (destChannel instanceof TransportLayer) {
        TransportLayer tl = (TransportLayer) destChannel;
        bytesTransferred = tl.transferFrom(channel, position, count);
    } else {
        bytesTransferred = channel.transferTo(position, count, destChannel);
    }
    return bytesTransferred;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: FileRecords.java

Example 14: testIsSendToPartitionAllowedWithInFlightPartitionAddAfterAbortableError

import org.apache.kafka.common.KafkaException; // import the required package/class
@Test
public void testIsSendToPartitionAllowedWithInFlightPartitionAddAfterAbortableError() {
    final long pid = 13131L;
    final short epoch = 1;

    doInitTransactions(pid, epoch);

    transactionManager.beginTransaction();
    transactionManager.maybeAddPartitionToTransaction(tp0);

    // Send the AddPartitionsToTxn request and leave it in-flight
    sender.run(time.milliseconds());
    transactionManager.transitionToAbortableError(new KafkaException());

    assertFalse(transactionManager.isSendToPartitionAllowed(tp0));
    assertTrue(transactionManager.hasAbortableError());
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 18, Source: TransactionManagerTest.java

Example 15: testIsSendToPartitionAllowedWithAddedPartitionAfterAbortableError

import org.apache.kafka.common.KafkaException; // import the required package/class
@Test
public void testIsSendToPartitionAllowedWithAddedPartitionAfterAbortableError() {
    final long pid = 13131L;
    final short epoch = 1;

    doInitTransactions(pid, epoch);

    transactionManager.beginTransaction();

    transactionManager.maybeAddPartitionToTransaction(tp0);
    prepareAddPartitionsToTxnResponse(Errors.NONE, tp0, epoch, pid);
    sender.run(time.milliseconds());
    assertFalse(transactionManager.hasPartitionsToAdd());
    transactionManager.transitionToAbortableError(new KafkaException());

    assertTrue(transactionManager.isSendToPartitionAllowed(tp0));
    assertTrue(transactionManager.hasAbortableError());
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 19, Source: TransactionManagerTest.java


Note: The org.apache.kafka.common.KafkaException class examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets are drawn from open-source projects contributed by their respective developers, and copyright remains with the original authors; consult each project's license before distributing or using the code. Do not reproduce without permission.