Java StreamsException Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.streams.errors.StreamsException. If you are wondering what the StreamsException class does, how to use it, or where to find working examples, the curated class examples below may help.


The StreamsException class belongs to the org.apache.kafka.streams.errors package. Fifteen code examples of the class are shown below, ordered by popularity.
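Before the examples: StreamsException is the unchecked exception Kafka Streams throws for fatal processing errors. Here is a minimal, hedged sketch of where it typically surfaces in an application, namely the uncaught exception handler of the stream threads (the application id, bootstrap server, and topic name are placeholder assumptions, not taken from the examples):

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.StreamsException;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class StreamsExceptionDemo {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "demo-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        final KStreamBuilder builder = new KStreamBuilder();
        builder.stream("input-topic");

        final KafkaStreams streams = new KafkaStreams(builder, props);
        // must be registered before start()
        streams.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(final Thread thread, final Throwable e) {
                if (e instanceof StreamsException) {
                    // a fatal streams error killed this stream thread; a real
                    // application would trigger an orderly shutdown here
                    System.err.println("Fatal streams error on " + thread.getName() + ": " + e.getMessage());
                }
            }
        });
        streams.start();
    }
}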

Example 1: view

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
@Override
public KTableValueGetterSupplier<K, KeyValue<K1, V1>> view() {
    final KTableValueGetterSupplier<K, V> parentValueGetterSupplier = parent.valueGetterSupplier();

    return new KTableValueGetterSupplier<K, KeyValue<K1, V1>>() {

        @Override
        public KTableValueGetter<K, KeyValue<K1, V1>> get() {
            return new KTableMapValueGetter(parentValueGetterSupplier.get());
        }

        @Override
        public String[] storeNames() {
            throw new StreamsException("Underlying state store not accessible due to repartitioning.");
        }
    };
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 17, Source: KTableRepartitionMap.java

Example 2: process

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
/**
 * @throws StreamsException if key is null
 */
@Override
public void process(K key, Change<V> change) {
    // the original key should never be null
    if (key == null)
        throw new StreamsException("Record key for the grouping KTable should not be null.");

    // if the value is null, we do not need to forward its selected key-value further
    KeyValue<? extends K1, ? extends V1> newPair = change.newValue == null ? null : mapper.apply(key, change.newValue);
    KeyValue<? extends K1, ? extends V1> oldPair = change.oldValue == null ? null : mapper.apply(key, change.oldValue);

    // if the selected repartition key or value is null, skip
    // forward oldPair first, to be consistent with reduce and aggregate
    if (oldPair != null && oldPair.key != null && oldPair.value != null) {
        context().forward(oldPair.key, new Change<>(null, oldPair.value));
    }

    if (newPair != null && newPair.key != null && newPair.value != null) {
        context().forward(newPair.key, new Change<>(newPair.value, null));
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: KTableRepartitionMap.java
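For context, a hedged sketch of the DSL call that exercises this repartition path: grouping a KTable by a newly selected key. Any record whose original key is null triggers the StreamsException above. The topic and store names are placeholder assumptions:

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KGroupedTable;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.KeyValueMapper;

static KGroupedTable<String, String> groupByValue(final KStreamBuilder builder) {
    final KTable<String, String> table = builder.table("users", "users-store");
    return table.groupBy(new KeyValueMapper<String, String, KeyValue<String, String>>() {
        @Override
        public KeyValue<String, String> apply(final String key, final String value) {
            return KeyValue.pair(value, key);  // re-key the table by its value
        }
    });
}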

Example 3: serialize

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
/**
 * @throws StreamsException if both the old and new values of data are null,
 * or if both are non-null
 */
@Override
public byte[] serialize(String topic, Headers headers, Change<T> data) {
    byte[] serializedKey;

    // only one of the old / new values would be not null
    if (data.newValue != null) {
        if (data.oldValue != null)
            throw new StreamsException("Both old and new values are not null (" + data.oldValue
                    + " : " + data.newValue + ") in ChangeSerializer, which is not allowed.");

        serializedKey = inner.serialize(topic, headers, data.newValue);
    } else {
        if (data.oldValue == null)
            throw new StreamsException("Both old and new values are null in ChangeSerializer, which is not allowed.");

        serializedKey = inner.serialize(topic, headers, data.oldValue);
    }

    ByteBuffer buf = ByteBuffer.allocate(serializedKey.length + NEWFLAG_SIZE);
    buf.put(serializedKey);
    buf.put((byte) (data.newValue != null ? 1 : 0));

    return buf.array();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 29, Source: ChangedSerializer.java
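The wire format produced above is just the inner payload followed by a single flag byte (1 = new value, 0 = old value). As a hedged illustration (the helper names are invented, not Kafka API), the framing can be read back like this:

import java.nio.ByteBuffer;

public final class ChangeFrameDecoder {
    private static final int NEWFLAG_SIZE = 1;  // mirrors the constant used above

    // true if the trailing flag byte marks the payload as the new value
    static boolean isNewValue(final byte[] serialized) {
        return serialized[serialized.length - 1] != 0;
    }

    // strip the trailing flag byte and return the raw inner payload
    static byte[] payload(final byte[] serialized) {
        final byte[] inner = new byte[serialized.length - NEWFLAG_SIZE];
        ByteBuffer.wrap(serialized).get(inner);
        return inner;
    }
}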

Example 4: keySerde

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
/**
 * Return a {@link Serde#configure(Map, boolean) configured} instance of the {@link #KEY_SERDE_CLASS_CONFIG key Serde
 * class}. This method is deprecated; use {@link #defaultKeySerde()} instead.
 *
 * @return a configured instance of the key Serde class
 */
@Deprecated
public Serde keySerde() {
    try {
        Serde<?> serde = getConfiguredInstance(KEY_SERDE_CLASS_CONFIG, Serde.class);
        // the default value of deprecated key serde is null
        if (serde == null) {
            serde = defaultKeySerde();
        } else {
            serde.configure(originals(), true);
        }
        return serde;
    } catch (final Exception e) {
        throw new StreamsException(String.format("Failed to configure key serde %s", get(KEY_SERDE_CLASS_CONFIG)), e);
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 22, Source: StreamsConfig.java

Example 5: valueSerde

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
/**
 * Return a {@link Serde#configure(Map, boolean) configured} instance of the {@link #VALUE_SERDE_CLASS_CONFIG value
 * Serde class}. This method is deprecated; use {@link #defaultValueSerde()} instead.
 *
 * @return a configured instance of the value Serde class
 */
@Deprecated
public Serde valueSerde() {
    try {
        Serde<?> serde = getConfiguredInstance(VALUE_SERDE_CLASS_CONFIG, Serde.class);
        // the default value of deprecated value serde is null
        if (serde == null) {
            serde = defaultValueSerde();
        } else {
            serde.configure(originals(), false);
        }
        return serde;
    } catch (final Exception e) {
        throw new StreamsException(String.format("Failed to configure value serde %s", get(VALUE_SERDE_CLASS_CONFIG)), e);
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 22, Source: StreamsConfig.java
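Since both accessors above are deprecated, here is a hedged sketch of the replacement configuration: set the default Serdes via DEFAULT_KEY_SERDE_CLASS_CONFIG / DEFAULT_VALUE_SERDE_CLASS_CONFIG and read them back with defaultKeySerde() / defaultValueSerde(), which wrap configuration failures in a StreamsException just like the deprecated variants (the application id and serde choices are placeholders):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;

static StreamsConfig defaultSerdeConfig() {
    final Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "serde-demo");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());

    final StreamsConfig config = new StreamsConfig(props);
    final Serde<?> keySerde = config.defaultKeySerde();     // throws StreamsException on misconfiguration
    final Serde<?> valueSerde = config.defaultValueSerde(); // likewise for the value side
    return config;
}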

Example 6: start

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
/**
 * Start the {@code KafkaStreams} instance by starting all its threads.
 * <p>
 * Note, for brokers with version {@code 0.9.x} or lower, the broker version cannot be checked.
 * There will be no error and the client will hang and retry to verify the broker version until it
 * {@link StreamsConfig#REQUEST_TIMEOUT_MS_CONFIG times out}.
 *
 * @throws IllegalStateException if process was already started
 * @throws StreamsException if the Kafka brokers have version 0.10.0.x
 */
public synchronized void start() throws IllegalStateException, StreamsException {
    log.debug("{} Starting Kafka Stream process.", logPrefix);

    if (state == State.CREATED) {
        checkBrokerVersionCompatibility();
        setState(State.RUNNING);

        if (globalStreamThread != null) {
            globalStreamThread.start();
        }

        for (final StreamThread thread : threads) {
            thread.start();
        }

        log.info("{} Started Kafka Stream process", logPrefix);
    } else {
        throw new IllegalStateException("Cannot start again.");
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 31, Source: KafkaStreams.java
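A hedged sketch of the lifecycle contract documented above: start() only succeeds from the CREATED state, so a second call fails with an IllegalStateException (builder and props stand for any minimal Streams setup):

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.processor.TopologyBuilder;

static void startOnce(final TopologyBuilder builder, final Properties props) {
    final KafkaStreams streams = new KafkaStreams(builder, props);
    streams.start();           // CREATED -> RUNNING; may throw StreamsException for 0.10.0.x brokers
    try {
        streams.start();       // state is no longer CREATED
    } catch (final IllegalStateException e) {
        System.err.println("Already started: " + e.getMessage());
    }
    streams.close();
}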

Example 7: makeReady

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
/**
 * Prepares the given set of internal topics.
 *
 * If a topic does not exist, it is created.
 * If a topic exists with the correct number of partitions, it is left as-is.
 * If a topic exists with a different number of partitions, an exception is thrown asking the user to reset the application before restarting.
 */
public void makeReady(final Map<InternalTopicConfig, Integer> topics) {
    for (int i = 0; i < MAX_TOPIC_READY_TRY; i++) {
        try {
            final MetadataResponse metadata = streamsKafkaClient.fetchMetadata();
            final Map<String, Integer> existingTopicPartitions = fetchExistingPartitionCountByTopic(metadata);
            final Map<InternalTopicConfig, Integer> topicsToBeCreated = validateTopicPartitions(topics, existingTopicPartitions);
            if (metadata.brokers().size() < replicationFactor) {
                throw new StreamsException("Found only " + metadata.brokers().size() + " brokers, " +
                    " but replication factor is " + replicationFactor + "." +
                    " Decrease replication factor for internal topics via StreamsConfig parameter \"replication.factor\""  +
                    " or add more brokers to your cluster.");
            }
            streamsKafkaClient.createTopics(topicsToBeCreated, replicationFactor, windowChangeLogAdditionalRetention, metadata);
            return;
        } catch (StreamsException ex) {
            log.warn("Could not create internal topics: " + ex.getMessage() + " Retry #" + i);
        }
        // backoff
        time.sleep(100L);
    }
    throw new StreamsException("Could not create internal topics.");
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 30, Source: InternalTopicManager.java
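makeReady() and getNumPartitions() (next example) share the same retry-with-fixed-backoff shape. A hedged, generic restatement of that pattern (RetryingAction and runWithRetries are illustrative names, not Kafka API):

import org.apache.kafka.streams.errors.StreamsException;

interface RetryingAction<T> {
    T attempt() throws StreamsException;
}

static <T> T runWithRetries(final RetryingAction<T> action,
                            final int maxTries,
                            final long backoffMs,
                            final String description) {
    for (int i = 0; i < maxTries; i++) {
        try {
            return action.attempt();
        } catch (final StreamsException ex) {
            System.err.println("Could not " + description + ": " + ex.getMessage() + " Retry #" + i);
        }
        try {
            Thread.sleep(backoffMs);  // backoff before the next attempt
        } catch (final InterruptedException ie) {
            Thread.currentThread().interrupt();
            break;
        }
    }
    throw new StreamsException("Could not " + description + ".");
}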

Example 8: getNumPartitions

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
/**
 * Get the number of partitions for the given topics
 */
public Map<String, Integer> getNumPartitions(final Set<String> topics) {
    for (int i = 0; i < MAX_TOPIC_READY_TRY; i++) {
        try {
            final MetadataResponse metadata = streamsKafkaClient.fetchMetadata();
            final Map<String, Integer> existingTopicPartitions = fetchExistingPartitionCountByTopic(metadata);
            existingTopicPartitions.keySet().retainAll(topics);

            return existingTopicPartitions;
        } catch (StreamsException ex) {
            log.warn("Could not get number of partitions: " + ex.getMessage() + " Retry #" + i);
        }
        // backoff
        time.sleep(100L);
    }
    throw new StreamsException("Could not get number of partitions.");
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: InternalTopicManager.java

Example 9: validateTopicPartitions

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
/**
 * Check that the existing topics have the correct number of partitions, and return the non-existing topics to be created
 */
private Map<InternalTopicConfig, Integer> validateTopicPartitions(final Map<InternalTopicConfig, Integer> topicsPartitionsMap,
                                                                  final Map<String, Integer> existingTopicNamesPartitions) {
    final Map<InternalTopicConfig, Integer> topicsToBeCreated = new HashMap<>();
    for (Map.Entry<InternalTopicConfig, Integer> entry : topicsPartitionsMap.entrySet()) {
        InternalTopicConfig topic = entry.getKey();
        Integer partition = entry.getValue();
        if (existingTopicNamesPartitions.containsKey(topic.name())) {
            if (!existingTopicNamesPartitions.get(topic.name()).equals(partition)) {
                throw new StreamsException("Existing internal topic " + topic.name() + " has invalid partitions." +
                        " Expected: " + partition + " Actual: " + existingTopicNamesPartitions.get(topic.name()) +
                        ". Use 'kafka.tools.StreamsResetter' tool to clean up invalid topics before processing.");
            }
        } else {
            topicsToBeCreated.put(topic, partition);
        }
    }

    return topicsToBeCreated;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 23, Source: InternalTopicManager.java

Example 10: punctuate

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
/**
 * @throws IllegalStateException if the current node is not null
 */
@Override
public void punctuate(final ProcessorNode node, final long timestamp) {
    if (processorContext.currentNode() != null) {
        throw new IllegalStateException(String.format("%s Current node is not null", logPrefix));
    }

    updateProcessorContext(new StampedRecord(DUMMY_RECORD, timestamp), node);

    log.trace("{} Punctuating processor {} with timestamp {}", logPrefix, node.name(), timestamp);

    try {
        node.punctuate(timestamp);
    } catch (final KafkaException e) {
        throw new StreamsException(String.format("%s Exception caught while punctuating processor '%s'", logPrefix,  node.name()), e);
    } finally {
        processorContext.setCurrentNode(null);
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 22, Source: StreamTask.java
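For context, a hedged sketch of a processor whose punctuate() callback the code above drives; in 0.11 punctuation is requested via context.schedule(interval) and delivered via punctuate(timestamp). If the callback throws a KafkaException, StreamTask rethrows it wrapped in the StreamsException shown above (the class name and record contents are placeholders):

import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class HeartbeatProcessor extends AbstractProcessor<String, String> {

    @Override
    public void init(final ProcessorContext context) {
        super.init(context);
        context.schedule(1000L);  // ask the runtime to punctuate roughly every second (stream time)
    }

    @Override
    public void process(final String key, final String value) {
        context().forward(key, value);
    }

    @Override
    public void punctuate(final long timestamp) {
        context().forward("heartbeat", String.valueOf(timestamp));  // emit a synthetic record
    }
}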

Example 11: send

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
@Override
public <K, V> void send(final String topic,
                        final K key,
                        final V value,
                        final Long timestamp,
                        final Serializer<K> keySerializer,
                        final Serializer<V> valueSerializer,
                        final StreamPartitioner<? super K, ? super V> partitioner) {
    Integer partition = null;

    if (partitioner != null) {
        final List<PartitionInfo> partitions = producer.partitionsFor(topic);
        if (partitions.size() > 0) {
            partition = partitioner.partition(key, value, partitions.size());
        } else {
            throw new StreamsException("Could not get partition information for topic '" + topic + "'." +
                " This can happen if the topic does not exist.");
        }
    }

    send(topic, key, value, partition, timestamp, keySerializer, valueSerializer);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 23, Source: RecordCollectorImpl.java
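A hedged sketch of a StreamPartitioner like the one consulted above (the class is illustrative): returning a concrete partition routes the record there, while returning null defers to the producer's default partitioning:

import org.apache.kafka.streams.processor.StreamPartitioner;

public class FirstCharPartitioner implements StreamPartitioner<String, String> {

    @Override
    public Integer partition(final String key, final String value, final int numPartitions) {
        if (key == null || key.isEmpty()) {
            return null;  // let the producer's default partitioner decide
        }
        return (key.charAt(0) & 0x7f) % numPartitions;  // stable, non-negative bucket
    }
}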

Example 12: process

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
@Override
public void process(final K key, final V value) {
    final RecordCollector collector = ((RecordCollector.Supplier) context).recordCollector();

    final long timestamp = context.timestamp();
    if (timestamp < 0) {
        throw new StreamsException("Invalid (negative) timestamp of " + timestamp + " for output record <" + key + ":" + value + ">.");
    }

    try {
        collector.send(topic, key, value, timestamp, keySerializer, valSerializer, partitioner);
    } catch (final ClassCastException e) {
        final String keyClass = key == null ? "unknown because key is null" : key.getClass().getName();
        final String valueClass = value == null ? "unknown because value is null" : value.getClass().getName();
        throw new StreamsException(
                String.format("A serializer (key: %s / value: %s) is not compatible to the actual key or value type " +
                                "(key type: %s / value type: %s). Change the default Serdes in StreamConfig or " +
                                "provide correct Serdes via method parameters.",
                                keySerializer.getClass().getName(),
                                valSerializer.getClass().getName(),
                                keyClass,
                                valueClass),
                e);
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 26, Source: SinkNode.java
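A hedged sketch of the remedy the error message suggests: pass matching Serdes explicitly at the source and sink instead of relying on the configured defaults (topic names are placeholder assumptions):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

static void copyWithExplicitSerdes(final KStreamBuilder builder) {
    final KStream<String, Long> counts =
        builder.stream(Serdes.String(), Serdes.Long(), "counts-input");
    counts.to(Serdes.String(), Serdes.Long(), "counts-output");  // explicit Serdes at the sink
}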

Example 13: fetchMetadata

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
/**
 * Fetch the metadata for all topics
 */
public MetadataResponse fetchMetadata() {

    final ClientRequest clientRequest = kafkaClient.newClientRequest(
        getAnyReadyBrokerId(),
        MetadataRequest.Builder.allTopics(),
        Time.SYSTEM.milliseconds(),
        true);
    final ClientResponse clientResponse = sendRequest(clientRequest);

    if (!clientResponse.hasResponse()) {
        throw new StreamsException("Empty response for client request.");
    }
    if (!(clientResponse.responseBody() instanceof MetadataResponse)) {
        throw new StreamsException("Inconsistent response type for internal topic metadata request. " +
            "Expected MetadataResponse but received " + clientResponse.responseBody().getClass().getName());
    }
    final MetadataResponse metadataResponse = (MetadataResponse) clientResponse.responseBody();
    return metadataResponse;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 23, Source: StreamsKafkaClient.java

Example 14: pollRequests

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
/**
 * Get the next batch of records by polling.
 * @return Next batch of records or null if no records available.
 */
private ConsumerRecords<byte[], byte[]> pollRequests() {
    ConsumerRecords<byte[], byte[]> records = null;

    try {
        records = consumer.poll(pollTimeMs);
    } catch (final InvalidOffsetException e) {
        resetInvalidOffsets(e);
    }

    if (rebalanceException != null) {
        if (!(rebalanceException instanceof ProducerFencedException)) {
            throw new StreamsException(logPrefix + " Failed to rebalance.", rebalanceException);
        }
    }

    return records;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 22, Source: StreamThread.java

Example 15: shouldThroughOnUnassignedStateStoreAccess

import org.apache.kafka.streams.errors.StreamsException; // import the required package/class
@Test(expected = TopologyBuilderException.class)
public void shouldThroughOnUnassignedStateStoreAccess() {
    final String sourceNodeName = "source";
    final String goodNodeName = "goodGuy";
    final String badNodeName = "badGuy";

    final Properties config = new Properties();
    config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "host:1");
    config.put(StreamsConfig.APPLICATION_ID_CONFIG, "appId");
    final StreamsConfig streamsConfig = new StreamsConfig(config);

    try {
        final TopologyBuilder builder = new TopologyBuilder();
        builder
            .addSource(sourceNodeName, "topic")
            .addProcessor(goodNodeName, new LocalMockProcessorSupplier(), sourceNodeName)
            .addStateStore(
                Stores.create(LocalMockProcessorSupplier.STORE_NAME).withStringKeys().withStringValues().inMemory().build(),
                goodNodeName)
            .addProcessor(badNodeName, new LocalMockProcessorSupplier(), sourceNodeName);

        final ProcessorTopologyTestDriver driver = new ProcessorTopologyTestDriver(streamsConfig, builder);
        driver.process("topic", null, null);
    } catch (final StreamsException e) {
        final Throwable cause = e.getCause();
        if (cause != null
            && cause instanceof TopologyBuilderException
            && cause.getMessage().equals("Invalid topology building: Processor " + badNodeName + " has no access to StateStore " + LocalMockProcessorSupplier.STORE_NAME)) {
            throw (TopologyBuilderException) cause;
        } else {
            throw new RuntimeException("Did expect different exception. Did catch:", e);
        }
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 35, Source: TopologyBuilderTest.java


Note: The org.apache.kafka.streams.errors.StreamsException class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective developers; copyright belongs to the original authors, and any distribution or use must comply with the corresponding project licenses. Do not reproduce without permission.