

Java ByteArrayDeserializer Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.common.serialization.ByteArrayDeserializer. If you are wondering what the ByteArrayDeserializer class does, how to use it, or what real-world code that uses it looks like, the curated class examples below should help.


The ByteArrayDeserializer class belongs to the org.apache.kafka.common.serialization package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code samples.
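
Before working through the examples, it helps to know that ByteArrayDeserializer is a pass-through: its deserialize method returns the raw record bytes unchanged, which is why it appears wherever an application wants to defer decoding to its own code. Below is a minimal, self-contained sketch of both that behavior and the typical consumer wiring; the broker address and group id are placeholders, not values taken from any example on this page.

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class ByteArrayDeserializerSketch {
    public static void main(String[] args) {
        // Pass-through behavior: deserialize() hands back the same byte array it was given.
        byte[] payload = {1, 2, 3};
        try (ByteArrayDeserializer deserializer = new ByteArrayDeserializer()) {
            byte[] result = deserializer.deserialize("any-topic", payload);
            System.out.println(result == payload); // true: same array, no copy
        }

        // Typical wiring: record keys and values then arrive as raw byte[].
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sketch-group");            // placeholder group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        new KafkaConsumer<byte[], byte[]>(props).close();
    }
}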

Example 1: buildIOReader

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
@Override
public PCollection<BeamRecord> buildIOReader(Pipeline pipeline) {
  KafkaIO.Read<byte[], byte[]> kafkaRead = null;
  if (topics != null) {
    kafkaRead = KafkaIO.<byte[], byte[]>read()
        .withBootstrapServers(bootstrapServers)
        .withTopics(topics)
        .updateConsumerProperties(configUpdates)
        .withKeyDeserializerAndCoder(ByteArrayDeserializer.class, ByteArrayCoder.of())
        .withValueDeserializerAndCoder(ByteArrayDeserializer.class, ByteArrayCoder.of());
  } else if (topicPartitions != null) {
    kafkaRead = KafkaIO.<byte[], byte[]>read()
        .withBootstrapServers(bootstrapServers)
        .withTopicPartitions(topicPartitions)
        .updateConsumerProperties(configUpdates)
        .withKeyDeserializerAndCoder(ByteArrayDeserializer.class, ByteArrayCoder.of())
        .withValueDeserializerAndCoder(ByteArrayDeserializer.class, ByteArrayCoder.of());
  } else {
    throw new IllegalArgumentException("One of topics and topicPartitions must be configured.");
  }

  return PBegin.in(pipeline)
      .apply("read", kafkaRead.withoutMetadata())
      .apply("in_format", getPTransformForInput());
}
 
Developer: apache, Project: beam, Lines: 25, Source: BeamKafkaTable.java

Example 2: Consumer

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
Consumer(Topic topic, String consumerGroupId, Properties props, PartitionProcessorFactory processorFactory) {
    this.topic = topic;
    this.consumerGroupId = consumerGroupId;

    // Mandatory settings, not changeable
    props.put("group.id", consumerGroupId);
    props.put("key.deserializer", StringDeserializer.class.getName());
    props.put("value.deserializer", ByteArrayDeserializer.class.getName());

    kafka = new KafkaConsumer<>(props);
    partitions = new AssignedPartitions(processorFactory);

    long now = System.currentTimeMillis();

    // start it
    consumerLoopExecutor.execute(new ConsumerLoop());
}
 
Developer: Sixt, Project: ja-micro, Lines: 18, Source: Consumer.java

Example 3: consumeAllRecordsFromTopic

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
/**
 * This will consume all records from all partitions on the given topic.
 * @param topic Topic to consume from.
 * @return List of ConsumerRecords consumed.
 */
public List<ConsumerRecord<byte[], byte[]>> consumeAllRecordsFromTopic(final String topic) {
    // Connect to broker to determine what partitions are available.
    KafkaConsumer<byte[], byte[]> kafkaConsumer = kafkaTestServer.getKafkaConsumer(
        ByteArrayDeserializer.class,
        ByteArrayDeserializer.class
    );

    final List<Integer> partitionIds = new ArrayList<>();
    for (PartitionInfo partitionInfo: kafkaConsumer.partitionsFor(topic)) {
        partitionIds.add(partitionInfo.partition());
    }
    kafkaConsumer.close();

    return consumeAllRecordsFromTopic(topic, partitionIds);
}
 
Developer: salesforce, Project: kafka-junit, Lines: 21, Source: KafkaTestUtils.java
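
The helper above relies on kafka-junit's kafkaTestServer to hand back a ready-made consumer. A roughly equivalent sketch using only the plain Kafka client API might look like the following: it lists the partitions, assigns them directly (so no consumer-group coordination is involved), rewinds to the beginning, and polls until a poll comes back empty. The TopicDrainer class and its method names are illustrative, not part of kafka-junit.

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public final class TopicDrainer {
    /** Drains every available record from every partition of a topic. */
    public static List<ConsumerRecord<byte[], byte[]>> drain(String bootstrapServers, String topic) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        List<ConsumerRecord<byte[], byte[]>> result = new ArrayList<>();
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // Discover partitions, then assign() them directly instead of subscribe().
            List<TopicPartition> partitions = new ArrayList<>();
            for (PartitionInfo info : consumer.partitionsFor(topic)) {
                partitions.add(new TopicPartition(topic, info.partition()));
            }
            consumer.assign(partitions);
            consumer.seekToBeginning(partitions);

            // Naive stop condition: one empty poll is treated as "caught up".
            ConsumerRecords<byte[], byte[]> batch;
            do {
                batch = consumer.poll(1000L);
                for (ConsumerRecord<byte[], byte[]> record : batch) {
                    result.add(record);
                }
            } while (!batch.isEmpty());
        }
        return result;
    }
}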

Example 4: createDefaultMessageFormats

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
/**
 * Creates default message formats.
 */
private void createDefaultMessageFormats() {
    final Map<String, String> defaultFormats = new HashMap<>();
    defaultFormats.put("Short", ShortDeserializer.class.getName());
    defaultFormats.put("ByteArray", ByteArrayDeserializer.class.getName());
    defaultFormats.put("Bytes", BytesDeserializer.class.getName());
    defaultFormats.put("Double", DoubleDeserializer.class.getName());
    defaultFormats.put("Float", FloatDeserializer.class.getName());
    defaultFormats.put("Integer", IntegerDeserializer.class.getName());
    defaultFormats.put("Long", LongDeserializer.class.getName());
    defaultFormats.put("String", StringDeserializer.class.getName());

    // Create if needed.
    for (final Map.Entry<String, String> entry : defaultFormats.entrySet()) {
        MessageFormat messageFormat = messageFormatRepository.findByName(entry.getKey());
        if (messageFormat == null) {
            messageFormat = new MessageFormat();
        }
        messageFormat.setName(entry.getKey());
        messageFormat.setClasspath(entry.getValue());
        messageFormat.setJar("n/a");
        messageFormat.setDefaultFormat(true);
        messageFormatRepository.save(messageFormat);
    }
}
 
Developer: SourceLabOrg, Project: kafka-webview, Lines: 28, Source: DataLoaderConfig.java
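
Example 4 only stores deserializer class names; presumably they are instantiated elsewhere via reflection when a message needs decoding. A minimal sketch of that step, under the assumption that the stored classpath names a public no-arg Deserializer implementation (the DeserializerLoader helper is hypothetical, not part of kafka-webview):

import org.apache.kafka.common.serialization.Deserializer;

public class DeserializerLoader {
    /** Instantiates a Deserializer from a fully-qualified class name. */
    public static Deserializer<?> load(String classpath) throws ReflectiveOperationException {
        Class<?> clazz = Class.forName(classpath);
        return (Deserializer<?>) clazz.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        Deserializer<?> d = load("org.apache.kafka.common.serialization.ByteArrayDeserializer");
        byte[] bytes = (byte[]) d.deserialize("topic", new byte[]{1, 2, 3});
        System.out.println(bytes.length); // prints 3
    }
}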

Example 5: consumeRecords

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
private static void consumeRecords(String bootstrapServers) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "byte-array-consumer");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

    Consumer<Long, byte[]> consumer = new KafkaConsumer<>(props);

    consumer.subscribe(Arrays.asList(TOPIC));

    ConsumerRecords<Long, byte[]> records = consumer.poll(10000);

    for (ConsumerRecord<Long, byte[]> record : records)
        out.printf(
                "key = %s value = %s%n",
                record.key(),
                new String(record.value()));

    consumer.close();
}
 
Developer: jeqo, Project: talk-kafka-messaging-logs, Lines: 23, Source: ProduceConsumeLongByteArrayRecord.java
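
Example 5 shows only the consuming half of ProduceConsumeLongByteArrayRecord. A plausible producing counterpart, with LongSerializer keys mirroring the LongDeserializer above, could look like the sketch below; the TOPIC constant and record contents are assumptions, not taken from the original source.

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.LongSerializer;

class ProduceLongByteArrayRecord {
    private static final String TOPIC = "long-byte-array-topic"; // hypothetical topic name

    static void produceRecord(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

        try (Producer<Long, byte[]> producer = new KafkaProducer<>(props)) {
            byte[] value = "hello".getBytes(StandardCharsets.UTF_8);
            producer.send(new ProducerRecord<>(TOPIC, 42L, value));
            producer.flush();
        }
    }
}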

Example 6: consumeRecords

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
private static void consumeRecords(String bootstrapServers) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "avro-consumer");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

    Consumer<String, byte[]> consumer = new KafkaConsumer<>(props);

    consumer.subscribe(Arrays.asList(TOPIC));

    ConsumerRecords<String, byte[]> records = consumer.poll(10000);

    for (ConsumerRecord<String, byte[]> record : records)
        out.printf(
                "key = %s value = %s%n",
                record.key(),
                UserAvroSerdes.deserialize(record.value()).getName().toString());

    consumer.close();
}
 
Developer: jeqo, Project: talk-kafka-messaging-logs, Lines: 23, Source: ProduceConsumeStringAvroRecord.java

Example 7: setupAndCreateKafkaBasedLog

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
KafkaBasedLog<String, byte[]> setupAndCreateKafkaBasedLog(String topic, final WorkerConfig config) {
    Map<String, Object> producerProps = new HashMap<>();
    producerProps.putAll(config.originals());
    producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
    producerProps.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);

    Map<String, Object> consumerProps = new HashMap<>();
    consumerProps.putAll(config.originals());
    consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

    Map<String, Object> adminProps = new HashMap<>(config.originals());
    NewTopic topicDescription = TopicAdmin.defineTopic(topic).
            compacted().
            partitions(1).
            replicationFactor(config.getShort(DistributedConfig.CONFIG_STORAGE_REPLICATION_FACTOR_CONFIG)).
            build();

    return createKafkaBasedLog(topic, producerProps, consumerProps, new ConsumeCallback(), topicDescription, adminProps);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 22, Source: KafkaConfigBackingStore.java

Example 8: testWindowedDeserializerNoArgConstructors

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
@Test
public void testWindowedDeserializerNoArgConstructors() {
    Map<String, String> props = new HashMap<>();
    // test key[value].deserializer.inner.class takes precedence over serializer.inner.class
    WindowedDeserializer<StringSerializer> windowedDeserializer = new WindowedDeserializer<>();
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "host:1");
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "appId");
    props.put("key.deserializer.inner.class", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("deserializer.inner.class", "org.apache.kafka.common.serialization.StringDeserializer");
    windowedDeserializer.configure(props, true);
    Deserializer<?> inner = windowedDeserializer.innerDeserializer();
    assertNotNull("Inner deserializer should be not null", inner);
    assertTrue("Inner deserializer type should be StringDeserializer", inner instanceof StringDeserializer);
    // test deserializer.inner.class
    props.put("deserializer.inner.class", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    props.remove("key.deserializer.inner.class");
    props.remove("value.deserializer.inner.class");
    WindowedDeserializer<?> windowedDeserializer1 = new WindowedDeserializer<>();
    windowedDeserializer1.configure(props, false);
    Deserializer<?> inner1 = windowedDeserializer1.innerDeserializer();
    assertNotNull("Inner deserializer should be not null", inner1);
    assertTrue("Inner deserializer type should be ByteArrayDeserializer", inner1 instanceof ByteArrayDeserializer);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 24, Source: WindowedStreamPartitionerTest.java

Example 9: setProduceConsumeProperties

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
private Properties setProduceConsumeProperties(final String clientId) {
    Properties props = new Properties();
    props.put(ProducerConfig.CLIENT_ID_CONFIG, clientId);
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
    // the socket buffer needs to be large, especially when running in AWS with
    // high latency. if running locally the default is fine.
    props.put(ProducerConfig.SEND_BUFFER_CONFIG, SOCKET_SIZE_BYTES);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    // the socket buffer needs to be large, especially when running in AWS with
    // high latency. if running locally the default is fine.
    props.put(ConsumerConfig.RECEIVE_BUFFER_CONFIG, SOCKET_SIZE_BYTES);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, MAX_POLL_RECORDS);
    return props;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 19, Source: SimpleBenchmark.java

Example 10: testListOffsetsSendsIsolationLevel

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
@Test
public void testListOffsetsSendsIsolationLevel() {
    for (final IsolationLevel isolationLevel : IsolationLevel.values()) {
        Fetcher<byte[], byte[]> fetcher = createFetcher(subscriptions, new Metrics(), new ByteArrayDeserializer(),
                new ByteArrayDeserializer(), Integer.MAX_VALUE, isolationLevel);

        subscriptions.assignFromUser(singleton(tp1));
        subscriptions.needOffsetReset(tp1, OffsetResetStrategy.LATEST);

        client.prepareResponse(new MockClient.RequestMatcher() {
            @Override
            public boolean matches(AbstractRequest body) {
                ListOffsetRequest request = (ListOffsetRequest) body;
                return request.isolationLevel() == isolationLevel;
            }
        }, listOffsetResponse(Errors.NONE, 1L, 5L));
        fetcher.updateFetchPositions(singleton(tp1));
        assertFalse(subscriptions.isOffsetResetNeeded(tp1));
        assertTrue(subscriptions.isFetchable(tp1));
        assertEquals(5, subscriptions.position(tp1).longValue());
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 23, Source: FetcherTest.java

Example 11: testConstructorClose

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
@Test
public void testConstructorClose() throws Exception {
    Properties props = new Properties();
    props.setProperty(ConsumerConfig.CLIENT_ID_CONFIG, "testConstructorClose");
    props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "some.invalid.hostname.foo.bar.local:9999");
    props.setProperty(ConsumerConfig.METRIC_REPORTER_CLASSES_CONFIG, MockMetricsReporter.class.getName());

    final int oldInitCount = MockMetricsReporter.INIT_COUNT.get();
    final int oldCloseCount = MockMetricsReporter.CLOSE_COUNT.get();
    try {
        new KafkaConsumer<>(props, new ByteArrayDeserializer(), new ByteArrayDeserializer());
    } catch (KafkaException e) {
        assertEquals(oldInitCount + 1, MockMetricsReporter.INIT_COUNT.get());
        assertEquals(oldCloseCount + 1, MockMetricsReporter.CLOSE_COUNT.get());
        assertEquals("Failed to construct kafka consumer", e.getMessage());
        return;
    }
    Assert.fail("should have caught an exception and returned");
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: KafkaConsumerTest.java

Example 12: createFetcher

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
private Fetcher<byte[], byte[]> createFetcher(int maxPollRecords,
                                              SubscriptionState subscriptions,
                                              Metrics metrics) {
    return new Fetcher<>(consumerClient,
            minBytes,
            maxWaitMs,
            fetchSize,
            maxPollRecords,
            true, // check crc
            new ByteArrayDeserializer(),
            new ByteArrayDeserializer(),
            metadata,
            subscriptions,
            metrics,
            "consumer" + groupId,
            time,
            retryBackoffMs);
}
 
Developer: txazo, Project: kafka, Lines: 19, Source: FetcherTest.java

Example 13: testConstructorClose

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
@Test
public void testConstructorClose() throws Exception {
    Properties props = new Properties();
    props.setProperty(ConsumerConfig.CLIENT_ID_CONFIG, "testConstructorClose");
    props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "some.invalid.hostname.foo.bar:9999");
    props.setProperty(ConsumerConfig.METRIC_REPORTER_CLASSES_CONFIG, MockMetricsReporter.class.getName());

    final int oldInitCount = MockMetricsReporter.INIT_COUNT.get();
    final int oldCloseCount = MockMetricsReporter.CLOSE_COUNT.get();
    try {
        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<byte[], byte[]>(
                props, new ByteArrayDeserializer(), new ByteArrayDeserializer());
    } catch (KafkaException e) {
        assertEquals(oldInitCount + 1, MockMetricsReporter.INIT_COUNT.get());
        assertEquals(oldCloseCount + 1, MockMetricsReporter.CLOSE_COUNT.get());
        assertEquals("Failed to construct kafka consumer", e.getMessage());
        return;
    }
    Assert.fail("should have caught an exception and returned");
}
 
Developer: txazo, Project: kafka, Lines: 21, Source: KafkaConsumerTest.java

Example 14: testZeroLengthValue

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
@Test
public void testZeroLengthValue() throws Exception {
  Properties producerPropertyOverrides = new Properties();
  producerPropertyOverrides.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

  try (LiKafkaProducer producer =  createProducer(producerPropertyOverrides)) {
    producer.send(new ProducerRecord<>("testZeroLengthValue", "key", new byte[0])).get();
  }
  Properties consumerProps = new Properties();
  consumerProps.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
  consumerProps.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

  try (LiKafkaConsumer consumer = createConsumer(consumerProps)) {
    consumer.subscribe(Collections.singleton("testZeroLengthValue"));
    long startMs = System.currentTimeMillis();
    ConsumerRecords records = ConsumerRecords.empty();
    while (records.isEmpty() && System.currentTimeMillis() < startMs + 30000) {
      records = consumer.poll(100);
    }
    assertEquals(1, records.count());
    ConsumerRecord record = (ConsumerRecord) records.iterator().next();
    assertEquals("key", record.key());
    assertEquals(((byte[]) record.value()).length, 0);
  }
}
 
Developer: linkedin, Project: li-apache-kafka-clients, Lines: 26, Source: LiKafkaProducerIntegrationTest.java

Example 15: createConsumerFactory

import org.apache.kafka.common.serialization.ByteArrayDeserializer; // import the required package/class
private ConsumerFactory<?, ?> createConsumerFactory(String group) {
	if (defaultConsumerFactory != null) {
		return defaultConsumerFactory;
	}
	Map<String, Object> props = new HashMap<>();
	props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
	props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
	if (!ObjectUtils.isEmpty(binderConfigurationProperties.getConsumerConfiguration())) {
		props.putAll(binderConfigurationProperties.getConsumerConfiguration());
	}
	if (!props.containsKey(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG)) {
		props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
				this.binderConfigurationProperties.getKafkaConnectionString());
	}
	props.put("group.id", group);
	return new DefaultKafkaConsumerFactory<>(props);
}
 
Developer: spring-cloud, Project: spring-cloud-stream-binder-kafka, Lines: 18, Source: KafkaBinderMetrics.java
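
To see where a factory like the one in Example 15 leads, here is a hedged sketch of creating and using a consumer from a spring-kafka DefaultKafkaConsumerFactory; the broker address and group id are placeholders, not values from KafkaBinderMetrics.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

public class ConsumerFactorySketch {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "metrics-group");           // placeholder group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);

        ConsumerFactory<byte[], byte[]> factory = new DefaultKafkaConsumerFactory<>(props);
        try (Consumer<byte[], byte[]> consumer = factory.createConsumer()) {
            // e.g. inspect cluster state, as KafkaBinderMetrics does for lag metrics
            System.out.println(consumer.listTopics().keySet());
        }
    }
}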


Note: The org.apache.kafka.common.serialization.ByteArrayDeserializer class examples in this article were collected from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright in the source code remains with the original authors. Consult each project's license before distributing or reusing the code, and do not republish without permission.