

Java Deserializer.deserialize Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.common.serialization.Deserializer.deserialize. If you are unsure how to use Deserializer.deserialize in practice, the curated examples below should help; you can also explore further usage examples of the enclosing org.apache.kafka.common.serialization.Deserializer class.


The following presents 11 code examples of the Deserializer.deserialize method, sorted by popularity by default.
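As a quick orientation before the examples, here is a minimal serialize/deserialize round-trip sketch using the StringSerializer/StringDeserializer pair that ships with kafka-clients (the topic name and payload are placeholders):

import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class StringRoundTrip {
  public static void main(String[] args) {
    Serializer<String> serializer = new StringSerializer();
    Deserializer<String> deserializer = new StringDeserializer();

    // Both methods receive the topic name so implementations can apply
    // per-topic logic; the built-in String implementations ignore it.
    byte[] bytes = serializer.serialize("demo-topic", "hello");
    String roundTripped = deserializer.deserialize("demo-topic", bytes);

    System.out.println(roundTripped); // prints "hello"
  }
}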

Example 1: testSerde

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
@Test
public void testSerde() {
  Serializer<String> stringSerializer = new StringSerializer();
  Deserializer<String> stringDeserializer = new StringDeserializer();
  Serializer<LargeMessageSegment> segmentSerializer = new DefaultSegmentSerializer();
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();

  String s = LiKafkaClientsTestUtils.getRandomString(100);
  assertEquals(s.length(), 100);
  byte[] stringBytes = stringSerializer.serialize("topic", s);
  assertEquals(stringBytes.length, 100);
  LargeMessageSegment segment =
      new LargeMessageSegment(LiKafkaClientsUtils.randomUUID(), 0, 2, stringBytes.length, ByteBuffer.wrap(stringBytes));
  // String bytes + segment header
  byte[] serializedSegment = segmentSerializer.serialize("topic", segment);
  assertEquals(serializedSegment.length, 1 + stringBytes.length + LargeMessageSegment.SEGMENT_INFO_OVERHEAD + 4);

  LargeMessageSegment deserializedSegment = segmentDeserializer.deserialize("topic", serializedSegment);
  assertEquals(deserializedSegment.messageId, segment.messageId);
  assertEquals(deserializedSegment.messageSizeInBytes, segment.messageSizeInBytes);
  assertEquals(deserializedSegment.numberOfSegments, segment.numberOfSegments);
  assertEquals(deserializedSegment.sequenceNumber, segment.sequenceNumber);
  assertEquals(deserializedSegment.payload.limit(), 100);
  String deserializedString = stringDeserializer.deserialize("topic", deserializedSegment.payloadArray());
  assertEquals(deserializedString.length(), s.length());
}
 
Developer: linkedin, Project: li-apache-kafka-clients, Lines: 27, Source: SerializerDeserializerTest.java

Example 2: decodePayload

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
public static <K, V> Payload<K, V> decodePayload(Deserializer<V> valueDeserializer, ConsumerRecord<K, byte[]> originConsumerRecord) {
    TracingHeader tracingHeader = null;
    ConsumerRecord<K, V> dataRecord = null;
    boolean sampled = false;
    byte[] data = originConsumerRecord.value();
    byte[] vData = null;
    if (data == null || data.length <= HEADER_LENGTH) {
        vData = data;
    } else {
        ByteBuffer byteBuf = ByteBuffer.wrap(data);
        short magic = byteBuf.getShort(0);
        short tpLen = byteBuf.getShort(2);
        if (magic == MAGIC && tpLen == TracingHeader.LENGTH) {
            byte[] tpBytes = new byte[tpLen];
            System.arraycopy(byteBuf.array(), HEADER_LENGTH, tpBytes, 0, tpLen);
            tracingHeader = TracingHeader.fromBytes(tpBytes);
            sampled = true;
            int dataOffset = tpLen + HEADER_LENGTH;
            vData = new byte[byteBuf.array().length - dataOffset];
            System.arraycopy(byteBuf.array(), dataOffset, vData, 0, vData.length);
        } else {
            vData = data;
        }
    }
    dataRecord = new ConsumerRecord<>(originConsumerRecord.topic(),
            originConsumerRecord.partition(), originConsumerRecord.offset(),
            originConsumerRecord.key(), valueDeserializer.deserialize(originConsumerRecord.topic(), vData));
    return new Payload<>(tracingHeader, dataRecord, sampled);
}
 
Developer: YanXs, Project: nighthawk, Lines: 30, Source: PayloadCodec.java
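decodePayload reads a fixed 4-byte header (a 2-byte magic marker followed by a 2-byte tracing-header length) before the value bytes. For clarity, a hypothetical encoder for the same framing might look like the sketch below; MAGIC and HEADER_LENGTH are assumed to be the codec's own constants, and TracingHeader.toBytes() is an assumed inverse of TracingHeader.fromBytes(), not verified project API:

import java.nio.ByteBuffer;

// A sketch only: frames the value as [magic][header length][tracing header][value].
// MAGIC, HEADER_LENGTH (4 bytes = two shorts) and TracingHeader.toBytes() are
// assumptions inferred from the decode path above.
public static byte[] encodePayload(TracingHeader tracingHeader, byte[] valueData) {
    byte[] tpBytes = tracingHeader.toBytes();
    ByteBuffer buf = ByteBuffer.allocate(HEADER_LENGTH + tpBytes.length + valueData.length);
    buf.putShort(MAGIC);                  // 2-byte magic marker
    buf.putShort((short) tpBytes.length); // 2-byte tracing-header length
    buf.put(tpBytes);                     // tracing header
    buf.put(valueData);                   // serialized value bytes
    return buf.array();
}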

Example 3: testWithDeserializer

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
/**
 * Test creating a Deserializer.
 */
@Test
public void testWithDeserializer() throws LoaderException {
    final String jarFilename = "testPlugins.jar";
    final String classPath = "examples.deserializer.ExampleDeserializer";

    // Find jar on filesystem.
    final URL jar = getClass().getClassLoader().getResource("testDeserializer/" + jarFilename);
    final String jarPath = new File(jar.getFile()).getParent();

    // Create factory
    final PluginFactory<Deserializer> factory = new PluginFactory<>(jarPath, Deserializer.class);
    final Path pathForJar = factory.getPathForJar(jarFilename);

    // Validate path is correct
    assertEquals("Has expected Path", jar.getPath(), pathForJar.toString());

    // Get class instance
    final Class<? extends Deserializer> pluginFilterClass = factory.getPluginClass(jarFilename, classPath);

    // Validate
    assertNotNull(pluginFilterClass);
    assertEquals("Has expected name", classPath, pluginFilterClass.getName());
    assertTrue("Validate came from correct class loader", pluginFilterClass.getClassLoader() instanceof PluginClassLoader);

    // Create deserializer instance
    final Deserializer deserializer = factory.getPlugin(jarFilename, classPath);
    assertNotNull(deserializer);
    assertEquals("Has correct name", classPath, deserializer.getClass().getName());

    // Call method on interface
    final String value = "MyValue";
    final String result = (String) deserializer.deserialize("MyTopic", value.getBytes(StandardCharsets.UTF_8));
}
 
Developer: SourceLabOrg, Project: kafka-webview, Lines: 37, Source: PluginFactoryTest.java
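The examples.deserializer.ExampleDeserializer loaded above lives inside the test jar, so its body is not shown here. A minimal Deserializer implementation satisfying the same interface might look like the following sketch (the UTF-8 String payload is an assumption, not the actual ExampleDeserializer):

import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;

// A minimal sketch of a pluggable String deserializer.
public class SketchDeserializer implements Deserializer<String> {
    @Override
    public void configure(final Map<String, ?> configs, final boolean isKey) {
        // no configuration needed for this sketch
    }

    @Override
    public String deserialize(final String topic, final byte[] data) {
        return data == null ? null : new String(data, StandardCharsets.UTF_8);
    }

    @Override
    public void close() {
        // nothing to release
    }
}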

Example 4: readOutput

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
/**
 * Read the next record from the given topic. These records were output by the topology during the previous calls to
 * {@link #process(String, byte[], byte[])}.
 *
 * @param topic the name of the topic
 * @param keyDeserializer the deserializer for the key type
 * @param valueDeserializer the deserializer for the value type
 * @return the next record on that topic, or null if there is no record available
 */
public <K, V> ProducerRecord<K, V> readOutput(final String topic,
                                              final Deserializer<K> keyDeserializer,
                                              final Deserializer<V> valueDeserializer) {
    final ProducerRecord<byte[], byte[]> record = readOutput(topic);
    if (record == null) {
        return null;
    }
    final K key = keyDeserializer.deserialize(record.topic(), record.key());
    final V value = valueDeserializer.deserialize(record.topic(), record.value());
    return new ProducerRecord<>(record.topic(), record.partition(), record.timestamp(), key, value);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: ProcessorTopologyTestDriver.java
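A hypothetical call site for readOutput, assuming an already-built ProcessorTopologyTestDriver and an output topic carrying String keys and Long values (the driver variable and topic name are placeholders):

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Sketch only: "driver" and "output-topic" are placeholders.
private void printNextOutput(final ProcessorTopologyTestDriver driver) {
    final ProducerRecord<String, Long> output =
        driver.readOutput("output-topic", new StringDeserializer(), new LongDeserializer());
    if (output == null) {
        System.out.println("no record available");
    } else {
        System.out.println(output.key() + " -> " + output.value());
    }
}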

Example 5: testSerde

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
@Test
public void testSerde() {
  Serializer<String> stringSerializer = new StringSerializer();
  Deserializer<String> stringDeserializer = new StringDeserializer();
  Serializer<LargeMessageSegment> segmentSerializer = new DefaultSegmentSerializer();
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();

  String s = TestUtils.getRandomString(100);
  assertEquals(s.length(), 100);
  byte[] stringBytes = stringSerializer.serialize("topic", s);
  assertEquals(stringBytes.length, 100);
  LargeMessageSegment segment =
      new LargeMessageSegment(UUID.randomUUID(), 0, 2, stringBytes.length, ByteBuffer.wrap(stringBytes));
  // String bytes + segment header
  byte[] serializedSegment = segmentSerializer.serialize("topic", segment);
  assertEquals(serializedSegment.length, 1 + stringBytes.length + LargeMessageSegment.SEGMENT_INFO_OVERHEAD + 4);

  LargeMessageSegment deserializedSegment = segmentDeserializer.deserialize("topic", serializedSegment);
  assertEquals(deserializedSegment.messageId, segment.messageId);
  assertEquals(deserializedSegment.messageSizeInBytes, segment.messageSizeInBytes);
  assertEquals(deserializedSegment.numberOfSegments, segment.numberOfSegments);
  assertEquals(deserializedSegment.sequenceNumber, segment.sequenceNumber);
  assertEquals(deserializedSegment.payload.limit(), 100);
  String deserializedString = stringDeserializer.deserialize("topic", deserializedSegment.payloadArray());
  assertEquals(deserializedString.length(), s.length());

}
 
Developer: becketqin, Project: likafka-clients, Lines: 28, Source: SerializerDeserializerTest.java

Example 6: testSplit

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
@Test
public void testSplit() {
  TopicPartition tp = new TopicPartition("topic", 0);
  UUID id = UUID.randomUUID();
  String message = TestUtils.getRandomString(1000);
  Serializer<String> stringSerializer = new StringSerializer();
  Deserializer<String> stringDeserializer = new StringDeserializer();
  Serializer<LargeMessageSegment> segmentSerializer = new DefaultSegmentSerializer();
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();
  MessageSplitter splitter = new MessageSplitterImpl(200, segmentSerializer);

  byte[] serializedMessage = stringSerializer.serialize("topic", message);
  List<ProducerRecord<byte[], byte[]>> records = splitter.split("topic", id, serializedMessage);
  assertEquals(records.size(), 5, "Should have 5 segments.");
  MessageAssembler assembler = new MessageAssemblerImpl(10000, 10000, true, segmentDeserializer);
  String assembledMessage = null;
  UUID uuid = null;
  for (int i = 0; i < records.size(); i++) {
    ProducerRecord<byte[], byte[]> record = records.get(i);
    LargeMessageSegment segment = segmentDeserializer.deserialize("topic", record.value());
    if (uuid == null) {
      uuid = segment.messageId;
    } else {
      assertEquals(segment.messageId, uuid, "messageId should match.");
    }
    assertEquals(segment.numberOfSegments, 5, "segment number should be 5");
    assertEquals(segment.messageSizeInBytes, serializedMessage.length, "message size should be the same");
    assertEquals(segment.sequenceNumber, i, "SequenceNumber should match");

    assembledMessage = stringDeserializer.deserialize(null, assembler.assemble(tp, i, record.value()).messageBytes());
  }
  assertEquals(assembledMessage, message, "messages should match.");
}
 
Developer: becketqin, Project: likafka-clients, Lines: 34, Source: MessageSplitterTest.java

Example 7: testSplit

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
@Test
public void testSplit() {
  TopicPartition tp = new TopicPartition("topic", 0);
  UUID id = LiKafkaClientsUtils.randomUUID();
  String message = LiKafkaClientsTestUtils.getRandomString(1000);
  Serializer<String> stringSerializer = new StringSerializer();
  Deserializer<String> stringDeserializer = new StringDeserializer();
  Serializer<LargeMessageSegment> segmentSerializer = new DefaultSegmentSerializer();
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();
  MessageSplitter splitter = new MessageSplitterImpl(200, segmentSerializer, new UUIDFactory.DefaultUUIDFactory<>());

  byte[] serializedMessage = stringSerializer.serialize("topic", message);
  List<ProducerRecord<byte[], byte[]>> records = splitter.split("topic", id, serializedMessage);
  assertEquals(records.size(), 5, "Should have 5 segments.");
  MessageAssembler assembler = new MessageAssemblerImpl(10000, 10000, true, segmentDeserializer);
  String assembledMessage = null;
  UUID uuid = null;
  for (int i = 0; i < records.size(); i++) {
    ProducerRecord<byte[], byte[]> record = records.get(i);
    LargeMessageSegment segment = segmentDeserializer.deserialize("topic", record.value());
    if (uuid == null) {
      uuid = segment.messageId;
    } else {
      assertEquals(segment.messageId, uuid, "messageId should match.");
    }
    assertEquals(segment.numberOfSegments, 5, "segment number should be 5");
    assertEquals(segment.messageSizeInBytes, serializedMessage.length, "message size should be the same");
    assertEquals(segment.sequenceNumber, i, "SequenceNumber should match");

    assembledMessage = stringDeserializer.deserialize(null, assembler.assemble(tp, i, record.value()).messageBytes());
  }
  assertEquals(assembledMessage, message, "messages should match.");
}
 
Developer: linkedin, Project: li-apache-kafka-clients, Lines: 34, Source: MessageSplitterTest.java

Example 8: onReceive

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
@Override
public void onReceive(Object message) throws Exception {
  if (message.equals("Start")) {
    Logger.info("Starting Thread: " + _threadId + " for topic: " + _topic);
    final ConsumerIterator<byte[], byte[]> it = _kafkaStream.iterator();
    final Deserializer<Object> avroDeserializer = new KafkaAvroDeserializer(_schemaRegistryRestfulClient);

    while (it.hasNext()) { // block for next input
      _receivedRecordCount++;

      try {
        MessageAndMetadata<byte[], byte[]> msg = it.next();
        GenericData.Record kafkaMsgRecord = (GenericData.Record) avroDeserializer.deserialize(_topic, msg.message());
        // Logger.debug("Kafka worker ThreadId " + _threadId + " Topic " + _topic + " record: " + rec);

        // invoke processor
        final AbstractRecord record = (AbstractRecord) _processorMethod.invoke(
            _processorClass, kafkaMsgRecord, _topic);

        // save record to database
        if (record != null) {
          _dbWriter.append(record);
          // _dbWriter.close();
          _dbWriter.insert();
          _processedRecordCount++;
        }
      } catch (InvocationTargetException ite) {
        Logger.error("Processing topic " + _topic + " record error: " + ite.getCause()
            + " - " + ite.getTargetException());
      } catch (SQLException | IOException e) {
        Logger.error("Error while inserting event record: ", e);
      } catch (Throwable ex) {
        Logger.error("Error in notify order. ", ex);
      }

      if (_receivedRecordCount % 1000 == 0) {
        Logger.debug(_topic + " received " + _receivedRecordCount + " processed " + _processedRecordCount);
      }
    }
    Logger.info("Shutting down Thread: " + _threadId + " for topic: " + _topic);
  } else {
    unhandled(message);
  }
}
 
Developer: thomas-young-2013, Project: wherehowsX, Lines: 45, Source: KafkaConsumerWorker.java
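The KafkaAvroDeserializer above is constructed directly with a schema-registry client; when configured through properties instead, the setup typically looks like this sketch (Confluent's schema-registry serializers are assumed to be on the classpath, and the registry URL and topic are placeholders):

import java.util.Collections;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.common.serialization.Deserializer;

// A minimal sketch, not the wherehowsX worker's actual configuration.
private GenericRecord deserializeAvro(final byte[] messageBytes) {
  final Deserializer<Object> avroDeserializer = new KafkaAvroDeserializer();
  avroDeserializer.configure(
      Collections.singletonMap("schema.registry.url", "http://localhost:8081"),
      false); // false = configuring a value (not key) deserializer
  return (GenericRecord) avroDeserializer.deserialize("my-topic", messageBytes);
}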

Example 9: extractKey

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
private static <K> K extractKey(final byte[] binaryKey, final Deserializer<K> deserializer, final String topic) {
    return deserializer.deserialize(topic, extractKeyBytes(binaryKey));
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 4, Source: SessionKeySerde.java

Example 10: maybeDeserialize

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
private Object maybeDeserialize(final Object keyOrValue, final Deserializer<?> deserializer) {
    if (keyOrValue instanceof byte[]) {
        return deserializer.deserialize(this.context.topic(), (byte[]) keyOrValue);
    }
    return keyOrValue;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 7, Source: KStreamPrint.java

Example 11: testDeserializationException

import org.apache.kafka.common.serialization.Deserializer; // import the package/class this method depends on
@Test
public void testDeserializationException() {
  TopicPartition tp0 = new TopicPartition("topic", 0);
  TopicPartition tp1 = new TopicPartition("topic", 1);
  TopicPartition tp2 = new TopicPartition("topic", 2);
  Deserializer<String> stringDeserializer = new StringDeserializer();
  Deserializer<String> errorThrowingDeserializer = new Deserializer<String>() {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {

    }

    @Override
    public String deserialize(String topic, byte[] data) {
      String s = stringDeserializer.deserialize(topic, data);
      if (s.equals("ErrorBytes")) {
        throw new SkippableException();
      }
      return s;
    }

    @Override
    public void close() {

    }
  };
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();
  MessageAssembler assembler = new MessageAssemblerImpl(5000, 100, false, segmentDeserializer);
  DeliveredMessageOffsetTracker deliveredMessageOffsetTracker = new DeliveredMessageOffsetTracker(4);
  ConsumerRecordsProcessor processor = new ConsumerRecordsProcessor<>(assembler, stringDeserializer, errorThrowingDeserializer,
                                                                        deliveredMessageOffsetTracker, null);

  StringSerializer stringSerializer = new StringSerializer();
  ConsumerRecord<byte[], byte[]> consumerRecord0 = new ConsumerRecord<>("topic", 0, 0, null,
                                                                        stringSerializer.serialize("topic", "value"));
  ConsumerRecord<byte[], byte[]> consumerRecord1 = new ConsumerRecord<>("topic", 0, 1, null,
                                                                        stringSerializer.serialize("topic", "ErrorBytes"));
  ConsumerRecord<byte[], byte[]> consumerRecord2 = new ConsumerRecord<>("topic", 0, 2, null,
                                                                        stringSerializer.serialize("topic", "value"));

  ConsumerRecord<byte[], byte[]> consumerRecord3 = new ConsumerRecord<>("topic", 1, 0, null,
                                                                        stringSerializer.serialize("topic", "ErrorBytes"));
  ConsumerRecord<byte[], byte[]> consumerRecord4 = new ConsumerRecord<>("topic", 1, 1, null,
                                                                        stringSerializer.serialize("topic", "value"));

  ConsumerRecord<byte[], byte[]> consumerRecord5 = new ConsumerRecord<>("topic", 2, 0, null,
                                                                        stringSerializer.serialize("topic", "value"));

  Map<TopicPartition, List<ConsumerRecord<byte[], byte[]>>> recordMap = new HashMap<>();
  recordMap.put(tp0, Arrays.asList(consumerRecord0, consumerRecord1, consumerRecord2));
  recordMap.put(tp1, Arrays.asList(consumerRecord3, consumerRecord4));
  recordMap.put(tp2, Collections.singletonList(consumerRecord5));

  ConsumerRecords<byte[], byte[]> consumerRecords = new ConsumerRecords<>(recordMap);

  ConsumerRecordsProcessResult result = processor.process(consumerRecords);
  assertEquals(result.consumerRecords().count(), 4);
  assertEquals(result.consumerRecords().records(tp0).size(), 2);
  assertEquals(result.consumerRecords().records(tp1).size(), 1);
  assertEquals(result.consumerRecords().records(tp2).size(), 1);
  assertTrue(result.resumeOffsets().isEmpty());
  assertNull(result.exception());
}
 
Developer: linkedin, Project: li-apache-kafka-clients, Lines: 64, Source: ConsumerRecordsProcessorTest.java


Note: The org.apache.kafka.common.serialization.Deserializer.deserialize examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by various developers, and copyright remains with the original authors; follow each project's license when redistributing or reusing them, and do not republish without permission.