

Java Deserializer.deserialize Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.common.serialization.Deserializer.deserialize, gathered from open-source projects. If you are unsure how to use Deserializer.deserialize, or want to see it in real-world context, the curated examples below may help. You can also explore other usage examples of the enclosing class, org.apache.kafka.common.serialization.Deserializer.


The following presents 11 code examples of the Deserializer.deserialize method, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Java code examples.
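Before diving into the examples, a minimal sketch of the method's contract may help. Deserializer.deserialize(String topic, byte[] data) turns a record's raw bytes back into a typed value; the topic name is passed in so topic-aware deserializers can vary their behavior. The snippet below uses Kafka's built-in StringDeserializer; the topic name and sample bytes are illustrative placeholders, not taken from any of the examples that follow.

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DeserializeSketch {
  public static void main(String[] args) {
    Deserializer<String> deserializer = new StringDeserializer();
    // Bytes as they would arrive in a record's key or value.
    byte[] raw = "hello".getBytes(StandardCharsets.UTF_8);
    // "my-topic" is a placeholder; StringDeserializer ignores the topic argument.
    String value = deserializer.deserialize("my-topic", raw);
    System.out.println(value); // prints "hello"
    deserializer.close();
  }
}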

Example 1: testSerde

import org.apache.kafka.common.serialization.Deserializer; //import the package/class this method depends on
@Test
public void testSerde() {
  Serializer<String> stringSerializer = new StringSerializer();
  Deserializer<String> stringDeserializer = new StringDeserializer();
  Serializer<LargeMessageSegment> segmentSerializer = new DefaultSegmentSerializer();
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();

  String s = LiKafkaClientsTestUtils.getRandomString(100);
  assertEquals(s.length(), 100);
  byte[] stringBytes = stringSerializer.serialize("topic", s);
  assertEquals(stringBytes.length, 100);
  LargeMessageSegment segment =
      new LargeMessageSegment(LiKafkaClientsUtils.randomUUID(), 0, 2, stringBytes.length, ByteBuffer.wrap(stringBytes));
  // String bytes + segment header
  byte[] serializedSegment = segmentSerializer.serialize("topic", segment);
  assertEquals(serializedSegment.length, 1 + stringBytes.length + LargeMessageSegment.SEGMENT_INFO_OVERHEAD + 4);

  LargeMessageSegment deserializedSegment = segmentDeserializer.deserialize("topic", serializedSegment);
  assertEquals(deserializedSegment.messageId, segment.messageId);
  assertEquals(deserializedSegment.messageSizeInBytes, segment.messageSizeInBytes);
  assertEquals(deserializedSegment.numberOfSegments, segment.numberOfSegments);
  assertEquals(deserializedSegment.sequenceNumber, segment.sequenceNumber);
  assertEquals(deserializedSegment.payload.limit(), 100);
  String deserializedString = stringDeserializer.deserialize("topic", deserializedSegment.payloadArray());
  assertEquals(deserializedString.length(), s.length());
}
 
Developer ID: linkedin, Project: li-apache-kafka-clients, Lines: 27, Source: SerializerDeserializerTest.java

Example 2: decodePayload

import org.apache.kafka.common.serialization.Deserializer; //import the package/class this method depends on
public static <K, V> Payload<K, V> decodePayload(Deserializer<V> valueDeserializer, ConsumerRecord<K, byte[]> originConsumerRecord) {
    TracingHeader tracingHeader = null;
    ConsumerRecord<K, V> dataRecord = null;
    boolean sampled = false;
    byte[] data = originConsumerRecord.value();
    byte[] vData = null;
    if (data.length <= HEADER_LENGTH) {
        vData = data;
    } else {
        ByteBuffer byteBuf = ByteBuffer.wrap(data);
        short magic = byteBuf.getShort(0);
        short tpLen = byteBuf.getShort(2);
        if (magic == MAGIC && tpLen == TracingHeader.LENGTH) {
            byte[] tpBytes = new byte[tpLen];
            System.arraycopy(byteBuf.array(), HEADER_LENGTH, tpBytes, 0, tpLen);
            tracingHeader = TracingHeader.fromBytes(tpBytes);
            sampled = true;
            int dataOffset = tpLen + HEADER_LENGTH;
            vData = new byte[byteBuf.array().length - dataOffset];
            System.arraycopy(byteBuf.array(), dataOffset, vData, 0, vData.length);
        } else {
            vData = data;
        }
    }
    dataRecord = new ConsumerRecord<>(originConsumerRecord.topic(),
            originConsumerRecord.partition(), originConsumerRecord.offset(),
            originConsumerRecord.key(), valueDeserializer.deserialize(originConsumerRecord.topic(), vData));
    return new Payload<>(tracingHeader, dataRecord, sampled);
}
 
Developer ID: YanXs, Project: nighthawk, Lines: 30, Source: PayloadCodec.java
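The framing that decodePayload parses is: a 2-byte magic number at offset 0, a 2-byte tracing-header length at offset 2, the tracing-header bytes, and then the value bytes. As a rough guide to that layout, here is a minimal sketch of the matching encoder; the MAGIC and HEADER_LENGTH values and the raw header bytes are assumptions standing in for the nighthawk project's actual constants and TracingHeader serialization.

import java.nio.ByteBuffer;

public class PayloadFramingSketch {
  // Hypothetical values; the real constants live in the nighthawk project.
  private static final short MAGIC = (short) 0xCAFE;
  private static final int HEADER_LENGTH = 4; // 2 bytes magic + 2 bytes header length

  // Prepends the tracing header to the serialized value, mirroring what decodePayload strips off.
  public static byte[] encodePayload(byte[] tracingHeaderBytes, byte[] valueBytes) {
    ByteBuffer buf = ByteBuffer.allocate(HEADER_LENGTH + tracingHeaderBytes.length + valueBytes.length);
    buf.putShort(MAGIC);                              // offset 0: magic number
    buf.putShort((short) tracingHeaderBytes.length);  // offset 2: tracing-header length
    buf.put(tracingHeaderBytes);                      // the tracing header itself
    buf.put(valueBytes);                              // the actual value bytes
    return buf.array();
  }
}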

Example 3: testWithDeserializer

import org.apache.kafka.common.serialization.Deserializer; //import the package/class this method depends on
/**
 * Test creating a Deserializer.
 */
@Test
public void testWithDeserializer() throws LoaderException {
    final String jarFilename = "testPlugins.jar";
    final String classPath = "examples.deserializer.ExampleDeserializer";

    // Find jar on filesystem.
    final URL jar = getClass().getClassLoader().getResource("testDeserializer/" + jarFilename);
    final String jarPath = new File(jar.getFile()).getParent();

    // Create factory
    final PluginFactory<Deserializer> factory = new PluginFactory<>(jarPath, Deserializer.class);
    final Path pathForJar = factory.getPathForJar(jarFilename);

    // Validate path is correct
    assertEquals("Has expected Path", jar.getPath(), pathForJar.toString());

    // Get class instance
    final Class<? extends Deserializer> pluginFilterClass = factory.getPluginClass(jarFilename, classPath);

    // Validate
    assertNotNull(pluginFilterClass);
    assertEquals("Has expected name", classPath, pluginFilterClass.getName());
    assertTrue("Validate came from correct class loader", pluginFilterClass.getClassLoader() instanceof PluginClassLoader);

    // Create deserializer instance
    final Deserializer deserializer = factory.getPlugin(jarFilename, classPath);
    assertNotNull(deserializer);
    assertEquals("Has correct name", classPath, deserializer.getClass().getName());

    // Call method on interface
    final String value = "MyValue";
    final String result = (String) deserializer.deserialize("MyTopic", value.getBytes(StandardCharsets.UTF_8));
}
 
Developer ID: SourceLabOrg, Project: kafka-webview, Lines: 37, Source: PluginFactoryTest.java
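The ExampleDeserializer this test loads ships inside testPlugins.jar, and its source is not shown here. For orientation, a minimal Deserializer implementation of the same shape might look like the sketch below; the real plugin's behavior may differ.

import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;

// A hypothetical stand-in for examples.deserializer.ExampleDeserializer.
public class ExampleDeserializerSketch implements Deserializer<String> {
  @Override
  public void configure(Map<String, ?> configs, boolean isKey) {
    // No configuration needed for this sketch.
  }

  @Override
  public String deserialize(String topic, byte[] data) {
    return data == null ? null : new String(data, StandardCharsets.UTF_8);
  }

  @Override
  public void close() {
    // Nothing to release.
  }
}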

Example 4: readOutput

import org.apache.kafka.common.serialization.Deserializer; //import the package/class this method depends on
/**
 * Read the next record from the given topic. These records were output by the topology during the previous calls to
 * {@link #process(String, byte[], byte[])}.
 *
 * @param topic the name of the topic
 * @param keyDeserializer the deserializer for the key type
 * @param valueDeserializer the deserializer for the value type
 * @return the next record on that topic, or null if there is no record available
 */
public <K, V> ProducerRecord<K, V> readOutput(final String topic,
                                              final Deserializer<K> keyDeserializer,
                                              final Deserializer<V> valueDeserializer) {
    final ProducerRecord<byte[], byte[]> record = readOutput(topic);
    if (record == null) {
        return null;
    }
    final K key = keyDeserializer.deserialize(record.topic(), record.key());
    final V value = valueDeserializer.deserialize(record.topic(), record.value());
    return new ProducerRecord<>(record.topic(), record.partition(), record.timestamp(), key, value);
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: ProcessorTopologyTestDriver.java
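A hedged usage sketch for the method above: given a ProcessorTopologyTestDriver that has already been configured and fed some input (construction omitted), output records can be drained with concrete deserializers. The topic name "output-topic" is a placeholder, and ProcessorTopologyTestDriver lives in Kafka's test sources rather than the public API.

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.test.ProcessorTopologyTestDriver;

public class ReadOutputSketch {
  // Reads records until readOutput returns null, i.e. until no output remains.
  static void drainOutput(ProcessorTopologyTestDriver driver) {
    ProducerRecord<String, String> record;
    while ((record = driver.readOutput("output-topic",
        new StringDeserializer(), new StringDeserializer())) != null) {
      System.out.println(record.key() + " => " + record.value());
    }
  }
}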

Example 5: testSerde

import org.apache.kafka.common.serialization.Deserializer; //import the package/class this method depends on
@Test
public void testSerde() {
  Serializer<String> stringSerializer = new StringSerializer();
  Deserializer<String> stringDeserializer = new StringDeserializer();
  Serializer<LargeMessageSegment> segmentSerializer = new DefaultSegmentSerializer();
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();

  String s = TestUtils.getRandomString(100);
  assertEquals(s.length(), 100);
  byte[] stringBytes = stringSerializer.serialize("topic", s);
  assertEquals(stringBytes.length, 100);
  LargeMessageSegment segment =
      new LargeMessageSegment(UUID.randomUUID(), 0, 2, stringBytes.length, ByteBuffer.wrap(stringBytes));
  // String bytes + segment header
  byte[] serializedSegment = segmentSerializer.serialize("topic", segment);
  assertEquals(serializedSegment.length, 1 + stringBytes.length + LargeMessageSegment.SEGMENT_INFO_OVERHEAD + 4);

  LargeMessageSegment deserializedSegment = segmentDeserializer.deserialize("topic", serializedSegment);
  assertEquals(deserializedSegment.messageId, segment.messageId);
  assertEquals(deserializedSegment.messageSizeInBytes, segment.messageSizeInBytes);
  assertEquals(deserializedSegment.numberOfSegments, segment.numberOfSegments);
  assertEquals(deserializedSegment.sequenceNumber, segment.sequenceNumber);
  assertEquals(deserializedSegment.payload.limit(), 100);
  String deserializedString = stringDeserializer.deserialize("topic", deserializedSegment.payloadArray());
  assertEquals(deserializedString.length(), s.length());

}
 
Developer ID: becketqin, Project: likafka-clients, Lines: 28, Source: SerializerDeserializerTest.java

Example 6: testSplit

import org.apache.kafka.common.serialization.Deserializer; //import the package/class this method depends on
@Test
public void testSplit() {
  TopicPartition tp = new TopicPartition("topic", 0);
  UUID id = UUID.randomUUID();
  String message = TestUtils.getRandomString(1000);
  Serializer<String> stringSerializer = new StringSerializer();
  Deserializer<String> stringDeserializer = new StringDeserializer();
  Serializer<LargeMessageSegment> segmentSerializer = new DefaultSegmentSerializer();
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();
  MessageSplitter splitter = new MessageSplitterImpl(200, segmentSerializer);

  byte[] serializedMessage = stringSerializer.serialize("topic", message);
  List<ProducerRecord<byte[], byte[]>> records = splitter.split("topic", id, serializedMessage);
  assertEquals(records.size(), 5, "Should have 5 segments.");
  MessageAssembler assembler = new MessageAssemblerImpl(10000, 10000, true, segmentDeserializer);
  String assembledMessage = null;
  UUID uuid = null;
  for (int i = 0; i < records.size(); i++) {
    ProducerRecord<byte[], byte[]> record = records.get(i);
    LargeMessageSegment segment = segmentDeserializer.deserialize("topic", record.value());
    if (uuid == null) {
      uuid = segment.messageId;
    } else {
      assertEquals(segment.messageId, uuid, "messageId should match.");
    }
    assertEquals(segment.numberOfSegments, 5, "segment number should be 5");
    assertEquals(segment.messageSizeInBytes, serializedMessage.length, "message size should be the same");
    assertEquals(segment.sequenceNumber, i, "SequenceNumber should match");

    assembledMessage = stringDeserializer.deserialize(null, assembler.assemble(tp, i, record.value()).messageBytes());
  }
  assertEquals(assembledMessage, message, "messages should match.");
}
 
Developer ID: becketqin, Project: likafka-clients, Lines: 34, Source: MessageSplitterTest.java

Example 7: testSplit

import org.apache.kafka.common.serialization.Deserializer; //import the package/class this method depends on
@Test
public void testSplit() {
  TopicPartition tp = new TopicPartition("topic", 0);
  UUID id = LiKafkaClientsUtils.randomUUID();
  String message = LiKafkaClientsTestUtils.getRandomString(1000);
  Serializer<String> stringSerializer = new StringSerializer();
  Deserializer<String> stringDeserializer = new StringDeserializer();
  Serializer<LargeMessageSegment> segmentSerializer = new DefaultSegmentSerializer();
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();
  MessageSplitter splitter = new MessageSplitterImpl(200, segmentSerializer, new UUIDFactory.DefaultUUIDFactory<>());

  byte[] serializedMessage = stringSerializer.serialize("topic", message);
  List<ProducerRecord<byte[], byte[]>> records = splitter.split("topic", id, serializedMessage);
  assertEquals(records.size(), 5, "Should have 5 segments.");
  MessageAssembler assembler = new MessageAssemblerImpl(10000, 10000, true, segmentDeserializer);
  String assembledMessage = null;
  UUID uuid = null;
  for (int i = 0; i < records.size(); i++) {
    ProducerRecord<byte[], byte[]> record = records.get(i);
    LargeMessageSegment segment = segmentDeserializer.deserialize("topic", record.value());
    if (uuid == null) {
      uuid = segment.messageId;
    } else {
      assertEquals(segment.messageId, uuid, "messageId should match.");
    }
    assertEquals(segment.numberOfSegments, 5, "segment number should be 5");
    assertEquals(segment.messageSizeInBytes, serializedMessage.length, "message size should be the same");
    assertEquals(segment.sequenceNumber, i, "SequenceNumber should match");

    assembledMessage = stringDeserializer.deserialize(null, assembler.assemble(tp, i, record.value()).messageBytes());
  }
  assertEquals(assembledMessage, message, "messages should match.");
}
 
Developer ID: linkedin, Project: li-apache-kafka-clients, Lines: 34, Source: MessageSplitterTest.java

Example 8: onReceive

import org.apache.kafka.common.serialization.Deserializer; //import the package/class this method depends on
@Override
public void onReceive(Object message) throws Exception {
  if (message.equals("Start")) {
    Logger.info("Starting Thread: " + _threadId + " for topic: " + _topic);
    final ConsumerIterator<byte[], byte[]> it = _kafkaStream.iterator();
    final Deserializer<Object> avroDeserializer = new KafkaAvroDeserializer(_schemaRegistryRestfulClient);

    while (it.hasNext()) { // block for next input
      _receivedRecordCount++;

      try {
        MessageAndMetadata<byte[], byte[]> msg = it.next();
        GenericData.Record kafkaMsgRecord = (GenericData.Record) avroDeserializer.deserialize(_topic, msg.message());
        // Logger.debug("Kafka worker ThreadId " + _threadId + " Topic " + _topic + " record: " + rec);

        // invoke processor
        final AbstractRecord record = (AbstractRecord) _processorMethod.invoke(
            _processorClass, kafkaMsgRecord, _topic);

        // save record to database
        if (record != null) {
          _dbWriter.append(record);
          // _dbWriter.close();
          _dbWriter.insert();
          _processedRecordCount++;
        }
      } catch (InvocationTargetException ite) {
        Logger.error("Processing topic " + _topic + " record error: " + ite.getCause()
            + " - " + ite.getTargetException());
      } catch (SQLException | IOException e) {
        Logger.error("Error while inserting event record: ", e);
      } catch (Throwable ex) {
        Logger.error("Error in notify order. ", ex);
      }

      if (_receivedRecordCount % 1000 == 0) {
        Logger.debug(_topic + " received " + _receivedRecordCount + " processed " + _processedRecordCount);
      }
    }
    Logger.info("Shutting down Thread: " + _threadId + " for topic: " + _topic);
  } else {
    unhandled(message);
  }
}
 
Developer ID: thomas-young-2013, Project: wherehowsX, Lines: 45, Source: KafkaConsumerWorker.java

Example 9: extractKey

import org.apache.kafka.common.serialization.Deserializer; //import the package/class this method depends on
private static <K> K extractKey(final byte[] binaryKey, final Deserializer<K> deserializer, final String topic) {
    return deserializer.deserialize(topic, extractKeyBytes(binaryKey));
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 4, Source: SessionKeySerde.java

Example 10: maybeDeserialize

import org.apache.kafka.common.serialization.Deserializer; //import the package/class this method depends on
private Object maybeDeserialize(final Object keyOrValue, final Deserializer<?> deserializer) {
    if (keyOrValue instanceof byte[]) {
        return deserializer.deserialize(this.context.topic(), (byte[]) keyOrValue);
    }
    return keyOrValue;
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 7, Source: KStreamPrint.java

Example 11: testDeserializationException

import org.apache.kafka.common.serialization.Deserializer; //import the package/class this method depends on
@Test
public void testDeserializationException() {
  TopicPartition tp0 = new TopicPartition("topic", 0);
  TopicPartition tp1 = new TopicPartition("topic", 1);
  TopicPartition tp2 = new TopicPartition("topic", 2);
  Deserializer<String> stringDeserializer = new StringDeserializer();
  Deserializer<String> errorThrowingDeserializer = new Deserializer<String>() {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {

    }

    @Override
    public String deserialize(String topic, byte[] data) {
      String s = stringDeserializer.deserialize(topic, data);
      if (s.equals("ErrorBytes")) {
        throw new SkippableException();
      }
      return s;
    }

    @Override
    public void close() {

    }
  };
  Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();
  MessageAssembler assembler = new MessageAssemblerImpl(5000, 100, false, segmentDeserializer);
  DeliveredMessageOffsetTracker deliveredMessageOffsetTracker = new DeliveredMessageOffsetTracker(4);
  ConsumerRecordsProcessor processor =  new ConsumerRecordsProcessor<>(assembler, stringDeserializer, errorThrowingDeserializer,
                                                                        deliveredMessageOffsetTracker, null);

  StringSerializer stringSerializer = new StringSerializer();
  ConsumerRecord<byte[], byte[]> consumerRecord0 = new ConsumerRecord<>("topic", 0, 0, null,
                                                                        stringSerializer.serialize("topic", "value"));
  ConsumerRecord<byte[], byte[]> consumerRecord1 = new ConsumerRecord<>("topic", 0, 1, null,
                                                                        stringSerializer.serialize("topic", "ErrorBytes"));
  ConsumerRecord<byte[], byte[]> consumerRecord2 = new ConsumerRecord<>("topic", 0, 2, null,
                                                                        stringSerializer.serialize("topic", "value"));

  ConsumerRecord<byte[], byte[]> consumerRecord3 = new ConsumerRecord<>("topic", 1, 0, null,
                                                                        stringSerializer.serialize("topic", "ErrorBytes"));
  ConsumerRecord<byte[], byte[]> consumerRecord4 = new ConsumerRecord<>("topic", 1, 1, null,
                                                                        stringSerializer.serialize("topic", "value"));

  ConsumerRecord<byte[], byte[]> consumerRecord5 = new ConsumerRecord<>("topic", 2, 0, null,
                                                                        stringSerializer.serialize("topic", "value"));

  Map<TopicPartition, List<ConsumerRecord<byte[], byte[]>>> recordMap = new HashMap<>();
  recordMap.put(tp0, Arrays.asList(consumerRecord0, consumerRecord1, consumerRecord2));
  recordMap.put(tp1, Arrays.asList(consumerRecord3, consumerRecord4));
  recordMap.put(tp2, Collections.singletonList(consumerRecord5));

  ConsumerRecords<byte[], byte[]> consumerRecords = new ConsumerRecords<>(recordMap);

  ConsumerRecordsProcessResult result = processor.process(consumerRecords);
  assertEquals(result.consumerRecords().count(), 4);
  assertEquals(result.consumerRecords().records(tp0).size(), 2);
  assertEquals(result.consumerRecords().records(tp1).size(), 1);
  assertEquals(result.consumerRecords().records(tp2).size(), 1);
  assertTrue(result.resumeOffsets().isEmpty());
  assertNull(result.exception());
}
 
Developer ID: linkedin, Project: li-apache-kafka-clients, Lines: 64, Source: ConsumerRecordsProcessorTest.java


Note: The org.apache.kafka.common.serialization.Deserializer.deserialize examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets were selected from open-source projects contributed by many developers; copyright in the source code remains with the original authors. For distribution and use, please follow the corresponding project's license. Do not reproduce without permission.