

Java RecordHeaders Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.common.header.internals.RecordHeaders. If you are unsure what the RecordHeaders class does, or how and where to use it, the curated examples below should help.


The RecordHeaders class belongs to the org.apache.kafka.common.header.internals package. Thirteen code examples are shown below, sorted by popularity by default.
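Before the examples, here is a minimal sketch of the core RecordHeaders operations (construct, add, look up, snapshot). This is an illustrative sketch against the standard kafka-clients 0.11+ API; the header key "trace-id" is just an example name:

import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.internals.RecordHeaders;
import java.nio.charset.StandardCharsets;

public class RecordHeadersBasics {
    public static void main(String[] args) {
        // RecordHeaders is the default mutable implementation of the Headers interface.
        RecordHeaders headers = new RecordHeaders();

        // Multiple headers may share the same key; insertion order is preserved.
        headers.add("trace-id", "abc".getBytes(StandardCharsets.UTF_8));
        headers.add("trace-id", "def".getBytes(StandardCharsets.UTF_8));

        // lastHeader returns the most recently added header for a key.
        Header last = headers.lastHeader("trace-id");
        System.out.println(new String(last.value(), StandardCharsets.UTF_8)); // prints "def"

        // toArray snapshots all headers into a Header[].
        System.out.println(headers.toArray().length); // prints 2
    }
}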

Example 1: extract_second_no_context

import org.apache.kafka.common.header.internals.RecordHeaders; // import the required package/class
@Test
public void extract_second_no_context() {
  MockSpan span = mockTracer.buildSpan("first").start();
  Headers headers = new RecordHeaders();
  assertTrue(headers.toArray().length == 0);

  // inject first
  TracingKafkaUtils.inject(span.context(), headers, mockTracer);
  int headersLength = headers.toArray().length;
  assertTrue(headersLength > 0);

  // check second
  MockSpan.MockContext spanContext2 = (MockSpan.MockContext) TracingKafkaUtils
      .extractSpanContext(headers, mockTracer);
  assertNull(spanContext2);
}
 
Developer ID: opentracing-contrib, Project: java-kafka-client, Lines of code: 17, Source file: TracingKafkaUtilsTest.java

Example 2: ProducerRecord

import org.apache.kafka.common.header.internals.RecordHeaders; // import the required package/class
/**
 * Creates a record with a specified timestamp to be sent to a specified topic and partition
 * 
 * @param topic The topic the record will be appended to
 * @param partition The partition to which the record should be sent
 * @param timestamp The timestamp of the record
 * @param key The key that will be included in the record
 * @param value The record contents
 * @param headers The headers that will be included in the record
 */
public ProducerRecord(String topic, Integer partition, Long timestamp, K key, V value, Iterable<Header> headers) {
    if (topic == null)
        throw new IllegalArgumentException("Topic cannot be null.");
    if (timestamp != null && timestamp < 0)
        throw new IllegalArgumentException(
                String.format("Invalid timestamp: %d. Timestamp should always be non-negative or null.", timestamp));
    if (partition != null && partition < 0)
        throw new IllegalArgumentException(
                String.format("Invalid partition: %d. Partition number should always be non-negative or null.", partition));
    this.topic = topic;
    this.partition = partition;
    this.key = key;
    this.value = value;
    this.timestamp = timestamp;
    this.headers = new RecordHeaders(headers);
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 27, Source file: ProducerRecord.java
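As a usage illustration of the constructor above (a hedged sketch; the topic name and header key are made up), note that any Iterable<Header> works for the headers argument, including a RecordHeaders instance:

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.internals.RecordHeaders;
import java.nio.charset.StandardCharsets;

public class ProducerRecordWithHeaders {
    public static void main(String[] args) {
        RecordHeaders headers = new RecordHeaders();
        headers.add("request-id", "42".getBytes(StandardCharsets.UTF_8));

        // partition and timestamp may both be null: the validation above allows it,
        // leaving the partitioner to pick the partition and the timestamp to be assigned later.
        ProducerRecord<String, String> record =
                new ProducerRecord<>("my-topic", null, null, "key", "value", headers);
        System.out.println(record.headers().lastHeader("request-id").key()); // prints "request-id"
    }
}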

Example 3: testOldConstructor

import org.apache.kafka.common.header.internals.RecordHeaders; // import the required package/class
@Test
@SuppressWarnings("deprecation")
public void testOldConstructor() {
    String topic = "topic";
    int partition = 0;
    long offset = 23;
    String key = "key";
    String value = "value";

    ConsumerRecord<String, String> record = new ConsumerRecord<>(topic, partition, offset, key, value);
    assertEquals(topic, record.topic());
    assertEquals(partition, record.partition());
    assertEquals(offset, record.offset());
    assertEquals(key, record.key());
    assertEquals(value, record.value());
    assertEquals(TimestampType.NO_TIMESTAMP_TYPE, record.timestampType());
    assertEquals(ConsumerRecord.NO_TIMESTAMP, record.timestamp());
    assertEquals(ConsumerRecord.NULL_CHECKSUM, record.checksum());
    assertEquals(ConsumerRecord.NULL_SIZE, record.serializedKeySize());
    assertEquals(ConsumerRecord.NULL_SIZE, record.serializedValueSize());
    assertEquals(new RecordHeaders(), record.headers());
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 23, Source file: ConsumerRecordTest.java

Example 4: testHasRoomForMethodWithHeaders

import org.apache.kafka.common.header.internals.RecordHeaders; // import the required package/class
@Test
public void testHasRoomForMethodWithHeaders() {
    if (magic >= RecordBatch.MAGIC_VALUE_V2) {
        MemoryRecordsBuilder builder = MemoryRecords.builder(ByteBuffer.allocate(100), magic, compression,
                TimestampType.CREATE_TIME, 0L);
        RecordHeaders headers = new RecordHeaders();
        headers.add("hello", "world.world".getBytes());
        headers.add("hello", "world.world".getBytes());
        headers.add("hello", "world.world".getBytes());
        headers.add("hello", "world.world".getBytes());
        headers.add("hello", "world.world".getBytes());
        builder.append(logAppendTime, "key".getBytes(), "value".getBytes());
        // Make sure that hasRoomFor accounts for header sizes by letting a record without headers pass, but stopping
        // a record with a large number of headers.
        assertTrue(builder.hasRoomFor(logAppendTime, "key".getBytes(), "value".getBytes(), Record.EMPTY_HEADERS));
        assertFalse(builder.hasRoomFor(logAppendTime, "key".getBytes(), "value".getBytes(), headers.toArray()));
    }
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 19, Source file: MemoryRecordsTest.java

Example 5: inject

import org.apache.kafka.common.header.internals.RecordHeaders; // import the required package/class
@Test
public void inject() {
  MockSpan span = mockTracer.buildSpan("test").start();
  Headers headers = new RecordHeaders();
  assertTrue(headers.toArray().length == 0);

  TracingKafkaUtils.inject(span.context(), headers, mockTracer);

  assertTrue(headers.toArray().length > 0);
}
 
Developer ID: opentracing-contrib, Project: java-kafka-client, Lines of code: 11, Source file: TracingKafkaUtilsTest.java

Example 6: extract

import org.apache.kafka.common.header.internals.RecordHeaders; // import the required package/class
@Test
public void extract() {
  MockSpan span = mockTracer.buildSpan("test").start();
  Headers headers = new RecordHeaders();
  TracingKafkaUtils.inject(span.context(), headers, mockTracer);

  MockSpan.MockContext spanContext = (MockSpan.MockContext) TracingKafkaUtils
      .extract(headers, mockTracer);

  assertEquals(span.context().spanId(), spanContext.spanId());
  assertEquals(span.context().traceId(), spanContext.traceId());
}
 
Developer ID: opentracing-contrib, Project: java-kafka-client, Lines of code: 13, Source file: TracingKafkaUtilsTest.java

Example 7: extract_no_context

import org.apache.kafka.common.header.internals.RecordHeaders; // import the required package/class
@Test
public void extract_no_context() {
  Headers headers = new RecordHeaders();

  // first
  MockSpan.MockContext spanContext = (MockSpan.MockContext) TracingKafkaUtils
      .extract(headers, mockTracer);
  assertNull(spanContext);

  // second
  MockSpan.MockContext spanContext2 = (MockSpan.MockContext) TracingKafkaUtils
      .extractSpanContext(headers, mockTracer);
  assertNull(spanContext2);
}
 
Developer ID: opentracing-contrib, Project: java-kafka-client, Lines of code: 15, Source file: TracingKafkaUtilsTest.java

Example 8: inject_and_extract_two_contexts

import org.apache.kafka.common.header.internals.RecordHeaders; // import the required package/class
@Test
public void inject_and_extract_two_contexts() {
  MockSpan span = mockTracer.buildSpan("first").start();
  Headers headers = new RecordHeaders();
  assertTrue(headers.toArray().length == 0);

  // inject first
  TracingKafkaUtils.inject(span.context(), headers, mockTracer);
  int headersLength = headers.toArray().length;
  assertTrue(headersLength > 0);

  // inject second
  MockSpan span2 = mockTracer.buildSpan("second").asChildOf(span.context()).start();
  TracingKafkaUtils.injectSecond(span2.context(), headers, mockTracer);
  assertTrue(headers.toArray().length > headersLength);

  // check first
  MockSpan.MockContext spanContext = (MockSpan.MockContext) TracingKafkaUtils
      .extract(headers, mockTracer);
  assertEquals(span.context().spanId(), spanContext.spanId());
  assertEquals(span.context().traceId(), spanContext.traceId());

  // check second
  MockSpan.MockContext spanContext2 = (MockSpan.MockContext) TracingKafkaUtils
      .extractSpanContext(headers, mockTracer);
  assertEquals(span2.context().spanId(), spanContext2.spanId());
  assertEquals(span2.context().traceId(), spanContext2.traceId());
  assertEquals(spanContext.traceId(), spanContext2.traceId());
  assertNotEquals(spanContext.spanId(), spanContext2.spanId());
}
 
Developer ID: opentracing-contrib, Project: java-kafka-client, Lines of code: 31, Source file: TracingKafkaUtilsTest.java

Example 9: shouldProvideTopicHeadersAndDataToKeyDeserializer

import org.apache.kafka.common.header.internals.RecordHeaders; // import the required package/class
@Test
public void shouldProvideTopicHeadersAndDataToKeyDeserializer() {
    final SourceNode<String, String> sourceNode = new MockSourceNode<>(new String[]{""}, new TheExtendedDeserializer(), new TheExtendedDeserializer());
    final RecordHeaders headers = new RecordHeaders();
    final String deserializeKey = sourceNode.deserializeKey("topic", headers, "data".getBytes(StandardCharsets.UTF_8));
    assertThat(deserializeKey, is("topic" + headers + "data"));
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 8, Source file: SourceNodeTest.java

Example 10: shouldProvideTopicHeadersAndDataToValueDeserializer

import org.apache.kafka.common.header.internals.RecordHeaders; // import the required package/class
@Test
public void shouldProvideTopicHeadersAndDataToValueDeserializer() {
    final SourceNode<String, String> sourceNode = new MockSourceNode<>(new String[]{""}, new TheExtendedDeserializer(), new TheExtendedDeserializer());
    final RecordHeaders headers = new RecordHeaders();
    final String deserializedValue = sourceNode.deserializeValue("topic", headers, "data".getBytes(StandardCharsets.UTF_8));
    assertThat(deserializedValue, is("topic" + headers + "data"));
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 8, Source file: SourceNodeTest.java

Example 11: testNullChecksumInConstructor

import org.apache.kafka.common.header.internals.RecordHeaders; // import the required package/class
@Test
@SuppressWarnings("deprecation")
public void testNullChecksumInConstructor() {
    String key = "key";
    String value = "value";
    long timestamp = 242341324L;
    ConsumerRecord<String, String> record = new ConsumerRecord<>("topic", 0, 23L, timestamp,
            TimestampType.CREATE_TIME, null, key.length(), value.length(), key, value, new RecordHeaders());
    assertEquals(DefaultRecord.computePartialChecksum(timestamp, key.length(), value.length()), record.checksum());
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 11, Source file: ConsumerRecordTest.java

Example 12: setReadOnly

import org.apache.kafka.common.header.internals.RecordHeaders; // import the required package/class
private void setReadOnly(Headers headers) {
    if (headers instanceof RecordHeaders) {
        ((RecordHeaders) headers).setReadOnly();
    }
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 6, Source file: KafkaProducer.java
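KafkaProducer uses this helper to freeze a record's headers once the record has been handed off to the producer. A short behavioral sketch (setReadOnly is internal API, so this is illustrative only): after setReadOnly(), further mutation fails with an IllegalStateException:

import org.apache.kafka.common.header.internals.RecordHeaders;

public class ReadOnlyHeadersSketch {
    public static void main(String[] args) {
        RecordHeaders headers = new RecordHeaders();
        headers.add("k", new byte[] {1});
        headers.setReadOnly();
        try {
            headers.add("k2", new byte[] {2}); // mutation is rejected from here on
        } catch (IllegalStateException e) {
            System.out.println("headers are frozen: " + e.getMessage());
        }
    }
}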

Example 13: ConsumerRecord

import org.apache.kafka.common.header.internals.RecordHeaders; // import the required package/class
/**
 * Creates a record to be received from a specified topic and partition (provided for
 * compatibility with Kafka 0.10 before the message format supported headers).
 *
 * @param topic The topic this record is received from
 * @param partition The partition of the topic this record is received from
 * @param offset The offset of this record in the corresponding Kafka partition
 * @param timestamp The timestamp of the record.
 * @param timestampType The timestamp type
 * @param checksum The checksum (CRC32) of the full record
 * @param serializedKeySize The length of the serialized key
 * @param serializedValueSize The length of the serialized value
 * @param key The key of the record, if one exists (null is allowed)
 * @param value The record contents
 */
public ConsumerRecord(String topic,
                      int partition,
                      long offset,
                      long timestamp,
                      TimestampType timestampType,
                      long checksum,
                      int serializedKeySize,
                      int serializedValueSize,
                      K key,
                      V value) {
    this(topic, partition, offset, timestamp, timestampType, checksum, serializedKeySize, serializedValueSize,
            key, value, new RecordHeaders());
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 29, Source file: ConsumerRecord.java
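As the delegation above shows, records created through this compatibility constructor carry an empty, freshly allocated RecordHeaders. A quick sketch (all values arbitrary; this is the 0.11-era constructor shown above, deprecated in newer clients):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.internals.RecordHeaders;
import org.apache.kafka.common.record.TimestampType;

public class HeaderlessConsumerRecordSketch {
    public static void main(String[] args) {
        ConsumerRecord<String, String> record = new ConsumerRecord<>(
                "topic", 0, 42L, 1234567890L, TimestampType.CREATE_TIME,
                123L, 3, 5, "key", "value");
        // The record reports an empty header set, equal to a fresh RecordHeaders.
        System.out.println(record.headers().equals(new RecordHeaders())); // prints true
    }
}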


Note: The org.apache.kafka.common.header.internals.RecordHeaders class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers; copyright for the source code remains with the original authors. Consult each project's license before distributing or using the code, and do not republish without permission.