

Java SchemaAndValue.value Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.connect.data.SchemaAndValue.value. If you are unsure what SchemaAndValue.value does or how to call it, the curated examples below should help. You can also explore further usage examples of the enclosing class, org.apache.kafka.connect.data.SchemaAndValue.


The following presents 10 code examples of the SchemaAndValue.value method, ordered by popularity.
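Before the examples, here is a minimal, self-contained sketch of how code typically arrives at SchemaAndValue.value(), using the stock JsonConverter that ships with Kafka Connect (the topic name and payload are illustrative, not taken from any of the projects below):

import java.nio.charset.StandardCharsets;
import java.util.Collections;

import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.json.JsonConverter;

public class SchemaAndValueDemo {
    public static void main(String[] args) {
        // Configure the converter for record values (isKey = false) with schemas enabled.
        JsonConverter converter = new JsonConverter();
        converter.configure(Collections.singletonMap("schemas.enable", "true"), false);

        // A JSON envelope carrying both the schema and the payload.
        String msg = "{ \"schema\": { \"type\": \"int32\" }, \"payload\": 42 }";
        SchemaAndValue sv = converter.toConnectData("demo-topic",
                msg.getBytes(StandardCharsets.UTF_8));

        // schema() describes the type; value() is the deserialized payload.
        System.out.println(sv.schema().type()); // INT32
        System.out.println(sv.value());         // 42
    }
}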

Example 1: toSourceRecord

import org.apache.kafka.connect.data.SchemaAndValue; // import the package/class this method depends on
/**
 * Convert a message into a Kafka Connect SourceRecord.
 * 
 * @param context            the JMS context to use for building messages
 * @param topic              the Kafka topic
 * @param messageBodyJms     whether to interpret MQ messages as JMS messages
 * @param message            the message
 * 
 * @return the Kafka Connect SourceRecord
 * 
 * @throws JMSException      Message could not be converted
 */
@Override public SourceRecord toSourceRecord(JMSContext context, String topic, boolean messageBodyJms, Message message) throws JMSException {
    byte[] payload;
    if (message instanceof BytesMessage) {
        payload = message.getBody(byte[].class);
    }
    else if (message instanceof TextMessage) {
        String s = message.getBody(String.class);
        payload = s.getBytes(UTF_8);
    }
    else {
        log.error("Unsupported JMS message type {}", message.getClass());
        throw new ConnectException("Unsupported JMS message type");
    }

    SchemaAndValue sv = converter.toConnectData(topic, payload);
    return new SourceRecord(null, null, topic, sv.schema(), sv.value());
}
 
Developer: ibm-messaging, Project: kafka-connect-mq-source, Lines of code: 30, Source file: JsonRecordBuilder.java
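As a follow-up, a hedged sketch of what value() yields when a converter runs with schemas disabled, a common setup for raw JSON payloads (the configuration, topic, and payload are assumptions for illustration; the project's actual converter setup may differ):

import java.nio.charset.StandardCharsets;
import java.util.Collections;

import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.json.JsonConverter;

public class SchemalessValueSketch {
    public static void main(String[] args) {
        JsonConverter converter = new JsonConverter();
        // With schemas disabled, toConnectData returns a null schema and plain Java objects.
        converter.configure(Collections.singletonMap("schemas.enable", "false"), false);

        SchemaAndValue sv = converter.toConnectData("mq-topic",
                "{\"id\": 1, \"body\": \"hello\"}".getBytes(StandardCharsets.UTF_8));

        System.out.println(sv.schema()); // null
        System.out.println(sv.value());  // {id=1, body=hello} as a Map<String, Object>
    }
}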

Example 2: convertMessages

import org.apache.kafka.connect.data.SchemaAndValue; // import the package/class this method depends on
private void convertMessages(ConsumerRecords<byte[], byte[]> msgs) {
    for (ConsumerRecord<byte[], byte[]> msg : msgs) {
        log.trace("Consuming message with key {}, value {}", msg.key(), msg.value());
        SchemaAndValue keyAndSchema = keyConverter.toConnectData(msg.topic(), msg.key());
        SchemaAndValue valueAndSchema = valueConverter.toConnectData(msg.topic(), msg.value());
        SinkRecord record = new SinkRecord(msg.topic(), msg.partition(),
                keyAndSchema.schema(), keyAndSchema.value(),
                valueAndSchema.schema(), valueAndSchema.value(),
                msg.offset(),
                ConnectUtils.checkAndConvertTimestamp(msg.timestamp()),
                msg.timestampType());
        record = transformationChain.apply(record);
        if (record != null) {
            messageBatch.add(record);
        }
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 18, Source file: WorkerSinkTask.java
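A simplified, standalone sketch of the same key/value conversion step, with a JsonConverter standing in for both worker converters (the record contents are illustrative; the transformation chain and timestamp handling from the worker are omitted):

import java.nio.charset.StandardCharsets;
import java.util.Collections;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.json.JsonConverter;
import org.apache.kafka.connect.sink.SinkRecord;

public class ConvertMessagesSketch {
    public static void main(String[] args) {
        JsonConverter converter = new JsonConverter();
        converter.configure(Collections.singletonMap("schemas.enable", "false"), false);

        ConsumerRecord<byte[], byte[]> msg = new ConsumerRecord<>(
                "demo-topic", 0, 42L,
                "\"key-1\"".getBytes(StandardCharsets.UTF_8),
                "{\"field\": \"value\"}".getBytes(StandardCharsets.UTF_8));

        SchemaAndValue keyAndSchema = converter.toConnectData(msg.topic(), msg.key());
        SchemaAndValue valueAndSchema = converter.toConnectData(msg.topic(), msg.value());

        // The schema/value pairs feed straight into the SinkRecord constructor.
        SinkRecord record = new SinkRecord(msg.topic(), msg.partition(),
                keyAndSchema.schema(), keyAndSchema.value(),
                valueAndSchema.schema(), valueAndSchema.value(),
                msg.offset());
        System.out.println(record);
    }
}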

Example 3: parseConnectorStatus

import org.apache.kafka.connect.data.SchemaAndValue; // import the package/class this method depends on
private ConnectorStatus parseConnectorStatus(String connector, byte[] data) {
    try {
        SchemaAndValue schemaAndValue = converter.toConnectData(topic, data);
        if (!(schemaAndValue.value() instanceof Map)) {
            log.error("Invalid connector status type {}", schemaAndValue.value().getClass());
            return null;
        }

        @SuppressWarnings("unchecked")
        Map<String, Object> statusMap = (Map<String, Object>) schemaAndValue.value();
        TaskStatus.State state = TaskStatus.State.valueOf((String) statusMap.get(STATE_KEY_NAME));
        String trace = (String) statusMap.get(TRACE_KEY_NAME);
        String workerUrl = (String) statusMap.get(WORKER_ID_KEY_NAME);
        int generation = ((Long) statusMap.get(GENERATION_KEY_NAME)).intValue();
        return new ConnectorStatus(connector, state, trace, workerUrl, generation);
    } catch (Exception e) {
        log.error("Failed to deserialize connector status", e);
        return null;
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 21, Source file: KafkaStatusBackingStore.java

Example 4: parseTaskStatus

import org.apache.kafka.connect.data.SchemaAndValue; // import the package/class this method depends on
private TaskStatus parseTaskStatus(ConnectorTaskId taskId, byte[] data) {
    try {
        SchemaAndValue schemaAndValue = converter.toConnectData(topic, data);
        if (!(schemaAndValue.value() instanceof Map)) {
            log.error("Invalid connector status type {}", schemaAndValue.value().getClass());
            return null;
        }
        @SuppressWarnings("unchecked")
        Map<String, Object> statusMap = (Map<String, Object>) schemaAndValue.value();
        TaskStatus.State state = TaskStatus.State.valueOf((String) statusMap.get(STATE_KEY_NAME));
        String trace = (String) statusMap.get(TRACE_KEY_NAME);
        String workerUrl = (String) statusMap.get(WORKER_ID_KEY_NAME);
        int generation = ((Long) statusMap.get(GENERATION_KEY_NAME)).intValue();
        return new TaskStatus(taskId, state, workerUrl, generation, trace);
    } catch (Exception e) {
        log.error("Failed to deserialize task status", e);
        return null;
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 20, Source file: KafkaStatusBackingStore.java
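Both parsers expect the status record to have been written as a schemaless map; here is a hedged round-trip sketch of that contract using JsonConverter (the topic name is illustrative, and the field names assume the KEY_NAME constants above are "state", "trace", "worker_id", and "generation"):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.json.JsonConverter;

public class StatusRoundTripSketch {
    public static void main(String[] args) {
        JsonConverter converter = new JsonConverter();
        converter.configure(Collections.singletonMap("schemas.enable", "false"), false);

        // Serialize a status map the way a status store might write it.
        Map<String, Object> status = new HashMap<>();
        status.put("state", "RUNNING");
        status.put("worker_id", "worker-1:8083");
        status.put("generation", 5);
        byte[] data = converter.fromConnectData("connect-status", null, status);

        // Read it back: value() is a Map again, as the parsers above expect.
        SchemaAndValue schemaAndValue = converter.toConnectData("connect-status", data);
        Map<?, ?> statusMap = (Map<?, ?>) schemaAndValue.value();
        System.out.println(statusMap.get("state"));      // RUNNING
        System.out.println(statusMap.get("generation")); // 5, deserialized as a Long
    }
}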

Example 5: deserializeImpl

import org.apache.kafka.connect.data.SchemaAndValue; // import the package/class this method depends on
protected Struct deserializeImpl(String topic, byte[] payload) {
    if (!isKey && payload == null) {
        // A delete mutation: the value payload is null
        return null;
    }
    try {
        SchemaAndValue res = converter.toConnectData(topic, payload);
        Schema schema = res.schema();
        Struct struct = (Struct) res.value();
        // Schema will be null for delete mutation values
        if (schema != struct.schema()) {
            throw new SerializationException(
                    "Object schema doesn't match given schema");
        }
        return struct;
    } catch (RuntimeException e) {
        throw new SerializationException("Error deserializing Avro message", e);
    }
}
 
Developer: rogers, Project: change-data-capture, Lines of code: 27, Source file: SpecificAvroMutationDeserializer.java
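The cast-and-check pattern above is not Avro-specific; here is a converter-agnostic sketch of the same schema identity check, built with a hand-made Struct purely for illustration:

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class StructSchemaCheckSketch {
    public static void main(String[] args) {
        Schema schema = SchemaBuilder.struct().name("demo.Record")
                .field("id", Schema.INT64_SCHEMA)
                .build();
        Struct struct = new Struct(schema).put("id", 7L);

        // The same identity comparison deserializeImpl performs on res.value().
        if (schema != struct.schema()) {
            throw new IllegalStateException("Object schema doesn't match given schema");
        }
        System.out.println(struct.getInt64("id")); // 7
    }
}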

Example 6: decimalToConnect

import org.apache.kafka.connect.data.SchemaAndValue; // import the package/class this method depends on
@Test
public void decimalToConnect() {
    Schema schema = Decimal.schema(2);
    BigDecimal reference = new BigDecimal(new BigInteger("156"), 2);
    // Payload is base64 encoded byte[]{0, -100}, which is the two's complement encoding of 156.
    String msg = "{ \"schema\": { \"type\": \"bytes\", \"name\": \"org.apache.kafka.connect.data.Decimal\", \"version\": 1, \"parameters\": { \"scale\": \"2\" } }, \"payload\": \"AJw=\" }";
    SchemaAndValue schemaAndValue = converter.toConnectData(TOPIC, msg.getBytes());
    BigDecimal converted = (BigDecimal) schemaAndValue.value();
    assertEquals(schema, schemaAndValue.schema());
    assertEquals(reference, converted);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 12, Source file: JsonConverterTest.java
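The comment about the payload encoding can be verified directly with the helpers behind Connect's Decimal logical type; a small sketch:

import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Arrays;
import java.util.Base64;

import org.apache.kafka.connect.data.Decimal;
import org.apache.kafka.connect.data.Schema;

public class DecimalEncodingSketch {
    public static void main(String[] args) {
        Schema schema = Decimal.schema(2);
        BigDecimal value = new BigDecimal(new BigInteger("156"), 2); // 1.56

        // fromLogical yields the big-endian two's-complement unscaled value.
        byte[] encoded = Decimal.fromLogical(schema, value);
        System.out.println(Arrays.toString(encoded));                    // [0, -100]
        System.out.println(Base64.getEncoder().encodeToString(encoded)); // AJw=

        // toLogical reverses the encoding.
        System.out.println(Decimal.toLogical(schema, encoded));          // 1.56
    }
}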

Example 7: dateToConnect

import org.apache.kafka.connect.data.SchemaAndValue; // import the package/class this method depends on
@Test
public void dateToConnect() {
    Schema schema = Date.SCHEMA;
    GregorianCalendar calendar = new GregorianCalendar(1970, Calendar.JANUARY, 1, 0, 0, 0);
    calendar.setTimeZone(TimeZone.getTimeZone("UTC"));
    calendar.add(Calendar.DATE, 10000);
    java.util.Date reference = calendar.getTime();
    String msg = "{ \"schema\": { \"type\": \"int32\", \"name\": \"org.apache.kafka.connect.data.Date\", \"version\": 1 }, \"payload\": 10000 }";
    SchemaAndValue schemaAndValue = converter.toConnectData(TOPIC, msg.getBytes());
    java.util.Date converted = (java.util.Date) schemaAndValue.value();
    assertEquals(schema, schemaAndValue.schema());
    assertEquals(reference, converted);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 14, Source file: JsonConverterTest.java

Example 8: timeToConnect

import org.apache.kafka.connect.data.SchemaAndValue; // import the package/class this method depends on
@Test
public void timeToConnect() {
    Schema schema = Time.SCHEMA;
    GregorianCalendar calendar = new GregorianCalendar(1970, Calendar.JANUARY, 1, 0, 0, 0);
    calendar.setTimeZone(TimeZone.getTimeZone("UTC"));
    calendar.add(Calendar.MILLISECOND, 14400000);
    java.util.Date reference = calendar.getTime();
    String msg = "{ \"schema\": { \"type\": \"int32\", \"name\": \"org.apache.kafka.connect.data.Time\", \"version\": 1 }, \"payload\": 14400000 }";
    SchemaAndValue schemaAndValue = converter.toConnectData(TOPIC, msg.getBytes());
    java.util.Date converted = (java.util.Date) schemaAndValue.value();
    assertEquals(schema, schemaAndValue.schema());
    assertEquals(reference, converted);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 14, Source file: JsonConverterTest.java

Example 9: timestampToConnect

import org.apache.kafka.connect.data.SchemaAndValue; // import the package/class this method depends on
@Test
public void timestampToConnect() {
    Schema schema = Timestamp.SCHEMA;
    GregorianCalendar calendar = new GregorianCalendar(1970, Calendar.JANUARY, 1, 0, 0, 0);
    calendar.setTimeZone(TimeZone.getTimeZone("UTC"));
    calendar.add(Calendar.MILLISECOND, 2000000000);
    calendar.add(Calendar.MILLISECOND, 2000000000);
    java.util.Date reference = calendar.getTime();
    String msg = "{ \"schema\": { \"type\": \"int64\", \"name\": \"org.apache.kafka.connect.data.Timestamp\", \"version\": 1 }, \"payload\": 4000000000 }";
    SchemaAndValue schemaAndValue = converter.toConnectData(TOPIC, msg.getBytes());
    java.util.Date converted = (java.util.Date) schemaAndValue.value();
    assertEquals(schema, schemaAndValue.schema());
    assertEquals(reference, converted);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines of code: 15, Source file: JsonConverterTest.java
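Examples 7-9 all exercise Connect's time-related logical types; here is a compact sketch of the corresponding helper conversions, reusing the same magic numbers as the tests above:

import org.apache.kafka.connect.data.Date;
import org.apache.kafka.connect.data.Time;
import org.apache.kafka.connect.data.Timestamp;

public class TimeLogicalTypesSketch {
    public static void main(String[] args) {
        // Date: days since the Unix epoch, carried on the wire as int32.
        java.util.Date d = Date.toLogical(Date.SCHEMA, 10000);
        System.out.println(d);

        // Time: milliseconds past midnight, carried as int32.
        java.util.Date t = Time.toLogical(Time.SCHEMA, 14400000);
        System.out.println(t); // 04:00:00 UTC

        // Timestamp: milliseconds since the epoch, carried as int64.
        java.util.Date ts = Timestamp.toLogical(Timestamp.SCHEMA, 4000000000L);
        System.out.println(ts);
    }
}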

Example 10: convert

import org.apache.kafka.connect.data.SchemaAndValue; // import the package/class this method depends on
public SourceRecord convert(String topic, String tag, Long timestamp, EventEntry entry) {
    if (config.isFluentdSchemasEnable()) {
        SchemaAndValue schemaAndValue = convert(topic, entry);
        return new SourceRecord(
                null,
                null,
                topic,
                null,
                Schema.STRING_SCHEMA,
                tag,
                schemaAndValue.schema(),
                schemaAndValue.value(),
                timestamp
        );
    } else {
        Object record;
        try {
            record = new ObjectMapper().readValue(entry.getRecord().toJson(), LinkedHashMap.class);
        } catch (IOException e) {
            record = entry.getRecord().toJson();
        }
        return new SourceRecord(
                null,
                null,
                topic,
                null,
                null,
                null,
                null,
                record,
                timestamp
        );
    }
}
 
Developer: fluent, Project: kafka-connect-fluentd, Lines of code: 35, Source file: MessagePackConverver.java
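The fallback branch's JSON handling can be exercised in isolation; a short sketch where the sample JSON stands in for entry.getRecord().toJson():

import java.io.IOException;
import java.util.LinkedHashMap;

import com.fasterxml.jackson.databind.ObjectMapper;

public class RecordFallbackSketch {
    public static void main(String[] args) {
        String json = "{\"message\": \"hello\", \"level\": \"info\"}";
        Object record;
        try {
            // Preferred path: hand Connect a structured Map for the schemaless record.
            record = new ObjectMapper().readValue(json, LinkedHashMap.class);
        } catch (IOException e) {
            // If parsing fails, fall back to the raw JSON string.
            record = json;
        }
        System.out.println(record); // {message=hello, level=info}
    }
}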


Note: the org.apache.kafka.connect.data.SchemaAndValue.value method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by their respective developers; copyright of the source code remains with the original authors. Refer to each project's License before distributing or using the code. Do not reproduce without permission.