

Java Struct.put Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.connect.data.Struct.put. If you have been wondering what exactly Struct.put does, how to call it, or what it looks like in real code, the hand-picked examples below should help. You can also explore further usage examples of the containing class, org.apache.kafka.connect.data.Struct.


The following presents 15 code examples of the Struct.put method, sorted by popularity.
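Before the examples, a minimal self-contained sketch of the pattern they all share may be useful: build a Schema, construct a Struct against it, and put values field by field. The schema and field names below are invented purely for illustration.

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class StructPutBasics {
    public static void main(String[] args) {
        // Hypothetical schema, for illustration only
        Schema userSchema = SchemaBuilder.struct().name("example.User")
                .field("id", Schema.INT64_SCHEMA)
                .field("name", Schema.STRING_SCHEMA)
                .field("email", Schema.OPTIONAL_STRING_SCHEMA)
                .build();

        Struct user = new Struct(userSchema);
        user.put("id", 42L);       // the value must match the field's schema type (INT64 -> Long)
        user.put("name", "alice");
        // "email" is optional and may simply be left unset; it reads back as null
        user.validate();           // throws DataException if a required field is missing
        System.out.println(user);
    }
}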

Example 1: applyWithSchema

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
private R applyWithSchema(R record) {
    Schema valueSchema = operatingSchema(record);
    Schema updatedSchema = getOrBuildSchema(valueSchema);

    // Whole-record casting
    if (wholeValueCastType != null)
        return newRecord(record, updatedSchema, castValueToType(operatingValue(record), wholeValueCastType));

    // Casting within a struct
    final Struct value = requireStruct(operatingValue(record), PURPOSE);

    final Struct updatedValue = new Struct(updatedSchema);
    for (Field field : value.schema().fields()) {
        final Object origFieldValue = value.get(field);
        final Schema.Type targetType = casts.get(field.name());
        final Object newFieldValue = targetType != null ? castValueToType(origFieldValue, targetType) : origFieldValue;
        updatedValue.put(updatedSchema.field(field.name()), newFieldValue);
    }
    return newRecord(record, updatedSchema, updatedValue);
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: Cast.java
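Note the put(Field, Object) overload used in the loop above: when iterating value.schema().fields() you already hold each Field, so this form skips the by-name lookup that put(String, Object) performs. A small sketch of the two overloads, with an invented schema:

import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class PutOverloads {
    public static void main(String[] args) {
        Schema schema = SchemaBuilder.struct().field("a", Schema.INT32_SCHEMA).build();
        Field aField = schema.field("a");

        Struct byName = new Struct(schema).put("a", 1);     // put(String, Object): resolves the field by name
        Struct byField = new Struct(schema).put(aField, 1); // put(Field, Object): no lookup, handy in field loops

        System.out.println(byName.equals(byField));         // true: Struct equality compares schema and values
    }
}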

Example 2: buildWithSchema

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
private void buildWithSchema(Struct record, String fieldNamePrefix, Struct newRecord) {
    for (Field field : record.schema().fields()) {
        final String fieldName = fieldName(fieldNamePrefix, field.name());
        switch (field.schema().type()) {
            case INT8:
            case INT16:
            case INT32:
            case INT64:
            case FLOAT32:
            case FLOAT64:
            case BOOLEAN:
            case STRING:
            case BYTES:
                newRecord.put(fieldName, record.get(field));
                break;
            case STRUCT:
                buildWithSchema(record.getStruct(field.name()), fieldName, newRecord);
                break;
            default:
throw new DataException("Flatten transformation does not support " + field.schema().type()
                        + " for record with schemas (for field " + fieldName + ").");
        }
    }
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: Flatten.java

Example 3: applyWithSchema

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
private R applyWithSchema(R record) {
    final Struct value = requireStruct(operatingValue(record), PURPOSE);

    Schema updatedSchema = schemaUpdateCache.get(value.schema());
    if (updatedSchema == null) {
        updatedSchema = makeUpdatedSchema(value.schema());
        schemaUpdateCache.put(value.schema(), updatedSchema);
    }

    final Struct updatedValue = new Struct(updatedSchema);

    for (Field field : updatedSchema.fields()) {
        final Object fieldValue = value.get(reverseRenamed(field.name()));
        updatedValue.put(field.name(), fieldValue);
    }

    return newRecord(record, updatedSchema, updatedValue);
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 19, Source: ReplaceField.java

Example 4: convert

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
@Override
public Object convert(Schema schema, JsonNode value) {
    if (!value.isObject())
        throw new DataException("Structs should be encoded as JSON objects, but found " + value.getNodeType());

    // We only have ISchema here but need Schema, so we need to materialize the actual schema. Using ISchema
    // avoids having to materialize the schema for non-Struct types but it cannot be avoided for Structs since
    // they require a schema to be provided at construction. However, the schema is only a SchemaBuilder during
    // translation of schemas to JSON; during the more common translation of data to JSON, the call to schema.schema()
    // just returns the schema Object and has no overhead.
    Struct result = new Struct(schema.schema());
    for (Field field : schema.fields())
        result.put(field, convertToConnect(field.schema(), value.get(field.name())));

    return result;
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 17, Source: JsonConverter.java
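convertToConnect above is internal to JsonConverter, so the snippet is not runnable on its own. A simplified standalone sketch of the same put-per-field pattern, assuming Jackson on the classpath and a schema limited to STRING and INT32 fields:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class JsonToStructSketch {
    public static void main(String[] args) throws Exception {
        Schema schema = SchemaBuilder.struct()
                .field("name", Schema.STRING_SCHEMA)
                .field("age", Schema.INT32_SCHEMA)
                .build();

        JsonNode node = new ObjectMapper().readTree("{\"name\":\"alice\",\"age\":30}");

        Struct result = new Struct(schema);
        for (Field field : schema.fields()) {
            JsonNode child = node.get(field.name());
            // Simplified type dispatch; the real converter handles every Schema.Type
            Object value = field.schema().type() == Schema.Type.STRING ? child.asText() : child.asInt();
            result.put(field, value);
        }
        System.out.println(result); // prints the populated struct
    }
}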

Example 5: buildRecordValue

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
public Struct buildRecordValue(Issue issue) {

    // Issue top level fields
    Struct valueStruct = new Struct(VALUE_SCHEMA)
            .put(URL_FIELD, issue.getUrl())
            .put(TITLE_FIELD, issue.getTitle())
            .put(CREATED_AT_FIELD, Date.from(issue.getCreatedAt()))
            .put(UPDATED_AT_FIELD, Date.from(issue.getUpdatedAt()))
            .put(NUMBER_FIELD, issue.getNumber())
            .put(STATE_FIELD, issue.getState());

    // User is mandatory
    User user = issue.getUser();
    Struct userStruct = new Struct(USER_SCHEMA)
            .put(USER_URL_FIELD, user.getUrl())
            .put(USER_ID_FIELD, user.getId())
            .put(USER_LOGIN_FIELD, user.getLogin());
    valueStruct.put(USER_FIELD, userStruct);

    // Pull request is optional
    PullRequest pullRequest = issue.getPullRequest();
    if (pullRequest != null) {
        Struct prStruct = new Struct(PR_SCHEMA)
                .put(PR_URL_FIELD, pullRequest.getUrl())
                .put(PR_HTML_URL_FIELD, pullRequest.getHtmlUrl());
        valueStruct.put(PR_FIELD, prStruct);
    }

    return valueStruct;
}
 
Author: simplesteph, Project: kafka-connect-github-source, Lines: 31, Source: GitHubSourceTask.java
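The chained calls above work because put returns the Struct itself, so a chain is just a compact spelling of consecutive puts. A minimal illustration with an invented schema:

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class ChainedPuts {
    public static void main(String[] args) {
        Schema schema = SchemaBuilder.struct()
                .field("url", Schema.STRING_SCHEMA)
                .field("title", Schema.STRING_SCHEMA)
                .build();

        // put(...) returns this, so calls can be chained off the constructor...
        Struct chained = new Struct(schema)
                .put("url", "https://example.com")
                .put("title", "hello");

        // ...which is equivalent to separate statements
        Struct stepwise = new Struct(schema);
        stepwise.put("url", "https://example.com");
        stepwise.put("title", "hello");

        System.out.println(chained.equals(stepwise)); // true
    }
}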

Example 6: applyValueWithSchema

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
private Struct applyValueWithSchema(Struct value, Schema updatedSchema) {
    Struct updatedValue = new Struct(updatedSchema);
    for (Field field : value.schema().fields()) {
        final Object updatedFieldValue;
        if (field.name().equals(config.field)) {
            updatedFieldValue = convertTimestamp(value.get(field), timestampTypeFromSchema(field.schema()));
        } else {
            updatedFieldValue = value.get(field);
        }
        updatedValue.put(field.name(), updatedFieldValue);
    }
    return updatedValue;
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 14, Source: TimestampConverter.java

Example 7: applyWithSchema

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
private R applyWithSchema(R record) {
    final Struct value = requireStruct(operatingValue(record), PURPOSE);
    final Struct updatedValue = new Struct(value.schema());
    for (Field field : value.schema().fields()) {
        final Object origFieldValue = value.get(field);
        updatedValue.put(field, maskedFields.contains(field.name()) ? masked(origFieldValue) : origFieldValue);
    }
    return newRecord(record, updatedValue);
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 10, Source: MaskField.java
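For context, a hedged usage sketch of this transform: in Kafka's MaskField the fields to mask are listed via the `fields` config, and masked values are replaced with the type's empty value (the empty string for STRING fields). The record contents below are invented.

import java.util.Collections;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.transforms.MaskField;

public class MaskFieldUsage {
    public static void main(String[] args) {
        MaskField<SinkRecord> xform = new MaskField.Value<>();
        xform.configure(Collections.singletonMap("fields", "ssn"));

        Schema schema = SchemaBuilder.struct()
                .field("name", Schema.STRING_SCHEMA)
                .field("ssn", Schema.STRING_SCHEMA)
                .build();
        Struct value = new Struct(schema).put("name", "alice").put("ssn", "123-45-6789");

        SinkRecord record = new SinkRecord("topic", 0, null, null, schema, value, 0);
        Struct masked = (Struct) xform.apply(record).value();
        System.out.println(masked.getString("ssn")); // expected: "" (strings mask to the empty string)
    }
}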

Example 8: applyWithSchema

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
private R applyWithSchema(R record) {
    final Struct value = requireStruct(operatingValue(record), PURPOSE);

    Schema updatedSchema = schemaUpdateCache.get(value.schema());
    if (updatedSchema == null) {
        updatedSchema = makeUpdatedSchema(value.schema());
        schemaUpdateCache.put(value.schema(), updatedSchema);
    }

    final Struct updatedValue = new Struct(updatedSchema);

    for (Field field : value.schema().fields()) {
        updatedValue.put(field.name(), value.get(field));
    }

    if (topicField != null) {
        updatedValue.put(topicField.name, record.topic());
    }
    if (partitionField != null && record.kafkaPartition() != null) {
        updatedValue.put(partitionField.name, record.kafkaPartition());
    }
    if (offsetField != null) {
        updatedValue.put(offsetField.name, requireSinkRecord(record, PURPOSE).kafkaOffset());
    }
    if (timestampField != null && record.timestamp() != null) {
        updatedValue.put(timestampField.name, new Date(record.timestamp()));
    }
    if (staticField != null && staticValue != null) {
        updatedValue.put(staticField.name, staticValue);
    }

    return newRecord(record, updatedSchema, updatedValue);
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 34, Source: InsertField.java
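For reference, topicField, offsetField and friends above are populated from the transform's configuration (`topic.field`, `offset.field`, and so on in Kafka's InsertField). A usage sketch with invented field names:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.transforms.InsertField;

public class InsertFieldUsage {
    public static void main(String[] args) {
        InsertField<SinkRecord> xform = new InsertField.Value<>();
        Map<String, String> props = new HashMap<>();
        props.put("topic.field", "kafka_topic");   // insert the record's topic
        props.put("offset.field", "kafka_offset"); // insert the record's offset (sink records only)
        xform.configure(props);

        Schema schema = SchemaBuilder.struct().field("id", Schema.INT64_SCHEMA).build();
        Struct value = new Struct(schema).put("id", 1L);

        SinkRecord record = new SinkRecord("orders", 0, null, null, schema, value, 42);
        Struct updated = (Struct) xform.apply(record).value();
        System.out.println(updated.get("kafka_topic"));  // orders
        System.out.println(updated.get("kafka_offset")); // 42
    }
}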

Example 9: withSchema

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
@Test
public void withSchema() {
    final ValueToKey<SinkRecord> xform = new ValueToKey<>();
    xform.configure(Collections.singletonMap("fields", "a,b"));

    final Schema valueSchema = SchemaBuilder.struct()
            .field("a", Schema.INT32_SCHEMA)
            .field("b", Schema.INT32_SCHEMA)
            .field("c", Schema.INT32_SCHEMA)
            .build();

    final Struct value = new Struct(valueSchema);
    value.put("a", 1);
    value.put("b", 2);
    value.put("c", 3);

    final SinkRecord record = new SinkRecord("", 0, null, null, valueSchema, value, 0);
    final SinkRecord transformedRecord = xform.apply(record);

    final Schema expectedKeySchema = SchemaBuilder.struct()
            .field("a", Schema.INT32_SCHEMA)
            .field("b", Schema.INT32_SCHEMA)
            .build();

    final Struct expectedKey = new Struct(expectedKeySchema)
            .put("a", 1)
            .put("b", 2);

    assertEquals(expectedKeySchema, transformedRecord.keySchema());
    assertEquals(expectedKey, transformedRecord.key());
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 32, Source: ValueToKeyTest.java

Example 10: withSchema

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
@Test
public void withSchema() {
    final ReplaceField<SinkRecord> xform = new ReplaceField.Value<>();

    final Map<String, String> props = new HashMap<>();
    props.put("whitelist", "abc,foo");
    props.put("renames", "abc:xyz,foo:bar");

    xform.configure(props);

    final Schema schema = SchemaBuilder.struct()
            .field("dont", Schema.STRING_SCHEMA)
            .field("abc", Schema.INT32_SCHEMA)
            .field("foo", Schema.BOOLEAN_SCHEMA)
            .field("etc", Schema.STRING_SCHEMA)
            .build();

    final Struct value = new Struct(schema);
    value.put("dont", "whatever");
    value.put("abc", 42);
    value.put("foo", true);
    value.put("etc", "etc");

    final SinkRecord record = new SinkRecord("test", 0, null, null, schema, value, 0);
    final SinkRecord transformedRecord = xform.apply(record);

    final Struct updatedValue = (Struct) transformedRecord.value();

    assertEquals(2, updatedValue.schema().fields().size());
    assertEquals(Integer.valueOf(42), updatedValue.getInt32("xyz"));
    assertEquals(true, updatedValue.getBoolean("bar"));
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 33, Source: ReplaceFieldTest.java

Example 11: testOptionalFieldStruct

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
@Test
public void testOptionalFieldStruct() {
    final Flatten<SourceRecord> xform = new Flatten.Value<>();
    xform.configure(Collections.<String, String>emptyMap());

    SchemaBuilder builder = SchemaBuilder.struct();
    builder.field("opt_int32", Schema.OPTIONAL_INT32_SCHEMA);
    Schema supportedTypesSchema = builder.build();

    builder = SchemaBuilder.struct();
    builder.field("B", supportedTypesSchema);
    Schema oneLevelNestedSchema = builder.build();

    Struct supportedTypes = new Struct(supportedTypesSchema);
    supportedTypes.put("opt_int32", null);

    Struct oneLevelNestedStruct = new Struct(oneLevelNestedSchema);
    oneLevelNestedStruct.put("B", supportedTypes);

    SourceRecord transformed = xform.apply(new SourceRecord(null, null,
            "topic", 0,
            oneLevelNestedSchema, oneLevelNestedStruct));

    assertEquals(Schema.Type.STRUCT, transformed.valueSchema().type());
    Struct transformedStruct = (Struct) transformed.value();
    assertNull(transformedStruct.get("B.opt_int32"));
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 28, Source: FlattenTest.java

Example 12: testOptionalAndDefaultValuesNested

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
@Test
public void testOptionalAndDefaultValuesNested() {
    // If we have a nested structure where an entire sub-Struct is optional, all flattened fields generated from its
    // children should also be optional. Similarly, if the parent Struct has a default value, the default value for
    // the flattened field should be taken from the parent Struct's default.

    final Flatten<SourceRecord> xform = new Flatten.Value<>();
    xform.configure(Collections.<String, String>emptyMap());

    SchemaBuilder builder = SchemaBuilder.struct().optional();
    builder.field("req_field", Schema.STRING_SCHEMA);
    builder.field("opt_field", SchemaBuilder.string().optional().defaultValue("child_default").build());
    Struct childDefaultValue = new Struct(builder);
    childDefaultValue.put("req_field", "req_default");
    builder.defaultValue(childDefaultValue);
    Schema schema = builder.build();
    // Intentionally leave this entire value empty since it is optional
    Struct value = new Struct(schema);

    SourceRecord transformed = xform.apply(new SourceRecord(null, null, "topic", 0, schema, value));

    assertNotNull(transformed);
    Schema transformedSchema = transformed.valueSchema();
    assertEquals(Schema.Type.STRUCT, transformedSchema.type());
    assertEquals(2, transformedSchema.fields().size());
    // Required field should pick up both being optional and the default value from the parent
    Schema transformedReqFieldSchema = SchemaBuilder.string().optional().defaultValue("req_default").build();
    assertEquals(transformedReqFieldSchema, transformedSchema.field("req_field").schema());
    // The optional field should still be optional but should have picked up the default value. However, since
    // the parent didn't specify the default explicitly, we should still be using the field's normal default
    Schema transformedOptFieldSchema = SchemaBuilder.string().optional().defaultValue("child_default").build();
    assertEquals(transformedOptFieldSchema, transformedSchema.field("opt_field").schema());
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 34, Source: FlattenTest.java

Example 13: testWithSchemaFieldConversion

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
@Test
public void testWithSchemaFieldConversion() {
    TimestampConverter<SourceRecord> xform = new TimestampConverter.Value<>();
    Map<String, String> config = new HashMap<>();
    config.put(TimestampConverter.TARGET_TYPE_CONFIG, "Timestamp");
    config.put(TimestampConverter.FIELD_CONFIG, "ts");
    xform.configure(config);

    // ts field is a unix timestamp
    Schema structWithTimestampFieldSchema = SchemaBuilder.struct()
            .field("ts", Schema.INT64_SCHEMA)
            .field("other", Schema.STRING_SCHEMA)
            .build();
    Struct original = new Struct(structWithTimestampFieldSchema);
    original.put("ts", DATE_PLUS_TIME_UNIX);
    original.put("other", "test");

    SourceRecord transformed = xform.apply(new SourceRecord(null, null, "topic", 0, structWithTimestampFieldSchema, original));

    Schema expectedSchema = SchemaBuilder.struct()
            .field("ts", Timestamp.SCHEMA)
            .field("other", Schema.STRING_SCHEMA)
            .build();
    assertEquals(expectedSchema, transformed.valueSchema());
    assertEquals(DATE_PLUS_TIME.getTime(), ((Struct) transformed.value()).get("ts"));
    assertEquals("test", ((Struct) transformed.value()).get("other"));
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 28, Source: TimestampConverterTest.java

Example 14: serialize

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
private byte[] serialize(AbstractStatus status) {
    Struct struct = new Struct(STATUS_SCHEMA_V0);
    struct.put(STATE_KEY_NAME, status.state().name());
    if (status.trace() != null)
        struct.put(TRACE_KEY_NAME, status.trace());
    struct.put(WORKER_ID_KEY_NAME, status.workerId());
    struct.put(GENERATION_KEY_NAME, status.generation());
    return converter.fromConnectData(topic, STATUS_SCHEMA_V0, struct);
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 10, Source: KafkaStatusBackingStore.java
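One detail worth spelling out alongside the null guard above: Struct.put validates the value against the field's schema at call time, so putting null into a required field throws a DataException immediately rather than at serialization. A small demonstration with a stand-in schema:

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.errors.DataException;

public class EagerValidation {
    public static void main(String[] args) {
        Schema schema = SchemaBuilder.struct()
                .field("state", Schema.STRING_SCHEMA)          // required
                .field("trace", Schema.OPTIONAL_STRING_SCHEMA) // optional
                .build();

        Struct status = new Struct(schema);
        status.put("trace", null);     // fine: the field is optional
        try {
            status.put("state", null); // required field: fails right here, not at serialization
        } catch (DataException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}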

Example 15: putConnectorConfig

import org.apache.kafka.connect.data.Struct; // import the package/class this method depends on
/**
 * Write this connector configuration to persistent storage and wait until it has been acknowledged and read back by
 * tailing the Kafka log with a consumer.
 *
 * @param connector  name of the connector to write data for
 * @param properties the configuration to write
 */
@Override
public void putConnectorConfig(String connector, Map<String, String> properties) {
    log.debug("Writing connector configuration {} for connector {} configuration", properties, connector);
    Struct connectConfig = new Struct(CONNECTOR_CONFIGURATION_V0);
    connectConfig.put("properties", properties);
    byte[] serializedConfig = converter.fromConnectData(topic, CONNECTOR_CONFIGURATION_V0, connectConfig);
    updateConnectorConfig(connector, serializedConfig);
}
 
Author: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 16, Source: KafkaConfigBackingStore.java
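CONNECTOR_CONFIGURATION_V0 is defined elsewhere in KafkaConfigBackingStore; the point of interest here is that put also accepts complex values such as a Map when the field's schema is a MAP type. A sketch with a stand-in schema (the real one may differ):

import java.util.Collections;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class MapValuedField {
    public static void main(String[] args) {
        // Stand-in for CONNECTOR_CONFIGURATION_V0; the real schema lives in KafkaConfigBackingStore
        Schema configSchema = SchemaBuilder.struct()
                .field("properties", SchemaBuilder.map(Schema.STRING_SCHEMA, Schema.STRING_SCHEMA).build())
                .build();

        Struct connectConfig = new Struct(configSchema);
        connectConfig.put("properties", Collections.singletonMap("tasks.max", "1"));
        System.out.println(connectConfig.get("properties")); // {tasks.max=1}
    }
}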


Note: The org.apache.kafka.connect.data.Struct.put method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective authors, and copyright remains with those authors; consult each project's license before redistributing or using the code. Do not reproduce without permission.