

Java SchemaBuilder.build Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.connect.data.SchemaBuilder.build. If you have been wondering what SchemaBuilder.build does, how to call it, or what it looks like in real code, the curated examples below should help. You can also explore further usage examples of its enclosing class, org.apache.kafka.connect.data.SchemaBuilder.


The following presents 15 code examples of the SchemaBuilder.build method, drawn from open-source projects and sorted by popularity by default.
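Before the examples, here is a minimal self-contained sketch of the usual pattern: build() freezes a mutable SchemaBuilder into an immutable Schema. The class and field names below are illustrative only and do not come from the projects featured here.

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class SchemaBuilderQuickStart {
    public static void main(String[] args) {
        // Assemble a struct schema field by field, then freeze it with build().
        Schema personSchema = SchemaBuilder.struct()
                .name("com.example.Person")
                .field("name", Schema.STRING_SCHEMA)
                .field("age", Schema.OPTIONAL_INT32_SCHEMA)
                .build();

        // The built Schema is immutable and can back Struct values.
        Struct person = new Struct(personSchema)
                .put("name", "alice")
                .put("age", 30);
        System.out.println(person);
    }
}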

Example 1: DelimitedTextFileReader

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
public DelimitedTextFileReader(FileSystem fs, Path filePath, Map<String, Object> config) throws IOException {
    super(fs, filePath, new DelimitedTxtToStruct(), config);

    //mapping encoding for text file reader
    if (config.get(FILE_READER_DELIMITED_ENCODING) != null) {
        config.put(TextFileReader.FILE_READER_TEXT_ENCODING, config.get(FILE_READER_DELIMITED_ENCODING));
    }
    this.inner = new TextFileReader(fs, filePath, config);
    this.offset = new DelimitedTextOffset(0, hasHeader);

    SchemaBuilder schemaBuilder = SchemaBuilder.struct();
    if (hasNext()) {
        String firstLine = inner.nextRecord().getValue();
        String[] columns = firstLine.split(token);
        IntStream.range(0, columns.length).forEach(index -> {
            String columnName = hasHeader ? columns[index] : DEFAULT_COLUMN_NAME + "_" + (index + 1);
            schemaBuilder.field(columnName, SchemaBuilder.STRING_SCHEMA);
        });

        if (!hasHeader) {
            //back to the first line
            inner.seek(this.offset);
        }
    }
    this.schema = schemaBuilder.build();
}
 
Developer: mmolimar, Project: kafka-connect-fs, Lines: 27, Source: DelimitedTextFileReader.java
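Note that hasHeader and token are instance fields of the reader, populated elsewhere from the connector configuration (not shown in this excerpt); the snippet shows only the schema-building portion of the constructor.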

Example 2: applyWithSchema

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
private R applyWithSchema(R record) {
    final Struct value = requireStruct(operatingValue(record), PURPOSE);

    Schema updatedSchema = schemaUpdateCache.get(value.schema());
    if (updatedSchema == null) {
        final SchemaBuilder builder = SchemaUtil.copySchemaBasics(value.schema(), SchemaBuilder.struct());
        Struct defaultValue = (Struct) value.schema().defaultValue();
        buildUpdatedSchema(value.schema(), "", builder, value.schema().isOptional(), defaultValue);
        updatedSchema = builder.build();
        schemaUpdateCache.put(value.schema(), updatedSchema);
    }

    final Struct updatedValue = new Struct(updatedSchema);
    buildWithSchema(value, "", updatedValue);
    return newRecord(record, updatedSchema, updatedValue);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 17, Source: Flatten.java
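The schemaUpdateCache field used above is not part of this excerpt. As a rough sketch (assuming Kafka's org.apache.kafka.common.cache utilities, which upstream transformations like Flatten use), it is initialized once in configure():

import org.apache.kafka.common.cache.Cache;
import org.apache.kafka.common.cache.LRUCache;
import org.apache.kafka.common.cache.SynchronizedCache;
import org.apache.kafka.connect.data.Schema;

public abstract class CachingTransformSketch {
    // Cache the translated schema per input schema so that repeated records
    // carrying the same schema skip the rebuild on every apply() call.
    protected final Cache<Schema, Schema> schemaUpdateCache =
            new SynchronizedCache<>(new LRUCache<Schema, Schema>(16));
}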

Example 3: makeUpdatedSchema

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
private Schema makeUpdatedSchema(Schema schema) {
    final SchemaBuilder builder = SchemaUtil.copySchemaBasics(schema, SchemaBuilder.struct());

    for (Field field : schema.fields()) {
        builder.field(field.name(), field.schema());
    }

    if (topicField != null) {
        builder.field(topicField.name, topicField.optional ? Schema.OPTIONAL_STRING_SCHEMA : Schema.STRING_SCHEMA);
    }
    if (partitionField != null) {
        builder.field(partitionField.name, partitionField.optional ? Schema.OPTIONAL_INT32_SCHEMA : Schema.INT32_SCHEMA);
    }
    if (offsetField != null) {
        builder.field(offsetField.name, offsetField.optional ? Schema.OPTIONAL_INT64_SCHEMA : Schema.INT64_SCHEMA);
    }
    if (timestampField != null) {
        builder.field(timestampField.name, timestampField.optional ? OPTIONAL_TIMESTAMP_SCHEMA : Timestamp.SCHEMA);
    }
    if (staticField != null) {
        builder.field(staticField.name, staticField.optional ? Schema.OPTIONAL_STRING_SCHEMA : Schema.STRING_SCHEMA);
    }

    return builder.build();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 26, Source: InsertField.java

Example 4: buildDatasourceSchema

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
private Schema buildDatasourceSchema(String name, ArrayNode fields) {
  SchemaBuilder dataSourceBuilder = SchemaBuilder.struct().name(name);
  for (int i = 0; i < fields.size(); i++) {
    String fieldName = fields.get(i).get("name").textValue();
    String fieldType;
    if (fields.get(i).get("type").isArray()) {
      fieldType = fields.get(i).get("type").get(0).textValue();
    } else {
      fieldType = fields.get(i).get("type").textValue();
    }

    dataSourceBuilder.field(fieldName, getKsqlType(fieldType));
  }

  return dataSourceBuilder.build();
}
 
Developer: confluentinc, Project: ksql, Lines: 17, Source: MetastoreUtil.java

Example 5: valueSchema

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
public static Schema valueSchema(SObjectDescriptor descriptor) {
  String name = String.format("%s.%s", SObjectHelper.class.getPackage().getName(), descriptor.name());
  SchemaBuilder builder = SchemaBuilder.struct();
  builder.name(name);

  for (SObjectDescriptor.Field field : descriptor.fields()) {
    if (isTextArea(field)) {
      continue;
    }
    Schema schema = schema(field);
    builder.field(field.name(), schema);
  }

  builder.field(FIELD_OBJECT_TYPE, Schema.OPTIONAL_STRING_SCHEMA);
  builder.field(FIELD_EVENT_TYPE, Schema.OPTIONAL_STRING_SCHEMA);

  return builder.build();
}
 
Developer: jcustenborder, Project: kafka-connect-salesforce, Lines: 19, Source: SObjectHelper.java

Example 6: buildAggregateSchema

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
private Schema buildAggregateSchema(final Schema schema,
                                    final FunctionRegistry functionRegistry) {
  final SchemaBuilder schemaBuilder = SchemaBuilder.struct();
  final List<Field> fields = schema.fields();
  for (int i = 0; i < getRequiredColumnList().size(); i++) {
    schemaBuilder.field(fields.get(i).name(), fields.get(i).schema());
  }
  for (int aggFunctionVarSuffix = 0;
       aggFunctionVarSuffix < getFunctionList().size(); aggFunctionVarSuffix++) {
    Schema fieldSchema;
    String udafName = getFunctionList().get(aggFunctionVarSuffix).getName()
        .getSuffix();
    KsqlAggregateFunction aggregateFunction = functionRegistry.getAggregateFunction(udafName,
        getFunctionList()
            .get(aggFunctionVarSuffix).getArguments(), schema);
    fieldSchema = aggregateFunction.getReturnType();
    schemaBuilder.field(AggregateExpressionRewriter.AGGREGATE_FUNCTION_VARIABLE_PREFIX
        + aggFunctionVarSuffix, fieldSchema);
  }

  return schemaBuilder.build();
}
 
Developer: confluentinc, Project: ksql, Lines: 23, Source: AggregateNode.java

Example 7: createSelectValueMapperAndSchema

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
Pair<Schema, SelectValueMapper> createSelectValueMapperAndSchema(final List<Pair<String, Expression>> expressionPairList)  {
  try {
    final CodeGenRunner codeGenRunner = new CodeGenRunner(schema, functionRegistry);
    final SchemaBuilder schemaBuilder = SchemaBuilder.struct();
    final List<ExpressionMetadata> expressionEvaluators = new ArrayList<>();
    for (Pair<String, Expression> expressionPair : expressionPairList) {
      final ExpressionMetadata
          expressionEvaluator =
          codeGenRunner.buildCodeGenFromParseTree(expressionPair.getRight());
      schemaBuilder.field(expressionPair.getLeft(), expressionEvaluator.getExpressionType());
      expressionEvaluators.add(expressionEvaluator);
    }
    return new Pair<>(schemaBuilder.build(), new SelectValueMapper(genericRowValueTypeEnforcer,
        expressionPairList,
        expressionEvaluators));
  } catch (Exception e) {
    throw new KsqlException("Code generation failed for SelectValueMapper", e);
  }
}
 
Developer: confluentinc, Project: ksql, Lines: 20, Source: SchemaKStream.java

Example 8: getOrBuildSchema

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
private Schema getOrBuildSchema(Schema valueSchema) {
    Schema updatedSchema = schemaUpdateCache.get(valueSchema);
    if (updatedSchema != null)
        return updatedSchema;

    final SchemaBuilder builder;
    if (wholeValueCastType != null) {
        builder = SchemaUtil.copySchemaBasics(valueSchema, convertFieldType(wholeValueCastType));
    } else {
        builder = SchemaUtil.copySchemaBasics(valueSchema, SchemaBuilder.struct());
        for (Field field : valueSchema.fields()) {
            SchemaBuilder fieldBuilder =
                    convertFieldType(casts.containsKey(field.name()) ? casts.get(field.name()) : field.schema().type());
            if (field.schema().isOptional())
                fieldBuilder.optional();
            if (field.schema().defaultValue() != null)
                fieldBuilder.defaultValue(castValueToType(field.schema().defaultValue(), fieldBuilder.type()));
            builder.field(field.name(), fieldBuilder.build());
        }
    }

    if (valueSchema.isOptional())
        builder.optional();
    if (valueSchema.defaultValue() != null)
        builder.defaultValue(castValueToType(valueSchema.defaultValue(), builder.type()));

    updatedSchema = builder.build();
    schemaUpdateCache.put(valueSchema, updatedSchema);
    return updatedSchema;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 31, Source: Cast.java
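For context: in the upstream Cast transformation, casts is a map from field name to target Schema.Type parsed from the transform's spec configuration, and wholeValueCastType is set instead when the spec targets the entire value rather than individual fields; neither is shown in this excerpt.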

Example 9: convertFieldSchema

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
/**
 * Convert the schema for a field of a Struct with a primitive schema into the schema used for the flattened
 * version. Optionality and default values in the flattened version may need to be overridden to reflect the
 * optionality and default values of parent/ancestor schemas.
 * @param orig the original schema for the field
 * @param optional whether the new flattened field should be optional
 * @param defaultFromParent the default value either taken from the existing field or provided by the parent
 */
private Schema convertFieldSchema(Schema orig, boolean optional, Object defaultFromParent) {
    // Note that we don't use the schema translation cache here. It might save us a bit of effort, but we really
    // only care about caching top-level schema translations.

    final SchemaBuilder builder = SchemaUtil.copySchemaBasics(orig);
    if (optional)
        builder.optional();
    if (defaultFromParent != null)
        builder.defaultValue(defaultFromParent);
    return builder.build();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: Flatten.java
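To see the override rule in isolation, here is a standalone sketch using only public SchemaBuilder calls; the default value "fallback" is made up for illustration:

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;

public class FlattenDefaultsSketch {
    public static void main(String[] args) {
        // A required string field nested under an optional parent whose default
        // supplies "fallback": the flattened field becomes optional and inherits
        // that default, which is the shape convertFieldSchema above produces.
        Schema flattened = SchemaBuilder.string()
                .optional()
                .defaultValue("fallback")
                .build();
        System.out.println(flattened.isOptional());   // true
        System.out.println(flattened.defaultValue()); // fallback
    }
}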

Example 10: makeUpdatedSchema

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
private Schema makeUpdatedSchema(Schema schema) {
    final SchemaBuilder builder = SchemaUtil.copySchemaBasics(schema, SchemaBuilder.struct());
    for (Field field : schema.fields()) {
        if (filter(field.name())) {
            builder.field(renamed(field.name()), field.schema());
        }
    }
    return builder.build();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 10, Source: ReplaceField.java

Example 11: testOptionalFieldStruct

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
@Test
public void testOptionalFieldStruct() {
    final Flatten<SourceRecord> xform = new Flatten.Value<>();
    xform.configure(Collections.<String, String>emptyMap());

    SchemaBuilder builder = SchemaBuilder.struct();
    builder.field("opt_int32", Schema.OPTIONAL_INT32_SCHEMA);
    Schema supportedTypesSchema = builder.build();

    builder = SchemaBuilder.struct();
    builder.field("B", supportedTypesSchema);
    Schema oneLevelNestedSchema = builder.build();

    Struct supportedTypes = new Struct(supportedTypesSchema);
    supportedTypes.put("opt_int32", null);

    Struct oneLevelNestedStruct = new Struct(oneLevelNestedSchema);
    oneLevelNestedStruct.put("B", supportedTypes);

    SourceRecord transformed = xform.apply(new SourceRecord(null, null,
            "topic", 0,
            oneLevelNestedSchema, oneLevelNestedStruct));

    assertEquals(Schema.Type.STRUCT, transformed.valueSchema().type());
    Struct transformedStruct = (Struct) transformed.value();
    assertNull(transformedStruct.get("B.opt_int32"));
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 28, Source: FlattenTest.java

Example 12: testOptionalAndDefaultValuesNested

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
@Test
public void testOptionalAndDefaultValuesNested() {
    // If we have a nested structure where an entire sub-Struct is optional, all flattened fields generated from its
    // children should also be optional. Similarly, if the parent Struct has a default value, the default value for
    // the flattened field should be taken from the parent's default.

    final Flatten<SourceRecord> xform = new Flatten.Value<>();
    xform.configure(Collections.<String, String>emptyMap());

    SchemaBuilder builder = SchemaBuilder.struct().optional();
    builder.field("req_field", Schema.STRING_SCHEMA);
    builder.field("opt_field", SchemaBuilder.string().optional().defaultValue("child_default").build());
    Struct childDefaultValue = new Struct(builder);
    childDefaultValue.put("req_field", "req_default");
    builder.defaultValue(childDefaultValue);
    Schema schema = builder.build();
    // Intentionally leave this entire value empty since it is optional
    Struct value = new Struct(schema);

    SourceRecord transformed = xform.apply(new SourceRecord(null, null, "topic", 0, schema, value));

    assertNotNull(transformed);
    Schema transformedSchema = transformed.valueSchema();
    assertEquals(Schema.Type.STRUCT, transformedSchema.type());
    assertEquals(2, transformedSchema.fields().size());
    // Required field should pick up both being optional and the default value from the parent
    Schema transformedReqFieldSchema = SchemaBuilder.string().optional().defaultValue("req_default").build();
    assertEquals(transformedReqFieldSchema, transformedSchema.field("req_field").schema());
    // The optional field should still be optional but should have picked up the default value. However, since
    // the parent didn't specify the default explicitly, we should still be using the field's normal default
    Schema transformedOptFieldSchema = SchemaBuilder.string().optional().defaultValue("child_default").build();
    assertEquals(transformedOptFieldSchema, transformedSchema.field("opt_field").schema());
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 34, Source: FlattenTest.java

Example 13: generate

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
public Map.Entry<Schema, Schema> generate(File inputFile, List<String> keyFields) throws IOException {
  log.trace("generate() - inputFile = '{}', keyFields = {}", inputFile, keyFields);

  final Map<String, Schema.Type> fieldTypes;

  log.info("Determining fields from {}", inputFile);
  try (InputStream inputStream = new FileInputStream(inputFile)) {
    fieldTypes = determineFieldTypes(inputStream);
  }

  log.trace("generate() - Building key schema.");
  SchemaBuilder keySchemaBuilder = SchemaBuilder.struct()
      .name("com.github.jcustenborder.kafka.connect.model.Key");

  for (String keyFieldName : keyFields) {
    log.trace("generate() - Adding keyFieldName field '{}'", keyFieldName);
    if (fieldTypes.containsKey(keyFieldName)) {
      Schema.Type schemaType = fieldTypes.get(keyFieldName);
      addField(keySchemaBuilder, keyFieldName, schemaType);
    } else {
      log.warn("Key field '{}' is not in the data.", keyFieldName);
    }
  }

  log.trace("generate() - Building value schema.");
  SchemaBuilder valueSchemaBuilder = SchemaBuilder.struct()
      .name("com.github.jcustenborder.kafka.connect.model.Value");

  for (Map.Entry<String, Schema.Type> kvp : fieldTypes.entrySet()) {
    addField(valueSchemaBuilder, kvp.getKey(), kvp.getValue());
  }

  return new AbstractMap.SimpleEntry<>(keySchemaBuilder.build(), valueSchemaBuilder.build());
}
 
Developer: jcustenborder, Project: kafka-connect-spooldir, Lines: 35, Source: SchemaGenerator.java

Example 14: toTable

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
@Override
protected Table toTable(TableMetaData tb) {
    String tableName = tb.getTableName().getOriginalShortName().toLowerCase();
    String databaseName = tb.getTableName().getOriginalSchemaName().toLowerCase();
    Table table = new Table(databaseName, tableName);
    SchemaBuilder builder = SchemaBuilder.struct().name(table.schemaName());

    List<String> pkColumnNames = new ArrayList<>();
    setSchema(builder, tb, pkColumnNames);
    Schema schema = builder.build();

    table.setSchema(schema, pkColumnNames);
    return table;
}
 
Developer: rogers, Project: change-data-capture, Lines: 16, Source: TypedMutationMapper.java

Example 15: createEnumSchema

import org.apache.kafka.connect.data.SchemaBuilder; // import the package/class the method depends on
public Schema createEnumSchema() {
  // Enums are just converted to strings, original enum is preserved in parameters
  SchemaBuilder builder = SchemaBuilder.string().name("TestEnum");
  builder.parameter(CONNECT_ENUM_DOC_PROP, null);
  builder.parameter(AVRO_TYPE_ENUM, "TestEnum");
  for (String enumSymbol : new String[]{"foo", "bar", "baz"}) {
    builder.parameter(AVRO_TYPE_ENUM + "." + enumSymbol, enumSymbol);
  }
  return builder.build();
}
 
Developer: confluentinc, Project: kafka-connect-storage-cloud, Lines: 11, Source: DataWriterAvroTest.java
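The CONNECT_ENUM_DOC_PROP and AVRO_TYPE_ENUM constants referenced above appear to come from Confluent's Avro converter (io.confluent.connect.avro.AvroData): because Connect has no native enum type, each enum symbol is preserved as a parameter on a plain string schema so a converter can reconstruct the original Avro enum.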


Note: the org.apache.kafka.connect.data.SchemaBuilder.build examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by various developers; copyright remains with the original authors, and distribution and use are subject to each project's License. Do not reproduce without permission.