

Java Schema Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.connect.data.Schema. If you are wondering what the Schema class is for, or how to use it in practice, the curated code examples below may help.


The Schema class belongs to the org.apache.kafka.connect.data package. The sections below present 15 code examples of the Schema class, sorted by popularity by default.

Example 1: build

import org.apache.kafka.connect.data.Schema; // import the required package/class
public void build(String tableName, Schema keySchema, Schema valueSchema) {
  log.trace("build() - tableName = '{}'", tableName);
  final CassandraSchemaKey key = CassandraSchemaKey.of(this.config.keyspace, tableName);
  if (null != this.schemaLookup.getIfPresent(key)) {
    return;
  }
  if (null == keySchema || null == valueSchema) {
    log.warn("build() - Schemaless mode detected. Cannot generate DDL so assuming table is correct.");
    this.schemaLookup.put(key, DEFAULT);
    return; // without this return, alter()/create() below would be called with null schemas
  }


  final CassandraTableMetadata tableMetadata = this.session.tableMetadata(tableName);

  if (null != tableMetadata) {
    alter(key, tableName, keySchema, valueSchema, tableMetadata);
  } else {
    create(key, tableName, keySchema, valueSchema);
  }
}
 
Developer: jcustenborder, Project: kafka-connect-cassandra, Lines: 21, Source: ConnectSchemaBuilder.java

Example 2: createComplexPrimaryKey

import org.apache.kafka.connect.data.Schema; // import the required package/class
@Test
public void createComplexPrimaryKey() {
  final Schema keySchema = SchemaBuilder.struct()
      .field("username", Schema.STRING_SCHEMA)
      .field("companyID", Schema.INT64_SCHEMA)
      .build();
  final Schema valueSchema = SchemaBuilder.struct()
      .field("username", Schema.STRING_SCHEMA)
      .field("companyID", Schema.INT64_SCHEMA)
      .field("firstName", Schema.STRING_SCHEMA)
      .field("lastName", Schema.STRING_SCHEMA)
      .field("created", Timestamp.SCHEMA)
      .field("updated", Timestamp.SCHEMA)
      .build();

  this.builder.build("foo", keySchema, valueSchema);
  verify(this.session, times(1)).executeStatement(any(Create.class));
}
 
Developer: jcustenborder, Project: kafka-connect-cassandra, Lines: 19, Source: ConnectSchemaBuilderTest.java

Example 3: configure

import org.apache.kafka.connect.data.Schema; // import the required package/class
@Override
protected void configure(Map<String, Object> config) {
    String valueFieldName;
    if (config.get(FILE_READER_TEXT_FIELD_NAME_VALUE) == null ||
            config.get(FILE_READER_TEXT_FIELD_NAME_VALUE).toString().equals("")) {
        valueFieldName = FIELD_NAME_VALUE_DEFAULT;
    } else {
        valueFieldName = config.get(FILE_READER_TEXT_FIELD_NAME_VALUE).toString();
    }
    this.schema = SchemaBuilder.struct()
            .field(valueFieldName, Schema.STRING_SCHEMA)
            .build();

    if (config.get(FILE_READER_TEXT_ENCODING) == null ||
            config.get(FILE_READER_TEXT_ENCODING).toString().equals("")) {
        this.charset = Charset.defaultCharset();
    } else {
        this.charset = Charset.forName(config.get(FILE_READER_TEXT_ENCODING).toString());
    }
}
 
Developer: mmolimar, Project: kafka-connect-fs, Lines: 21, Source: TextFileReader.java
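The configure() method above repeats the same fallback twice: use the configured value unless the entry is missing, null, or an empty string. That check can be factored into a small helper. The sketch below uses only the JDK; the getOrDefaultString helper and the sample keys are illustrative, not part of the original reader:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigDefaults {
    // Returns the config value as a String, or the default when the
    // entry is missing, null, or an empty string.
    static String getOrDefaultString(Map<String, Object> config, String key, String def) {
        Object raw = config.get(key);
        if (raw == null || raw.toString().isEmpty()) {
            return def;
        }
        return raw.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> config = new HashMap<>();
        config.put("file_reader.text.encoding", "");
        // A missing entry and an empty entry both fall back to the default.
        System.out.println(getOrDefaultString(config, "file_reader.text.field_name.value", "value"));
        System.out.println(getOrDefaultString(config, "file_reader.text.encoding", "UTF-8"));
    }
}
```

With such a helper, configure() collapses to two one-liners, and any future config key gets the same null-or-empty semantics for free.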

Example 4: updateSchemaOfStruct

import org.apache.kafka.connect.data.Schema; // import the required package/class
@Test
public void updateSchemaOfStruct() {
    final String fieldName1 = "f1";
    final String fieldName2 = "f2";
    final String fieldValue1 = "value1";
    final int fieldValue2 = 1;
    final Schema schema = SchemaBuilder.struct()
                                  .name("my.orig.SchemaDefn")
                                  .field(fieldName1, Schema.STRING_SCHEMA)
                                  .field(fieldName2, Schema.INT32_SCHEMA)
                                  .build();
    final Struct value = new Struct(schema).put(fieldName1, fieldValue1).put(fieldName2, fieldValue2);

    final Schema newSchema = SchemaBuilder.struct()
                                  .name("my.updated.SchemaDefn")
                                  .field(fieldName1, Schema.STRING_SCHEMA)
                                  .field(fieldName2, Schema.INT32_SCHEMA)
                                  .build();

    Struct newValue = (Struct) SetSchemaMetadata.updateSchemaIn(value, newSchema);
    assertMatchingSchema(newValue, newSchema);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 23, Source: SetSchemaMetadataTest.java

Example 5: constructAvroTable

import org.apache.kafka.connect.data.Schema; // import the required package/class
private Table constructAvroTable(String database, String tableName, Schema schema, Partitioner partitioner)
    throws HiveMetaStoreException {
  Table table = newTable(database, tableName);
  table.setTableType(TableType.EXTERNAL_TABLE);
  table.getParameters().put("EXTERNAL", "TRUE");
  String tablePath = FileUtils.hiveDirectoryName(url, topicsDir, tableName);
  table.setDataLocation(new Path(tablePath));
  table.setSerializationLib(avroSerde);
  try {
    table.setInputFormatClass(avroInputFormat);
    table.setOutputFormatClass(avroOutputFormat);
  } catch (HiveException e) {
    throw new HiveMetaStoreException("Cannot find input/output format:", e);
  }
  List<FieldSchema> columns = HiveSchemaConverter.convertSchema(schema);
  table.setFields(columns);
  table.setPartCols(partitioner.partitionFields());
  table.getParameters().put(AVRO_SCHEMA_LITERAL, avroData.fromConnectSchema(schema).toString());
  return table;
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 21, Source: AvroHiveUtil.java

Example 6: convertRecord

import org.apache.kafka.connect.data.Schema; // import the required package/class
public static DeletableRecord convertRecord(SinkRecord record, boolean ignoreSchema, String versionType) {
  final Schema schema;
  final Object value;
  if (!ignoreSchema) {
    schema = preProcessSchema(record.valueSchema());
    value = preProcessValue(record.value(), record.valueSchema(), schema);
  } else {
    schema = record.valueSchema();
    value = record.value();
  }

  final String payload = new String(JSON_CONVERTER.fromConnectData(record.topic(), schema, value), StandardCharsets.UTF_8);

  if (StringUtils.isNotBlank(payload)) {
    DeleteEvent deleteEvent = GSON.fromJson(payload, DeleteEvent.class);
    return new DeletableRecord(new Key(deleteEvent.getIndex(), deleteEvent.getType(), deleteEvent.getId()), deleteEvent.getVersion(), versionType);
  } else {
    return null;
  }

}
 
Developer: chaokunyang, Project: jkes, Lines: 22, Source: DataConverter.java

Example 7: makeUpdatedSchema

import org.apache.kafka.connect.data.Schema; // import the required package/class
private Schema makeUpdatedSchema(Schema schema) {
    final SchemaBuilder builder = SchemaUtil.copySchemaBasics(schema, SchemaBuilder.struct());

    for (Field field : schema.fields()) {
        builder.field(field.name(), field.schema());
    }

    if (topicField != null) {
        builder.field(topicField.name, topicField.optional ? Schema.OPTIONAL_STRING_SCHEMA : Schema.STRING_SCHEMA);
    }
    if (partitionField != null) {
        builder.field(partitionField.name, partitionField.optional ? Schema.OPTIONAL_INT32_SCHEMA : Schema.INT32_SCHEMA);
    }
    if (offsetField != null) {
        builder.field(offsetField.name, offsetField.optional ? Schema.OPTIONAL_INT64_SCHEMA : Schema.INT64_SCHEMA);
    }
    if (timestampField != null) {
        builder.field(timestampField.name, timestampField.optional ? OPTIONAL_TIMESTAMP_SCHEMA : Timestamp.SCHEMA);
    }
    if (staticField != null) {
        builder.field(staticField.name, staticField.optional ? Schema.OPTIONAL_STRING_SCHEMA : Schema.STRING_SCHEMA);
    }

    return builder.build();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 26, Source: InsertField.java

Example 8: shouldChangeSchema

import org.apache.kafka.connect.data.Schema; // import the required package/class
public static boolean shouldChangeSchema(Schema valueSchema, Schema currentSchema, Compatibility compatibility) {
  if (currentSchema == null) {
    return true;
  }
  if ((valueSchema.version() == null || currentSchema.version() == null) && compatibility != Compatibility.NONE) {
    throw new SchemaProjectorException("Schema version required for " + compatibility.toString() + " compatibility");
  }
  switch (compatibility) {
    case BACKWARD:
    case FULL:
      return (valueSchema.version()).compareTo(currentSchema.version()) > 0;
    case FORWARD:
      return (valueSchema.version()).compareTo(currentSchema.version()) < 0;
    default:
      return !valueSchema.equals(currentSchema);
  }
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 18, Source: SchemaUtils.java
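The switch in shouldChangeSchema encodes the compatibility rules as plain Integer version comparisons: BACKWARD and FULL accept only newer schema versions, FORWARD accepts only older ones, and NONE reacts to any difference. The decision logic can be illustrated without the Connect API. The stand-alone sketch below mirrors (but is not) the original; the Compat enum and shouldChange method are hypothetical names, and schema equality is simplified to version equality:

```java
public class CompatCheck {
    enum Compat { NONE, BACKWARD, FORWARD, FULL }

    // Mirrors the version logic of shouldChangeSchema: BACKWARD/FULL switch
    // schemas only when the incoming version is newer, FORWARD only when it
    // is older, and NONE on any version difference.
    static boolean shouldChange(Integer incoming, Integer current, Compat compat) {
        if (current == null) {
            return true; // no schema registered yet, always adopt the incoming one
        }
        switch (compat) {
            case BACKWARD:
            case FULL:
                return incoming.compareTo(current) > 0;
            case FORWARD:
                return incoming.compareTo(current) < 0;
            default:
                return !incoming.equals(current);
        }
    }

    public static void main(String[] args) {
        System.out.println(shouldChange(2, 1, Compat.BACKWARD)); // newer version triggers a change
        System.out.println(shouldChange(1, 2, Compat.BACKWARD)); // older version is ignored
        System.out.println(shouldChange(1, 2, Compat.FORWARD));  // older version triggers a change
    }
}
```

Note that the original additionally throws a SchemaProjectorException when either version is null under any mode other than NONE; that guard is what the sketch's null check on current stands in for.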

Example 9: applyWithSchema

import org.apache.kafka.connect.data.Schema; // import the required package/class
private R applyWithSchema(R record) {
    Schema valueSchema = operatingSchema(record);
    Schema updatedSchema = getOrBuildSchema(valueSchema);

    // Whole-record casting
    if (wholeValueCastType != null)
        return newRecord(record, updatedSchema, castValueToType(operatingValue(record), wholeValueCastType));

    // Casting within a struct
    final Struct value = requireStruct(operatingValue(record), PURPOSE);

    final Struct updatedValue = new Struct(updatedSchema);
    for (Field field : value.schema().fields()) {
        final Object origFieldValue = value.get(field);
        final Schema.Type targetType = casts.get(field.name());
        final Object newFieldValue = targetType != null ? castValueToType(origFieldValue, targetType) : origFieldValue;
        updatedValue.put(updatedSchema.field(field.name()), newFieldValue);
    }
    return newRecord(record, updatedSchema, updatedValue);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: Cast.java

Example 10: testCacheSchemaToConnectConversion

import org.apache.kafka.connect.data.Schema; // import the required package/class
@Test
public void testCacheSchemaToConnectConversion() {
    Cache<JsonNode, Schema> cache = Whitebox.getInternalState(converter, "toConnectSchemaCache");
    assertEquals(0, cache.size());

    converter.toConnectData(TOPIC, "{ \"schema\": { \"type\": \"boolean\" }, \"payload\": true }".getBytes());
    assertEquals(1, cache.size());

    converter.toConnectData(TOPIC, "{ \"schema\": { \"type\": \"boolean\" }, \"payload\": true }".getBytes());
    assertEquals(1, cache.size());

    // Different schema should also get cached
    converter.toConnectData(TOPIC, "{ \"schema\": { \"type\": \"boolean\", \"optional\": true }, \"payload\": true }".getBytes());
    assertEquals(2, cache.size());

    // Even equivalent, but different JSON encoding of schema, should get different cache entry
    converter.toConnectData(TOPIC, "{ \"schema\": { \"type\": \"boolean\", \"optional\": false }, \"payload\": true }".getBytes());
    assertEquals(3, cache.size());
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: JsonConverterTest.java

Example 11: apply

import org.apache.kafka.connect.data.Schema; // import the required package/class
@Override
public R apply(R record) {
    final Schema schema = operatingSchema(record);
    requireSchema(schema, "updating schema metadata");
    final boolean isArray = schema.type() == Schema.Type.ARRAY;
    final boolean isMap = schema.type() == Schema.Type.MAP;
    final Schema updatedSchema = new ConnectSchema(
            schema.type(),
            schema.isOptional(),
            schema.defaultValue(),
            schemaName != null ? schemaName : schema.name(),
            schemaVersion != null ? schemaVersion : schema.version(),
            schema.doc(),
            schema.parameters(),
            schema.fields(),
            isMap ? schema.keySchema() : null,
            isMap || isArray ? schema.valueSchema() : null
    );
    return newRecord(record, updatedSchema);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: SetSchemaMetadata.java

Example 12: putConnectorStateNonRetriableFailure

import org.apache.kafka.connect.data.Schema; // import the required package/class
@Test
public void putConnectorStateNonRetriableFailure() {
    KafkaBasedLog<String, byte[]> kafkaBasedLog = mock(KafkaBasedLog.class);
    Converter converter = mock(Converter.class);
    KafkaStatusBackingStore store = new KafkaStatusBackingStore(new MockTime(), converter, STATUS_TOPIC, kafkaBasedLog);

    byte[] value = new byte[0];
    expect(converter.fromConnectData(eq(STATUS_TOPIC), anyObject(Schema.class), anyObject(Struct.class)))
            .andStubReturn(value);

    final Capture<Callback> callbackCapture = newCapture();
    kafkaBasedLog.send(eq("status-connector-conn"), eq(value), capture(callbackCapture));
    expectLastCall()
            .andAnswer(new IAnswer<Void>() {
                @Override
                public Void answer() throws Throwable {
                    callbackCapture.getValue().onCompletion(null, new UnknownServerException());
                    return null;
                }
            });
    replayAll();

    // the error is logged and ignored
    ConnectorStatus status = new ConnectorStatus(CONNECTOR, ConnectorStatus.State.RUNNING, WORKER_ID, 0);
    store.put(status);

    // state is not visible until read back from the log
    assertEquals(null, store.get(CONNECTOR));

    verifyAll();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 32, Source: KafkaStatusBackingStoreTest.java

Example 13: castWholeRecordKeyWithSchema

import org.apache.kafka.connect.data.Schema; // import the required package/class
@Test
public void castWholeRecordKeyWithSchema() {
    final Cast<SourceRecord> xform = new Cast.Key<>();
    xform.configure(Collections.singletonMap(Cast.SPEC_CONFIG, "int8"));
    SourceRecord transformed = xform.apply(new SourceRecord(null, null, "topic", 0,
            Schema.INT32_SCHEMA, 42, Schema.STRING_SCHEMA, "bogus"));

    assertEquals(Schema.Type.INT8, transformed.keySchema().type());
    assertEquals((byte) 42, transformed.key());
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 11, Source: CastTest.java

Example 14: convert

import org.apache.kafka.connect.data.Schema; // import the required package/class
public SourceRecord convert(String topic, String tag, Long timestamp, EventEntry entry) {
    if (config.isFluentdSchemasEnable()) {
        SchemaAndValue schemaAndValue = convert(topic, entry);
        return new SourceRecord(
                null,
                null,
                topic,
                null,
                Schema.STRING_SCHEMA,
                tag,
                schemaAndValue.schema(),
                schemaAndValue.value(),
                timestamp
        );
    } else {
        Object record;
        try {
            record = new ObjectMapper().readValue(entry.getRecord().toJson(), LinkedHashMap.class);
        } catch (IOException e) {
            record = entry.getRecord().toJson();
        }
        return new SourceRecord(
                null,
                null,
                topic,
                null,
                null,
                null,
                null,
                record,
                timestamp
        );
    }
}
 
Developer: fluent, Project: kafka-connect-fluentd, Lines: 35, Source: MessagePackConverver.java

Example 15: putConnectorState

import org.apache.kafka.connect.data.Schema; // import the required package/class
@Test
public void putConnectorState() {
    KafkaBasedLog<String, byte[]> kafkaBasedLog = mock(KafkaBasedLog.class);
    Converter converter = mock(Converter.class);
    KafkaStatusBackingStore store = new KafkaStatusBackingStore(new MockTime(), converter, STATUS_TOPIC, kafkaBasedLog);

    byte[] value = new byte[0];
    expect(converter.fromConnectData(eq(STATUS_TOPIC), anyObject(Schema.class), anyObject(Struct.class)))
            .andStubReturn(value);

    final Capture<Callback> callbackCapture = newCapture();
    kafkaBasedLog.send(eq("status-connector-conn"), eq(value), capture(callbackCapture));
    expectLastCall()
            .andAnswer(new IAnswer<Void>() {
                @Override
                public Void answer() throws Throwable {
                    callbackCapture.getValue().onCompletion(null, null);
                    return null;
                }
            });
    replayAll();

    ConnectorStatus status = new ConnectorStatus(CONNECTOR, ConnectorStatus.State.RUNNING, WORKER_ID, 0);
    store.put(status);

    // state is not visible until read back from the log
    assertEquals(null, store.get(CONNECTOR));

    verifyAll();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 31, Source: KafkaStatusBackingStoreTest.java


Note: the org.apache.kafka.connect.data.Schema class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their original authors, who retain copyright of the source code. Please consult each project's license before redistributing or using the code; do not republish without permission.