Java AvroData Class Code Examples

This article collects typical usage examples of the Java class io.confluent.connect.avro.AvroData. If you have been wondering what the AvroData class does, how to use it, or where to find real-world examples of it, the curated class code examples below should help.


The AvroData class belongs to the io.confluent.connect.avro package. Fifteen code examples of the class are shown below, sorted by popularity by default.
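Before turning to the examples, here is a minimal sketch of the class's core job: converting between Kafka Connect schemas/values and Avro schemas/values. Everything in this block (class name, schema, field names, cache size) is illustrative rather than taken from the projects cited below.

import io.confluent.connect.avro.AvroData;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class AvroDataSketch {
  public static void main(String[] args) {
    // The constructor argument is the size of the schema-conversion cache.
    AvroData avroData = new AvroData(20);

    // A made-up Connect schema and matching value.
    Schema connectSchema = SchemaBuilder.struct()
        .name("example")
        .field("f1", Schema.STRING_SCHEMA)
        .build();
    Struct connectValue = new Struct(connectSchema).put("f1", "hello");

    // Connect -> Avro.
    org.apache.avro.Schema avroSchema = avroData.fromConnectSchema(connectSchema);
    Object avroValue = avroData.fromConnectData(connectSchema, connectValue);

    // Avro -> Connect round trip; prints a Connect SchemaAndValue.
    System.out.println(avroData.toConnectData(avroSchema, avroValue));
  }
}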

Example 1: putRecords

import io.confluent.connect.avro.AvroData; // import the required package/class
public static byte[] putRecords(Collection<SinkRecord> records, AvroData avroData) throws IOException {
  final DataFileWriter<Object> writer = new DataFileWriter<>(new GenericDatumWriter<>());
  ByteArrayOutputStream out = new ByteArrayOutputStream();
  Schema schema = null;
  for (SinkRecord record : records) {
    if (schema == null) {
      schema = record.valueSchema();
      org.apache.avro.Schema avroSchema = avroData.fromConnectSchema(schema);
      writer.create(avroSchema, out);
    }
    Object value = avroData.fromConnectData(schema, record.value());
    // AvroData wraps primitive types so their schema can be included. We need to unwrap
    // NonRecordContainers to just their value to properly handle these types
    if (value instanceof NonRecordContainer) {
      value = ((NonRecordContainer) value).getValue();
    }
    writer.append(value);
  }
  writer.flush();
  return out.toByteArray();
}
 
Developer: confluentinc, Project: kafka-connect-storage-cloud, Lines: 22, Source: AvroUtils.java
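A hedged usage sketch for the helper above: serializing a one-record batch into in-memory Avro container-file bytes. The topic name, schema, field, and offset are invented for illustration.

// Assumed imports: org.apache.kafka.connect.data.{Schema, SchemaBuilder, Struct},
// org.apache.kafka.connect.sink.SinkRecord, java.util.Collections
Schema valueSchema = SchemaBuilder.struct()
    .name("record")
    .field("f1", Schema.STRING_SCHEMA)
    .build();
Struct value = new Struct(valueSchema).put("f1", "hello");

// SinkRecord(topic, partition, keySchema, key, valueSchema, value, offset)
SinkRecord record = new SinkRecord("my-topic", 0, null, null, valueSchema, value, 0L);

byte[] avroBytes = AvroUtils.putRecords(Collections.singletonList(record), new AvroData(20));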

Example 2: testRetrieveSchema

import io.confluent.connect.avro.AvroData; // import the required package/class
@Test
public void testRetrieveSchema() throws Exception {
  final TableId table = TableId.of("test", "kafka_topic");
  final String testTopic = "kafka-topic";
  final String testSubject = "kafka-topic-value";
  final String testAvroSchemaString =
      "{\"type\": \"record\", "
      + "\"name\": \"testrecord\", "
      + "\"fields\": [{\"name\": \"f1\", \"type\": \"string\"}]}";
  final SchemaMetadata testSchemaMetadata = new SchemaMetadata(1, 1, testAvroSchemaString);

  SchemaRegistryClient schemaRegistryClient = mock(SchemaRegistryClient.class);
  when(schemaRegistryClient.getLatestSchemaMetadata(testSubject)).thenReturn(testSchemaMetadata);

  SchemaRegistrySchemaRetriever testSchemaRetriever = new SchemaRegistrySchemaRetriever(
      schemaRegistryClient,
      new AvroData(0)
  );

  Schema expectedKafkaConnectSchema =
      SchemaBuilder.struct().field("f1", Schema.STRING_SCHEMA).name("testrecord").build();

  assertEquals(expectedKafkaConnectSchema, testSchemaRetriever.retrieveSchema(table, testTopic));
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 25, Source: SchemaRegistrySchemaRetrieverTest.java
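The test mocks the registry client, so for context here is a plausible reconstruction of what the retrieval step does internally (not copied from the project): fetch the latest schema registered under the "<topic>-value" subject, parse it as Avro, and convert it with AvroData.

// Reconstruction for illustration only.
SchemaMetadata latest = schemaRegistryClient.getLatestSchemaMetadata(testTopic + "-value");
org.apache.avro.Schema avroSchema = new org.apache.avro.Schema.Parser().parse(latest.getSchema());
Schema connectSchema = avroData.toConnectSchema(avroSchema);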

Example 3: TopicPartitionWriter

import io.confluent.connect.avro.AvroData; // import the required package/class
public TopicPartitionWriter(
    TopicPartition tp,
    Storage storage,
    RecordWriterProvider writerProvider,
    Partitioner partitioner,
    HdfsSinkConnectorConfig connectorConfig,
    SinkTaskContext context,
    AvroData avroData) {
  this(tp, storage, writerProvider, partitioner, connectorConfig, context, avroData, null, null, null, null, null);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 11, Source: TopicPartitionWriter.java

Example 4: configureConnector

import io.confluent.connect.avro.AvroData; // import the required package/class
protected void configureConnector() {
  connectorConfig = new HdfsSinkConnectorConfig(connectorProps);
  topicsDir = connectorConfig.getString(HdfsSinkConnectorConfig.TOPICS_DIR_CONFIG);
  logsDir = connectorConfig.getString(HdfsSinkConnectorConfig.LOGS_DIR_CONFIG);
  int schemaCacheSize = connectorConfig.getInt(HdfsSinkConnectorConfig.SCHEMA_CACHE_SIZE_CONFIG);
  avroData = new AvroData(schemaCacheSize);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 8, Source: HdfsSinkConnectorTestBase.java

Example 5: getRecordWriter

import io.confluent.connect.avro.AvroData; // import the required package/class
@Override
public RecordWriter<SinkRecord> getRecordWriter(
    Configuration conf, final String fileName, SinkRecord record, final AvroData avroData)
    throws IOException {

  final Map<String, List<Object>> data = Data.getData();

  if (!data.containsKey(fileName)) {
    data.put(fileName, new LinkedList<>());
  }

  return new MemoryRecordWriter(fileName);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 14, Source: MemoryRecordWriterProvider.java

Example 6: calcByteSize

import io.confluent.connect.avro.AvroData; // import the required package/class
private int calcByteSize(List<SinkRecord> sinkRecords) throws IOException {
  ByteArrayOutputStream baos = new ByteArrayOutputStream();
  DataFileWriter<Object> writer = new DataFileWriter<>(new GenericDatumWriter<>());
  AvroData avroData = new AvroData(1);
  boolean writerInit = false;
  for (SinkRecord sinkRecord : sinkRecords) {
    if (!writerInit) {
      writer.create(avroData.fromConnectSchema(sinkRecord.valueSchema()), baos);
      writerInit = true;
    }
    writer.append(avroData.fromConnectData(sinkRecord.valueSchema(), sinkRecord.value()));
  }
  // Flush buffered blocks so the byte count reflects everything appended.
  writer.flush();
  return baos.size();
}
 
Developer: confluentinc, Project: kafka-connect-storage-cloud, Lines: 15, Source: S3SinkTaskTest.java
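As a hedged usage sketch, a test could feed calcByteSize a few hand-built records to learn their Avro-encoded footprint; the schema, field, and values below are invented.

// Assumed imports: org.apache.kafka.connect.data.{Schema, SchemaBuilder, Struct},
// org.apache.kafka.connect.sink.SinkRecord, java.util.{ArrayList, List}
Schema schema = SchemaBuilder.struct()
    .name("record")
    .field("f1", Schema.STRING_SCHEMA)
    .build();
List<SinkRecord> records = new ArrayList<>();
for (int offset = 0; offset < 3; offset++) {
  Struct value = new Struct(schema).put("f1", "value-" + offset);
  records.add(new SinkRecord("topic", 0, null, null, schema, value, offset));
}
int encodedSize = calcByteSize(records); // bytes of the resulting Avro container file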

Example 7: configure

import io.confluent.connect.avro.AvroData; // import the required package/class
@Override
public void configure(Map<String, String> properties) {
  SchemaRegistrySchemaRetrieverConfig config =
      new SchemaRegistrySchemaRetrieverConfig(properties);
  schemaRegistryClient =
      new CachedSchemaRegistryClient(config.getString(config.LOCATION_CONFIG), 0);
  avroData = new AvroData(config.getInt(config.AVRO_DATA_CACHE_SIZE_CONFIG));
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 9, Source: SchemaRegistrySchemaRetriever.java
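For orientation, a caller would hand configure a properties map whose keys match the retriever's config definitions. The sketch below is hypothetical: the literal key strings and the no-argument constructor are assumptions, since the real keys live in the SchemaRegistrySchemaRetrieverConfig constants (LOCATION_CONFIG and AVRO_DATA_CACHE_SIZE_CONFIG) referenced above.

// Hypothetical configuration sketch; key strings are placeholders for the
// constants defined in SchemaRegistrySchemaRetrieverConfig.
Map<String, String> properties = new HashMap<>();
properties.put("schemaRetriever.location", "http://localhost:8081"); // LOCATION_CONFIG
properties.put("avroDataCacheSize", "100");                          // AVRO_DATA_CACHE_SIZE_CONFIG

SchemaRegistrySchemaRetriever retriever = new SchemaRegistrySchemaRetriever(); // assumed no-arg constructor
retriever.configure(properties);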

Example 8: GenericRecordToStruct

import io.confluent.connect.avro.AvroData; // import the required package/class
public GenericRecordToStruct() {
  this.avroData = new AvroData(CACHE_SIZE);
}
 
Developer: mmolimar, Project: kafka-connect-fs, Lines: 4, Source: AvroFileReader.java

Example 9: HiveUtil

import io.confluent.connect.avro.AvroData; // import the required package/class
public HiveUtil(HdfsSinkConnectorConfig connectorConfig, AvroData avroData, HiveMetaStore hiveMetaStore) {
  this.url = connectorConfig.getString(HdfsSinkConnectorConfig.HDFS_URL_CONFIG);
  this.topicsDir = connectorConfig.getString(HdfsSinkConnectorConfig.TOPICS_DIR_CONFIG);
  this.avroData = avroData;
  this.hiveMetaStore = hiveMetaStore;
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 7, Source: HiveUtil.java

Example 10: AvroHiveUtil

import io.confluent.connect.avro.AvroData; // import the required package/class
public AvroHiveUtil(HdfsSinkConnectorConfig connectorConfig, AvroData avroData, HiveMetaStore hiveMetaStore) {
  super(connectorConfig, avroData, hiveMetaStore);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 4, Source: AvroHiveUtil.java

Example 11: AvroFileReader

import io.confluent.connect.avro.AvroData; // import the required package/class
public AvroFileReader(AvroData avroData) {
  this.avroData = avroData;
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 4, Source: AvroFileReader.java

Example 12: getSchemaFileReader

import io.confluent.connect.avro.AvroData; // import the required package/class
public SchemaFileReader getSchemaFileReader(AvroData avroData) {
  return new AvroFileReader(avroData);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 4, Source: AvroFormat.java

Example 13: getHiveUtil

import io.confluent.connect.avro.AvroData; // import the required package/class
public HiveUtil getHiveUtil(HdfsSinkConnectorConfig config, AvroData avroData, HiveMetaStore hiveMetaStore) {
  return new AvroHiveUtil(config, avroData, hiveMetaStore);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 4, Source: AvroFormat.java

Example 14: getAvroData

import io.confluent.connect.avro.AvroData; // import the required package/class
public AvroData getAvroData() {
  return avroData;
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 4, Source: HdfsSinkTask.java

Example 15: ParquetHiveUtil

import io.confluent.connect.avro.AvroData; // import the required package/class
public ParquetHiveUtil(HdfsSinkConnectorConfig connectorConfig, AvroData avroData, HiveMetaStore hiveMetaStore) {
  super(connectorConfig, avroData, hiveMetaStore);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 4, Source: ParquetHiveUtil.java


Note: The io.confluent.connect.avro.AvroData class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their authors; copyright in the source code remains with the original authors, and distribution and use are subject to each project's License. Do not reproduce without permission.