

Java GenericRecord Class Code Examples

This article collects typical usage examples of the Java class org.apache.avro.generic.GenericRecord. If you are unsure what GenericRecord is for, or how to use it in Java, the curated examples below should help.


The GenericRecord class belongs to the org.apache.avro.generic package. 15 code examples of the GenericRecord class are shown below, sorted by popularity by default.
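Before the examples, a minimal self-contained sketch of how a GenericRecord is typically created and read. The User schema here is invented for illustration and does not come from any of the projects below:

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class GenericRecordDemo {
    public static void main(String[] args) {
        // Hypothetical schema, defined inline for illustration only.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
            + "{\"name\":\"id\",\"type\":\"int\"},"
            + "{\"name\":\"name\",\"type\":\"string\"}]}");

        // GenericData.Record is the standard concrete GenericRecord implementation.
        GenericRecord record = new GenericData.Record(schema);
        record.put("id", 1);
        record.put("name", "alice");

        // Fields are read back by name (or by position with get(int)).
        System.out.println(record.get("id") + " / " + record.get("name"));
    }
}
```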

Example 1: applyMapDiff

import org.apache.avro.generic.GenericRecord; // import the required package/class
private static void applyMapDiff(Schema.Field field, GenericRecord avroObj, GenericRecord fieldsValue, Map<Object, Object> modifiedObj, Object key) throws IOException {
    Map<String, Object> changedKeys = ((MapDiff) fieldsValue).getChangedKeys();

    for (String changedKey : changedKeys.keySet()) {
        Class<?> clazz = changedKeys.get(changedKey).getClass();

        if (clazz.isAssignableFrom(PrimitiveDiff.class)) {
            AvroDiffPrimitive.applyPrimitiveDiff(field, avroObj, changedKeys.get(changedKey), changedKeys, changedKey);
            modifiedObj.put(key, changedKeys);
        } else if (clazz.isAssignableFrom(MapDiff.class)) {
            AvroDiffMap.applyMapDiff(field, avroObj, (GenericRecord) changedKeys.get(changedKey), Maps.newHashMap(changedKeys), changedKey);
        } else if (clazz.isAssignableFrom(ArrayDiff.class)) {
            AvroDiffArray.applyArrayDiff(field, avroObj, (GenericRecord) changedKeys.get(changedKey), null);
        } else if (clazz.isAssignableFrom(RecordDiff.class)) {
            Object avroField = ((Map) avroObj.get(field.pos())).get(key);
            GenericRecord genericRecord = AvroDiff.applyDiff((GenericRecord) ((Map) avroField).get(changedKey), (RecordDiff) changedKeys.get(changedKey),
                    ((GenericRecord) ((Map) avroField).get(changedKey)).getSchema());
            ((Map) avroField).put(changedKey, genericRecord);
            modifiedObj.put(key, avroField);
        }
    }
}
 
Developer ID: atlascon, Project: avro-diff, Lines: 23, Source: AvroDiffMap.java

Example 2: write

import org.apache.avro.generic.GenericRecord; // import the required package/class
@Override
public Object write( final Object obj ) throws IOException{
  GenericRecord record = new GenericData.Record( avroSchema );
  if( ! ( obj instanceof Map ) ){
    return record;
  }

  Map<Object,Object> mapObj = (Map<Object,Object>)obj;

  for( KeyAndFormatter childFormatter : childContainer ){
    childFormatter.clear();
    record.put( childFormatter.getName() , childFormatter.get( mapObj ) );
  }

  return record;
}
 
Developer ID: yahoojapan, Project: dataplatform-schema-lib, Lines: 17, Source: AvroRecordFormatter.java

Example 3: processRecordY

import org.apache.avro.generic.GenericRecord; // import the required package/class
public static List<Object> processRecordY(CSVPrinter printer, GenericRecord record, List<Column> columns)
		throws IOException {
	List<Object> r = new ArrayList<>();
	columns.forEach(c -> {
		try {
			r.add(record.get(c.getField().name()));
		} catch (Exception e) {

			try {
				r.add(c.getDefaultValue());
			} catch (Exception e2) {
				r.add("NULL");
			}
		}
	});

	printer.printRecord(r);
	printer.flush();
	return r;
}
 
Developer ID: cslbehring, Project: public_hdf_processors_ConvertAvroToCSV, Lines: 21, Source: CsvProcessor.java

Example 4: testBlobCompressedAvroImportInline

import org.apache.avro.generic.GenericRecord; // import the required package/class
/**
 * Import blob data that is smaller than inline lob limit and compress with
 * deflate codec. Blob data should be encoded and saved as Avro bytes.
 * @throws IOException
 * @throws SQLException
 */
public void testBlobCompressedAvroImportInline()
    throws IOException, SQLException {
  String [] types = { getBlobType() };
  String expectedVal = "This is short BLOB data";
  String [] vals = { getBlobInsertStr(expectedVal) };

  createTableWithColTypes(types, vals);

  runImport(getArgv("--compression-codec", CodecMap.DEFLATE));

  Path outputFile = new Path(getTablePath(), "part-m-00000.avro");
  DataFileReader<GenericRecord> reader = read(outputFile);
  GenericRecord record = reader.next();

  // Verify that the data block of the Avro file is compressed with deflate
  // codec.
  assertEquals(CodecMap.DEFLATE,
      reader.getMetaString(DataFileConstants.CODEC));

  // Verify that all columns are imported correctly.
  ByteBuffer buf = (ByteBuffer) record.get(getColName(0));
  String returnVal = new String(buf.array());

  assertEquals(getColName(0), expectedVal, returnVal);
}
 
Developer ID: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 32, Source: LobAvroImportTestCase.java

Example 5: shouldCreateAndApplyRecordDiff

import org.apache.avro.generic.GenericRecord; // import the required package/class
@Test
public void shouldCreateAndApplyRecordDiff() throws IOException {
    map1.put("a", 3f);
    map1.put("c", true);
    map2.put("c", true);
    map2.put("b", 5L);

    list1.add("asf");
    list2.add("ddd");

    RecordDiff diff = AvroDiff.createDiff(recordSpecificRecord1, recordSpecificRecord2, recordSpecificRecord1.getSchema());
    GenericRecord modifiedRecord = AvroDiff.applyDiff(recordSpecificRecord1, diff, recordSpecificRecord1.getSchema());

    Assert.assertEquals(modifiedRecord, recordSpecificRecord2);
    Assert.assertNotEquals(modifiedRecord, recordSpecificRecord1);
}
 
Developer ID: atlascon, Project: avro-diff, Lines: 17, Source: RecordDiffTest.java

Example 6: createParquetFile

import org.apache.avro.generic.GenericRecord; // import the required package/class
/**
 * Create a data file that gets exported to the db.
 * @param fileNum the number of the file (for multi-file export)
 * @param numRecords how many records to write to the file.
 */
protected void createParquetFile(int fileNum, int numRecords,
    ColumnGenerator... extraCols) throws IOException {

  String uri = "dataset:file:" + getTablePath();
  Schema schema = buildSchema(extraCols);
  DatasetDescriptor descriptor = new DatasetDescriptor.Builder()
    .schema(schema)
    .format(Formats.PARQUET)
    .build();
  Dataset dataset = Datasets.create(uri, descriptor);
  DatasetWriter writer = dataset.newWriter();
  try {
    for (int i = 0; i < numRecords; i++) {
      GenericRecord record = new GenericData.Record(schema);
      record.put("id", i);
      record.put("msg", getMsgPrefix() + i);
      addExtraColumns(record, i, extraCols);
      writer.write(record);
    }
  } finally {
    writer.close();
  }
}
 
Developer ID: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 29, Source: TestParquetExport.java

Example 7: fullDataPollMessage

import org.apache.avro.generic.GenericRecord; // import the required package/class
public static ControlMessage fullDataPollMessage(GenericRecord record, String topologyId, ConsumerRecord<String, byte[]> consumerRecord) {
    ControlMessage message = new ControlMessage();
    message.setId(System.currentTimeMillis());
    message.setFrom(topologyId);
    message.setType(FULL_DATA_PULL_REQ);
    message.addPayload("topic", consumerRecord.topic());
    message.addPayload("DBUS_DATASOURCE_ID", Utils.getDatasource().getId());
    PairWrapper<String, Object> wrapper = BoltCommandHandlerHelper.convertAvroRecord(record, Constants.MessageBodyKey.noorderKeys);
    message.addPayload("OP_TS", wrapper.getProperties(Constants.MessageBodyKey.OP_TS).toString());
    message.addPayload("POS", wrapper.getProperties(Constants.MessageBodyKey.POS).toString());
    for (Pair<String,Object> pair : wrapper.getPairs()) {
        message.addPayload(pair.getKey(), pair.getValue());
    }

    return message;
}
 
Developer ID: BriData, Project: DBus, Lines: 17, Source: ControlMessageEncoder.java

Example 8: convertAvroRecordUseBeforeMap

import org.apache.avro.generic.GenericRecord; // import the required package/class
public static <T extends Object> PairWrapper<String, Object> convertAvroRecordUseBeforeMap(GenericRecord record, Set<T> noorderKeys) {
    Schema schema = record.getSchema();
    List<Schema.Field> fields = schema.getFields();
    PairWrapper<String, Object> wrapper = new PairWrapper<>();

    for (Schema.Field field : fields) {
        String key = field.name();
        Object value = record.get(key);
        // store key-value pairs separately depending on whether their order matters
        if (noorderKeys.contains(field.name())) {
            wrapper.addProperties(key, value);
        }
    }

    GenericRecord before = getFromRecord(MessageBodyKey.BEFORE, record);

    Map<String, Object> beforeMap = convert2map(before);

    for (Map.Entry<String, Object> entry : beforeMap.entrySet()) {
        if(!entry.getKey().endsWith(MessageBodyKey.IS_MISSING_SUFFIX)) {
            wrapper.addPair(new Pair<>(entry.getKey(), CharSequence.class.isInstance(entry.getValue())?entry.getValue().toString():entry.getValue()));
        }
    }

    return wrapper;
}
 
Developer ID: BriData, Project: DBus, Lines: 27, Source: BoltCommandHandlerHelper.java

Example 9: convertToAvroRecord

import org.apache.avro.generic.GenericRecord; // import the required package/class
private GenericRecord convertToAvroRecord(Schema avroRecordSchema, Object[] values) {
  // TODO: could be improved to create the record once and reuse it
  GenericRecord avroRec = new GenericData.Record(avroRecordSchema);
  List<ColumnConverterDescriptor> columnConverters = converterDescriptor.getColumnConverters();
  if (values.length != columnConverters.size()) {
    // mismatch schema
    // TODO better exception
    throw new RuntimeException("Expecting " + columnConverters.size() + " fields, received "
        + values.length + " values");
  }
  for (int i = 0; i < values.length; i++) {
    Object value = values[i];
    ColumnConverterDescriptor columnConverterDescriptor = columnConverters.get(i);
    Object valueToWrite = columnConverterDescriptor.getWritable(value);
    avroRec.put(columnConverterDescriptor.getColumnName(), valueToWrite);
  }
  return avroRec;
}
 
Developer ID: ampool, Project: monarch, Lines: 19, Source: ParquetWriterWrapper.java

Example 10: deserialize

import org.apache.avro.generic.GenericRecord; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public T deserialize(String topic, byte[] data) {
	try {
		T result = null;

		if (data != null) {
			LOGGER.debug("data='{}'", DatatypeConverter.printHexBinary(data));

			DatumReader<GenericRecord> datumReader = new SpecificDatumReader<>(
					targetType.newInstance().getSchema());
			Decoder decoder = DecoderFactory.get().binaryDecoder(data, null);

			result = (T) datumReader.read(null, decoder);
			LOGGER.debug("deserialized data='{}'", result);
		}
		return result;
	} catch (Exception ex) {
		throw new SerializationException(
				"Can't deserialize data '" + Arrays.toString(data) + "' from topic '" + topic + "'", ex);
	}
}
 
Developer ID: italia, Project: daf-replicate-ingestion, Lines: 23, Source: AvroDeserializer.java
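For context, a hedged sketch of the matching serialize direction. This is not the project's actual serializer; it is a minimal counterpart built from the standard Avro EncoderFactory/GenericDatumWriter API, with the class name invented for illustration:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.EncoderFactory;

public class AvroSerializerSketch {

    // Encode a record to Avro binary, mirroring the binaryDecoder call in the example above.
    public static byte[] serialize(GenericRecord record) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DatumWriter<GenericRecord> writer = new GenericDatumWriter<>(record.getSchema());
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        writer.write(record, encoder);
        encoder.flush(); // the binary encoder is buffered; flush before reading the bytes
        return out.toByteArray();
    }
}
```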

Example 11: testFirstUnderscoreInColumnName

import org.apache.avro.generic.GenericRecord; // import the required package/class
public void testFirstUnderscoreInColumnName() throws IOException {
  String [] names = { "_NAME" };
  String [] types = { "INT" };
  String [] vals = { "1987" };
  createTableWithColTypesAndNames(names, types, vals);

  runImport(getOutputArgv(true, null));

  Schema schema = getSchema();
  assertEquals(Type.RECORD, schema.getType());
  List<Field> fields = schema.getFields();
  assertEquals(types.length, fields.size());
  checkField(fields.get(0), "__NAME", Type.INT);

  DatasetReader<GenericRecord> reader = getReader();
  try {
    assertTrue(reader.hasNext());
    GenericRecord record1 = reader.next();
    assertEquals("__NAME", 1987, record1.get("__NAME"));
    assertFalse(reader.hasNext());
  } finally {
    reader.close();
  }
}
 
Developer ID: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 25, Source: TestParquetImport.java

Example 12: createDataFile

import org.apache.avro.generic.GenericRecord; // import the required package/class
private static Path createDataFile() throws IOException {
    File avroFile = File.createTempFile("test-", "." + FILE_EXTENSION);
    DatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schema);
    try (DataFileWriter<GenericRecord> dataFileWriter = new DataFileWriter<>(writer)) {
        dataFileWriter.setFlushOnEveryBlock(true);
        dataFileWriter.setSyncInterval(32);
        dataFileWriter.create(schema, avroFile);

        IntStream.range(0, NUM_RECORDS).forEach(index -> {
            GenericRecord datum = new GenericData.Record(schema);
            datum.put(FIELD_INDEX, index);
            datum.put(FIELD_NAME, String.format("%d_name_%s", index, UUID.randomUUID()));
            datum.put(FIELD_SURNAME, String.format("%d_surname_%s", index, UUID.randomUUID()));
            try {
                OFFSETS_BY_INDEX.put(index, dataFileWriter.sync() - 16L);
                dataFileWriter.append(datum);
            } catch (IOException ioe) {
                throw new RuntimeException(ioe);
            }
        });
    }
    Path path = new Path(new Path(fsUri), avroFile.getName());
    fs.moveFromLocalFile(new Path(avroFile.getAbsolutePath()), path);
    return path;
}
 
Developer ID: mmolimar, Project: kafka-connect-fs, Lines: 26, Source: AvroFileReaderTest.java
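A hedged companion sketch for reading such a file back: DataFileReader.seek accepts a position previously returned by DataFileWriter.sync() (the test above subtracts 16 bytes from that value for its own offset bookkeeping). The class and method names below are invented for illustration:

```java
import java.io.File;
import java.io.IOException;

import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

public class SyncSeekReader {

    // Jump straight to a previously recorded sync point and read the next record.
    public static GenericRecord readFrom(File avroFile, long syncPosition) throws IOException {
        try (DataFileReader<GenericRecord> reader =
                 new DataFileReader<>(avroFile, new GenericDatumReader<GenericRecord>())) {
            reader.seek(syncPosition); // position must come from DataFileWriter.sync()
            return reader.next();
        }
    }
}
```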

Example 13: testOverrideTypeMapping

import org.apache.avro.generic.GenericRecord; // import the required package/class
public void testOverrideTypeMapping() throws IOException {
  String [] types = { "INT" };
  String [] vals = { "10" };
  createTableWithColTypes(types, vals);

  String [] extraArgs = { "--map-column-java", "DATA_COL0=String"};
  runImport(getOutputArgv(true, extraArgs));

  Schema schema = getSchema();
  assertEquals(Type.RECORD, schema.getType());
  List<Field> fields = schema.getFields();
  assertEquals(types.length, fields.size());
  checkField(fields.get(0), "DATA_COL0", Type.STRING);

  DatasetReader<GenericRecord> reader = getReader();
  try {
    assertTrue(reader.hasNext());
    GenericRecord record1 = reader.next();
    assertEquals("DATA_COL0", "10", record1.get("DATA_COL0"));
    assertFalse(reader.hasNext());
  } finally {
    reader.close();
  }
}
 
Developer ID: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 25, Source: TestParquetImport.java

Example 14: testTimedFileRolling

import org.apache.avro.generic.GenericRecord; // import the required package/class
@Test
public void testTimedFileRolling()
    throws EventDeliveryException, InterruptedException {
  // use a new roll interval
  config.put("kite.rollInterval", "1"); // in seconds

  DatasetSink sink = sink(in, config);

  Dataset<GenericRecord> records = Datasets.load(FILE_DATASET_URI);

  // run the sink
  sink.start();
  sink.process();

  Assert.assertEquals("Should have committed", 0, remaining(in));

  Thread.sleep(1100); // sleep longer than the roll interval
  sink.process(); // rolling happens in the process method

  Assert.assertEquals(Sets.newHashSet(expected), read(records));

  // wait until the end to stop because it would close the files
  sink.stop();
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 25, Source: TestDatasetSink.java

Example 15: serialize

import org.apache.avro.generic.GenericRecord; // import the required package/class
/**
 * Converts the Avro binary data to JSON format.
 */
@Override
public XContentBuilder serialize(Event event) {
    XContentBuilder builder = null;
    try {
        if (datumReader != null) {
            Decoder decoder = DecoderFactory.get().binaryDecoder(event.getBody(), null);
            GenericRecord data = datumReader.read(null, decoder);
            logger.trace("Record in event " + data);
            XContentParser parser = XContentFactory
                    .xContent(XContentType.JSON)
                    .createParser(NamedXContentRegistry.EMPTY, data.toString());
            builder = jsonBuilder().copyCurrentStructure(parser);
            parser.close();
        } else {
            logger.error("Schema File is not configured");
        }
    } catch (IOException e) {
        logger.error("Exception while parsing Avro data; continuing serialization to process further records", e);
    }
    return builder;
}
 
Developer ID: cognitree, Project: flume-elasticsearch-sink, Lines: 26, Source: AvroSerializer.java


Note: the org.apache.avro.generic.GenericRecord class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers, and copyright remains with the original authors. Refer to each project's License before distributing or using the code; please do not reproduce without permission.