

Java GenericRecord.put Method Code Examples

This article collects typical usage examples of the Java method org.apache.avro.generic.GenericRecord.put. If you are wondering how GenericRecord.put is used in practice, the curated examples below may help. You can also explore further usage examples of org.apache.avro.generic.GenericRecord.


The following presents 15 code examples of the GenericRecord.put method, ordered by popularity.
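Before the examples, here is a minimal standalone sketch of the two `put` overloads (by field name and by field position). The `Pair` schema and its field names are illustrative, not taken from any project below.

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class PutSketch {
    public static GenericRecord build() {
        // An illustrative schema with two required fields.
        Schema schema = SchemaBuilder.record("Pair")
            .fields()
            .requiredInt("id")
            .requiredString("msg")
            .endRecord();

        GenericRecord record = new GenericData.Record(schema);
        record.put("id", 1);      // put by field name
        record.put(1, "hello");   // put by field position ("msg" is field 1)
        return record;
    }

    public static void main(String[] args) {
        GenericRecord r = build();
        System.out.println(r.get("id") + " " + r.get("msg"));
    }
}
```

Both overloads write into the same underlying field array; put-by-position skips the name lookup, which is why several examples below use `field.pos()` when iterating over a schema's fields.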

Example 1: write

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
@Override
public Object write( final Object obj ) throws IOException{
  GenericRecord record = new GenericData.Record( avroSchema );
  if( ! ( obj instanceof Map ) ){
    return record;
  }

  Map<Object,Object> mapObj = (Map<Object,Object>)obj;

  for( KeyAndFormatter childFormatter : childContainer ){
    childFormatter.clear();
    record.put( childFormatter.getName() , childFormatter.get( mapObj ) );
  }

  return record;
}
 
Developer: yahoojapan, Project: dataplatform-schema-lib, Lines: 17, Source: AvroRecordFormatter.java

Example 2: createParquetFile

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
/**
 * Create a data file that gets exported to the db.
 * @param fileNum the number of the file (for multi-file export)
 * @param numRecords how many records to write to the file.
 */
protected void createParquetFile(int fileNum, int numRecords,
    ColumnGenerator... extraCols) throws IOException {

  String uri = "dataset:file:" + getTablePath();
  Schema schema = buildSchema(extraCols);
  DatasetDescriptor descriptor = new DatasetDescriptor.Builder()
    .schema(schema)
    .format(Formats.PARQUET)
    .build();
  Dataset dataset = Datasets.create(uri, descriptor);
  DatasetWriter writer = dataset.newWriter();
  try {
    for (int i = 0; i < numRecords; i++) {
      GenericRecord record = new GenericData.Record(schema);
      record.put("id", i);
      record.put("msg", getMsgPrefix() + i);
      addExtraColumns(record, i, extraCols);
      writer.write(record);
    }
  } finally {
    writer.close();
  }
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 29, Source: TestParquetExport.java

Example 3: testParquetRecordsNotSupported

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
public void testParquetRecordsNotSupported() throws IOException, SQLException {
  String[] argv = {};
  final int TOTAL_RECORDS = 1;

  Schema schema =  Schema.createRecord("nestedrecord", null, null, false);
  schema.setFields(Lists.newArrayList(buildField("myint",
      Schema.Type.INT)));
  GenericRecord record = new GenericData.Record(schema);
  record.put("myint", 100);
  // DB type is not used so can be anything:
  ColumnGenerator gen = colGenerator(record, schema, null, "VARCHAR(64)");
  createParquetFile(0, TOTAL_RECORDS,  gen);
  createTable(gen);
  try {
    runExport(getArgv(true, 10, 10, newStrArray(argv, "-m", "" + 1)));
    fail("Parquet records can not be exported.");
  } catch (Exception e) {
    // expected
    assertTrue(true);
  }
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 22, Source: TestParquetExport.java

Example 4: testAvroRecordsNotSupported

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
public void testAvroRecordsNotSupported() throws IOException, SQLException {
  String[] argv = {};
  final int TOTAL_RECORDS = 1;

  Schema schema =  Schema.createRecord("nestedrecord", null, null, false);
  schema.setFields(Lists.newArrayList(buildAvroField("myint",
      Schema.Type.INT)));
  GenericRecord record = new GenericData.Record(schema);
  record.put("myint", 100);
  // DB type is not used so can be anything:
  ColumnGenerator gen = colGenerator(record, schema, null, "VARCHAR(64)");
  createAvroFile(0, TOTAL_RECORDS,  gen);
  createTable(gen);
  try {
    runExport(getArgv(true, 10, 10, newStrArray(argv, "-m", "" + 1)));
    fail("Avro records can not be exported.");
  } catch (Exception e) {
    // expected
    assertTrue(true);
  }
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 22, Source: TestAvroExport.java

Example 5: convertToAvroRecord

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
private GenericRecord convertToAvroRecord(Schema avroRecordSchema, Object[] values) {
  // TODO: could be improved by creating the record once and reusing it
  GenericRecord avroRec = new GenericData.Record(avroRecordSchema);
  List<ColumnConverterDescriptor> columnConverters = converterDescriptor.getColumnConverters();
  if (values.length != columnConverters.size()) {
    // schema mismatch between the values and the configured converters
    // TODO: throw a more specific exception type
    throw new RuntimeException("Expecting " + columnConverters.size() + " fields, received "
        + values.length + " values");
  }
  for (int i = 0; i < values.length; i++) {
    Object value = values[i];
    ColumnConverterDescriptor columnConverterDescriptor = columnConverters.get(i);
    Object valueToWrite = columnConverterDescriptor.getWritable(value);
    avroRec.put(columnConverterDescriptor.getColumnName(), valueToWrite);
  }
  return avroRec;
}
 
Developer: ampool, Project: monarch, Lines: 19, Source: ParquetWriterWrapper.java

Example 6: applyDiff

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
public static GenericRecord applyDiff(GenericRecord avroObj, RecordDiff diff, Schema schema) throws IOException {
    GenericRecord modifiedAvroObj = createGenericRecordWithSchema(schema, avroObj);
    Map<String, Object> diffFields = diff.getDiffFields();
    List<Schema.Field> fields = schema.getFields();

    for (Schema.Field field : fields) {
        if (diffFields.containsKey(field.name())) {
            GenericRecord fieldsValue = (GenericRecord) diffFields.get(field.name());
            Class<? extends GenericRecord> fieldsValueClass = fieldsValue.getClass();

            if (fieldsValueClass.isAssignableFrom(PrimitiveDiff.class)) {
                AvroDiffPrimitive.applyPrimitiveDiff(field, avroObj, fieldsValue, modifiedAvroObj, null);
            } else if (fieldsValueClass.isAssignableFrom(MapDiff.class)) {
                AvroDiffMap.applyMapDiff(field, avroObj, fieldsValue, modifiedAvroObj);
            } else if (fieldsValueClass.isAssignableFrom(ArrayDiff.class)) {
                AvroDiffArray.applyArrayDiff(field, avroObj, fieldsValue, modifiedAvroObj);
            } else if (fieldsValueClass.isAssignableFrom(RecordDiff.class)) {
                GenericRecord recordField = (GenericRecord) modifiedAvroObj.get(field.pos());
                GenericRecord genericRecord = applyDiff(recordField, (RecordDiff) fieldsValue, recordField.getSchema());
                modifiedAvroObj.put(field.pos(), genericRecord);
            } else {
                LOGGER.error("Field from RecordDiff has unknown type.");
            }
        } else {
            modifiedAvroObj.put(field.pos(), avroObj.get(field.pos()));
        }
    }

    return SpecificData.get().deepCopy(schema, modifiedAvroObj);
}
 
Developer: atlascon, Project: avro-diff, Lines: 31, Source: AvroDiff.java

Example 7: generateGenericRecord

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
private GenericRecord generateGenericRecord(Schema schema) {
    GenericRecord user = new GenericData.Record(schema);
    user.put(ID, id);
    user.put(NAME, name);
    user.put(GENDER, gender);
    return user;
}
 
Developer: cognitree, Project: flume-elasticsearch-sink, Lines: 8, Source: TestAvroSerializer.java

Example 8: writeParser

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
@Override
public Object writeParser(final PrimitiveObject obj , final IParser parser ) throws IOException{
  GenericRecord record = new GenericData.Record( avroSchema );

  for( KeyAndFormatter childFormatter : childContainer ){
    childFormatter.clear();
    record.put( childFormatter.getName() , childFormatter.get( parser ) );
  }

  return record;
}
 
Developer: yahoojapan, Project: dataplatform-schema-lib, Lines: 12, Source: AvroRecordFormatter.java

Example 9: toGenericRecord

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
/**
 * Build a GenericRecord instance from a map of field names to values.
 */
public static GenericRecord toGenericRecord(Map<String, Object> fieldMap,
    Schema schema, boolean bigDecimalFormatString) {
  GenericRecord record = new GenericData.Record(schema);
  for (Map.Entry<String, Object> entry : fieldMap.entrySet()) {
    Object avroObject = toAvro(entry.getValue(), bigDecimalFormatString);
    String avroColumn = toAvroColumn(entry.getKey());
    record.put(avroColumn, avroObject);
  }
  return record;
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 14, Source: AvroUtil.java

Example 10: addExtraColumns

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
private void addExtraColumns(GenericRecord record, int rowNum,
    ColumnGenerator[] extraCols) {
  int colNum = 0;
  for (ColumnGenerator gen : extraCols) {
    if (gen.getColumnParquetSchema() != null) {
      record.put(forIdx(colNum++), gen.getExportValue(rowNum));
    }
  }
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 10, Source: TestParquetExport.java

Example 11: createAvroFile

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
/**
 * Create a data file that gets exported to the db.
 * @param fileNum the number of the file (for multi-file export)
 * @param numRecords how many records to write to the file.
 */
protected void createAvroFile(int fileNum, int numRecords,
    ColumnGenerator... extraCols) throws IOException {

  Path tablePath = getTablePath();
  Path filePath = new Path(tablePath, "part" + fileNum);

  Configuration conf = new Configuration();
  if (!BaseSqoopTestCase.isOnPhysicalCluster()) {
    conf.set(CommonArgs.FS_DEFAULT_NAME, CommonArgs.LOCAL_FS);
  }
  FileSystem fs = FileSystem.get(conf);
  fs.mkdirs(tablePath);
  OutputStream os = fs.create(filePath);

  Schema schema = buildAvroSchema(extraCols);
  DatumWriter<GenericRecord> datumWriter =
    new GenericDatumWriter<GenericRecord>();
  DataFileWriter<GenericRecord> dataFileWriter =
    new DataFileWriter<GenericRecord>(datumWriter);
  dataFileWriter.create(schema, os);

  for (int i = 0; i < numRecords; i++) {
    GenericRecord record = new GenericData.Record(schema);
    record.put("id", i);
    record.put("msg", getMsgPrefix() + i);
    addExtraColumns(record, i, extraCols);
    dataFileWriter.append(record);
  }

  dataFileWriter.close();
  os.close();
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 38, Source: TestAvroExport.java

Example 12: addExtraColumns

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
private void addExtraColumns(GenericRecord record, int rowNum,
    ColumnGenerator[] extraCols) {
  int colNum = 0;
  for (ColumnGenerator gen : extraCols) {
    if (gen.getColumnAvroSchema() != null) {
      record.put(forIdx(colNum++), gen.getExportValue(rowNum));
    }
  }
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 10, Source: TestAvroExport.java

Example 13: convertToAvroRecord

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
private GenericRecord convertToAvroRecord(CDCEvent event) {
  GenericRecord avroRec = new GenericData.Record(getAvroSchema());
  Row row = event.getRow();

  // EVENTID
  avroRec.put("EVENTID", String.valueOf(event.getEventSequenceID().getSequenceID()));

  // OPERATION_TYPE
  avroRec.put("OPERATION_TYPE", String.valueOf(event.getOperation()));
  // RowKey
  avroRec.put("RowKey", row.getRowId());

  // VersionID

  if (row.getRowTimeStamp() != null) {
    avroRec.put("VersionID", row.getRowTimeStamp());
  }

  // add col values
  List<Cell> cells = row.getCells();
  cells.forEach(cell -> {
    if (cell.getColumnValue() != null) {
      avroRec.put(Bytes.toString(cell.getColumnName()), cell.getColumnValue());
    }
  });

  return avroRec;
}
 
Developer: ampool, Project: monarch, Lines: 29, Source: MTableCDCParquetListener.java

Example 14: jsonColumn

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
@Override
public GenericRecord jsonColumn(Value value) {
    if (!value.isMapValue())
        throw new RuntimeException("Only map-type JSON records are supported");

    Map<Value, Value> map = value.asMapValue().map();

    GenericRecord record = new GenericData.Record(avroSchema);
    for (Map.Entry<String, AbstractAvroValueConverter> entry : converterTable.entrySet()) {
        Value key = ValueFactory.newString(entry.getKey());
        if (!map.containsKey(key)) {
            record.put(entry.getKey(), null);
        } else {
            Value child = map.get(ValueFactory.newString(entry.getKey()));
            switch (child.getValueType()) {
                case STRING:
                    record.put(entry.getKey(), entry.getValue().stringColumn(child.asStringValue().toString()));
                    break;
                case INTEGER:
                    record.put(entry.getKey(), entry.getValue().longColumn(child.asIntegerValue().toLong()));
                    break;
                case FLOAT:
                    record.put(entry.getKey(), entry.getValue().doubleColumn(child.asFloatValue().toDouble()));
                    break;
                case BOOLEAN:
                    record.put(entry.getKey(), entry.getValue().booleanColumn(child.asBooleanValue().getBoolean()));
                    break;
                case ARRAY:
                    record.put(entry.getKey(), entry.getValue().jsonColumn(child.asArrayValue()));
                    break;
                case MAP:
                    record.put(entry.getKey(), entry.getValue().jsonColumn(child.asMapValue()));
                    break;
                default:
                    throw new RuntimeException("Irregular Messagepack type");
            }
        }
    }
    return record;
}
 
Developer: joker1007, Project: embulk-formatter-avro, Lines: 41, Source: AvroRecordConverter.java

Example 15: write

import org.apache.avro.generic.GenericRecord; // import the class the method depends on
/**
 * Write Avro-format data to a Parquet file.
 *
 * @param parquetPath path of the Parquet file to write
 */
public void write(String parquetPath) {
    Schema.Parser parser = new Schema.Parser();
    try {
        Schema schema = parser.parse(AvroParquetOperation.class.getClassLoader().getResourceAsStream("StringPair.avsc"));
        GenericRecord datum = new GenericData.Record(schema);
        datum.put("left", "L");
        datum.put("right", "R");

        Path path = new Path(parquetPath);
        System.out.println(path);
        AvroParquetWriter<GenericRecord> writer = new AvroParquetWriter<GenericRecord>(path, schema);
        writer.write(datum);
        writer.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
 
Developer: mumuhadoop, Project: mumu-parquet, Lines: 23, Source: AvroParquetOperation.java


Note: The org.apache.avro.generic.GenericRecord.put examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers, and copyright remains with the original authors. Consult each project's license before distributing or using the code; do not reproduce without permission.