

Java DataFileWriter.close Method Code Examples

This article collects typical usage examples of the Java method org.apache.avro.file.DataFileWriter.close. If you are wondering how to use DataFileWriter.close, what it does, or where to find sample code, the curated examples below may help. You can also explore further usage examples of the containing class, org.apache.avro.file.DataFileWriter.


A total of 15 code examples of DataFileWriter.close are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
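Before the examples, a note on what close actually does: DataFileWriter.close flushes the last buffered data block and finalizes the Avro container file, so a writer abandoned without close can leave a truncated, unreadable file. DataFileWriter implements java.io.Closeable, which makes try-with-resources the simplest way to guarantee the call. A minimal sketch (assumes the org.apache.avro:avro library is on the classpath; the class name and schema are illustrative, not from any example below):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class CloseExample {
    public static byte[] writeOneRecord() throws IOException {
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Item\","
            + "\"fields\":[{\"name\":\"id\",\"type\":\"string\"}]}");
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // try-with-resources guarantees close() runs, which flushes the
        // final data block and completes the container file.
        try (DataFileWriter<GenericRecord> writer =
                 new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            writer.create(schema, out);
            GenericRecord r = new GenericData.Record(schema);
            r.put("id", "a-1");
            writer.append(r);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = writeOneRecord();
        // A finished Avro container file starts with the magic bytes "Obj" 0x01.
        System.out.println(bytes.length > 4
            && bytes[0] == 'O' && bytes[1] == 'b' && bytes[2] == 'j' && bytes[3] == 1);
    }
}
```

The explicit `dataFileWriter.close()` calls in the examples below are equivalent, but only when no exception is thrown between `create` and `close`.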

Example 1: MemberInfoDynSer

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
/**
 * Dynamic serialization: populate records by parsing the schema file at
 * runtime, then serialize them.
 *
 * @throws IOException
 */
public void MemberInfoDynSer() throws IOException {
    // 1. Parse the schema file.
    Parser parser = new Parser();
    Schema mSchema = parser.parse(this.getClass().getResourceAsStream("/Members.avsc"));
    // 2. Build the datum writer; GenericRecord data uses GenericDatumWriter.
    DatumWriter<GenericRecord> mGr = new GenericDatumWriter<GenericRecord>(mSchema);
    DataFileWriter<GenericRecord> mDfw = new DataFileWriter<GenericRecord>(mGr);
    // 3. Create the serialized output file.
    mDfw.create(mSchema, new File("/Users/a/Desktop/tmp/members.avro"));
    // 4. Append the records; reuse one Random instead of creating a new
    // instance on every iteration.
    Random random = new Random();
    for (int i = 0; i < 20; i++) {
        GenericRecord gr = new GenericData.Record(mSchema);
        int r = i * random.nextInt(50);
        gr.put("userName", "light-" + r);
        gr.put("userPwd", "2016-" + r);
        gr.put("realName", "滔滔" + r + "号");
        mDfw.append(gr);
    }
    // 5. Close the data file writer.
    mDfw.close();
    System.out.println("Dyn Builder Ser Start Complete.");
}
 
Developer ID: lrtdc, Project: book_ldrtc, Lines of code: 28, Source: MemberServerProvider.java

Example 2: close

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
@Override
public void close(TaskAttemptContext context) throws IOException {
    // Create an Avro container file and a writer to it.
    DataFileWriter<K> avroFileWriter;
    avroFileWriter = new DataFileWriter<K>(new ReflectDatumWriter<K>(writerSchema));
    avroFileWriter.setCodec(compressionCodec);

    // Writes the meta-data.
    avroFileWriter.setMeta(Constants.AVRO_NUMBER_OF_RECORDS, this.numberOfRecords);

    // Writes the file.
    avroFileWriter.create(this.writerSchema, this.outputStream);
    for (AvroKey<K> record : this.recordsList)
        avroFileWriter.append(record.datum());

    // Close the stream.
    avroFileWriter.close();
}
 
Developer ID: pasqualesalza, Project: elephant56, Lines of code: 19, Source: PopulationRecordWriter.java
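Example 2 sets custom file metadata with setMeta before calling create; only after close is the container finalized so that a reader can recover both the metadata and the records. A short round-trip sketch (same assumption as above: org.apache.avro:avro on the classpath; class, field, and metadata-key names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileStream;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class MetaRoundTrip {
    static final Schema SCHEMA = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Rec\","
        + "\"fields\":[{\"name\":\"n\",\"type\":\"long\"}]}");

    public static byte[] write(long n) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (DataFileWriter<GenericRecord> w =
                 new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(SCHEMA))) {
            w.setMeta("recordCount", n); // metadata must be set before create()
            w.create(SCHEMA, out);
            for (long i = 0; i < n; i++) {
                GenericRecord r = new GenericData.Record(SCHEMA);
                r.put("n", i);
                w.append(r);
            }
        } // close() finalizes the file, making metadata and records readable
        return out.toByteArray();
    }

    public static long readCount(byte[] avroBytes) throws IOException {
        try (DataFileStream<GenericRecord> in = new DataFileStream<>(
                new ByteArrayInputStream(avroBytes),
                new GenericDatumReader<GenericRecord>())) {
            return in.getMetaLong("recordCount");
        }
    }
}
```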

Example 3: createAvroData

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
private byte[] createAvroData(String name, int age, List<String> emails)  throws IOException {
  String AVRO_SCHEMA = "{\n"
    +"\"type\": \"record\",\n"
    +"\"name\": \"Employee\",\n"
    +"\"fields\": [\n"
    +" {\"name\": \"name\", \"type\": \"string\"},\n"
    +" {\"name\": \"age\", \"type\": \"int\"},\n"
    +" {\"name\": \"emails\", \"type\": {\"type\": \"array\", \"items\": \"string\"}},\n"
    +" {\"name\": \"boss\", \"type\": [\"Employee\",\"null\"]}\n"
    +"]}";
  Schema schema = new Schema.Parser().parse(AVRO_SCHEMA);
  ByteArrayOutputStream out = new ByteArrayOutputStream();
  GenericRecord e1 = new GenericData.Record(schema);
  e1.put("name", name);
  e1.put("age", age);
  e1.put("emails", emails);
  e1.put("boss", null);

  DatumWriter<GenericRecord> datumWriter = new GenericDatumWriter<>(schema);
  DataFileWriter<GenericRecord> dataFileWriter = new DataFileWriter<>(datumWriter);
  dataFileWriter.create(schema, out);
  dataFileWriter.append(e1);
  dataFileWriter.close();
  return out.toByteArray();
}
 
Developer ID: streamsets, Project: datacollector, Lines of code: 26, Source: ClusterHDFSSourceIT.java

Example 4: testGenerateAvro3

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
@Test
public void testGenerateAvro3() {
	try {
		Parser parser = new Schema.Parser();
		Schema peopleSchema = parser.parse(new File(getTestResource("people.avsc").toURI()));
		GenericDatumWriter<GenericRecord> datumWriter = new GenericDatumWriter<GenericRecord>(peopleSchema);
		DataFileWriter<GenericRecord> dfw = new DataFileWriter<GenericRecord>(datumWriter);
		File tempfile = File.createTempFile("karma-people", "avro");
		
		tempfile.deleteOnExit();
		dfw.create(peopleSchema, new FileOutputStream(tempfile));
		JSONArray array = new JSONArray(IOUtils.toString(new FileInputStream(new File(getTestResource("people.json").toURI()))));
		for(int i = 0; i < array.length(); i++)
		{
			dfw.append(generatePersonRecord(peopleSchema, array.getJSONObject(i)));
		}
		dfw.flush();
		dfw.close();
	} catch (Exception e) {
		logger.error("testGenerateAvro3 failed:", e);
		fail("Exception: " + e.getMessage());
	}
}
 
Developer ID: therelaxist, Project: spring-usc, Lines of code: 24, Source: TestAvroRDFGenerator.java

Example 5: serializing

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
/**
 * Serialize our Users to disk.
 */
private void serializing(List<User> listUsers) {
	long tiempoInicio = System.currentTimeMillis();
	// We create a DatumWriter, which converts Java objects into an in-memory serialized format.
	// The SpecificDatumWriter class is used with generated classes and extracts the schema from the specified generated type.
	DatumWriter<User> userDatumWriter = new SpecificDatumWriter<User>(User.class);
	// We create a DataFileWriter, which writes the serialized records, as well as the schema, to the file specified in the dataFileWriter.create call.
	DataFileWriter<User> dataFileWriter = new DataFileWriter<User>(userDatumWriter);

	try {
		File file = createFile();
		dataFileWriter.create(listUsers.get(0).getSchema(), file);
		for (User user : listUsers) {
			// We write our users to the file via calls to the dataFileWriter.append method.
			dataFileWriter.append(user);
		}
		// When we are done writing, we close the data file.
		dataFileWriter.close();
	} catch (IOException e) {
		e.printStackTrace();
	}
	terminaProceso("serializing", tiempoInicio);
}
 
Developer ID: sphera5, Project: avro-example, Lines of code: 26, Source: Avro.java

Example 6: createOutputsIfDontExist

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
private static void createOutputsIfDontExist(
		Map<String, PortType> outputPortsSpecification, 
		Map<String, Path> outputPortBindings, Configuration conf) throws IOException{
	FileSystem fs = FileSystem.get(conf);
	for(Map.Entry<String, Path> entry: outputPortBindings.entrySet()){
		Path path = entry.getValue();
		if(!fs.exists(path) || isEmptyDirectory(fs, path)){
			PortType rawType = outputPortsSpecification.get(entry.getKey());
			if(!(rawType instanceof AvroPortType)){
				throw new RuntimeException("The port \""+entry.getKey()+
						"\" is not of Avro type and only Avro types are "+
						"supported");
			}
			AvroPortType type = (AvroPortType) rawType;
			FileSystemPath fsPath = new FileSystemPath(fs, path);
			DataFileWriter<GenericContainer> writer = 
					DataStore.create(fsPath, type.getSchema());
			writer.close();
		}
	}
}
 
Developer ID: openaire, Project: iis, Lines of code: 22, Source: ProcessWrapper.java

Example 7: main

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
public static void main(String[] args) throws IOException {
  // Open data file
  File file = new File(PATH);
  if (file.getParentFile() != null) {
    file.getParentFile().mkdirs();
  }
  DatumWriter<User> userDatumWriter = new SpecificDatumWriter<User>(User.class);
  DataFileWriter<User> dataFileWriter = new DataFileWriter<User>(userDatumWriter);
  dataFileWriter.create(User.SCHEMA$, file);

  // Create random users
  User user;
  Random random = new Random();
  for (int i = 0; i < USERS; i++) {
    user = new User("user", null, COLORS[random.nextInt(COLORS.length)]);
    dataFileWriter.append(user);
    System.out.println(user);
  }

  dataFileWriter.close();
}
 
Developer ID: cloudera, Project: RecordServiceClient, Lines of code: 22, Source: GenerateData.java

Example 8: convertToDataStore

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
/**
 * Read a JSON file divided into lines, where each line corresponds to a
 * record, then save the extracted records in a data store.
 */
public static void convertToDataStore(Schema inputSchema,
		InputStream input, FileSystemPath outputPath) throws IOException {
	JsonStreamReader<GenericRecord> reader = new JsonStreamReader<GenericRecord>(
			inputSchema, input, GenericRecord.class);
	DataFileWriter<GenericRecord> writer = DataStore.create(outputPath,
			inputSchema);
	try {
		while (reader.hasNext()) {
			Object obj = reader.next();
			GenericRecord record = (GenericRecord) obj;
			writer.append(record);
		}
	} finally {
		if (writer != null) {
			writer.close();
		}
		if (reader != null) {
			reader.close();
		}
	}
}
 
Developer ID: openaire, Project: iis, Lines of code: 26, Source: JsonUtils.java

Example 9: generateAvroFile

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
public void generateAvroFile(Schema schema, File file, long recordCount) throws IOException {
  DatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schema);
  DataFileWriter<GenericRecord> dataFileWriter = new DataFileWriter<>(writer);
  dataFileWriter.create(schema, file);

  for(long i = 0; i < recordCount; i++) {
    GenericRecord datum = new GenericData.Record(schema);
    datum.put("b", i % 2 == 0);
    datum.put("s", String.valueOf(i));
    datum.put("l", i);
    datum.put("l100", i % 100);
    datum.put("s100", String.valueOf(i % 100));
    dataFileWriter.append(datum);
  }

  dataFileWriter.close();
}
 
Developer ID: streamsets, Project: datacollector, Lines of code: 18, Source: LargeInputFileIT.java

Example 10: createFileIfNotExists

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
public static void createFileIfNotExists(BlockSchema fileSchema, String path) throws IOException
{
    Configuration conf = new JobConf();
    FileSystem fs = FileSystem.get(conf);
    if (fs.exists(new Path(path)))
        return;

    Schema avroSchema = convertFromBlockSchema("CUBERT_MV_RECORD", fileSchema);
    System.out.println("Creating avro file with schema = " + avroSchema);
    GenericDatumWriter<GenericRecord> datumWriter =
            new GenericDatumWriter<GenericRecord>(avroSchema);
    DataFileWriter<GenericRecord> writer =
            new DataFileWriter<GenericRecord>(datumWriter);

    FSDataOutputStream fout =
            FileSystem.create(fs,
                              new Path(path),
                              new FsPermission(FsAction.ALL,
                                               FsAction.READ_EXECUTE,
                                               FsAction.READ_EXECUTE));
    writer.create(avroSchema, fout);
    writer.flush();
    writer.close();
}
 
Developer ID: linkedin, Project: Cubert, Lines of code: 26, Source: AvroUtils.java

Example 11: encodeSpecificRecord

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
private byte[] encodeSpecificRecord(Object data) {
  SpecificRecord record = (SpecificRecord) data;

  ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();

  // Use typed writers to avoid raw-type (unchecked) warnings.
  DatumWriter<SpecificRecord> datumWriter = new SpecificDatumWriter<>(record.getSchema());
  DataFileWriter<SpecificRecord> dataFileWriter = new DataFileWriter<>(datumWriter);
  try {
    dataFileWriter.create(record.getSchema(), byteArrayOutputStream);
    dataFileWriter.append(record);
    dataFileWriter.close();
  } catch (IOException e) {
    e.printStackTrace();
  }
  return byteArrayOutputStream.toByteArray();
}
 
Developer ID: muoncore, Project: muon-java, Lines of code: 18, Source: AvroCodec.java

Example 12: getRecordWriter

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
@Override
public RecordWriter<AvroWrapper<T>, NullWritable> getRecordWriter(
  TaskAttemptContext context) throws IOException, InterruptedException {

  boolean isMapOnly = context.getNumReduceTasks() == 0;
  Schema schema =
    isMapOnly ? AvroJob.getMapOutputSchema(context.getConfiguration())
      : AvroJob.getOutputSchema(context.getConfiguration());

  final DataFileWriter<T> WRITER =
    new DataFileWriter<T>(new ReflectDatumWriter<T>());

  configureDataFileWriter(WRITER, context);

  Path path = getDefaultWorkFile(context, EXT);
  WRITER.create(schema,
    path.getFileSystem(context.getConfiguration()).create(path));

  return new RecordWriter<AvroWrapper<T>, NullWritable>() {
    @Override
    public void write(AvroWrapper<T> wrapper, NullWritable ignore)
      throws IOException {
      WRITER.append(wrapper.datum());
    }

    @Override
    public void close(TaskAttemptContext taskAttemptContext)
      throws IOException, InterruptedException {
      WRITER.close();
    }
  };
}
 
Developer ID: aliyun, Project: aliyun-maxcompute-data-collectors, Lines of code: 33, Source: AvroOutputFormat.java

Example 13: createAvroFile

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
/**
 * Create a data file that gets exported to the db.
 * @param fileNum the number of the file (for multi-file export)
 * @param numRecords how many records to write to the file.
 */
protected void createAvroFile(int fileNum, int numRecords,
    ColumnGenerator... extraCols) throws IOException {

  Path tablePath = getTablePath();
  Path filePath = new Path(tablePath, "part" + fileNum);

  Configuration conf = new Configuration();
  if (!BaseSqoopTestCase.isOnPhysicalCluster()) {
    conf.set(CommonArgs.FS_DEFAULT_NAME, CommonArgs.LOCAL_FS);
  }
  FileSystem fs = FileSystem.get(conf);
  fs.mkdirs(tablePath);
  OutputStream os = fs.create(filePath);

  Schema schema = buildAvroSchema(extraCols);
  DatumWriter<GenericRecord> datumWriter =
    new GenericDatumWriter<GenericRecord>();
  DataFileWriter<GenericRecord> dataFileWriter =
    new DataFileWriter<GenericRecord>(datumWriter);
  dataFileWriter.create(schema, os);

  for (int i = 0; i < numRecords; i++) {
    GenericRecord record = new GenericData.Record(schema);
    record.put("id", i);
    record.put("msg", getMsgPrefix() + i);
    addExtraColumns(record, i, extraCols);
    dataFileWriter.append(record);
  }

  dataFileWriter.close();
  os.close();
}
 
Developer ID: aliyun, Project: aliyun-maxcompute-data-collectors, Lines of code: 38, Source: TestAvroExport.java

Example 14: addUserCompile

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
public void addUserCompile(){
	User user1 = new User();
	user1.setName("王light");
	user1.setFavoriteNumber(66);
	user1.setFavoriteColor("浅蓝色");
	
	// Alternate constructor
	User user2 = new User("魏Sunny", 88, "red");
	
	// Construct via builder
	User user3 = User.newBuilder()
	             .setName("王Sam")
	             .setFavoriteColor("blue")
	             .setFavoriteNumber(2011)
	             .build();
	
	DatumWriter<User> userDatumWriter = new SpecificDatumWriter<User>(User.class);
	DataFileWriter<User> dataFileWriter = new DataFileWriter<User>(userDatumWriter);
	try {
		dataFileWriter.create(user1.getSchema(), new File("/Users/a/Desktop/tmp/users.avro"));
		dataFileWriter.append(user1);
		dataFileWriter.append(user2);
		dataFileWriter.append(user3);
		dataFileWriter.close();
	} catch (IOException e) {
		e.printStackTrace();
	}
}
 
Developer ID: lrtdc, Project: book_ldrtc, Lines of code: 29, Source: TestAvro.java

Example 15: MemberInfoToolsSer

import org.apache.avro.file.DataFileWriter; // import the package/class this method depends on
/**
 * Serialize using classes generated by the Avro tools jar. Generation command:
 * C:\Users\Administrator>java -jar
 * E:\avro\avro-tools-1.7.7.jar compile schema E:\avro\Members.avsc E:\avro
 *
 * @throws IOException
 */
public void MemberInfoToolsSer() throws IOException {
    // 1. Populate the generated Members objects; three styles are shown.
    // 1.1 Constructor.
    Members m1 = new Members("xiaoming", "123456", "校名");
    // 1.2 Setters.
    Members m2 = new Members();
    m2.setUserName("xiaoyi");
    m2.setUserPwd("888888");
    m2.setRealName("小艺");
    // 1.3 Builder.
    Members m3 = Members.newBuilder().setUserName("xiaohong").setUserPwd("999999").setRealName("小红").build();
    // 2. Build the serialization writer.
    DatumWriter<Members> mDw = new SpecificDatumWriter<Members>(Members.class);
    DataFileWriter<Members> mDfw = new DataFileWriter<Members>(mDw);
    // 3. Create the Schema by parsing Members.avsc.
    Schema schema = new Parser().parse(this.getClass().getResourceAsStream("/Members.avsc"));
    // 4. Open a channel that associates the schema with the serialized output file.
    mDfw.create(schema, new File("E:/avro/members.avro"));
    // 5. Append the Members records to the data file writer.
    mDfw.append(m1);
    mDfw.append(m2);
    mDfw.append(m3);
    // 6. Close the data file writer.
    mDfw.close();
    System.out.println("Tools Builder Ser Start Complete.");
}
 
Developer ID: lrtdc, Project: book_ldrtc, Lines of code: 33, Source: MemberServerProvider.java


Note: The org.apache.avro.file.DataFileWriter.close examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are drawn from open-source projects contributed by various developers; copyright belongs to the original authors, and distribution or use should follow the corresponding project's license. Do not reproduce without permission.