This article collects typical usage examples of parquet.hadoop.api.WriteSupport.WriteContext in Java. If you are wondering what WriteSupport.WriteContext is for, how to use it, or where to find examples of it, the curated code samples below may help. You can also read more about its enclosing class, parquet.hadoop.api.WriteSupport.
Two code examples of WriteSupport.WriteContext are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Java code examples.
Example 1: initParquetWriteSupportWhenSchemaIsNotNull
import parquet.hadoop.api.WriteSupport; // import the package/class the example depends on

@Test
public void initParquetWriteSupportWhenSchemaIsNotNull() {
  int pentahoValueMetaTypeFirstRow = 2;
  boolean allowNullFirstRow = false;
  int pentahoValueMetaTypeSecondRow = 5;
  boolean allowNullSecondRow = false;

  String schemaFromString = ParquetUtils
    .createSchema( pentahoValueMetaTypeFirstRow, allowNullFirstRow, pentahoValueMetaTypeSecondRow,
      allowNullSecondRow ).marshall();
  SchemaDescription schema = SchemaDescription.unmarshall( schemaFromString );

  PentahoParquetWriteSupport writeSupport = new PentahoParquetWriteSupport( schema );
  Configuration conf = new Configuration();
  conf.set( "fs.defaultFS", "file:///" );

  WriteSupport.WriteContext writeContext = writeSupport.init( conf );
  Assert.assertNotNull( writeContext );
}
Example 2: ParquetWriter
import parquet.hadoop.api.WriteSupport; // import the package/class the example depends on

/**
 * Create a new ParquetWriter.
 *
 * @param file the file to create
 * @param mode file creation mode
 * @param writeSupport the implementation to write a record to a RecordConsumer
 * @param compressionCodecName the compression codec to use
 * @param blockSize the block size threshold
 * @param pageSize the page size threshold
 * @param dictionaryPageSize the page size threshold for the dictionary pages
 * @param enableDictionary to turn dictionary encoding on
 * @param validating to turn on validation using the schema
 * @param writerVersion version of parquetWriter from {@link ParquetProperties.WriterVersion}
 * @param conf Hadoop configuration to use while accessing the filesystem
 * @throws IOException if the file can not be created
 */
public ParquetWriter(
    Path file,
    ParquetFileWriter.Mode mode,
    WriteSupport<T> writeSupport,
    CompressionCodecName compressionCodecName,
    int blockSize,
    int pageSize,
    int dictionaryPageSize,
    boolean enableDictionary,
    boolean validating,
    WriterVersion writerVersion,
    Configuration conf) throws IOException {
  // The WriteContext supplies the file schema and extra metadata for the footer.
  WriteSupport.WriteContext writeContext = writeSupport.init(conf);
  MessageType schema = writeContext.getSchema();

  ParquetFileWriter fileWriter = new ParquetFileWriter(conf, schema, file, mode);
  fileWriter.start();

  CodecFactory codecFactory = new CodecFactory(conf);
  CodecFactory.BytesCompressor compressor = codecFactory.getCompressor(compressionCodecName, 0);
  this.writer = new InternalParquetRecordWriter<T>(
      fileWriter,
      writeSupport,
      schema,
      writeContext.getExtraMetaData(),
      blockSize,
      pageSize,
      compressor,
      dictionaryPageSize,
      enableDictionary,
      validating,
      writerVersion);
}
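Both examples above consume a WriteContext returned by WriteSupport.init(Configuration). For reference, a custom WriteSupport builds that object itself by pairing its MessageType schema with optional key/value metadata for the file footer. The following is a minimal sketch, not a complete implementation: the class name, record type, and schema string are illustrative, the write() body is elided, and it assumes the parquet-hadoop library is on the classpath.

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import parquet.hadoop.api.WriteSupport;
import parquet.io.api.RecordConsumer;
import parquet.schema.MessageType;
import parquet.schema.MessageTypeParser;

// Hypothetical minimal WriteSupport; schema and record type are placeholders.
public class StringPairWriteSupport extends WriteSupport<String[]> {

  private MessageType schema = MessageTypeParser.parseMessageType(
      "message pair { required binary left; required binary right; }" );
  private RecordConsumer recordConsumer;

  @Override
  public WriteContext init( Configuration configuration ) {
    // The WriteContext carries the schema plus extra metadata
    // that ends up in the Parquet file footer.
    return new WriteContext( schema, Collections.<String, String>emptyMap() );
  }

  @Override
  public void prepareForWrite( RecordConsumer recordConsumer ) {
    this.recordConsumer = recordConsumer;
  }

  @Override
  public void write( String[] record ) {
    // Record-writing logic elided; see the parquet-mr project for complete examples.
  }
}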