

Java WriteSupport.WriteContext Method Code Examples

This article collects typical usage examples of the WriteSupport.WriteContext method from the Java class parquet.hadoop.api.WriteSupport. If you are wondering what WriteSupport.WriteContext does, how to call it, or what working examples look like, the hand-picked code examples below should help. You can also explore further usage examples of the enclosing class, parquet.hadoop.api.WriteSupport.


Two code examples of the WriteSupport.WriteContext method are shown below, ordered by popularity.

Example 1: initParquetWriteSupportWhenSchemaIsNotNull

import parquet.hadoop.api.WriteSupport; // import the package/class the method depends on
@Test
public void initParquetWriteSupportWhenSchemaIsNotNull() {

  int pentahoValueMetaTypeFirstRow = 2;
  boolean allowNullFirstRow = false;
  int pentahoValueMetaTypeSecondRow = 5;
  boolean allowNullSecondRow = false;

  String schemaFromString = ParquetUtils
    .createSchema( pentahoValueMetaTypeFirstRow, allowNullFirstRow, pentahoValueMetaTypeSecondRow,
      allowNullSecondRow ).marshall();

  SchemaDescription schema = SchemaDescription.unmarshall( schemaFromString );
  PentahoParquetWriteSupport writeSupport = new PentahoParquetWriteSupport( schema );

  Configuration conf = new Configuration();
  conf.set( "fs.defaultFS", "file:///" );

  // init() derives the Parquet schema from the SchemaDescription and wraps it in a WriteContext
  WriteSupport.WriteContext writeContext = writeSupport.init( conf );

  Assert.assertNotNull( writeContext );
}
 
Developer ID: pentaho, Project: pentaho-hadoop-shims, Lines of code: 23, Source file: PentahoParquetWriteSupportTest.java
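To make the role of the returned WriteContext concrete, here is a minimal sketch of a custom WriteSupport, assuming the classic parquet-mr API used above; the class name NameWriteSupport and its single-column schema are hypothetical, not taken from either project on this page. The point to notice is that init() builds the Parquet MessageType and hands it back wrapped in a WriteSupport.WriteContext along with any extra footer metadata:

import java.util.HashMap;

import org.apache.hadoop.conf.Configuration;

import parquet.hadoop.api.WriteSupport;
import parquet.io.api.Binary;
import parquet.io.api.RecordConsumer;
import parquet.schema.MessageType;
import parquet.schema.MessageTypeParser;

// Hypothetical WriteSupport that writes records with a single UTF8 "name" column.
public class NameWriteSupport extends WriteSupport<String> {

  private RecordConsumer recordConsumer;

  @Override
  public WriteContext init( Configuration configuration ) {
    // The WriteContext bundles the file schema with extra key/value
    // metadata that ends up in the Parquet file footer.
    MessageType schema = MessageTypeParser.parseMessageType(
      "message name_record { required binary name (UTF8); }" );
    return new WriteContext( schema, new HashMap<String, String>() );
  }

  @Override
  public void prepareForWrite( RecordConsumer recordConsumer ) {
    this.recordConsumer = recordConsumer;
  }

  @Override
  public void write( String record ) {
    recordConsumer.startMessage();
    recordConsumer.startField( "name", 0 );
    recordConsumer.addBinary( Binary.fromString( record ) );
    recordConsumer.endField( "name", 0 );
    recordConsumer.endMessage();
  }
}

A WriteContext produced this way is exactly what the test above asserts to be non-null, and what the ParquetWriter constructor in Example 2 consumes.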

Example 2: ParquetWriter

import parquet.hadoop.api.WriteSupport; // import the package/class the method depends on
/**
 * Create a new ParquetWriter.
 *
 * @param file                 the file to create
 * @param mode                 file creation mode
 * @param writeSupport         the implementation to write a record to a RecordConsumer
 * @param compressionCodecName the compression codec to use
 * @param blockSize            the block size threshold
 * @param pageSize             the page size threshold
 * @param dictionaryPageSize   the page size threshold for the dictionary pages
 * @param enableDictionary     to turn dictionary encoding on
 * @param validating           to turn on validation using the schema
 * @param writerVersion        the format version to write, from {@link ParquetProperties.WriterVersion}
 * @param conf                 Hadoop configuration to use while accessing the filesystem
 * @throws IOException if the file can not be created or written
 */
public ParquetWriter(
        Path file,
        ParquetFileWriter.Mode mode,
        WriteSupport<T> writeSupport,
        CompressionCodecName compressionCodecName,
        int blockSize,
        int pageSize,
        int dictionaryPageSize,
        boolean enableDictionary,
        boolean validating,
        WriterVersion writerVersion,
        Configuration conf) throws IOException {

    // init() produces the WriteContext carrying the schema and extra file metadata
    WriteSupport.WriteContext writeContext = writeSupport.init(conf);
    MessageType schema = writeContext.getSchema();

    ParquetFileWriter fileWriter = new ParquetFileWriter(conf, schema, file,
            mode);
    fileWriter.start();

    CodecFactory codecFactory = new CodecFactory(conf);
    CodecFactory.BytesCompressor compressor = codecFactory.getCompressor(compressionCodecName, 0);
    this.writer = new InternalParquetRecordWriter<T>(
            fileWriter,
            writeSupport,
            schema,
            writeContext.getExtraMetaData(),
            blockSize,
            pageSize,
            compressor,
            dictionaryPageSize,
            enableDictionary,
            validating,
            writerVersion);
}
 
Developer ID: grokcoder, Project: pbase, Lines of code: 52, Source file: ParquetWriter.java
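For completeness, here is a usage sketch of the constructor above, assuming its signature as shown in Example 2 and reusing the hypothetical NameWriteSupport from the earlier sketch; the output path and size thresholds are illustrative values, not recommendations:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

import parquet.column.ParquetProperties;
import parquet.hadoop.ParquetFileWriter;
import parquet.hadoop.ParquetWriter;
import parquet.hadoop.metadata.CompressionCodecName;

public class ParquetWriterDemo {

  public static void main( String[] args ) throws Exception {
    Configuration conf = new Configuration();
    conf.set( "fs.defaultFS", "file:///" );

    // The constructor calls writeSupport.init(conf) internally to obtain the
    // WriteSupport.WriteContext, as shown in Example 2 above.
    ParquetWriter<String> writer = new ParquetWriter<String>(
        new Path( "/tmp/names.parquet" ),          // hypothetical output file
        ParquetFileWriter.Mode.CREATE,             // fail if the file already exists
        new NameWriteSupport(),                    // sketch WriteSupport from Example 1's follow-up
        CompressionCodecName.SNAPPY,               // compression codec
        128 * 1024 * 1024,                         // block (row group) size threshold
        1024 * 1024,                               // page size threshold
        1024 * 1024,                               // dictionary page size threshold
        true,                                      // enable dictionary encoding
        false,                                     // no schema validation
        ParquetProperties.WriterVersion.PARQUET_1_0,
        conf );
    try {
      writer.write( "alice" );
      writer.write( "bob" );
    } finally {
      writer.close();
    }
  }
}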


Note: The parquet.hadoop.api.WriteSupport.WriteContext method examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets were selected from open-source projects contributed by the community; copyright in the source code belongs to the original authors, and distribution and use are governed by the corresponding project's license. Do not reproduce without permission.