

Java ColumnChunkMetaData.get Method Code Examples

This article collects typical usage examples of the Java method org.apache.parquet.hadoop.metadata.ColumnChunkMetaData.get. If you are wondering what ColumnChunkMetaData.get does, how to call it, or what real-world usage looks like, the curated examples below should help. You can also explore further usage of the enclosing class, org.apache.parquet.hadoop.metadata.ColumnChunkMetaData.


Six code examples of the ColumnChunkMetaData.get method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
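Before the examples, here is a minimal sketch of the `ColumnChunkMetaData.get` overload they all use. The parameter order, visible in the examples below, is: column path, primitive type, codec, encoding set, statistics, then five longs (first data page offset, dictionary page offset, value count, total compressed size, total uncompressed size). The class names are from parquet-mr; the column name and numeric values here are illustrative only, and the sketch assumes parquet-mr is on the classpath.

```java
import java.util.Arrays;
import java.util.HashSet;

import org.apache.parquet.column.Encoding;
import org.apache.parquet.column.statistics.IntStatistics;
import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;
import org.apache.parquet.hadoop.metadata.ColumnPath;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName;

public class ColumnChunkMetaDataSketch {
  public static void main(String[] args) {
    // Statistics for a hypothetical INT32 column with two values.
    IntStatistics stats = new IntStatistics();
    stats.updateStats(1);
    stats.updateStats(42);

    // get(path, type, codec, encodings, statistics,
    //     firstDataPageOffset, dictionaryPageOffset, valueCount,
    //     totalSize, totalUncompressedSize)
    ColumnChunkMetaData column = ColumnChunkMetaData.get(
        ColumnPath.get("example", "column"),            // hypothetical column path
        PrimitiveTypeName.INT32,
        CompressionCodecName.SNAPPY,
        new HashSet<Encoding>(Arrays.asList(Encoding.PLAIN)),
        stats,
        4L,      // first data page offset in the file
        0L,      // dictionary page offset (no dictionary here)
        2L,      // number of values in the chunk
        64L,     // total compressed size in bytes
        128L);   // total uncompressed size in bytes

    System.out.println(column.getValueCount());  // 2
    System.out.println(column.getCodec());       // SNAPPY
  }
}
```

The returned object is immutable metadata; as the examples show, it is typically attached to a `BlockMetaData` via `addColumn` when assembling row-group metadata.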

Example 1: makeBlockFromStats

import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData; // import for the method's package/class
public static BlockMetaData makeBlockFromStats(IntStatistics stats, long valueCount) {
  BlockMetaData blockMetaData = new BlockMetaData();

  ColumnChunkMetaData column = ColumnChunkMetaData.get(ColumnPath.get("foo"),
      PrimitiveTypeName.INT32,
      CompressionCodecName.GZIP,
      new HashSet<Encoding>(Arrays.asList(Encoding.PLAIN)),
      stats,
      100L, 100L, valueCount, 100L, 100L);
  blockMetaData.addColumn(column);
  blockMetaData.setTotalByteSize(200L);
  blockMetaData.setRowCount(valueCount);
  return blockMetaData;
}
 
Developer: apache | Project: parquet-mr | Lines: 15 | Source: TestInputFormat.java

Example 2: newBlock

import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData; // import for the method's package/class
private BlockMetaData newBlock(long start, long compressedBlockSize) {
  BlockMetaData blockMetaData = new BlockMetaData();
  long uncompressedSize = compressedBlockSize * 2; // assuming a compression ratio of 2
  ColumnChunkMetaData column = ColumnChunkMetaData.get(ColumnPath.get("foo"),
                                                       PrimitiveTypeName.BINARY,
                                                       CompressionCodecName.GZIP,
                                                       new HashSet<Encoding>(Arrays.asList(Encoding.PLAIN)),
                                                       new BinaryStatistics(),
                                                       start, 0L, 0L, compressedBlockSize, uncompressedSize);
  blockMetaData.addColumn(column);
  blockMetaData.setTotalByteSize(uncompressedSize);
  return blockMetaData;
}
 
Developer: apache | Project: parquet-mr | Lines: 14 | Source: TestInputFormat.java

Example 3: getIntColumnMeta

import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData; // import for the method's package/class
private static ColumnChunkMetaData getIntColumnMeta(IntStatistics stats, long valueCount) {
  return ColumnChunkMetaData.get(ColumnPath.get("int", "column"),
      PrimitiveTypeName.INT32,
      CompressionCodecName.GZIP,
      new HashSet<Encoding>(Arrays.asList(Encoding.PLAIN)),
      stats,
      0L, 0L, valueCount, 0L, 0L);
}
 
Developer: apache | Project: parquet-mr | Lines: 9 | Source: TestStatisticsFilter.java

Example 4: getDoubleColumnMeta

import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData; // import for the method's package/class
private static ColumnChunkMetaData getDoubleColumnMeta(DoubleStatistics stats, long valueCount) {
  return ColumnChunkMetaData.get(ColumnPath.get("double", "column"),
      PrimitiveTypeName.DOUBLE,
      CompressionCodecName.GZIP,
      new HashSet<Encoding>(Arrays.asList(Encoding.PLAIN)),
      stats,
      0L, 0L, valueCount, 0L, 0L);
}
 
Developer: apache | Project: parquet-mr | Lines: 9 | Source: TestStatisticsFilter.java

Example 5: createColumnChunkMetaData

import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData; // import for the method's package/class
private ColumnChunkMetaData createColumnChunkMetaData() {
  Set<org.apache.parquet.column.Encoding> e = new HashSet<org.apache.parquet.column.Encoding>();
  PrimitiveTypeName t = PrimitiveTypeName.BINARY;
  ColumnPath p = ColumnPath.get("foo");
  CompressionCodecName c = CompressionCodecName.GZIP;
  BinaryStatistics s = new BinaryStatistics();
  ColumnChunkMetaData md = ColumnChunkMetaData.get(p, t, c, e, s,
          0, 0, 0, 0, 0);
  return md;
}
 
Developer: apache | Project: parquet-mr | Lines: 11 | Source: TestParquetMetadataConverter.java

Example 6: fromParquetMetadata

import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData; // import for the method's package/class
public ParquetMetadata fromParquetMetadata(FileMetaData parquetMetadata) throws IOException {
  MessageType messageType = fromParquetSchema(parquetMetadata.getSchema(), parquetMetadata.getColumn_orders());
  List<BlockMetaData> blocks = new ArrayList<BlockMetaData>();
  List<RowGroup> row_groups = parquetMetadata.getRow_groups();
  if (row_groups != null) {
    for (RowGroup rowGroup : row_groups) {
      BlockMetaData blockMetaData = new BlockMetaData();
      blockMetaData.setRowCount(rowGroup.getNum_rows());
      blockMetaData.setTotalByteSize(rowGroup.getTotal_byte_size());
      List<ColumnChunk> columns = rowGroup.getColumns();
      String filePath = columns.get(0).getFile_path();
      for (ColumnChunk columnChunk : columns) {
        if ((filePath == null && columnChunk.getFile_path() != null)
            || (filePath != null && !filePath.equals(columnChunk.getFile_path()))) {
          throw new ParquetDecodingException("all column chunks of the same row group must be in the same file for now");
        }
        ColumnMetaData metaData = columnChunk.meta_data;
        ColumnPath path = getPath(metaData);
        ColumnChunkMetaData column = ColumnChunkMetaData.get(
            path,
            messageType.getType(path.toArray()).asPrimitiveType(),
            fromFormatCodec(metaData.codec),
            convertEncodingStats(metaData.getEncoding_stats()),
            fromFormatEncodings(metaData.encodings),
            fromParquetStatistics(
                parquetMetadata.getCreated_by(),
                metaData.statistics,
                messageType.getType(path.toArray()).asPrimitiveType()),
            metaData.data_page_offset,
            metaData.dictionary_page_offset,
            metaData.num_values,
            metaData.total_compressed_size,
            metaData.total_uncompressed_size);
        // TODO
        // index_page_offset
        // key_value_metadata
        blockMetaData.addColumn(column);
      }
      blockMetaData.setPath(filePath);
      blocks.add(blockMetaData);
    }
  }
  Map<String, String> keyValueMetaData = new HashMap<String, String>();
  List<KeyValue> key_value_metadata = parquetMetadata.getKey_value_metadata();
  if (key_value_metadata != null) {
    for (KeyValue keyValue : key_value_metadata) {
      keyValueMetaData.put(keyValue.key, keyValue.value);
    }
  }
  return new ParquetMetadata(
      new org.apache.parquet.hadoop.metadata.FileMetaData(messageType, keyValueMetaData, parquetMetadata.getCreated_by()),
      blocks);
}
 
Developer: apache | Project: parquet-mr | Lines: 54 | Source: ParquetMetadataConverter.java


Note: the org.apache.parquet.hadoop.metadata.ColumnChunkMetaData.get examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective authors, who retain copyright. Consult each project's license before redistributing or reusing the code; do not republish without permission.