

Java DataBlockEncoding.ID_SIZE Field Code Examples

This article collects typical usage examples of the Java field org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.ID_SIZE. If you are wondering what DataBlockEncoding.ID_SIZE is for, or how to use it in practice, the selected examples below may help. You can also explore further usage examples of the containing class, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.


Below are 2 code examples of the DataBlockEncoding.ID_SIZE field, sorted by popularity by default.
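Before the examples, a standalone sketch of what the constant measures: in HBase, an encoded data block carries a two-byte encoding-algorithm ID ahead of the encoded key-values, and DataBlockEncoding.ID_SIZE is the width of that ID. The snippet below uses a local `ID_SIZE` constant and an arbitrary example ID instead of the real HBase class, so it compiles without HBase on the classpath; it is an illustration of the layout, not HBase's own implementation.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class IdSizeSketch {
    // Local stand-in for DataBlockEncoding.ID_SIZE (two bytes).
    static final int ID_SIZE = 2;

    // Prefix a payload with an encoding ID, mirroring the on-disk layout of
    // an encoded data block: [2-byte ID][encoded key-values].
    static byte[] withEncodingId(short encodingId, byte[] encodedKvs) {
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            DataOutputStream dos = new DataOutputStream(baos);
            dos.writeShort(encodingId); // writes exactly ID_SIZE bytes, big-endian
            dos.write(encodedKvs);
            dos.flush();
            return baos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] kvs = {1, 2, 3, 4};
        byte[] block = withEncodingId((short) 4, kvs); // 4 is an arbitrary example ID
        // The block grows by exactly ID_SIZE over the encoded payload.
        System.out.println(block.length == kvs.length + ID_SIZE); // true
    }
}
```

This is why both examples below add or skip exactly ID_SIZE bytes when computing sizes and offsets for encoded blocks.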

Example 1: writeEncodedBlock

static void writeEncodedBlock(DataBlockEncoding encoding,
    DataOutputStream dos, final List<Integer> encodedSizes,
    final List<ByteBuffer> encodedBlocks, int blockId, 
    boolean includesMemstoreTS) throws IOException {
  ByteArrayOutputStream baos = new ByteArrayOutputStream();
  DoubleOutputStream doubleOutputStream =
      new DoubleOutputStream(dos, baos);

  final int rawBlockSize = writeTestKeyValues(doubleOutputStream,
      blockId, includesMemstoreTS);

  ByteBuffer rawBuf = ByteBuffer.wrap(baos.toByteArray());
  rawBuf.rewind();

  final int encodedSize;
  final ByteBuffer encodedBuf;
  if (encoding == DataBlockEncoding.NONE) {
    encodedSize = rawBlockSize;
    encodedBuf = rawBuf;
  } else {
    ByteArrayOutputStream encodedOut = new ByteArrayOutputStream();
    encoding.getEncoder().compressKeyValues(
        new DataOutputStream(encodedOut),
        rawBuf.duplicate(), includesMemstoreTS);
    // We need to account for the two-byte encoding algorithm ID that
    // comes after the 24-byte block header but before encoded KVs.
    encodedSize = encodedOut.size() + DataBlockEncoding.ID_SIZE;
    encodedBuf = ByteBuffer.wrap(encodedOut.toByteArray());
  }
  encodedSizes.add(encodedSize);
  encodedBlocks.add(encodedBuf);
}
 
Developer: fengchen8086, Project: LCIndex-HBase-0.94.16, Lines: 32, Source: TestHFileBlock.java

Example 2: writeEncodedBlock

static void writeEncodedBlock(Algorithm algo, DataBlockEncoding encoding,
     DataOutputStream dos, final List<Integer> encodedSizes,
    final List<ByteBuffer> encodedBlocks, int blockId, 
    boolean includesMemstoreTS, byte[] dummyHeader, boolean useTag) throws IOException {
  ByteArrayOutputStream baos = new ByteArrayOutputStream();
  DoubleOutputStream doubleOutputStream =
      new DoubleOutputStream(dos, baos);
  writeTestKeyValues(doubleOutputStream, blockId, includesMemstoreTS, useTag);
  ByteBuffer rawBuf = ByteBuffer.wrap(baos.toByteArray());
  rawBuf.rewind();

  DataBlockEncoder encoder = encoding.getEncoder();
  int headerLen = dummyHeader.length;
  byte[] encodedResultWithHeader = null;
  HFileContext meta = new HFileContextBuilder()
                      .withCompression(algo)
                      .withIncludesMvcc(includesMemstoreTS)
                      .withIncludesTags(useTag)
                      .build();
  if (encoder != null) {
    HFileBlockEncodingContext encodingCtx = encoder.newDataBlockEncodingContext(encoding,
        dummyHeader, meta);
    encoder.encodeKeyValues(rawBuf, encodingCtx);
    encodedResultWithHeader =
        encodingCtx.getUncompressedBytesWithHeader();
  } else {
    HFileBlockDefaultEncodingContext defaultEncodingCtx = new HFileBlockDefaultEncodingContext(
        encoding, dummyHeader, meta);
    byte[] rawBufWithHeader =
        new byte[rawBuf.array().length + headerLen];
    System.arraycopy(rawBuf.array(), 0, rawBufWithHeader,
        headerLen, rawBuf.array().length);
    defaultEncodingCtx.compressAfterEncodingWithBlockType(rawBufWithHeader,
        BlockType.DATA);
    encodedResultWithHeader =
      defaultEncodingCtx.getUncompressedBytesWithHeader();
  }
  final int encodedSize =
      encodedResultWithHeader.length - headerLen;
  if (encoder != null) {
    // We need to account for the two-byte encoding algorithm ID that
    // comes after the 24-byte block header but before encoded KVs.
    headerLen += DataBlockEncoding.ID_SIZE;
  }
  byte[] encodedDataSection =
      new byte[encodedResultWithHeader.length - headerLen];
  System.arraycopy(encodedResultWithHeader, headerLen,
      encodedDataSection, 0, encodedDataSection.length);
  final ByteBuffer encodedBuf =
      ByteBuffer.wrap(encodedDataSection);
  encodedSizes.add(encodedSize);
  encodedBlocks.add(encodedBuf);
}
 
Developer: tenggyut, Project: HIndex, Lines: 53, Source: TestHFileBlock.java
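The offset arithmetic at the end of example 2 can be sketched in isolation: when the block was produced by a real encoder, the data section starts after the dummy header plus the two-byte encoding ID, so `headerLen` is bumped by ID_SIZE before the copy; for an unencoded block only the header is skipped. The snippet below uses a local `ID_SIZE` constant and made-up byte values purely for illustration.

```java
import java.util.Arrays;

public class DataSectionSketch {
    // Local stand-in for DataBlockEncoding.ID_SIZE (two bytes).
    static final int ID_SIZE = 2;

    // Extract the data section of a block: skip the header, plus the
    // encoding ID when an encoder produced the block.
    static byte[] dataSection(byte[] blockWithHeader, int headerLen, boolean encoded) {
        int skip = headerLen + (encoded ? ID_SIZE : 0);
        return Arrays.copyOfRange(blockWithHeader, skip, blockWithHeader.length);
    }

    public static void main(String[] args) {
        // Made-up layout: 3-byte header, 2-byte encoding ID, 3-byte data.
        byte[] block = {9, 9, 9, 0, 4, 1, 2, 3};
        byte[] data = dataSection(block, 3, true);
        System.out.println(Arrays.toString(data)); // [1, 2, 3]
    }
}
```

Note that in example 2 the reported `encodedSize` still subtracts only the original `headerLen`, so the size handed to the caller includes the ID bytes, matching example 1's `encodedOut.size() + DataBlockEncoding.ID_SIZE`.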


Note: The org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.ID_SIZE examples in this article were collected by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors; copyright remains with the original authors, and distribution and use are subject to each project's license. Do not reproduce without permission.