

Java CompressionMetadata.create Method Code Examples

This article collects typical usage examples of the Java method org.apache.cassandra.io.compress.CompressionMetadata.create. If you are wondering what CompressionMetadata.create does, how to use it, or what real-world calls look like, the curated examples below may help. You can also explore further usage examples of the containing class, org.apache.cassandra.io.compress.CompressionMetadata.


The sections below present 7 code examples of CompressionMetadata.create, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
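Before the examples, the recurring pattern they share can be sketched in isolation: build the compression metadata lazily from the file path only when no writer already holds an open copy. The classes below are self-contained stand-in stubs for illustration, not the real Cassandra classes.

```java
// Stand-in stub mirroring the static factory the examples call.
// NOT the real org.apache.cassandra.io.compress.CompressionMetadata.
class CompressionMetadata {
    final String path;

    private CompressionMetadata(String path) {
        this.path = path;
    }

    static CompressionMetadata create(String path) {
        return new CompressionMetadata(path);
    }
}

// Hypothetical holder showing the create-on-demand guard used in
// Example 1's complete(long) and the metadata(...) overloads below.
class MetadataHolder {
    private CompressionMetadata metadata;

    CompressionMetadata metadata(String path) {
        if (metadata == null)                       // only build once
            metadata = CompressionMetadata.create(path);
        return metadata;
    }
}
```

Repeated calls return the same instance, which is why the real code can safely hand the metadata to both the chunk reader and the cleanup object.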

Example 1: complete

import org.apache.cassandra.io.compress.CompressionMetadata; // import the package/class this method depends on
/**
 * Complete building {@link FileHandle} with the given length, which overrides the file length.
 *
 * @param overrideLength Override file length (in bytes) so that read cannot go further than this value.
 *                       If the value is less than or equal to 0, then the value is ignored.
 * @return Built file
 */
@SuppressWarnings("resource")
public FileHandle complete(long overrideLength)
{
    ChannelProxy channelCopy = ChannelProxy.newInstance(path, this.conf);
    try
    {
        if (compressed && compressionMetadata == null)
            compressionMetadata = CompressionMetadata.create(channelCopy.filePath(),
                                                            channelCopy.size(),
                                                            this.conf);

        long length = overrideLength > 0
                    ? overrideLength
                    : (compressed ? compressionMetadata.compressedFileLength : channelCopy.size());

        RebuffererFactory rebuffererFactory;

        if (compressed)
        {
            rebuffererFactory = maybeCached(new CompressedChunkReader.Standard(channelCopy, compressionMetadata));
        }
        else
        {
            rebuffererFactory = maybeCached(new SimpleChunkReader(channelCopy, length, bufferType, bufferSize));
        }

        Cleanup cleanup = new Cleanup(channelCopy, rebuffererFactory, compressionMetadata, chunkCache);
        return new FileHandle(cleanup, channelCopy, rebuffererFactory, compressionMetadata, length, conf);
    }
    catch (Throwable t)
    {
        channelCopy.close();
        throw t;
    }
}
 
Developer: Netflix | Project: sstable-adaptor | Lines: 41 | Source: FileHandle.java

Example 2: metadata

import org.apache.cassandra.io.compress.CompressionMetadata; // import the package/class this method depends on
protected CompressionMetadata metadata(String path, long overrideLength, boolean isFinal)
{
    if (writer == null)
        return CompressionMetadata.create(path);

    return writer.open(overrideLength, isFinal);
}
 
Developer: vcostet | Project: cassandra-kmean | Lines: 8 | Source: CompressedSegmentedFile.java

Example 3: metadata

import org.apache.cassandra.io.compress.CompressionMetadata; // import the package/class this method depends on
protected CompressionMetadata metadata(String path, long overrideLength)
{
    if (writer == null)
        return CompressionMetadata.create(path);

    return writer.open(overrideLength);
}
 
Developer: scylladb | Project: scylla-tools-java | Lines: 8 | Source: CompressedSegmentedFile.java

Example 4: overrideWithGarbage

import org.apache.cassandra.io.compress.CompressionMetadata; // import the package/class this method depends on
private void overrideWithGarbage(SSTableReader sstable, ByteBuffer key1, ByteBuffer key2) throws IOException
{
    boolean compression = Boolean.parseBoolean(System.getProperty("cassandra.test.compression", "false"));
    long startPosition, endPosition;

    if (compression)
    { // overwrite with garbage the compression chunks from key1 to key2
        CompressionMetadata compData = CompressionMetadata.create(sstable.getFilename());

        CompressionMetadata.Chunk chunk1 = compData.chunkFor(
                sstable.getPosition(PartitionPosition.ForKey.get(key1, sstable.getPartitioner()), SSTableReader.Operator.EQ).position);
        CompressionMetadata.Chunk chunk2 = compData.chunkFor(
                sstable.getPosition(PartitionPosition.ForKey.get(key2, sstable.getPartitioner()), SSTableReader.Operator.EQ).position);

        startPosition = Math.min(chunk1.offset, chunk2.offset);
        endPosition = Math.max(chunk1.offset + chunk1.length, chunk2.offset + chunk2.length);

        compData.close();
    }
    else
    { // overwrite with garbage from key1 to key2
        long row0Start = sstable.getPosition(PartitionPosition.ForKey.get(key1, sstable.getPartitioner()), SSTableReader.Operator.EQ).position;
        long row1Start = sstable.getPosition(PartitionPosition.ForKey.get(key2, sstable.getPartitioner()), SSTableReader.Operator.EQ).position;
        startPosition = Math.min(row0Start, row1Start);
        endPosition = Math.max(row0Start, row1Start);
    }

    overrideWithGarbage(sstable, startPosition, endPosition);
}
 
Developer: scylladb | Project: scylla-tools-java | Lines: 30 | Source: ScrubTest.java
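The byte-range arithmetic in Example 4 can be isolated into a self-contained sketch: given two compression chunks (each an offset plus a length), the region to overwrite runs from the earlier chunk's start to the later chunk's end, regardless of which key maps to which chunk. Chunk and GarbageSpan here are hypothetical stand-ins, not Cassandra's CompressionMetadata.Chunk.

```java
// Minimal stand-in for a compression chunk: a start offset and a length.
class Chunk {
    final long offset;
    final int length;

    Chunk(long offset, int length) {
        this.offset = offset;
        this.length = length;
    }
}

class GarbageSpan {
    // Returns {startPosition, endPosition} covering both chunks,
    // mirroring the Math.min/Math.max logic in overrideWithGarbage.
    static long[] span(Chunk a, Chunk b) {
        long start = Math.min(a.offset, b.offset);
        long end = Math.max(a.offset + a.length, b.offset + b.length);
        return new long[] { start, end };
    }
}
```

Taking min over starts and max over ends means the result is order-independent, which matters because key1 and key2 are not guaranteed to be in on-disk order.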

Example 5: metadata

import org.apache.cassandra.io.compress.CompressionMetadata; // import the package/class this method depends on
protected CompressionMetadata metadata(String path, boolean early)
{
    if (writer == null)
        return CompressionMetadata.create(path);
    else if (early)
        return writer.openEarly();
    else
        return writer.openAfterClose();
}
 
Developer: daidong | Project: GraphTrek | Lines: 10 | Source: CompressedSegmentedFile.java

Example 6: complete

import org.apache.cassandra.io.compress.CompressionMetadata; // import the package/class this method depends on
public SegmentedFile complete(String path)
{
    return new CompressedPoolingSegmentedFile(path, CompressionMetadata.create(path));
}
 
Developer: pgaref | Project: ACaZoo | Lines: 5 | Source: CompressedPoolingSegmentedFile.java

Example 7: complete

import org.apache.cassandra.io.compress.CompressionMetadata; // import the package/class this method depends on
public SegmentedFile complete(String path)
{
    return new CompressedSegmentedFile(path, CompressionMetadata.create(path));
}
 
Developer: pgaref | Project: ACaZoo | Lines: 5 | Source: CompressedSegmentedFile.java


Note: the org.apache.cassandra.io.compress.CompressionMetadata.create examples in this article were compiled from open-source code and documentation platforms such as GitHub/MSDocs. The snippets were selected from open-source projects contributed by the community; copyright remains with the original authors, and distribution and use are governed by each project's License. Please do not reproduce without permission.