

Java CFMetaData.partitionColumns Method Code Examples

This article collects typical usage examples of the Java method org.apache.cassandra.config.CFMetaData.partitionColumns. If you are wondering how CFMetaData.partitionColumns is used in practice, or looking for concrete examples of calling it, the curated code samples below should help. You can also browse further usage examples of the enclosing class, org.apache.cassandra.config.CFMetaData.


The following shows four code examples of the CFMetaData.partitionColumns method, sorted by popularity by default.
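Before the examples, here is a minimal orientation sketch (not taken from any of the projects below, and assuming a Cassandra 3.x-style CFMetaData): partitionColumns() returns a PartitionColumns object holding the table's static and regular (non-primary-key) columns, which all four examples then hand to a SerializationHeader. The headerFor helper is hypothetical.

import org.apache.cassandra.config.CFMetaData;
import org.apache.cassandra.config.ColumnDefinition;
import org.apache.cassandra.db.PartitionColumns;
import org.apache.cassandra.db.SerializationHeader;
import org.apache.cassandra.db.rows.EncodingStats;

// Hypothetical helper, for illustration only.
public static SerializationHeader headerFor(CFMetaData metadata) {
    // partitionColumns() returns the table's non-primary-key columns, grouped into statics and regulars.
    PartitionColumns columns = metadata.partitionColumns();
    for (ColumnDefinition column : columns) {
        System.out.println(column.name + " static=" + column.isStatic());
    }
    // The recurring pattern in the examples below: pass the full column set to a SerializationHeader.
    return new SerializationHeader(true, metadata, columns, EncodingStats.NO_STATS);
}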

Example 1: createSSTableWriter

import org.apache.cassandra.config.CFMetaData; // import the package/class the method depends on
public static SSTableWriter createSSTableWriter(final Descriptor inputSSTableDescriptor,
                                                final CFMetaData outCfmMetaData,
                                                final SSTableReader inputSSTable) {
    final String sstableDirectory = System.getProperty("user.dir") + "/cassandra/compresseddata";
    LOGGER.info("Output directory: " + sstableDirectory);

    final File outputDirectory = new File(sstableDirectory + File.separatorChar
            + inputSSTableDescriptor.ksname
            + File.separatorChar + inputSSTableDescriptor.cfname);

    if (!outputDirectory.exists() && !outputDirectory.mkdirs()) {
        throw new FSWriteError(new IOException("failed to create tmp directory"),
                outputDirectory.getAbsolutePath());
    }

    final SSTableFormat.Type sstableFormat = SSTableFormat.Type.BIG;

    final BigTableWriter writer = new BigTableWriter(
            new Descriptor(
                    sstableFormat.info.getLatestVersion().getVersion(),
                    outputDirectory.getAbsolutePath(),
                    inputSSTableDescriptor.ksname, inputSSTableDescriptor.cfname,
                    inputSSTableDescriptor.generation,
                    sstableFormat,
                    inputSSTableDescriptor.getConfiguration()),
            inputSSTable.getTotalRows(), 0L, outCfmMetaData,
            new MetadataCollector(outCfmMetaData.comparator)
                    .sstableLevel(inputSSTable.getSSTableMetadata().sstableLevel),
            new SerializationHeader(true,
                    outCfmMetaData, outCfmMetaData.partitionColumns(),
                    org.apache.cassandra.db.rows.EncodingStats.NO_STATS));

    return writer;
}
 
Developer: Netflix, Project: sstable-adaptor, Lines of code: 35, Source: SSTableUtils.java
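A hedged sketch of how the writer returned above might be driven; this is not part of the original SSTableUtils code and assumes inputSSTableDescriptor, outCfmMetaData and inputSSTable are the same objects passed to createSSTableWriter.

import org.apache.cassandra.io.sstable.ISSTableScanner; // plus the classes already used in Example 1
// Hypothetical usage sketch, under the assumptions stated above.
SSTableWriter writer = SSTableUtils.createSSTableWriter(inputSSTableDescriptor, outCfmMetaData, inputSSTable);
try (ISSTableScanner scanner = inputSSTable.getScanner()) {
    while (scanner.hasNext()) {
        writer.append(scanner.next()); // copy each partition from the input sstable into the new one
    }
}
writer.finish(true); // seal the new sstable and open the resulting reader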

Example 2: makeWithoutStats

import org.apache.cassandra.config.CFMetaData; // import the package/class the method depends on
public static SerializationHeader makeWithoutStats(CFMetaData metadata)
{
    return new SerializationHeader(true, metadata, metadata.partitionColumns(), EncodingStats.NO_STATS);
}
 
Developer: Netflix, Project: sstable-adaptor, Lines of code: 5, Source: SerializationHeader.java
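A short usage note: makeWithoutStats is simply a convenience for the pattern used throughout these examples, so the two headers below are equivalent (metadata stands for any loaded CFMetaData).

SerializationHeader viaFactory = SerializationHeader.makeWithoutStats(metadata);
// ...which is the same as spelling the constructor out by hand:
SerializationHeader byHand = new SerializationHeader(true, metadata, metadata.partitionColumns(), EncodingStats.NO_STATS);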

Example 3: write

import org.apache.cassandra.config.CFMetaData; // import the package/class the method depends on
public List<String> write(Iterator<T> data) throws IOException {
    SSTableTxnWriter writer = null;
    try {
        CFMetaData outputCFMetaData = setCFMetadataWithParams(origCFMetaData,
                                                              cassTable.getKeyspaceName(),
                                                              cassTable.getTableName());

        Descriptor outDescriptor = new Descriptor(BigFormat.latestVersion.getVersion(),
                outLocation,
                cassTable.getKeyspaceName(),
                cassTable.getTableName(),
                generation++,
                SSTableFormat.Type.BIG,
                conf);

        SerializationHeader header = new SerializationHeader(true,
                outputCFMetaData,
                outputCFMetaData.partitionColumns(),
                EncodingStats.NO_STATS);

        //Todo: fix these settings
        writer = SSTableTxnWriter.createWithNoLogging(outputCFMetaData, outDescriptor, 4, -1, 1, header);

        while (data.hasNext())
            writer.append(data.next());

    } catch (Exception e) {
        LOGGER.error(e.getMessage(), e); // log the failure with its stack trace before rethrowing
        throw e;
    } finally {
        if (writer != null) {
            writer.finish();
            LOGGER.info("Done saving sstable to: " + outLocation);
        }
        FileUtils.closeQuietly(writer);
    }

    List<String> retVal = new LinkedList<>();
    retVal.add(outLocation);

    return retVal;
}
 
Developer: Netflix, Project: sstable-adaptor, Lines of code: 43, Source: SSTableSingleWriter.java
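The element type T of write(Iterator<T>) is not shown in this snippet; since SSTableTxnWriter.append(...) consumes an UnfilteredRowIterator (see Example 4), a plausible way to drive the method is sketched below. Both buildUpdates() and singleWriter are hypothetical placeholders, not part of the project.

// Hedged sketch, not taken from the project; assumes T is UnfilteredRowIterator.
List<PartitionUpdate> updates = buildUpdates();   // hypothetical helper producing PartitionUpdate objects
Iterator<UnfilteredRowIterator> rows =
        updates.stream().map(PartitionUpdate::unfilteredIterator).iterator();
List<String> outputPaths = singleWriter.write(rows);  // 'singleWriter' is a hypothetical instance of the class above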

Example 4: testCreatingSSTableWithTnx

import org.apache.cassandra.config.CFMetaData; // import the package/class the method depends on
/**
 * Test creating sstable files using SSTableTxnWriter.
 * @throws IOException
 */
@Test
public void testCreatingSSTableWithTnx() throws IOException {
    final String inputSSTableFullPathFileName = CASS3_DATA_DIR + "keyspace1/bills_compress/mc-6-big-Data.db";

    final Descriptor descriptor = Descriptor.fromFilename(inputSSTableFullPathFileName,
                                                          TestBaseSSTableFunSuite.HADOOP_CONF);
    final CFMetaData inputCFMetaData =
            SSTableUtils.metaDataFromSSTable(inputSSTableFullPathFileName,
                                                    "casspactor",
                                                    "bills_compress",
                                                    Collections.<String>emptyList(),
                                                    Collections.<String>emptyList(),
                                                    TestBaseSSTableFunSuite.HADOOP_CONF);

    final CFMetaData outputCFMetaData = SSTableUtils.createNewCFMetaData(descriptor, inputCFMetaData);
    final SerializationHeader header = new SerializationHeader(true, outputCFMetaData,
        inputCFMetaData.partitionColumns(),
        EncodingStats.NO_STATS);

    final Descriptor outDescriptor = new Descriptor(
        SSTableFormat.Type.BIG.info.getLatestVersion().getVersion(),
        "/tmp",
        "casspactor",
        "bills_compress",
        9,
        SSTableFormat.Type.BIG, TestBaseSSTableFunSuite.HADOOP_CONF);

    final SSTableTxnWriter writer = SSTableTxnWriter.create(outputCFMetaData,
                                                            outDescriptor,
                                                            4,
                                                            -1,
                                                            1,
                                                            header);

    final ColumnDefinition staticCollDef =
        ColumnDefinition.staticDef(inputCFMetaData, ByteBuffer.wrap("balance".getBytes()), Int32Type.instance);
    final ColumnDefinition regCollDef1 =
        ColumnDefinition.regularDef(inputCFMetaData, ByteBuffer.wrap("amount".getBytes()), Int32Type.instance);
    final ColumnDefinition regCollDef2 =
        ColumnDefinition.regularDef(inputCFMetaData, ByteBuffer.wrap("name".getBytes()), UTF8Type.instance);

    final DecoratedKey key = Murmur3Partitioner.instance.decorateKey(ByteBuffer.wrap("user1".getBytes()));
    final long now = System.currentTimeMillis();

    final Row.Builder builder = BTreeRow.sortedBuilder();
    builder.newRow(Clustering.STATIC_CLUSTERING);
    builder.addCell(BufferCell.live(staticCollDef, now, Int32Type.instance.decompose(123)));
    final PartitionUpdate partitionUpdate = PartitionUpdate.singleRowUpdate(inputCFMetaData,
        key, builder.build());
    final Row.Builder builder2 = BTreeRow.sortedBuilder();
    final Clustering clustering2 = new BufferClustering(Int32Type.instance.decompose(10000));
    builder2.newRow(clustering2);
    builder2.addCell(BufferCell.live(regCollDef1, now, Int32Type.instance.decompose(5)));
    builder2.addCell(BufferCell.live(regCollDef2, now, UTF8Type.instance.decompose("minh1")));

    final PartitionUpdate partitionUpdate2 = PartitionUpdate.singleRowUpdate(inputCFMetaData,
        key, builder2.build());

    final List<PartitionUpdate> partitionUpdates = new ArrayList<PartitionUpdate>() {
        private static final long serialVersionUID = 1L;
        {
            add(partitionUpdate);
            add(partitionUpdate2);
        }
    };

    final PartitionUpdate mergedUpdate = PartitionUpdate.merge(partitionUpdates);

    writer.append(mergedUpdate.unfilteredIterator());
    writer.finish(false);
}
 
Developer: Netflix, Project: sstable-adaptor, Lines of code: 76, Source: TestSSTableDataWriter.java


Note: The org.apache.cassandra.config.CFMetaData.partitionColumns examples in this article were collected by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects contributed by their respective authors; copyright remains with the original authors, and distribution and use should follow each project's license. Do not reproduce without permission.