This page collects typical usage examples of the Java method org.apache.cassandra.config.DatabaseDescriptor.getInMemoryCompactionLimit. If you have been wondering what DatabaseDescriptor.getInMemoryCompactionLimit does and how to use it, the examples selected below should help; you can also look further into its enclosing class, org.apache.cassandra.config.DatabaseDescriptor.
Two code examples of DatabaseDescriptor.getInMemoryCompactionLimit are shown below, ordered by popularity.
Example 1: getCompactedRow
import org.apache.cassandra.config.DatabaseDescriptor; // import the package/class the method depends on
/**
 * @return an AbstractCompactedRow implementation to write the merged rows in question.
 *
 * If there is a single source row, the data is from a current-version sstable, we don't
 * need to purge, and we aren't forcing deserialization for scrub, then the row is written
 * unchanged. Otherwise, we deserialize, purge tombstones, and reserialize in the latest version.
 */
public AbstractCompactedRow getCompactedRow(List<SSTableIdentityIterator> rows)
{
    // Sum the on-disk size of every version of this row being merged.
    long rowSize = 0;
    for (SSTableIdentityIterator row : rows)
        rowSize += row.dataSize;

    if (rowSize > DatabaseDescriptor.getInMemoryCompactionLimit())
    {
        // Too large to merge in memory: log the row and compact it incrementally.
        String keyString = cfs.metadata.getKeyValidator().getString(rows.get(0).getKey().key);
        logger.info(String.format("Compacting large row %s/%s:%s (%d bytes) incrementally",
                                  cfs.keyspace.getName(), cfs.name, keyString, rowSize));
        return new LazilyCompactedRow(this, rows);
    }
    // Small enough to merge entirely in memory.
    return new PrecompactedRow(this, rows);
}
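For context, the byte threshold that rowSize is compared against was configured in older Cassandra versions through the in_memory_compaction_limit_in_mb option in cassandra.yaml. Below is a minimal sketch of the megabytes-to-bytes conversion, assuming that setting; the helper name is illustrative, not Cassandra's actual accessor:

// Illustrative helper, not the real DatabaseDescriptor code: convert the
// in_memory_compaction_limit_in_mb yaml setting into the byte limit used above.
public static long inMemoryCompactionLimitBytes(int limitInMb)
{
    // Multiply as long to avoid int overflow for limits of 2048 MB or more.
    return limitInMb * 1024L * 1024L;
}

Rows under this limit are merged entirely in memory via PrecompactedRow; larger rows fall back to LazilyCompactedRow, which merges incrementally instead of buffering the whole row at once.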
Example 2: ParallelCompactionIterable
import org.apache.cassandra.config.DatabaseDescriptor; // import the package/class the method depends on
public ParallelCompactionIterable(OperationType type, List<ICompactionScanner> scanners, CompactionController controller)
{
    // Give each parallel scanner an equal share of the global in-memory compaction limit.
    this(type, scanners, controller, DatabaseDescriptor.getInMemoryCompactionLimit() / scanners.size());
}
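Dividing the global limit by scanners.size() gives each parallel scanner an equal slice of the memory budget, but it also assumes a non-empty scanner list. A hedged sketch of the same split with that edge case guarded; the method name is hypothetical, not part of the Cassandra API:

// Illustrative helper, not from the Cassandra source: split a global in-memory
// budget evenly across n parallel scanners, rejecting the n == 0 case that the
// constructor above would hit as a division by zero.
static long perScannerBudget(long globalLimitBytes, int scannerCount)
{
    if (scannerCount <= 0)
        throw new IllegalArgumentException("scannerCount must be positive: " + scannerCount);
    return globalLimitBytes / scannerCount;
}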