This article collects and summarizes typical usage examples of the Java method org.apache.cassandra.db.commitlog.ReplayPosition.compareTo. If you have been wondering what ReplayPosition.compareTo does, how to use it, or where to find examples of it, then the curated method samples below should help. You can also read further about the enclosing class, org.apache.cassandra.db.commitlog.ReplayPosition.
The following shows 6 code examples of ReplayPosition.compareTo, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java examples.
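Before the examples, it helps to know what ReplayPosition.compareTo actually compares. The class itself is not shown on this page, so here is a minimal sketch, assuming the Cassandra 2.x layout where a replay position is a (commit log segment id, offset within segment) pair ordered lexicographically; the name ReplayPositionSketch and the field comments are this page's own, not Cassandra source:

public class ReplayPositionSketch implements Comparable<ReplayPositionSketch>
{
    public final long segment;  // id of the commit log segment (assumption: 2.x layout)
    public final int position;  // byte offset within that segment

    public ReplayPositionSketch(long segment, int position)
    {
        this.segment = segment;
        this.position = position;
    }

    public int compareTo(ReplayPositionSketch other)
    {
        // order by segment first, then by offset within the segment
        if (segment != other.segment)
            return Long.compare(segment, other.segment);
        return Integer.compare(position, other.position);
    }
}

With that ordering in mind, every a.compareTo(b) >= 0 below reads as "a is at or after b in the commit log".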
Example 1: accepts
import org.apache.cassandra.db.commitlog.ReplayPosition; // import the package/class this method depends on
public boolean accepts(OpOrder.Group opGroup, ReplayPosition replayPosition)
{
    // if the barrier hasn't been set yet, then this memtable is still taking ALL writes
    OpOrder.Barrier barrier = this.writeBarrier;
    if (barrier == null)
        return true;
    // if the barrier has been set, but is in the past, we are definitely destined for a future memtable
    if (!barrier.isAfter(opGroup))
        return false;
    // if we aren't durable we are directed only by the barrier
    if (replayPosition == null)
        return true;
    while (true)
    {
        // otherwise we check if we are in the past/future wrt the CL boundary;
        // if the boundary hasn't been finalised yet, we simply update it to the max of
        // its current value and ours; if it HAS been finalised, we simply accept its judgement
        // this permits us to coordinate a safe boundary, as the boundary choice is made
        // atomically wrt our max() maintenance, so an operation cannot sneak into the past
        ReplayPosition currentLast = lastReplayPosition.get();
        if (currentLast instanceof LastReplayPosition)
            return currentLast.compareTo(replayPosition) >= 0;
        if (currentLast != null && currentLast.compareTo(replayPosition) >= 0)
            return true;
        if (lastReplayPosition.compareAndSet(currentLast, replayPosition))
            return true;
    }
}
Example 2: accepts
import org.apache.cassandra.db.commitlog.ReplayPosition; // import the package/class this method depends on
public boolean accepts(OpOrder.Group opGroup, ReplayPosition replayPosition)
{
    // if the barrier hasn't been set yet, then this memtable is still taking ALL writes
    OpOrder.Barrier barrier = this.writeBarrier;
    if (barrier == null)
        return true;
    // if the barrier has been set, but is in the past, we are definitely destined for a future memtable
    if (!barrier.isAfter(opGroup))
        return false;
    // if we aren't durable we are directed only by the barrier
    if (replayPosition == null)
        return true;
    while (true)
    {
        // otherwise we check if we are in the past/future wrt the CL boundary;
        // if the boundary hasn't been finalised yet, we simply update it to the max of
        // its current value and ours; if it HAS been finalised, we simply accept its judgement
        // this permits us to coordinate a safe boundary, as the boundary choice is made
        // atomically wrt our max() maintenance, so an operation cannot sneak into the past
        ReplayPosition currentLast = commitLogUpperBound.get();
        if (currentLast instanceof LastReplayPosition)
            return currentLast.compareTo(replayPosition) >= 0;
        if (currentLast != null && currentLast.compareTo(replayPosition) >= 0)
            return true;
        if (commitLogUpperBound.compareAndSet(currentLast, replayPosition))
            return true;
    }
}
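Examples 1 and 2 are the same method from two different Cassandra versions; only the name of the atomic holder differs (lastReplayPosition vs. commitLogUpperBound). Both hinge on one trick: the flush finalises the boundary by storing an instance of the marker subclass LastReplayPosition, so a plain instanceof test tells a provisional maximum apart from the final verdict. Here is a standalone toy re-enactment of that pattern; all names (Position, FinalPosition, BoundaryDemo) are hypothetical stand-ins, not Cassandra classes:

import java.util.concurrent.atomic.AtomicReference;

public class BoundaryDemo
{
    static class Position implements Comparable<Position>
    {
        final int value;
        Position(int value) { this.value = value; }
        public int compareTo(Position other) { return Integer.compare(value, other.value); }
    }

    // marker subclass: carries no extra data, but its presence means "the boundary is final"
    static final class FinalPosition extends Position
    {
        FinalPosition(int value) { super(value); }
    }

    static final AtomicReference<Position> boundary = new AtomicReference<>();

    static boolean accepts(Position p)
    {
        while (true)
        {
            Position current = boundary.get();
            if (current instanceof FinalPosition)       // finalised: accept its judgement
                return current.compareTo(p) >= 0;
            if (current != null && current.compareTo(p) >= 0)
                return true;                            // already covered by the provisional max
            if (boundary.compareAndSet(current, p))
                return true;                            // we raised the provisional max ourselves
        }
    }

    public static void main(String[] args)
    {
        System.out.println(accepts(new Position(3)));   // true: raises provisional max to 3
        boundary.set(new FinalPosition(5));             // "flush" finalises the boundary at 5
        System.out.println(accepts(new Position(4)));   // true: 4 <= 5
        System.out.println(accepts(new Position(9)));   // false: belongs to the next memtable
    }
}

Until the flush installs the marker, every caller may raise the provisional maximum; afterwards the boundary's judgement is absolute, which is what makes the memtable switch race-free.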
Example 3: setCommitLogUpperBound
import org.apache.cassandra.db.commitlog.ReplayPosition; // import the package/class this method depends on
private static void setCommitLogUpperBound(AtomicReference<ReplayPosition> commitLogUpperBound)
{
    // we attempt to set the holder to the current commit log context. at the same time all writes to the memtables are
    // also maintaining this value, so if somebody sneaks ahead of us somehow (should be rare) we simply retry,
    // so that we know all operations prior to the position have not reached it yet
    ReplayPosition lastReplayPosition;
    while (true)
    {
        lastReplayPosition = new Memtable.LastReplayPosition(CommitLog.instance.getContext());
        ReplayPosition currentLast = commitLogUpperBound.get();
        if ((currentLast == null || currentLast.compareTo(lastReplayPosition) <= 0)
            && commitLogUpperBound.compareAndSet(currentLast, lastReplayPosition))
            break;
    }
}
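The retry loop in Example 3 (repeated verbatim inside Example 4's constructor) is the flush-side half of the handshake: sample a fresh commit log context, and keep retrying until that sample both dominates whatever a racing writer has recorded in the holder and is successfully CASed in. Distilled into a generic helper with hypothetical names (publishUpperBound is not a Cassandra API):

import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class PublishBoundary
{
    // Keep sampling a fresh candidate (e.g. CommitLog.instance.getContext()) until it
    // (a) is >= the holder's current value and (b) is successfully installed by CAS.
    static <T extends Comparable<T>> T publishUpperBound(AtomicReference<T> holder, Supplier<T> freshCandidate)
    {
        while (true)
        {
            T candidate = freshCandidate.get();
            T current = holder.get();
            if ((current == null || current.compareTo(candidate) <= 0)
                && holder.compareAndSet(current, candidate))
                return candidate;
            // someone installed a later position first; sample a newer candidate and retry
        }
    }

    public static void main(String[] args)
    {
        AtomicReference<Integer> holder = new AtomicReference<>(3); // a racing write recorded position 3
        int bound = publishUpperBound(holder, () -> 5);             // our sampled commit log context
        System.out.println(bound);                                  // 5
    }
}

Because writers only ever CAS-max the holder (see the put() methods in Examples 5 and 6), this loop terminates with the flush's own value installed, at a position no earlier than any accepted write.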
Example 4: Flush
import org.apache.cassandra.db.commitlog.ReplayPosition; // import the package/class this method depends on
private Flush(boolean truncate)
{
    // if true, we won't flush, we'll just wait for any outstanding writes, switch the memtable, and discard
    this.truncate = truncate;

    metric.pendingFlushes.inc();

    /**
     * To ensure correctness of switch without blocking writes, run() needs to wait for all write operations
     * started prior to the switch to complete. We do this by creating a Barrier on the writeOrdering
     * that all write operations register themselves with, and assigning this barrier to the memtables,
     * after which we *.issue()* the barrier. This barrier is used to direct write operations started prior
     * to the barrier.issue() into the memtable we have switched out, and any started after to its replacement.
     * In doing so it also tells the write operations to update the lastReplayPosition of the memtable, so
     * that we know the CL position we are dirty to, which can be marked clean when we complete.
     */
    writeBarrier = keyspace.writeOrder.newBarrier();
    memtables = new ArrayList<>();

    // submit flushes for the memtable for any indexed sub-cfses, and our own
    AtomicReference<ReplayPosition> lastReplayPositionHolder = new AtomicReference<>();
    for (ColumnFamilyStore cfs : concatWithIndexes())
    {
        // switch all memtables, regardless of their dirty status, setting the barrier
        // so that we can reach a coordinated decision about cleanliness once they
        // are no longer possible to be modified
        Memtable mt = cfs.data.switchMemtable(truncate);
        mt.setDiscarding(writeBarrier, lastReplayPositionHolder);
        memtables.add(mt);
    }

    // we now attempt to define the lastReplayPosition; we do this by grabbing the current limit from the CL
    // and attempting to set the holder to this value. at the same time all writes to the memtables are
    // also maintaining this value, so if somebody sneaks ahead of us somehow (should be rare) we simply retry,
    // so that we know all operations prior to the position have not reached it yet
    ReplayPosition lastReplayPosition;
    while (true)
    {
        lastReplayPosition = new Memtable.LastReplayPosition(CommitLog.instance.getContext());
        ReplayPosition currentLast = lastReplayPositionHolder.get();
        if ((currentLast == null || currentLast.compareTo(lastReplayPosition) <= 0)
            && lastReplayPositionHolder.compareAndSet(currentLast, lastReplayPosition))
            break;
    }

    // we then issue the barrier; this lets us wait for all operations started prior to the barrier to complete;
    // since this happens after wiring up the lastReplayPosition, we also know all operations with earlier
    // replay positions have also completed, i.e. the memtables are done and ready to flush
    writeBarrier.issue();
    postFlush = new PostFlush(!truncate, writeBarrier, lastReplayPosition);
}
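Note the ordering that Example 4 enforces: the Memtable.LastReplayPosition marker is CASed into the holder before writeBarrier.issue() is called. Installing the marker is exactly what flips accepts() (Examples 1 and 2) from raise-the-provisional-max mode into accept-the-final-judgement mode, so by the time the barrier is issued no write operation can slip in behind the chosen commit log boundary.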
Example 5: put
import org.apache.cassandra.db.commitlog.ReplayPosition; // import the package/class this method depends on
/**
 * Should only be called by ColumnFamilyStore.apply via Keyspace.apply, which supplies the appropriate
 * OpOrdering.
 *
 * replayPosition should only be null if this is a secondary index, in which case it is *expected* to be null
 */
void put(DecoratedKey key, ColumnFamily cf, SecondaryIndexManager.Updater indexer, OpOrder.Group opGroup, ReplayPosition replayPosition)
{
    if (replayPosition != null && writeBarrier != null)
    {
        // if the writeBarrier is set, we want to maintain lastReplayPosition; this is an optimisation to avoid
        // CASing it for every write, but still ensure it is correct when writeBarrier.await() completes.
        while (true)
        {
            ReplayPosition last = lastReplayPosition.get();
            if (last.compareTo(replayPosition) >= 0)
                break;
            if (lastReplayPosition.compareAndSet(last, replayPosition))
                break;
        }
    }

    AtomicBTreeColumns previous = rows.get(key);
    if (previous == null)
    {
        AtomicBTreeColumns empty = cf.cloneMeShallow(AtomicBTreeColumns.factory, false);
        final DecoratedKey cloneKey = allocator.clone(key, opGroup);
        // We'll add the columns later. This avoids wasting work if we get beaten in the putIfAbsent
        previous = rows.putIfAbsent(cloneKey, empty);
        if (previous == null)
        {
            previous = empty;
            // allocate the row overhead after the fact; this saves over allocating and having to free after, but
            // means we can overshoot our declared limit.
            int overhead = (int) (cfs.partitioner.getHeapSizeOf(key.getToken()) + ROW_OVERHEAD_HEAP_SIZE);
            allocator.onHeap().allocate(overhead, opGroup);
        }
        else
        {
            allocator.reclaimer().reclaimImmediately(cloneKey);
        }
    }

    liveDataSize.addAndGet(previous.addAllWithSizeDelta(cf, allocator, opGroup, indexer));
    currentOperations.addAndGet(cf.getColumnCount() + (cf.isMarkedForDelete() ? 1 : 0) + cf.deletionInfo().rangeCount());
}
Example 6: put
import org.apache.cassandra.db.commitlog.ReplayPosition; // import the package/class this method depends on
/**
 * Should only be called by ColumnFamilyStore.apply via Keyspace.apply, which supplies the appropriate
 * OpOrdering.
 *
 * replayPosition should only be null if this is a secondary index, in which case it is *expected* to be null
 */
void put(DecoratedKey key, ColumnFamily cf, SecondaryIndexManager.Updater indexer, OpOrder.Group opGroup, ReplayPosition replayPosition)
{
    if (replayPosition != null && writeBarrier != null)
    {
        // if the writeBarrier is set, we want to maintain lastReplayPosition; this is an optimisation to avoid
        // CASing it for every write, but still ensure it is correct when writeBarrier.await() completes.
        // we clone the replay position so that the object passed in does not "escape", permitting stack allocation
        replayPosition = replayPosition.clone();
        while (true)
        {
            ReplayPosition last = lastReplayPosition.get();
            if (last.compareTo(replayPosition) >= 0)
                break;
            if (lastReplayPosition.compareAndSet(last, replayPosition))
                break;
        }
    }

    AtomicBTreeColumns previous = rows.get(key);
    if (previous == null)
    {
        AtomicBTreeColumns empty = cf.cloneMeShallow(AtomicBTreeColumns.factory, false);
        final DecoratedKey cloneKey = new DecoratedKey(key.token, allocator.clone(key.key, opGroup));
        // We'll add the columns later. This avoids wasting work if we get beaten in the putIfAbsent
        previous = rows.putIfAbsent(cloneKey, empty);
        if (previous == null)
        {
            previous = empty;
            // allocate the row overhead after the fact; this saves over allocating and having to free after, but
            // means we can overshoot our declared limit.
            int overhead = (int) (cfs.partitioner.getHeapSizeOf(key.token) + ROW_OVERHEAD_HEAP_SIZE);
            allocator.allocate(overhead, opGroup);
        }
        else
        {
            allocator.free(cloneKey.key);
        }
    }

    ContextAllocator contextAllocator = allocator.wrap(opGroup);
    AtomicBTreeColumns.Delta delta = previous.addAllWithSizeDelta(cf, contextAllocator, indexer, new AtomicBTreeColumns.Delta());
    liveDataSize.addAndGet(delta.dataSize());
    currentOperations.addAndGet(cf.getColumnCount() + (cf.isMarkedForDelete() ? 1 : 0) + cf.deletionInfo().rangeCount());

    // allocate or free the delta in column overhead after the fact
    for (Cell cell : delta.reclaimed())
    {
        cell.name.free(allocator);
        allocator.free(cell.value);
    }
    allocator.allocate((int) delta.excessHeapSize(), opGroup);
}