

Java HdfsServerConstants.INVALID_TXID Field Code Examples

This article collects typical usage examples of the org.apache.hadoop.hdfs.server.common.HdfsServerConstants.INVALID_TXID field in Java. If you are wondering what HdfsServerConstants.INVALID_TXID is for, how to use it, or where to find real-world examples, the curated code samples below should help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.hdfs.server.common.HdfsServerConstants.


The sections below show 15 code examples of the HdfsServerConstants.INVALID_TXID field, sorted by popularity by default.
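All of the examples follow the same pattern: INVALID_TXID serves as a sentinel value meaning "no valid transaction ID yet" (or "not found"). As a quick orientation before the real examples, here is a minimal sketch of that pattern. The TxIdTracker class is hypothetical and not part of Hadoop; the sketch only assumes that the hadoop-hdfs server classes are available on the classpath.

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;

/**
 * Hypothetical helper (not part of Hadoop) that tracks the last transaction
 * ID seen, using HdfsServerConstants.INVALID_TXID as the "nothing recorded
 * yet" sentinel, which is the same convention used by the examples below.
 */
public class TxIdTracker {
  // Start out in the "no transaction ID recorded" state.
  private long lastTxId = HdfsServerConstants.INVALID_TXID;

  /** Record an observed transaction ID. */
  public void record(long txId) {
    this.lastTxId = txId;
  }

  /** @return true once a real transaction ID has been recorded. */
  public boolean hasTxId() {
    return lastTxId != HdfsServerConstants.INVALID_TXID;
  }

  /** Drop back to the sentinel, e.g. after aborting a log segment. */
  public void reset() {
    this.lastTxId = HdfsServerConstants.INVALID_TXID;
  }
}

The examples that follow show the same sentinel in context: initializing fields (Examples 3 and 4), resetting state (Examples 6 and 10), signalling "not found" from a scan (Examples 5 and 14), and testing whether a real transaction ID is present (Examples 11 and 15).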

Example 1: scanEditLog

/**
 * @param file          File being scanned and validated.
 * @param maxTxIdToScan Maximum Tx ID to try to scan.
 *                      The scan returns after reading this or a higher
 *                      ID. The file portion beyond this ID is
 *                      potentially being updated.
 * @return Result of the validation
 * @throws IOException
 */
static FSEditLogLoader.EditLogValidation scanEditLog(File file,
    long maxTxIdToScan, boolean verifyVersion)
    throws IOException {
  EditLogFileInputStream in;
  try {
    in = new EditLogFileInputStream(file);
    // Read the header and initialize the input stream, but do not check
    // the layout version.
    in.getVersion(verifyVersion);
  } catch (LogHeaderCorruptException e) {
    LOG.warn("Log file " + file + " has no valid header", e);
    return new FSEditLogLoader.EditLogValidation(0,
        HdfsServerConstants.INVALID_TXID, true);
  }

  try {
    return FSEditLogLoader.scanEditLog(in, maxTxIdToScan);
  } finally {
    IOUtils.closeStream(in);
  }
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 30, Source file: EditLogFileInputStream.java

Example 2: apply

@Override
public Long apply(RemoteEditLog log) {
  if (null == log) {
    return HdfsServerConstants.INVALID_TXID;
  }
  return log.getStartTxId();
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 7, Source file: RemoteEditLog.java

Example 3: refreshCachedData

/**
 * Reload any data that may have been cached. This is necessary
 * when we first load the Journal, but also after any formatting
 * operation, since the cached data is no longer relevant.
 */
private synchronized void refreshCachedData() {
  IOUtils.closeStream(committedTxnId);
  
  File currentDir = storage.getSingularStorageDir().getCurrentDir();
  this.lastPromisedEpoch = new PersistentLongFile(
      new File(currentDir, LAST_PROMISED_FILENAME), 0);
  this.lastWriterEpoch = new PersistentLongFile(
      new File(currentDir, LAST_WRITER_EPOCH), 0);
  this.committedTxnId = new BestEffortLongFile(
      new File(currentDir, COMMITTED_TXID_FILENAME),
      HdfsServerConstants.INVALID_TXID);
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 17, Source file: Journal.java

Example 4: EditLogLedgerMetadata

EditLogLedgerMetadata(String zkPath, int dataLayoutVersion,
                      long ledgerId, long firstTxId) {
  this.zkPath = zkPath;
  this.dataLayoutVersion = dataLayoutVersion;
  this.ledgerId = ledgerId;
  this.firstTxId = firstTxId;
  this.lastTxId = HdfsServerConstants.INVALID_TXID;
  this.inprogress = true;
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 9, Source file: EditLogLedgerMetadata.java

Example 5: scanOp

@Override
public long scanOp() throws IOException {
  // Edit logs of this age don't have any length prefix, so we just have
  // to read the entire Op.
  FSEditLogOp op = decodeOp();
  return op == null ?
      HdfsServerConstants.INVALID_TXID : op.getTransactionId();
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 8, Source file: FSEditLogOp.java

Example 6: abortCurSegment

private void abortCurSegment() throws IOException {
  if (curSegment == null) {
    return;
  }
  
  curSegment.abort();
  curSegment = null;
  curSegmentTxId = HdfsServerConstants.INVALID_TXID;
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 9, Source file: Journal.java

Example 7: getLatestImages

@Override
List<FSImageFile> getLatestImages() throws IOException {
  // We should have at least one image and one edits dirs
  if (latestNameSD == null)
    throw new IOException("Image file is not found in " + imageDirs);
  if (latestEditsSD == null)
    throw new IOException("Edits file is not found in " + editsDirs);
  
  // Make sure we are loading image and edits from same checkpoint
  if (latestNameCheckpointTime > latestEditsCheckpointTime
      && latestNameSD != latestEditsSD
      && latestNameSD.getStorageDirType() == NameNodeDirType.IMAGE
      && latestEditsSD.getStorageDirType() == NameNodeDirType.EDITS) {
    // This is a rare failure when NN has image-only and edits-only
    // storage directories, and fails right after saving images,
    // in some of the storage directories, but before purging edits.
    // See -NOTE- in saveNamespace().
    LOG.error("This is a rare failure scenario!!!");
    LOG.error("Image checkpoint time " + latestNameCheckpointTime +
              " > edits checkpoint time " + latestEditsCheckpointTime);
    LOG.error("Name-node will treat the image as the latest state of " +
              "the namespace. Old edits will be discarded.");
  } else if (latestNameCheckpointTime != latestEditsCheckpointTime) {
    throw new IOException("Inconsistent storage detected, " +
                    "image and edits checkpoint times do not match. " +
                    "image checkpoint time = " + latestNameCheckpointTime +
                    "edits checkpoint time = " + latestEditsCheckpointTime);
  }

  needToSaveAfterRecovery = doRecovery();
  
  FSImageFile file = new FSImageFile(latestNameSD, 
      NNStorage.getStorageFile(latestNameSD, NameNodeFile.IMAGE),
      HdfsServerConstants.INVALID_TXID);
  LinkedList<FSImageFile> ret = new LinkedList<FSImageFile>();
  ret.add(file);
  return ret;
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 38, Source file: FSImagePreTransactionalStorageInspector.java

Example 8: getCommittedTxnIdValue

private long getCommittedTxnIdValue(MiniQJMHACluster qjCluster)
    throws IOException {
  Journal journal1 = qjCluster.getJournalCluster().getJournalNode(0)
      .getOrCreateJournal(MiniQJMHACluster.NAMESERVICE);
  BestEffortLongFile committedTxnId = (BestEffortLongFile) Whitebox
      .getInternalState(journal1, "committedTxnId");
  return committedTxnId != null ? committedTxnId.get() :
      HdfsServerConstants.INVALID_TXID;
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 9, Source file: TestDFSUpgradeWithHA.java

Example 9: finalizeLogSegment

/**
 * Finalize the log segment at the given transaction ID.
 */
public synchronized void finalizeLogSegment(RequestInfo reqInfo, long startTxId,
    long endTxId) throws IOException {
  checkFormatted();
  checkRequest(reqInfo);

  boolean needsValidation = true;

  // Finalizing the log that the writer was just writing.
  if (startTxId == curSegmentTxId) {
    if (curSegment != null) {
      curSegment.close();
      curSegment = null;
      curSegmentTxId = HdfsServerConstants.INVALID_TXID;
    }
    
    checkSync(nextTxId == endTxId + 1,
        "Trying to finalize in-progress log segment %s to end at " +
        "txid %s but only written up to txid %s",
        startTxId, endTxId, nextTxId - 1);
    // No need to validate the edit log if the client is finalizing
    // the log segment that it was just writing to.
    needsValidation = false;
  }
  
  FileJournalManager.EditLogFile elf = fjm.getLogFile(startTxId);
  if (elf == null) {
    throw new JournalOutOfSyncException("No log file to finalize at " +
        "transaction ID " + startTxId);
  }

  if (elf.isInProgress()) {
    if (needsValidation) {
      LOG.info("Validating log segment " + elf.getFile() + " about to be " +
          "finalized");
      elf.scanLog(Long.MAX_VALUE, false);

      checkSync(elf.getLastTxId() == endTxId,
          "Trying to finalize in-progress log segment %s to end at " +
          "txid %s but log %s on disk only contains up to txid %s",
          startTxId, endTxId, elf.getFile(), elf.getLastTxId());
    }
    fjm.finalizeLogSegment(startTxId, endTxId);
  } else {
    Preconditions.checkArgument(endTxId == elf.getLastTxId(),
        "Trying to re-finalize already finalized log " +
            elf + " with different endTxId " + endTxId);
  }

  // Once logs are finalized, a different length will never be decided.
  // During recovery, we treat a finalized segment the same as an accepted
  // recovery. Thus, we no longer need to keep track of the previously-
  // accepted decision. The existence of the finalized log segment is enough.
  purgePaxosDecision(elf.getFirstTxId());
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 57, Source file: Journal.java

Example 10: reset

final void reset() {
  txid = HdfsServerConstants.INVALID_TXID;
  rpcClientId = RpcConstants.DUMMY_CLIENT_ID;
  rpcCallId = RpcConstants.INVALID_CALL_ID;
  resetSubFields();
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 6, Source file: FSEditLogOp.java

Example 11: hasTransactionId

public boolean hasTransactionId() {
  return (txid != HdfsServerConstants.INVALID_TXID);
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 3, Source file: FSEditLogOp.java

Example 12: getFirstTxId

@Override
public long getFirstTxId() {
  return HdfsServerConstants.INVALID_TXID;
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 4, Source file: TestEditLog.java

Example 13: selectInputStreams

@Override
public void selectInputStreams(Collection<EditLogInputStream> streams,
    long fromTxId, boolean inProgressOk)
    throws IOException {
  List<EditLogLedgerMetadata> currentLedgerList = getLedgerList(fromTxId,
      inProgressOk);
  try {
    BookKeeperEditLogInputStream elis = null;
    for (EditLogLedgerMetadata l : currentLedgerList) {
      long lastTxId = l.getLastTxId();
      if (l.isInProgress()) {
        lastTxId = recoverLastTxId(l, false);
      }
      // Check once again; this is required for in-progress ledgers and in
      // case of any gap.
      if (fromTxId >= l.getFirstTxId() && fromTxId <= lastTxId) {
        LedgerHandle h;
        if (l.isInProgress()) { // we don't want to fence the current journal
          h = bkc.openLedgerNoRecovery(l.getLedgerId(),
              BookKeeper.DigestType.MAC, digestpw.getBytes(Charsets.UTF_8));
        } else {
          h = bkc.openLedger(l.getLedgerId(), BookKeeper.DigestType.MAC,
              digestpw.getBytes(Charsets.UTF_8));
        }
        elis = new BookKeeperEditLogInputStream(h, l);
        elis.skipTo(fromTxId);
      } else {
        // If the ranges do not match, there might be a gap, so we should
        // not check further.
        return;
      }
      streams.add(elis);
      if (elis.getLastTxId() == HdfsServerConstants.INVALID_TXID) {
        return;
      }
      fromTxId = elis.getLastTxId() + 1;
    }
  } catch (BKException e) {
    throw new IOException("Could not open ledger for " + fromTxId, e);
  } catch (InterruptedException ie) {
    Thread.currentThread().interrupt();
    throw new IOException("Interrupted opening ledger for " + fromTxId, ie);
  }
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 44, Source file: BookKeeperJournalManager.java

Example 14: scanNextOp

/**
 * Go through the next operation from the stream storage.
 * @return the txid of the next operation.
 */
protected long scanNextOp() throws IOException {
  FSEditLogOp next = readOp();
  return next != null ? next.txid : HdfsServerConstants.INVALID_TXID;
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 8, Source file: EditLogInputStream.java

Example 15: hasCommittedTxId

public boolean hasCommittedTxId() {
  return (committedTxId != HdfsServerConstants.INVALID_TXID);
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 3, Source file: RequestInfo.java


Note: The org.apache.hadoop.hdfs.server.common.HdfsServerConstants.INVALID_TXID field examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets were selected from open-source projects contributed by many developers; copyright of the source code remains with the original authors. Please follow the corresponding project's license when distributing or using the code. Reproduction without permission is prohibited.