Java ReplicaUnderRecovery.unlinkBlock Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hdfs.server.datanode.ReplicaUnderRecovery.unlinkBlock. If you are unsure how ReplicaUnderRecovery.unlinkBlock is used in practice, the selected code examples below may help. You can also explore further usage examples of org.apache.hadoop.hdfs.server.datanode.ReplicaUnderRecovery.


Three code examples of the ReplicaUnderRecovery.unlinkBlock method are shown below, sorted by popularity by default.
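
In all three examples, rur.unlinkBlock(1) is called immediately before the replica is truncated to the recovered length. Conceptually, the call makes sure the block file is not shared through a hard link with another copy (as can happen, for instance, after a DataNode layout upgrade), so the truncate only affects this replica's own data. The standalone sketch below illustrates that idea only; it is a hypothetical helper, not Hadoop's ReplicaInfo.unlinkBlock implementation:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch: break a possible hard link by swapping in a private copy,
// so a later truncate of this path cannot affect any other link to the same data.
static void breakHardLink(Path file) throws IOException {
  Path tmp = file.resolveSibling(file.getFileName() + ".unlink.tmp");
  Files.copy(file, tmp, StandardCopyOption.COPY_ATTRIBUTES);  // make a private copy
  Files.move(tmp, file, StandardCopyOption.REPLACE_EXISTING); // replace the shared file
}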

Example 1: updateReplicaUnderRecovery

import org.apache.hadoop.hdfs.server.datanode.ReplicaUnderRecovery; // import the class this method depends on
private FinalizedReplica updateReplicaUnderRecovery(
                                        String bpid,
                                        ReplicaUnderRecovery rur,
                                        long recoveryId,
                                        long newlength) throws IOException {
  //check recovery id
  if (rur.getRecoveryID() != recoveryId) {
    throw new IOException("rur.getRecoveryID() != recoveryId = " + recoveryId
        + ", rur=" + rur);
  }

  // bump rur's GS to be recovery id
  bumpReplicaGS(rur, recoveryId);

  //update length
  final File replicafile = rur.getBlockFile();
  if (rur.getNumBytes() < newlength) {
    throw new IOException("rur.getNumBytes() < newlength = " + newlength
        + ", rur=" + rur);
  }
  if (rur.getNumBytes() > newlength) {
    rur.unlinkBlock(1);
    truncateBlock(replicafile, rur.getMetaFile(), rur.getNumBytes(), newlength);
    // update RUR with the new length
    rur.setNumBytes(newlength);
  }

  // finalize the block
  return finalizeReplica(bpid, rur);
}
 
Developer: Nextzero; Project: hadoop-2.6.0-cdh5.4.3; Lines of code: 31; Source: FsDatasetImpl.java
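
Example 1 comes from a CDH 5.4.3 (Hadoop 2.6) tree. When the on-disk replica is longer than the agreed recovery length, both the block file and its meta file are cut back by truncateBlock. The sketch below is a simplified, hypothetical stand-in for that step (the real FsDatasetImpl.truncateBlock also recomputes the checksum of the last partial chunk, which is omitted here; the method name and parameters are illustrative only):

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Simplified, hypothetical sketch -- not the actual FsDatasetImpl.truncateBlock.
static void truncateBlockFilesSketch(File blockFile, File metaFile,
    long newBlockLen, long newMetaLen) throws IOException {
  try (RandomAccessFile blockRAF = new RandomAccessFile(blockFile, "rw")) {
    blockRAF.setLength(newBlockLen);  // drop data bytes beyond the recovered length
  }
  try (RandomAccessFile metaRAF = new RandomAccessFile(metaFile, "rw")) {
    metaRAF.setLength(newMetaLen);    // drop checksum entries covering the dropped bytes
  }
}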

Example 2: updateReplicaUnderRecovery

import org.apache.hadoop.hdfs.server.datanode.ReplicaUnderRecovery; // import the class this method depends on
private FinalizedReplica updateReplicaUnderRecovery(String bpid,
    ReplicaUnderRecovery rur, long recoveryId, long newlength)
    throws IOException {
  //check recovery id
  if (rur.getRecoveryID() != recoveryId) {
    throw new IOException(
        "rur.getRecoveryID() != recoveryId = " + recoveryId + ", rur=" + rur);
  }

  // bump rur's GS to be recovery id
  bumpReplicaGS(rur, recoveryId);

  //update length
  final File replicafile = rur.getBlockFile();
  if (rur.getNumBytes() < newlength) {
    throw new IOException(
        "rur.getNumBytes() < newlength = " + newlength + ", rur=" + rur);
  }
  if (rur.getNumBytes() > newlength) {
    rur.unlinkBlock(1);
    truncateBlock(replicafile, rur.getMetaFile(), rur.getNumBytes(),
        newlength);
    // update RUR with the new length
    rur.setNumBytesNoPersistance(newlength);
  }

  // finalize the block
  return finalizeReplica(bpid, rur);
}
 
Developer: hopshadoop; Project: hops; Lines of code: 30; Source: FsDatasetImpl.java

Example 3: updateReplicaUnderRecovery

import org.apache.hadoop.hdfs.server.datanode.ReplicaUnderRecovery; // import the class this method depends on
private FinalizedReplica updateReplicaUnderRecovery(
                                        String bpid,
                                        ReplicaUnderRecovery rur,
                                        long recoveryId,
                                        long newBlockId,
                                        long newlength) throws IOException {
  //check recovery id
  if (rur.getRecoveryID() != recoveryId) {
    throw new IOException("rur.getRecoveryID() != recoveryId = " + recoveryId
        + ", rur=" + rur);
  }

  boolean copyOnTruncate = newBlockId > 0L && rur.getBlockId() != newBlockId;
  File blockFile;
  File metaFile;
  // bump rur's GS to be recovery id
  if(!copyOnTruncate) {
    bumpReplicaGS(rur, recoveryId);
    blockFile = rur.getBlockFile();
    metaFile = rur.getMetaFile();
  } else {
    File[] copiedReplicaFiles =
        copyReplicaWithNewBlockIdAndGS(rur, bpid, newBlockId, recoveryId);
    blockFile = copiedReplicaFiles[1];
    metaFile = copiedReplicaFiles[0];
  }

  //update length
  if (rur.getNumBytes() < newlength) {
    throw new IOException("rur.getNumBytes() < newlength = " + newlength
        + ", rur=" + rur);
  }
  if (rur.getNumBytes() > newlength) {
    rur.unlinkBlock(1);
    truncateBlock(blockFile, metaFile, rur.getNumBytes(), newlength);
    if(!copyOnTruncate) {
      // update RUR with the new length
      rur.setNumBytes(newlength);
    } else {
      // Copying block to a new block with new blockId.
      // Not truncating original block.
      ReplicaBeingWritten newReplicaInfo = new ReplicaBeingWritten(
          newBlockId, recoveryId, rur.getVolume(), blockFile.getParentFile(),
          newlength);
      newReplicaInfo.setNumBytes(newlength);
      volumeMap.add(bpid, newReplicaInfo);
      finalizeReplica(bpid, newReplicaInfo);
    }
  }

  // finalize the block
  return finalizeReplica(bpid, rur);
}
 
Developer: naver; Project: hadoop; Lines of code: 54; Source: FsDatasetImpl.java
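
Example 3 is from a Hadoop line that supports recovery after a file truncate: when newBlockId is positive and differs from the replica's current block id, the recovered data is written to a copy carrying the new block id (copy-on-truncate) and the original block file is left untruncated. In either branch, the surviving replica ends up with the recovery id as its generation stamp. The following hypothetical sketch shows only the file-renaming part of such a generation-stamp bump, using HDFS's blk_<blockId>_<genStamp>.meta naming convention; it is not the actual FsDatasetImpl.bumpReplicaGS, which also updates the in-memory replica object:

import java.io.File;
import java.io.IOException;

// Hypothetical sketch: rename a replica's meta file so its name carries the new
// generation stamp, e.g. blk_1073741825_1001.meta -> blk_1073741825_1010.meta.
static File bumpMetaFileGenStamp(File oldMeta, long blockId, long newGS)
    throws IOException {
  File newMeta = new File(oldMeta.getParentFile(),
      "blk_" + blockId + "_" + newGS + ".meta");
  if (!oldMeta.renameTo(newMeta)) {
    throw new IOException("failed to rename " + oldMeta + " to " + newMeta);
  }
  return newMeta;
}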


Note: The org.apache.hadoop.hdfs.server.datanode.ReplicaUnderRecovery.unlinkBlock method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by their respective authors; copyright of the source code remains with the original authors, and any distribution or use should follow the corresponding project's license. Please do not reproduce this article without permission.