

Java DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY field code examples

This article collects typical usage examples of the Java field org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY. If you are asking yourself what this field is for, or how to use DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY in practice, the curated example below may help. You can also explore further usage examples of the containing class, org.apache.hadoop.hdfs.DFSConfigKeys.


The following shows 1 code example of the DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY field, selected by popularity.
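For context before the example: this constant names the NameNode configuration key `dfs.namenode.fs-limits.max-blocks-per-file`, which caps how many blocks a single HDFS file may contain. A minimal `hdfs-site.xml` fragment setting it might look like this (the value 1048576 is the Hadoop 2.x default, `DFS_NAMENODE_MAX_BLOCKS_PER_FILE_DEFAULT`; later Hadoop releases ship a lower default, so check your version):

```xml
<property>
  <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
  <value>1048576</value>
  <description>Maximum number of blocks a single file may have.</description>
</property>
```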

Example 1: getNewBlockTargets

/**
 * Part I of getAdditionalBlock().
 * Analyze the state of the file under read lock to determine if the client
 * can add a new block, detect potential retries, lease mismatches,
 * and minimal replication of the penultimate block.
 * 
 * Generate target DataNode locations for the new block,
 * but do not create the new block yet.
 */
DatanodeStorageInfo[] getNewBlockTargets(String src, long fileId,
    String clientName, ExtendedBlock previous, Set<Node> excludedNodes,
    List<String> favoredNodes, LocatedBlock[] onRetryBlock) throws IOException {
  final long blockSize;
  final int replication;
  final byte storagePolicyID;
  Node clientNode = null;
  String clientMachine = null;

  NameNode.stateChangeLog.debug("BLOCK* getAdditionalBlock: {}  inodeId {}" +
      " for {}", src, fileId, clientName);

  checkOperation(OperationCategory.READ);
  byte[][] pathComponents = FSDirectory.getPathComponentsForReservedPath(src);
  FSPermissionChecker pc = getPermissionChecker();
  readLock();
  try {
    checkOperation(OperationCategory.READ);
    src = dir.resolvePath(pc, src, pathComponents);
    FileState fileState = analyzeFileState(
        src, fileId, clientName, previous, onRetryBlock);
    final INodeFile pendingFile = fileState.inode;
    // Check if the penultimate block is minimally replicated
    if (!checkFileProgress(src, pendingFile, false)) {
      throw new NotReplicatedYetException("Not replicated yet: " + src);
    }
    src = fileState.path;

    if (onRetryBlock[0] != null && onRetryBlock[0].getLocations().length > 0) {
      // This is a retry. No need to generate new locations.
      // Use the last block if it has locations.
      return null;
    }
    if (pendingFile.getBlocks().length >= maxBlocksPerFile) {
      throw new IOException("File has reached the limit on maximum number of"
          + " blocks (" + DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY
          + "): " + pendingFile.getBlocks().length + " >= "
          + maxBlocksPerFile);
    }
    blockSize = pendingFile.getPreferredBlockSize();
    clientMachine = pendingFile.getFileUnderConstructionFeature()
        .getClientMachine();
    clientNode = blockManager.getDatanodeManager().getDatanodeByHost(
        clientMachine);
    replication = pendingFile.getFileReplication();
    storagePolicyID = pendingFile.getStoragePolicyID();
  } finally {
    readUnlock();
  }

  if (clientNode == null) {
    clientNode = getClientNode(clientMachine);
  }

  // choose targets for the new block to be allocated.
  return getBlockManager().chooseTarget4NewBlock( 
      src, replication, clientNode, excludedNodes, blockSize, favoredNodes,
      storagePolicyID);
}
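The block-count guard in the middle of getNewBlockTargets is easy to isolate. Below is a minimal, dependency-free sketch of just that check; the class and method names are illustrative, not part of the HDFS API, and only the key string mirrors the real DFSConfigKeys constant:

```java
import java.io.IOException;

// Standalone sketch of the maxBlocksPerFile guard in getNewBlockTargets.
// The key string mirrors DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY.
public class BlockLimitCheck {
    static final String MAX_BLOCKS_PER_FILE_KEY =
        "dfs.namenode.fs-limits.max-blocks-per-file";

    // Rejects the request, as the NameNode does, once the file already
    // holds maxBlocksPerFile blocks and the client asks for one more.
    static void checkBlockLimit(int currentBlocks, long maxBlocksPerFile)
            throws IOException {
        if (currentBlocks >= maxBlocksPerFile) {
            throw new IOException("File has reached the limit on maximum number of"
                + " blocks (" + MAX_BLOCKS_PER_FILE_KEY + "): "
                + currentBlocks + " >= " + maxBlocksPerFile);
        }
    }

    public static void main(String[] args) throws IOException {
        checkBlockLimit(10, 1048576);          // well under the limit: no exception
        try {
            checkBlockLimit(1048576, 1048576); // at the limit: rejected
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Note that the real method performs this check under the namesystem read lock, so the block count it sees is consistent with the rest of the file state it analyzes.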
 
Developer: naver, Project: hadoop, Lines of code: 68, Source file: FSNamesystem.java


Note: the org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY examples in this article were collected from open-source code hosted on platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective developers; copyright remains with the original authors. Please consult each project's license before distributing or reusing the code, and do not republish without permission.