

Java ReadOnlyList.get Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hdfs.util.ReadOnlyList.get. If you are wondering what ReadOnlyList.get does, how to call it, or what real-world usage looks like, the curated examples below should help. You can also explore further usage examples of org.apache.hadoop.hdfs.util.ReadOnlyList itself.


The following shows 10 code examples of the ReadOnlyList.get method, ordered by popularity by default.
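
Before the project code, here is a minimal, self-contained sketch of the idiom every example below shares: wrap a sorted list in a ReadOnlyList, binary-search it by key, and call get only with a non-negative index. The demo class and its sample data are hypothetical; ReadOnlyList.Util.binarySearch is the helper example 1 actually uses, and ReadOnlyList.Util.asReadOnlyList is assumed to be available for wrapping a java.util.List.

import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.hdfs.util.ReadOnlyList;

public class ReadOnlyListGetDemo {
  public static void main(String[] args) {
    // A ReadOnlyList is an unmodifiable view exposing only size(),
    // isEmpty(), get(int) and iteration.
    List<String> sorted = Arrays.asList("alpha", "beta", "gamma");
    ReadOnlyList<String> c = ReadOnlyList.Util.asReadOnlyList(sorted);

    // binarySearch follows the java.util.Collections convention:
    // non-negative index on a hit, negative value on a miss.
    final int i = ReadOnlyList.Util.binarySearch(c, "beta");
    String hit = i < 0 ? null : c.get(i);  // guard before calling get
    System.out.println(hit);               // prints "beta"
  }
}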

Example 1: getChild

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the class this method depends on
/**
 * @param name the name of the child
 * @param snapshotId
 *          if it is not {@link Snapshot#CURRENT_STATE_ID}, get the result
 *          from the corresponding snapshot; otherwise, get the result from
 *          the current directory.
 * @return the child inode.
 */
public INode getChild(byte[] name, int snapshotId) {
  DirectoryWithSnapshotFeature sf;
  if (snapshotId == Snapshot.CURRENT_STATE_ID || 
      (sf = getDirectoryWithSnapshotFeature()) == null) {
    ReadOnlyList<INode> c = getCurrentChildrenList();
    final int i = ReadOnlyList.Util.binarySearch(c, name);
    return i < 0 ? null : c.get(i);
  }
  
  return sf.getChild(this, name, snapshotId);
}
 
Developer: naver, Project: hadoop, Lines: 20, Source: INodeDirectory.java
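
Note the guard on binarySearch's return value: it follows the java.util.Collections.binarySearch convention, so a miss yields a negative insertion-point encoding, and passing that straight to get would throw. Hence the i < 0 ? null : c.get(i) pattern, which recurs in example 10 below.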

Example 2: computeDirectoryContentSummary

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the class this method depends on
protected ContentSummaryComputationContext computeDirectoryContentSummary(
    ContentSummaryComputationContext summary, int snapshotId) {
  ReadOnlyList<INode> childrenList = getChildrenList(snapshotId);
  // Explicit index-based traversal is used so the loop can reposition
  // itself after relinquishing and reacquiring the locks.
  for (int i = 0; i < childrenList.size(); i++) {
    INode child = childrenList.get(i);
    byte[] childName = child.getLocalNameBytes();

    long lastYieldCount = summary.getYieldCount();
    child.computeContentSummary(summary);

    // Check whether the computation was paused in the subtree.
    // The counts may be off, but the rest of the children can still be
    // traversed safely once we reposition below.
    if (lastYieldCount == summary.getYieldCount()) {
      continue;
    }
    // The locks were released and reacquired. Check parent first.
    if (getParent() == null) {
      // Stop further counting and return whatever we have so far.
      break;
    }
    // Obtain the children list again since it may have been modified.
    childrenList = getChildrenList(snapshotId);
    // Reposition in case the children list has changed. Decrement by 1
    // since the loop will increment i on the next iteration.
    i = nextChild(childrenList, childName) - 1;
  }

  // Increment the directory count for this directory.
  summary.getCounts().addContent(Content.DIRECTORY, 1);
  // Relinquish and reacquire locks if necessary.
  summary.yield();
  return summary;
}
 
Developer: naver, Project: hadoop, Lines: 37, Source: INodeDirectory.java
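
The interesting part here is the resume-by-name loop: computing a child's summary may yield the lock, so the children list is re-fetched afterwards and the index is recomputed from the last processed name. Below is a stripped-down, hypothetical sketch of that pattern over plain strings; ResumableScan and nextIndex are illustrative stand-ins (nextIndex plays the role of INodeDirectory.nextChild), not HDFS API.

import java.util.List;

/** Hypothetical sketch of the resume-by-name traversal used above. */
class ResumableScan {
  interface View {
    List<String> children();     // sorted child names
    boolean visit(String name);  // true if locks were dropped meanwhile
  }

  /** First index whose name sorts strictly after prev (an upper bound). */
  static int nextIndex(List<String> sorted, String prev) {
    int lo = 0, hi = sorted.size();
    while (lo < hi) {
      int mid = (lo + hi) >>> 1;
      if (sorted.get(mid).compareTo(prev) <= 0) lo = mid + 1;
      else hi = mid;
    }
    return lo;
  }

  static void scan(View view) {
    List<String> children = view.children();
    for (int i = 0; i < children.size(); i++) {
      String name = children.get(i);
      if (!view.visit(name)) continue;   // no yield: index still valid
      children = view.children();        // list may have changed meanwhile
      i = nextIndex(children, name) - 1; // -1 because the loop increments i
    }
  }
}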

Example 3: computeDirectoryContentSummary

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the class this method depends on
protected ContentSummaryComputationContext computeDirectoryContentSummary(
    ContentSummaryComputationContext summary, int snapshotId) {
  ReadOnlyList<INode> childrenList = getChildrenList(snapshotId);
  // Explicit index-based traversal is used so the loop can reposition
  // itself after relinquishing and reacquiring the locks.
  for (int i = 0; i < childrenList.size(); i++) {
    INode child = childrenList.get(i);
    byte[] childName = child.getLocalNameBytes();

    long lastYieldCount = summary.getYieldCount();
    child.computeContentSummary(snapshotId, summary);

    // Check whether the computation was paused in the subtree.
    // The counts may be off, but the rest of the children can still be
    // traversed safely once we reposition below.
    if (lastYieldCount == summary.getYieldCount()) {
      continue;
    }
    // The locks were released and reacquired. Check parent first.
    if (!isRoot() && getParent() == null) {
      // Stop further counting and return whatever we have so far.
      break;
    }
    // Obtain the children list again since it may have been modified.
    childrenList = getChildrenList(snapshotId);
    // Reposition in case the children list has changed. Decrement by 1
    // since the loop will increment i on the next iteration.
    i = nextChild(childrenList, childName) - 1;
  }

  // Increment the directory count for this directory.
  summary.getCounts().addContent(Content.DIRECTORY, 1);
  // Relinquish and reacquire locks if necessary.
  summary.yield();
  return summary;
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 37, Source: INodeDirectory.java
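
Relative to example 2, this newer branch differs in two ways: the snapshot id is threaded into child.computeContentSummary(snapshotId, summary), and the abort check becomes !isRoot() && getParent() == null, since the root directory legitimately has no parent and should not stop the count.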

Example 4: computeDirectoryContentSummary

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the class this method depends on
protected ContentSummaryComputationContext computeDirectoryContentSummary(
    ContentSummaryComputationContext summary, int snapshotId) {
  ReadOnlyList<INode> childrenList = getChildrenList(snapshotId);
  // Explicit index-based traversal is used so the loop can reposition
  // itself after relinquishing and reacquiring the locks.
  for (int i = 0; i < childrenList.size(); i++) {
    INode child = childrenList.get(i);
    byte[] childName = child.getLocalNameBytes();

    long lastYieldCount = summary.getYieldCount();
    child.computeContentSummary(summary);

    // Check whether the computation was paused in the subtree.
    // The counts may be off, but the rest of the children can still be
    // traversed safely once we reposition below.
    if (lastYieldCount == summary.getYieldCount()) {
      continue;
    }
    // The locks were released and reacquired. Check parent first.
    if (getParent() == null) {
      // Stop further counting and return whatever we have so far.
      break;
    }
    // Obtain the children list again since it may have been modified.
    childrenList = getChildrenList(snapshotId);
    // Reposition in case the children list has changed. Decrement by 1
    // since the loop will increment i on the next iteration.
    i = nextChild(childrenList, childName) - 1;
  }

  // Increment the directory count for this directory.
  summary.getCounts().add(Content.DIRECTORY, 1);
  // Relinquish and reacquire locks if necessary.
  summary.yield();
  return summary;
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 37, Source: INodeDirectory.java
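
This CDH variant is otherwise identical to example 2; only the counter API differs, summary.getCounts().add(Content.DIRECTORY, 1) in place of addContent.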

Example 5: getListing

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the class this method depends on
/**
 * Get a partial listing of the indicated directory
 *
 * @param src the directory name
 * @param startAfter the name to start listing after
 * @param needLocation whether to include block locations in the result
 * @return a partial listing starting after startAfter
 */
DirectoryListing getListing(String src, byte[] startAfter,
    boolean needLocation) throws UnresolvedLinkException, IOException {
  String srcs = normalizePath(src);

  readLock();
  try {
    if (srcs.endsWith(HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR)) {
      return getSnapshotsListing(srcs, startAfter);
    }
    final INodesInPath inodesInPath = rootDir.getLastINodeInPath(srcs, true);
    final Snapshot snapshot = inodesInPath.getPathSnapshot();
    final INode targetNode = inodesInPath.getINode(0);
    if (targetNode == null)
      return null;
    
    if (!targetNode.isDirectory()) {
      return new DirectoryListing(
          new HdfsFileStatus[]{createFileStatus(HdfsFileStatus.EMPTY_NAME,
              targetNode, needLocation, snapshot)}, 0);
    }

    final INodeDirectory dirInode = targetNode.asDirectory();
    final ReadOnlyList<INode> contents = dirInode.getChildrenList(snapshot);
    int startChild = INodeDirectory.nextChild(contents, startAfter);
    int totalNumChildren = contents.size();
    int numOfListing = Math.min(totalNumChildren-startChild, this.lsLimit);
    HdfsFileStatus listing[] = new HdfsFileStatus[numOfListing];
    for (int i=0; i<numOfListing; i++) {
      INode cur = contents.get(startChild+i);
      listing[i] = createFileStatus(cur.getLocalNameBytes(), cur,
          needLocation, snapshot);
    }
    return new DirectoryListing(
        listing, totalNumChildren-startChild-numOfListing);
  } finally {
    readUnlock();
  }
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 47, Source: FSDirectory.java
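
The paging arithmetic in this method is worth isolating: the page starts at the first child after startAfter, is clamped to lsLimit entries, and the returned DirectoryListing reports how many children remain. A minimal hypothetical sketch of just that computation (ListingWindow is illustrative, not HDFS API):

/** Hypothetical sketch of the paging arithmetic in getListing. */
final class ListingWindow {
  final int count;      // entries returned in this page
  final int remaining;  // entries the client still has to fetch

  ListingWindow(int totalChildren, int startChild, int lsLimit) {
    this.count = Math.min(totalChildren - startChild, lsLimit);
    this.remaining = totalChildren - startChild - count;
  }
}

For instance, totalChildren = 10, startChild = 8 and lsLimit = 5 give count = 2 and remaining = 0, exactly the two quantities passed to the DirectoryListing constructor above.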

Example 6: computeDirectoryContentSummary

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the class this method depends on
ContentSummaryComputationContext computeDirectoryContentSummary(
    ContentSummaryComputationContext summary) {
  ReadOnlyList<INode> childrenList = getChildrenList(Snapshot.CURRENT_STATE_ID);
  // Explicit index-based traversal is used so the loop can reposition
  // itself after relinquishing and reacquiring the locks.
  for (int i = 0; i < childrenList.size(); i++) {
    INode child = childrenList.get(i);
    byte[] childName = child.getLocalNameBytes();

    long lastYieldCount = summary.getYieldCount();
    child.computeContentSummary(summary);

    // Check whether the computation was paused in the subtree.
    // The counts may be off, but the rest of the children can still be
    // traversed safely once we reposition below.
    if (lastYieldCount == summary.getYieldCount()) {
      continue;
    }
    // The locks were released and reacquired. Check parent first.
    if (getParent() == null) {
      // Stop further counting and return whatever we have so far.
      break;
    }
    // Obtain the children list again since it may have been modified.
    childrenList = getChildrenList(Snapshot.CURRENT_STATE_ID);
    // Reposition in case the children list has changed. Decrement by 1
    // since the loop will increment i on the next iteration.
    i = nextChild(childrenList, childName) - 1;
  }

  // Increment the directory count for this directory.
  summary.getCounts().add(Content.DIRECTORY, 1);
  // Relinquish and reacquire locks if necessary.
  summary.yield();
  return summary;
}
 
Developer: yncxcw, Project: FlexMap, Lines: 37, Source: INodeDirectory.java

Example 7: getListing

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the class this method depends on
/**
 * Get a partial listing of the indicated directory
 *
 * We will stop when any of the following conditions is met:
 * 1) this.lsLimit files have been added
 * 2) needLocation is true AND enough files have been added such
 * that at least this.lsLimit block locations are in the response
 *
 * @param fsd FSDirectory
 * @param iip the INodesInPath instance containing all the INodes along the
 *            path
 * @param src the directory name
 * @param startAfter the name to start listing after
 * @param needLocation whether to include block locations in the result
 * @return a partial listing starting after startAfter
 */
private static DirectoryListing getListing(FSDirectory fsd, INodesInPath iip,
    String src, byte[] startAfter, boolean needLocation, boolean isSuperUser)
    throws IOException {
  String srcs = FSDirectory.normalizePath(src);
  final boolean isRawPath = FSDirectory.isReservedRawName(src);

  fsd.readLock();
  try {
    if (srcs.endsWith(HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR)) {
      return getSnapshotsListing(fsd, srcs, startAfter);
    }
    final int snapshot = iip.getPathSnapshotId();
    final INode targetNode = iip.getLastINode();
    if (targetNode == null)
      return null;
    byte parentStoragePolicy = isSuperUser ?
        targetNode.getStoragePolicyID() : BlockStoragePolicySuite
        .ID_UNSPECIFIED;

    if (!targetNode.isDirectory()) {
      return new DirectoryListing(
          new HdfsFileStatus[]{createFileStatus(fsd, src,
              HdfsFileStatus.EMPTY_NAME, targetNode, needLocation,
              parentStoragePolicy, snapshot, isRawPath, iip)}, 0);
    }

    final INodeDirectory dirInode = targetNode.asDirectory();
    final ReadOnlyList<INode> contents = dirInode.getChildrenList(snapshot);
    int startChild = INodeDirectory.nextChild(contents, startAfter);
    int totalNumChildren = contents.size();
    int numOfListing = Math.min(totalNumChildren - startChild,
        fsd.getLsLimit());
    int locationBudget = fsd.getLsLimit();
    int listingCnt = 0;
    HdfsFileStatus listing[] = new HdfsFileStatus[numOfListing];
    for (int i=0; i<numOfListing && locationBudget>0; i++) {
      INode cur = contents.get(startChild+i);
      byte curPolicy = isSuperUser && !cur.isSymlink()?
          cur.getLocalStoragePolicyID():
          BlockStoragePolicySuite.ID_UNSPECIFIED;
      listing[i] = createFileStatus(fsd, src, cur.getLocalNameBytes(), cur,
          needLocation, getStoragePolicyID(curPolicy,
              parentStoragePolicy), snapshot, isRawPath, iip);
      listingCnt++;
      if (needLocation) {
          // Once we hit lsLimit locations, stop.
          // This helps to prevent excessively large response payloads.
          // Approximate #locations with locatedBlockCount() * repl_factor
          LocatedBlocks blks =
              ((HdfsLocatedFileStatus)listing[i]).getBlockLocations();
          locationBudget -= (blks == null) ? 0 :
             blks.locatedBlockCount() * listing[i].getReplication();
      }
    }
    // truncate return array if necessary
    if (listingCnt < numOfListing) {
        listing = Arrays.copyOf(listing, listingCnt);
    }
    return new DirectoryListing(
        listing, totalNumChildren-startChild-listingCnt);
  } finally {
    fsd.readUnlock();
  }
}
 
Developer: naver, Project: hadoop, Lines: 81, Source: FSDirStatAndListingOp.java
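
New in this version is the location budget: when needLocation is set, the loop also stops once roughly lsLimit block locations have been accumulated, approximating each entry's location count by locatedBlockCount() * replication. A hypothetical sketch of that cutoff with block counts as plain ints (LocationBudget is illustrative, not HDFS API):

/** Hypothetical sketch of the location-budget cutoff in example 7. */
class LocationBudget {
  /** How many entries fit before roughly lsLimit locations are emitted. */
  static int entriesWithin(int[] blocksPerEntry, short replication, int lsLimit) {
    int budget = lsLimit, taken = 0;
    for (int blocks : blocksPerEntry) {
      if (budget <= 0) break;          // mirrors the loop condition above
      taken++;                         // the entry is added before the cut
      budget -= blocks * replication;  // approximate #locations
    }
    return taken;
  }
}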

Example 8: getListing

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the class this method depends on
/**
 * Get a partial listing of the indicated directory
 *
 * We will stop when any of the following conditions is met:
 * 1) this.lsLimit files have been added
 * 2) needLocation is true AND enough files have been added such
 * that at least this.lsLimit block locations are in the response
 *
 * @param src the directory name
 * @param startAfter the name to start listing after
 * @param needLocation whether to include block locations in the result
 * @return a partial listing starting after startAfter
 */
DirectoryListing getListing(String src, byte[] startAfter,
    boolean needLocation, boolean isSuperUser)
    throws UnresolvedLinkException, IOException {
  String srcs = normalizePath(src);
  final boolean isRawPath = isReservedRawName(src);

  readLock();
  try {
    if (srcs.endsWith(HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR)) {
      return getSnapshotsListing(srcs, startAfter);
    }
    final INodesInPath inodesInPath = getINodesInPath(srcs, true);
    final INode[] inodes = inodesInPath.getINodes();
    final int snapshot = inodesInPath.getPathSnapshotId();
    final INode targetNode = inodes[inodes.length - 1];
    if (targetNode == null)
      return null;
    byte parentStoragePolicy = isSuperUser ?
        targetNode.getStoragePolicyID() : BlockStoragePolicySuite.ID_UNSPECIFIED;
    
    if (!targetNode.isDirectory()) {
      return new DirectoryListing(
          new HdfsFileStatus[]{createFileStatus(HdfsFileStatus.EMPTY_NAME,
              targetNode, needLocation, parentStoragePolicy, snapshot,
              isRawPath, inodesInPath)}, 0);
    }

    final INodeDirectory dirInode = targetNode.asDirectory();
    final ReadOnlyList<INode> contents = dirInode.getChildrenList(snapshot);
    int startChild = INodeDirectory.nextChild(contents, startAfter);
    int totalNumChildren = contents.size();
    int numOfListing = Math.min(totalNumChildren-startChild, this.lsLimit);
    int locationBudget = this.lsLimit;
    int listingCnt = 0;
    HdfsFileStatus listing[] = new HdfsFileStatus[numOfListing];
    for (int i=0; i<numOfListing && locationBudget>0; i++) {
      INode cur = contents.get(startChild+i);
      byte curPolicy = isSuperUser && !cur.isSymlink()?
          cur.getLocalStoragePolicyID():
          BlockStoragePolicySuite.ID_UNSPECIFIED;
      listing[i] = createFileStatus(cur.getLocalNameBytes(), cur, needLocation,
          getStoragePolicyID(curPolicy, parentStoragePolicy), snapshot,
          isRawPath, inodesInPath);
      listingCnt++;
      if (needLocation) {
          // Once we hit lsLimit locations, stop.
          // This helps to prevent excessively large response payloads.
          // Approximate #locations with locatedBlockCount() * repl_factor
          LocatedBlocks blks = 
              ((HdfsLocatedFileStatus)listing[i]).getBlockLocations();
          locationBudget -= (blks == null) ? 0 :
             blks.locatedBlockCount() * listing[i].getReplication();
      }
    }
    // truncate return array if necessary
    if (listingCnt < numOfListing) {
        listing = Arrays.copyOf(listing, listingCnt);
    }
    return new DirectoryListing(
        listing, totalNumChildren-startChild-listingCnt);
  } finally {
    readUnlock();
  }
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 78, Source: FSDirectory.java

Example 9: getListing

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the class this method depends on
/**
 * Get a partial listing of the indicated directory
 *
 * We will stop when any of the following conditions is met:
 * 1) this.lsLimit files have been added
 * 2) needLocation is true AND enough files have been added such
 * that at least this.lsLimit block locations are in the response
 *
 * @param src the directory name
 * @param startAfter the name to start listing after
 * @param needLocation whether to include block locations in the result
 * @return a partial listing starting after startAfter
 */
DirectoryListing getListing(String src, byte[] startAfter,
    boolean needLocation) throws UnresolvedLinkException, IOException {
  String srcs = normalizePath(src);

  readLock();
  try {
    if (srcs.endsWith(HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR)) {
      return getSnapshotsListing(srcs, startAfter);
    }
    final INodesInPath inodesInPath = rootDir.getLastINodeInPath(srcs, true);
    final int snapshot = inodesInPath.getPathSnapshotId();
    final INode targetNode = inodesInPath.getINode(0);
    if (targetNode == null)
      return null;
    
    if (!targetNode.isDirectory()) {
      return new DirectoryListing(
          new HdfsFileStatus[]{createFileStatus(HdfsFileStatus.EMPTY_NAME,
              targetNode, needLocation, snapshot)}, 0);
    }

    final INodeDirectory dirInode = targetNode.asDirectory();
    final ReadOnlyList<INode> contents = dirInode.getChildrenList(snapshot);
    int startChild = INodeDirectory.nextChild(contents, startAfter);
    int totalNumChildren = contents.size();
    int numOfListing = Math.min(totalNumChildren-startChild, this.lsLimit);
    int locationBudget = this.lsLimit;
    int listingCnt = 0;
    HdfsFileStatus listing[] = new HdfsFileStatus[numOfListing];
    for (int i=0; i<numOfListing && locationBudget>0; i++) {
      INode cur = contents.get(startChild+i);
      listing[i] = createFileStatus(cur.getLocalNameBytes(), cur,
          needLocation, snapshot);
      listingCnt++;
      if (needLocation) {
          // Once we hit lsLimit locations, stop.
          // This helps to prevent excessively large response payloads.
          // Approximate #locations with locatedBlockCount() * repl_factor
          LocatedBlocks blks = 
              ((HdfsLocatedFileStatus)listing[i]).getBlockLocations();
          locationBudget -= (blks == null) ? 0 :
             blks.locatedBlockCount() * listing[i].getReplication();
      }
    }
    // truncate return array if necessary
    if (listingCnt < numOfListing) {
        listing = Arrays.copyOf(listing, listingCnt);
    }
    return new DirectoryListing(
        listing, totalNumChildren-startChild-listingCnt);
  } finally {
    readUnlock();
  }
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 68, Source: FSDirectory.java

Example 10: getChild

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the class this method depends on
/**
 * @param name the name of the child
 * @param snapshot
 *          if it is not null, get the result from the given snapshot;
 *          otherwise, get the result from the current directory.
 * @return the child inode.
 */
public INode getChild(byte[] name, Snapshot snapshot) {
  final ReadOnlyList<INode> c = getChildrenList(snapshot);
  final int i = ReadOnlyList.Util.binarySearch(c, name);
  return i < 0? null: c.get(i);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 13, Source: INodeDirectory.java
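
This is the older API: a Snapshot object, rather than an int snapshot id, selects the view, but the lookup idiom, a binary search followed by a guarded get, is the same as in example 1.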


Note: the org.apache.hadoop.hdfs.util.ReadOnlyList.get examples above were collected by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets come from community open-source projects; copyright remains with the original authors, and any use or redistribution must follow the corresponding project's license. Do not reproduce without permission.