Java OperationCategory Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory. If you have been wondering what OperationCategory is for and how to use it, the curated examples below should help.


OperationCategory is nested in the NameNode class of the org.apache.hadoop.hdfs.server.namenode package. Fifteen code examples of the class are shown below, ordered by popularity by default.

Example 1: retrievePassword

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
@Override
public byte[] retrievePassword(
    DelegationTokenIdentifier identifier) throws InvalidToken {
  try {
    // this check introduces inconsistency in the authentication to a
    // HA standby NN.  non-token auths are allowed into the namespace which
    // decides whether to throw a StandbyException.  tokens are a bit
    // different in that a standby may be behind and thus not yet know
    // of all tokens issued by the active NN.  the following check does
    // not allow ANY token auth, however it should allow known tokens in
    namesystem.checkOperation(OperationCategory.READ);
  } catch (StandbyException se) {
    // FIXME: this is a hack to get around changing method signatures by
    // tunneling a non-InvalidToken exception as the cause which the
    // RPC server will unwrap before returning to the client
    InvalidToken wrappedStandby = new InvalidToken("StandbyException");
    wrappedStandby.initCause(se);
    throw wrappedStandby;
  }
  return super.retrievePassword(identifier);
}
 
Developer: naver, Project: hadoop, Lines: 22, Source: DelegationTokenSecretManager.java
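The FIXME in example 1 wraps a StandbyException inside an InvalidToken so that it can pass through a method signature that only declares InvalidToken; the RPC server later unwraps the cause. A standalone sketch of that tunneling idiom, using stand-in exception classes rather than the real Hadoop types, might look like this:

```java
public class ExceptionTunneling {
  // Stand-in exception types for the sketch; the real code uses Hadoop's
  // SecretManager.InvalidToken and ipc.StandbyException.
  static class StandbyLikeException extends Exception {
    StandbyLikeException(String msg) { super(msg); }
  }
  static class InvalidTokenLikeException extends Exception {
    InvalidTokenLikeException(String msg) { super(msg); }
  }

  // Wrap the disallowed exception as the cause of one the method signature
  // permits; the receiver recovers the original with getCause().
  static InvalidTokenLikeException tunnel(StandbyLikeException se) {
    InvalidTokenLikeException wrapped =
        new InvalidTokenLikeException("StandbyException");
    wrapped.initCause(se); // legal: the String constructor leaves cause unset
    return wrapped;
  }

  public static void main(String[] args) {
    StandbyLikeException se =
        new StandbyLikeException("operation not supported in standby state");
    InvalidTokenLikeException wrapped = tunnel(se);
    // An "unwrapping" RPC layer would rethrow the cause instead:
    System.out.println(wrapped.getCause().getMessage());
  }
}
```

Note that initCause() may only be called on an exception whose cause was not set by its constructor, which is why the wrapper is built with the message-only constructor first.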

Example 2: retriableRetrievePassword

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
@Override
public byte[] retriableRetrievePassword(DelegationTokenIdentifier identifier)
    throws InvalidToken, StandbyException, RetriableException, IOException {
  namesystem.checkOperation(OperationCategory.READ);
  try {
    return super.retrievePassword(identifier);
  } catch (InvalidToken it) {
    if (namesystem.inTransitionToActive()) {
      // if the namesystem is currently in the middle of transition to 
      // active state, let client retry since the corresponding editlog may 
      // have not been applied yet
      throw new RetriableException(it);
    } else {
      throw it;
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 18, Source: DelegationTokenSecretManager.java

Example 3: metaSave

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
/**
 * Dump all metadata into specified file
 */
void metaSave(String filename) throws IOException {
  checkSuperuserPrivilege();
  checkOperation(OperationCategory.UNCHECKED);
  writeLock();
  try {
    checkOperation(OperationCategory.UNCHECKED);
    File file = new File(System.getProperty("hadoop.log.dir"), filename);
    PrintWriter out = new PrintWriter(new BufferedWriter(
        new OutputStreamWriter(new FileOutputStream(file), Charsets.UTF_8)));
    metaSave(out);
    out.flush();
    out.close();
  } finally {
    writeUnlock();
  }
}
 
Developer: naver, Project: hadoop, Lines: 20, Source: FSNamesystem.java

Example 4: setPermission

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
/**
 * Set permissions for an existing file.
 * @throws IOException
 */
void setPermission(String src, FsPermission permission) throws IOException {
  HdfsFileStatus auditStat;
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot set permission for " + src);
    auditStat = FSDirAttrOp.setPermission(dir, src, permission);
  } catch (AccessControlException e) {
    logAuditEvent(false, "setPermission", src);
    throw e;
  } finally {
    writeUnlock();
  }
  getEditLog().logSync();
  logAuditEvent(true, "setPermission", src, null, auditStat);
}
 
Developer: naver, Project: hadoop, Lines: 22, Source: FSNamesystem.java

Example 5: setOwner

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
/**
 * Set owner for an existing file.
 * @throws IOException
 */
void setOwner(String src, String username, String group)
    throws IOException {
  HdfsFileStatus auditStat;
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot set owner for " + src);
    auditStat = FSDirAttrOp.setOwner(dir, src, username, group);
  } catch (AccessControlException e) {
    logAuditEvent(false, "setOwner", src);
    throw e;
  } finally {
    writeUnlock();
  }
  getEditLog().logSync();
  logAuditEvent(true, "setOwner", src, null, auditStat);
}
 
Developer: naver, Project: hadoop, Lines: 23, Source: FSNamesystem.java

Example 6: concat

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
/**
 * Moves all the blocks from {@code srcs} and appends them to {@code target}
 * To avoid rollbacks we will verify validity of ALL of the args
 * before we start actual move.
 * 
 * This does not support ".inodes" relative path
 * @param target target to concat into
 * @param srcs files that will be concatenated
 * @throws IOException on error
 */
void concat(String target, String [] srcs, boolean logRetryCache)
    throws IOException {
  checkOperation(OperationCategory.WRITE);
  waitForLoadingFSImage();
  HdfsFileStatus stat = null;
  boolean success = false;
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot concat " + target);
    stat = FSDirConcatOp.concat(dir, target, srcs, logRetryCache);
    success = true;
  } finally {
    writeUnlock();
    if (success) {
      getEditLog().logSync();
    }
    logAuditEvent(success, "concat", Arrays.toString(srcs), target, stat);
  }
}
 
Developer: naver, Project: hadoop, Lines: 31, Source: FSNamesystem.java

Example 7: setTimes

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
/**
 * Stores the modification and access time for this inode.
 * The access time is precise up to an hour. The transaction, if needed, is
 * written to the edits log but is not flushed.
 */
void setTimes(String src, long mtime, long atime) throws IOException {
  HdfsFileStatus auditStat;
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot set times " + src);
    auditStat = FSDirAttrOp.setTimes(dir, src, mtime, atime);
  } catch (AccessControlException e) {
    logAuditEvent(false, "setTimes", src);
    throw e;
  } finally {
    writeUnlock();
  }
  getEditLog().logSync();
  logAuditEvent(true, "setTimes", src, null, auditStat);
}
 
Developer: naver, Project: hadoop, Lines: 23, Source: FSNamesystem.java

Example 8: createSymlink

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
/**
 * Create a symbolic link.
 */
@SuppressWarnings("deprecation")
void createSymlink(String target, String link,
    PermissionStatus dirPerms, boolean createParent, boolean logRetryCache)
    throws IOException {
  if (!FileSystem.areSymlinksEnabled()) {
    throw new UnsupportedOperationException("Symlinks not supported");
  }
  HdfsFileStatus auditStat = null;
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot create symlink " + link);
    auditStat = FSDirSymlinkOp.createSymlinkInt(this, target, link, dirPerms,
                                                createParent, logRetryCache);
  } catch (AccessControlException e) {
    logAuditEvent(false, "createSymlink", link, target, null);
    throw e;
  } finally {
    writeUnlock();
  }
  getEditLog().logSync();
  logAuditEvent(true, "createSymlink", link, target, auditStat);
}
 
Developer: naver, Project: hadoop, Lines: 28, Source: FSNamesystem.java

Example 9: setReplication

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
/**
 * Set replication for an existing file.
 * 
 * The NameNode sets new replication and schedules either replication of 
 * under-replicated data blocks or removal of the excessive block copies 
 * if the blocks are over-replicated.
 * 
 * @see ClientProtocol#setReplication(String, short)
 * @param src file name
 * @param replication new replication
 * @return true if successful; 
 *         false if file does not exist or is a directory
 */
boolean setReplication(final String src, final short replication)
    throws IOException {
  boolean success = false;
  waitForLoadingFSImage();
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot set replication for " + src);
    success = FSDirAttrOp.setReplication(dir, blockManager, src, replication);
  } catch (AccessControlException e) {
    logAuditEvent(false, "setReplication", src);
    throw e;
  } finally {
    writeUnlock();
  }
  if (success) {
    getEditLog().logSync();
    logAuditEvent(true, "setReplication", src);
  }
  return success;
}
 
Developer: naver, Project: hadoop, Lines: 36, Source: FSNamesystem.java

Example 10: setStoragePolicy

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
/**
 * Set the storage policy for a file or a directory.
 *
 * @param src file/directory path
 * @param policyName storage policy name
 */
void setStoragePolicy(String src, String policyName) throws IOException {
  HdfsFileStatus auditStat;
  waitForLoadingFSImage();
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot set storage policy for " + src);
    auditStat = FSDirAttrOp.setStoragePolicy(
        dir, blockManager, src, policyName);
  } catch (AccessControlException e) {
    logAuditEvent(false, "setStoragePolicy", src);
    throw e;
  } finally {
    writeUnlock();
  }
  getEditLog().logSync();
  logAuditEvent(true, "setStoragePolicy", src, null, auditStat);
}
 
Developer: naver, Project: hadoop, Lines: 26, Source: FSNamesystem.java

Example 11: renameTo

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
/** 
 * Change the indicated filename. 
 * @deprecated Use {@link #renameTo(String, String, boolean,
 * Options.Rename...)} instead.
 */
@Deprecated
boolean renameTo(String src, String dst, boolean logRetryCache)
    throws IOException {
  waitForLoadingFSImage();
  checkOperation(OperationCategory.WRITE);
  FSDirRenameOp.RenameOldResult ret = null;
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot rename " + src);
    ret = FSDirRenameOp.renameToInt(dir, src, dst, logRetryCache);
  } catch (AccessControlException e)  {
    logAuditEvent(false, "rename", src, dst, null);
    throw e;
  } finally {
    writeUnlock();
  }
  boolean success = ret != null && ret.success;
  if (success) {
    getEditLog().logSync();
  }
  logAuditEvent(success, "rename", src, dst,
      ret == null ? null : ret.auditStat);
  return success;
}
 
Developer: naver, Project: hadoop, Lines: 31, Source: FSNamesystem.java

Example 12: getFileInfo

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
/**
 * Get the file info for a specific file.
 *
 * @param src The string representation of the path to the file
 * @param resolveLink whether to throw UnresolvedLinkException
 *        if src refers to a symlink
 *
 * @throws AccessControlException if access is denied
 * @throws UnresolvedLinkException if a symlink is encountered.
 *
 * @return object containing information regarding the file
 *         or null if file not found
 * @throws StandbyException
 */
HdfsFileStatus getFileInfo(final String src, boolean resolveLink)
  throws IOException {
  checkOperation(OperationCategory.READ);
  HdfsFileStatus stat = null;
  readLock();
  try {
    checkOperation(OperationCategory.READ);
    stat = FSDirStatAndListingOp.getFileInfo(dir, src, resolveLink);
  } catch (AccessControlException e) {
    logAuditEvent(false, "getfileinfo", src);
    throw e;
  } finally {
    readUnlock();
  }
  logAuditEvent(true, "getfileinfo", src);
  return stat;
}
 
Developer: naver, Project: hadoop, Lines: 32, Source: FSNamesystem.java

Example 13: mkdirs

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
/**
 * Create all the necessary directories
 */
boolean mkdirs(String src, PermissionStatus permissions,
    boolean createParent) throws IOException {
  HdfsFileStatus auditStat = null;
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot create directory " + src);
    auditStat = FSDirMkdirOp.mkdirs(this, src, permissions, createParent);
  } catch (AccessControlException e) {
    logAuditEvent(false, "mkdirs", src);
    throw e;
  } finally {
    writeUnlock();
  }
  getEditLog().logSync();
  logAuditEvent(true, "mkdirs", src, null, auditStat);
  return true;
}
 
Developer: naver, Project: hadoop, Lines: 23, Source: FSNamesystem.java

Example 14: setQuota

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
/**
 * Set the namespace quota and storage space quota for a directory.
 * See {@link ClientProtocol#setQuota(String, long, long, StorageType)} for the
 * contract.
 * 
 * Note: This does not support ".inodes" relative path.
 */
void setQuota(String src, long nsQuota, long ssQuota, StorageType type)
    throws IOException {
  checkOperation(OperationCategory.WRITE);
  writeLock();
  boolean success = false;
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot set quota on " + src);
    FSDirAttrOp.setQuota(dir, src, nsQuota, ssQuota, type);
    success = true;
  } finally {
    writeUnlock();
    if (success) {
      getEditLog().logSync();
    }
    logAuditEvent(success, "setQuota", src);
  }
}
 
Developer: naver, Project: hadoop, Lines: 26, Source: FSNamesystem.java

Example 15: getListing

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required class
/**
 * Get a partial listing of the indicated directory
 *
 * @param src the directory name
 * @param startAfter the name to start after
 * @param needLocation if blockLocations need to be returned
 * @return a partial listing starting after startAfter
 * 
 * @throws AccessControlException if access is denied
 * @throws UnresolvedLinkException if symbolic link is encountered
 * @throws IOException if other I/O error occurred
 */
DirectoryListing getListing(String src, byte[] startAfter,
    boolean needLocation) 
    throws IOException {
  checkOperation(OperationCategory.READ);
  DirectoryListing dl = null;
  readLock();
  try {
    checkOperation(NameNode.OperationCategory.READ);
    dl = FSDirStatAndListingOp.getListingInt(dir, src, startAfter,
        needLocation);
  } catch (AccessControlException e) {
    logAuditEvent(false, "listStatus", src);
    throw e;
  } finally {
    readUnlock();
  }
  logAuditEvent(true, "listStatus", src);
  return dl;
}
 
Developer: naver, Project: hadoop, Lines: 32, Source: FSNamesystem.java
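All fifteen examples share one idiom: checkOperation() runs once before the namesystem lock is taken (to fail fast without contending for the lock) and again after the lock is acquired (because an HA failover can change the NameNode's state while the thread waits). A minimal, self-contained sketch of that double-check pattern, using hypothetical simplified types rather than the real Hadoop classes, might look like this:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical simplified types; the real code uses NameNode.OperationCategory,
// FSNamesystem's internal lock, and throws StandbyException.
public class OperationCheckSketch {
  enum OperationCategory { READ, WRITE, UNCHECKED }
  enum HAState { ACTIVE, STANDBY }

  private volatile HAState state = HAState.ACTIVE;
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  void transitionToStandby() { state = HAState.STANDBY; }

  // A standby NameNode must not mutate the namespace; UNCHECKED operations
  // (like metaSave above) are allowed in any state.
  void checkOperation(OperationCategory op) {
    if (op == OperationCategory.WRITE && state == HAState.STANDBY) {
      throw new IllegalStateException(
          "Operation category WRITE is not supported in state STANDBY");
    }
  }

  // The shape of every write-path example above (setPermission, setOwner, ...):
  void writeOp(String src) {
    checkOperation(OperationCategory.WRITE);   // fail fast, no lock held yet
    lock.writeLock().lock();
    try {
      checkOperation(OperationCategory.WRITE); // re-check: a failover may have
                                               // occurred while we waited
      // ... mutate the namespace and capture the audit status here ...
    } finally {
      lock.writeLock().unlock();
    }
    // ... sync the edit log and record the audit event outside the lock ...
  }

  public static void main(String[] args) {
    OperationCheckSketch ns = new OperationCheckSketch();
    ns.writeOp("/demo");                       // active node: succeeds
    ns.transitionToStandby();
    try {
      ns.writeOp("/demo");
      System.out.println("unexpected: standby accepted a write");
    } catch (IllegalStateException e) {
      System.out.println("standby rejected the write");
    }
  }
}
```

The read-path examples (getFileInfo, getListing) follow the same shape with readLock()/readUnlock() and OperationCategory.READ.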


Note: The org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets come from community-contributed open-source projects, and copyright remains with the original authors; consult each project's license before using or redistributing the code. Do not republish without permission.