

Java FsPermission.getDefault Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.fs.permission.FsPermission.getDefault. If you are wondering what FsPermission.getDefault does, how to call it, or what it looks like in real code, the hand-picked examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.fs.permission.FsPermission.


The following presents 9 code examples of the FsPermission.getDefault method, sorted by popularity by default.
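
Before working through the examples, here is a minimal, self-contained sketch of how FsPermission.getDefault() is typically combined with FsPermission.getFileDefault() and the configured umask. This is illustrative only and is not taken from any of the projects below; the class name FsPermissionDefaultDemo is hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission;

public class FsPermissionDefaultDemo {
  public static void main(String[] args) {
    // Default permission for directories and symlinks (rwxrwxrwx before any umask is applied).
    FsPermission dirDefault = FsPermission.getDefault();
    // Default permission for regular files (rw-rw-rw- before any umask is applied).
    FsPermission fileDefault = FsPermission.getFileDefault();

    // Read the umask from the configuration (fs.permissions.umask-mode, usually 022 by default)
    // and apply it, as WebHdfsFileSystem and DFSClient do in examples 6 and 8 below.
    Configuration conf = new Configuration();
    FsPermission effective = dirDefault.applyUMask(FsPermission.getUMask(conf));

    System.out.println("directory default: " + dirDefault
        + ", file default: " + fileDefault
        + ", after umask: " + effective);
  }
}

Note how example 1 (HdfsFileStatus) relies on the same distinction: getDefault() is used when the path is a directory or symlink, and getFileDefault() when it is a regular file.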

Example 1: HdfsFileStatus

import org.apache.hadoop.fs.permission.FsPermission; // import the package/class that this method depends on
/**
 * Constructor
 * @param length the number of bytes the file has
 * @param isdir if the path is a directory
 * @param block_replication the replication factor
 * @param blocksize the block size
 * @param modification_time modification time
 * @param access_time access time
 * @param permission permission
 * @param owner the owner of the path
 * @param group the group of the path
 * @param symlink the symlink target, or null if the path is not a symlink
 * @param path the local name in java UTF8 encoding the same as that in-memory
 * @param fileId the file id
 * @param childrenNum the number of children of a directory
 * @param feInfo the file's encryption info
 * @param storagePolicy the ID of the storage policy applied to the file
 */
public HdfsFileStatus(long length, boolean isdir, int block_replication,
    long blocksize, long modification_time, long access_time,
    FsPermission permission, String owner, String group, byte[] symlink,
    byte[] path, long fileId, int childrenNum, FileEncryptionInfo feInfo,
    byte storagePolicy) {
  this.length = length;
  this.isdir = isdir;
  this.block_replication = (short)block_replication;
  this.blocksize = blocksize;
  this.modification_time = modification_time;
  this.access_time = access_time;
  this.permission = (permission == null) ? 
      ((isdir || symlink!=null) ? 
          FsPermission.getDefault() : 
          FsPermission.getFileDefault()) :
      permission;
  this.owner = (owner == null) ? "" : owner;
  this.group = (group == null) ? "" : group;
  this.symlink = symlink;
  this.path = path;
  this.fileId = fileId;
  this.childrenNum = childrenNum;
  this.feInfo = feInfo;
  this.storagePolicy = storagePolicy;
}
 
Developer ID: naver, Project: hadoop, Lines of code: 41, Source: HdfsFileStatus.java

Example 2: setup

import org.apache.hadoop.fs.permission.FsPermission; // import the package/class that this method depends on
@Before
public void setup() throws IOException {
  StaticMapping.resetMap();
  Configuration conf = new HdfsConfiguration();
  final String[] racks = { "/RACK0", "/RACK0", "/RACK2", "/RACK3", "/RACK2" };
  final String[] hosts = { "/host0", "/host1", "/host2", "/host3", "/host4" };

  conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, DEFAULT_BLOCK_SIZE);
  conf.setInt(DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, DEFAULT_BLOCK_SIZE / 2);
  cluster = new MiniDFSCluster.Builder(conf).numDataNodes(5).racks(racks)
      .hosts(hosts).build();
  cluster.waitActive();
  nameNodeRpc = cluster.getNameNodeRpc();
  namesystem = cluster.getNamesystem();
  perm = new PermissionStatus("TestDefaultBlockPlacementPolicy", null,
      FsPermission.getDefault());
}
 
Developer ID: naver, Project: hadoop, Lines of code: 18, Source: TestDefaultBlockPlacementPolicy.java

Example 3: newDirectory

import org.apache.hadoop.fs.permission.FsPermission; // import the package/class that this method depends on
private FileStatus newDirectory(FileMetadata meta, Path path) {
  return new FileStatus (
      0,
      true,
      1,
      blockSize,
      meta == null ? 0 : meta.getLastModified(),
      0,
      meta == null ? FsPermission.getDefault() : meta.getPermissionStatus().getPermission(),
      meta == null ? "" : meta.getPermissionStatus().getUserName(),
      meta == null ? "" : meta.getPermissionStatus().getGroupName(),
      path.makeQualified(getUri(), getWorkingDirectory()));
}
 
Developer ID: naver, Project: hadoop, Lines of code: 14, Source: NativeAzureFileSystem.java

Example 4: deprecatedGetFileLinkStatusInternal

import org.apache.hadoop.fs.permission.FsPermission; // import the package/class that this method depends on
/**
 * Deprecated. Remains for legacy support. Should be removed when {@link Stat}
 * gains support for Windows and other operating systems.
 */
@Deprecated
private FileStatus deprecatedGetFileLinkStatusInternal(final Path f)
    throws IOException {
  String target = FileUtil.readLink(new File(f.toString()));

  try {
    FileStatus fs = getFileStatus(f);
    // If f refers to a regular file or directory
    if (target.isEmpty()) {
      return fs;
    }
    // Otherwise f refers to a symlink
    return new FileStatus(fs.getLen(),
        false,
        fs.getReplication(),
        fs.getBlockSize(),
        fs.getModificationTime(),
        fs.getAccessTime(),
        fs.getPermission(),
        fs.getOwner(),
        fs.getGroup(),
        new Path(target),
        f);
  } catch (FileNotFoundException e) {
    /* The exists method in the File class returns false for dangling
     * links so we can get a FileNotFoundException for links that exist.
     * It's also possible that we raced with a delete of the link. Use
     * the readBasicFileAttributes method in java.nio.file.attributes
     * when available.
     */
    if (!target.isEmpty()) {
      return new FileStatus(0, false, 0, 0, 0, 0, FsPermission.getDefault(),
          "", "", new Path(target), f);
    }
    // f refers to a file or directory that does not exist
    throw e;
  }
}
 
Developer ID: nucypher, Project: hadoop-oss, Lines of code: 43, Source: RawLocalFileSystem.java

Example 5: addSymlink

import org.apache.hadoop.fs.permission.FsPermission; // import the package/class that this method depends on
/**
 * Add the given symbolic link to the fs. Record it in the edits log.
 */
private static INodeSymlink addSymlink(FSDirectory fsd, String path,
    INodesInPath iip, String target, PermissionStatus dirPerms,
    boolean createParent, boolean logRetryCache) throws IOException {
  final long mtime = now();
  final byte[] localName = iip.getLastLocalName();
  if (createParent) {
    Map.Entry<INodesInPath, String> e = FSDirMkdirOp
        .createAncestorDirectories(fsd, iip, dirPerms);
    if (e == null) {
      return null;
    }
    iip = INodesInPath.append(e.getKey(), null, localName);
  }
  final String userName = dirPerms.getUserName();
  long id = fsd.allocateNewInodeId();
  PermissionStatus perm = new PermissionStatus(
      userName, null, FsPermission.getDefault());
  INodeSymlink newNode = unprotectedAddSymlink(fsd, iip.getExistingINodes(),
      localName, id, target, mtime, mtime, perm);
  if (newNode == null) {
    NameNode.stateChangeLog.info("addSymlink: failed to add " + path);
    return null;
  }
  fsd.getEditLog().logSymlink(path, target, mtime, mtime, newNode,
      logRetryCache);

  if(NameNode.stateChangeLog.isDebugEnabled()) {
    NameNode.stateChangeLog.debug("addSymlink: " + path + " is added");
  }
  return newNode;
}
 
Developer ID: naver, Project: hadoop, Lines of code: 35, Source: FSDirSymlinkOp.java

Example 6: applyUMask

import org.apache.hadoop.fs.permission.FsPermission; // import the package/class that this method depends on
private FsPermission applyUMask(FsPermission permission) {
  if (permission == null) {
    permission = FsPermission.getDefault();
  }
  return permission.applyUMask(FsPermission.getUMask(getConf()));
}
 
Developer ID: naver, Project: hadoop, Lines of code: 7, Source: WebHdfsFileSystem.java

Example 7: testFsckFileNotFound

import org.apache.hadoop.fs.permission.FsPermission; // import the package/class that this method depends on
/** Test fsck with FileNotFound */
@Test
public void testFsckFileNotFound() throws Exception {

  // Number of replicas to actually start
  final short NUM_REPLICAS = 1;

  Configuration conf = new Configuration();
  NameNode namenode = mock(NameNode.class);
  NetworkTopology nettop = mock(NetworkTopology.class);
  Map<String,String[]> pmap = new HashMap<String, String[]>();
  Writer result = new StringWriter();
  PrintWriter out = new PrintWriter(result, true);
  InetAddress remoteAddress = InetAddress.getLocalHost();
  FSNamesystem fsName = mock(FSNamesystem.class);
  BlockManager blockManager = mock(BlockManager.class);
  DatanodeManager dnManager = mock(DatanodeManager.class);

  when(namenode.getNamesystem()).thenReturn(fsName);
  when(fsName.getBlockLocations(any(FSPermissionChecker.class), anyString(),
                                anyLong(), anyLong(),
                                anyBoolean(), anyBoolean()))
      .thenThrow(new FileNotFoundException());
  when(fsName.getBlockManager()).thenReturn(blockManager);
  when(blockManager.getDatanodeManager()).thenReturn(dnManager);

  NamenodeFsck fsck = new NamenodeFsck(conf, namenode, nettop, pmap, out,
      NUM_REPLICAS, remoteAddress);

  String pathString = "/tmp/testFile";

  long length = 123L;
  boolean isDir = false;
  int blockReplication = 1;
  long blockSize = 128 * 1024L;
  long modTime = 123123123L;
  long accessTime = 123123120L;
  FsPermission perms = FsPermission.getDefault();
  String owner = "foo";
  String group = "bar";
  byte [] symlink = null;
  byte [] path = new byte[128];
  path = DFSUtil.string2Bytes(pathString);
  long fileId = 312321L;
  int numChildren = 1;
  byte storagePolicy = 0;

  HdfsFileStatus file = new HdfsFileStatus(length, isDir, blockReplication,
      blockSize, modTime, accessTime, perms, owner, group, symlink, path,
      fileId, numChildren, null, storagePolicy);
  Result res = new Result(conf);

  try {
    fsck.check(pathString, file, res);
  } catch (Exception e) {
    fail("Unexpected exception "+ e.getMessage());
  }
  assertTrue(res.toString().contains("HEALTHY"));
}
 
Developer ID: naver, Project: hadoop, Lines of code: 60, Source: TestFsck.java

Example 8: mkdirs

import org.apache.hadoop.fs.permission.FsPermission; // import the package/class that this method depends on
/**
 * Create a directory (or hierarchy of directories) with the given
 * name and permission.
 *
 * @param src The path of the directory being created
 * @param permission The permission of the directory being created.
 * If permission == null, use {@link FsPermission#getDefault()}.
 * @param createParent create missing parent directory if true
 * 
 * @return True if the operation succeeds.
 * 
 * @see ClientProtocol#mkdirs(String, FsPermission, boolean)
 */
public boolean mkdirs(String src, FsPermission permission,
    boolean createParent) throws IOException {
  if (permission == null) {
    permission = FsPermission.getDefault();
  }
  FsPermission masked = permission.applyUMask(dfsClientConf.uMask);
  return primitiveMkdir(src, masked, createParent);
}
 
Developer ID: naver, Project: hadoop, Lines of code: 22, Source: DFSClient.java

Example 9: defaultPermissionNoBlobMetadata

import org.apache.hadoop.fs.permission.FsPermission; // import the package/class that this method depends on
/**
 * Default permission to use when no permission metadata is found.
 * 
 * @return The default permission to use.
 */
private static PermissionStatus defaultPermissionNoBlobMetadata() {
  return new PermissionStatus("", "", FsPermission.getDefault());
}
 
Developer ID: naver, Project: hadoop, Lines of code: 9, Source: AzureNativeFileSystemStore.java


Note: The org.apache.hadoop.fs.permission.FsPermission.getDefault examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by their respective authors; copyright of the source code remains with the original authors, and distribution and use are governed by each project's license. Do not reproduce without permission.