Java Path.SEPARATOR Field Code Examples

This article collects typical usage examples of the org.apache.hadoop.fs.Path.SEPARATOR field in Java. If you are wondering what Path.SEPARATOR is for, or how it is used in practice, the curated code examples below may help. You can also explore further usage examples of its enclosing class, org.apache.hadoop.fs.Path.


A total of 15 code examples of the Path.SEPARATOR field are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
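
As background for the examples: in org.apache.hadoop.fs.Path, SEPARATOR is the public static final String "/" (SEPARATOR_CHAR is the char variant), and Hadoop paths use forward slashes on every platform, unlike java.io.File.separator. Here is a minimal, self-contained sketch of the string-joining pattern that recurs throughout the examples below (the class and variable names are illustrative only, not taken from any project shown here):

import org.apache.hadoop.fs.Path;

public class PathSeparatorDemo {
  public static void main(String[] args) {
    // Path.SEPARATOR is always "/", regardless of the host OS,
    // which makes it safe for assembling Hadoop path strings portably.
    String parent = "/user/hadoop";
    String child = "data.txt";
    Path p = new Path(parent + Path.SEPARATOR + child);
    System.out.println(p);              // prints /user/hadoop/data.txt
    System.out.println(Path.SEPARATOR); // prints /
  }
}

Note that new Path(parent, child) performs the same join; the explicit SEPARATOR concatenation shown in the examples is the common idiom when a plain String path is assembled first.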

Example 1: ChRootedFs

public ChRootedFs(final AbstractFileSystem fs, final Path theRoot)
  throws URISyntaxException {
  super(fs.getUri(), fs.getUri().getScheme(),
      fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
  myFs = fs;
  myFs.checkPath(theRoot);
  chRootPathPart = new Path(myFs.getUriPath(theRoot));
  chRootPathPartString = chRootPathPart.toUri().getPath();
  /*
   * We are making URI include the chrootedPath: e.g. file:///chrootedPath.
   * This is questionable since Path#makeQualified(uri, path) ignores
   * the pathPart of a uri. Since this class is internal we can ignore
   * this issue but if we were to make it external then this needs
   * to be resolved.
   */
  // Handle the two cases:
  //              scheme:/// and scheme://authority/
  myUri = new URI(myFs.getUri().toString() + 
      (myFs.getUri().getAuthority() == null ? "" :  Path.SEPARATOR) +
        chRootPathPart.toUri().getPath().substring(1));
  super.checkPath(theRoot);
}
 
Developer ID: naver, Project: hadoop, Lines: 22, Source: ChRootedFs.java

Example 2: makeArchiveWithRepl

private String makeArchiveWithRepl() throws Exception {
  final String inputPathStr = inputPath.toUri().getPath();
  System.out.println("inputPathStr = " + inputPathStr);

  final URI uri = fs.getUri();
  final String prefix = "har://hdfs-" + uri.getHost() + ":" + uri.getPort()
      + archivePath.toUri().getPath() + Path.SEPARATOR;

  final String harName = "foo.har";
  final String fullHarPathStr = prefix + harName;
  final String[] args = { "-archiveName", harName, "-p", inputPathStr,
      "-r 3", "*", archivePath.toString() };
  System.setProperty(HadoopArchives.TEST_HADOOP_ARCHIVES_JAR_PATH,
      HADOOP_ARCHIVES_JAR);
  final HadoopArchives har = new HadoopArchives(conf);
  assertEquals(0, ToolRunner.run(har, args));
  return fullHarPathStr;
}
 
Developer ID: naver, Project: hadoop, Lines: 18, Source: TestHadoopArchives.java

Example 3: testCleanupRemainders

@Test(timeout=10000)
public void testCleanupRemainders() throws Exception {
  Assume.assumeTrue(NativeIO.isAvailable());
  Assume.assumeTrue(SystemUtils.IS_OS_UNIX);
  File path = new File(TEST_BASE, "testCleanupRemainders");
  path.mkdirs();
  String remainder1 = path.getAbsolutePath() + 
      Path.SEPARATOR + "woot2_remainder1";
  String remainder2 = path.getAbsolutePath() +
      Path.SEPARATOR + "woot2_remainder2";
  createTempFile(remainder1);
  createTempFile(remainder2);
  SharedFileDescriptorFactory.create("woot2_", 
      new String[] { path.getAbsolutePath() });
  // creating the SharedFileDescriptorFactory should have removed 
  // the remainders
  Assert.assertFalse(new File(remainder1).exists());
  Assert.assertFalse(new File(remainder2).exists());
  FileUtil.fullyDelete(path);
}
 
Developer ID: nucypher, Project: hadoop-oss, Lines: 20, Source: TestSharedFileDescriptorFactory.java

Example 4: startMiniDfsCluster

/**
 * Start a Drillbit cluster backed by a MiniDFS cluster.
 * @param testClass name of the test class; used to name the MiniDfs storage directory
 * @param isImpersonationEnabled whether to enable impersonation in the cluster
 * @throws Exception
 */
protected static void startMiniDfsCluster(
    final String testClass, final boolean isImpersonationEnabled) throws Exception {
  Preconditions.checkArgument(!Strings.isNullOrEmpty(testClass), "Expected a non-null and non-empty test class name");
  dfsConf = new Configuration();

  // Set the MiniDfs base dir to be the temp directory of the test, so that all files created within the MiniDfs
  // are properly cleaned up when the test exits.
  miniDfsStoragePath = System.getProperty("java.io.tmpdir") + Path.SEPARATOR + testClass;
  dfsConf.set("hdfs.minidfs.basedir", miniDfsStoragePath);

  if (isImpersonationEnabled) {
    // Set the proxyuser settings so that the user who is running the Drillbits/MiniDfs can impersonate other users.
    dfsConf.set(String.format("hadoop.proxyuser.%s.hosts", processUser), "*");
    dfsConf.set(String.format("hadoop.proxyuser.%s.groups", processUser), "*");
  }

  // Start the MiniDfs cluster
  dfsCluster = new MiniDFSCluster.Builder(dfsConf)
      .numDataNodes(3)
      .format(true)
      .build();

  fs = dfsCluster.getFileSystem();
}
 
Developer ID: skhalifa, Project: QDrill, Lines: 30, Source: BaseTestImpersonation.java

Example 5: makeArchive

private String makeArchive(Path parentPath, String relGlob) throws Exception {
  final String parentPathStr = parentPath.toUri().getPath();
  final String relPathGlob = relGlob == null ? "*" : relGlob;
  System.out.println("parentPathStr = " + parentPathStr);

  final URI uri = fs.getUri();
  final String prefix = "har://hdfs-" + uri.getHost() + ":" + uri.getPort()
      + archivePath.toUri().getPath() + Path.SEPARATOR;

  final String harName = "foo.har";
  final String fullHarPathStr = prefix + harName;
  final String[] args = { "-archiveName", harName, "-p", parentPathStr,
      relPathGlob, archivePath.toString() };
  System.setProperty(HadoopArchives.TEST_HADOOP_ARCHIVES_JAR_PATH,
      HADOOP_ARCHIVES_JAR);
  final HadoopArchives har = new HadoopArchives(conf);
  assertEquals(0, ToolRunner.run(har, args));
  return fullHarPathStr;
}
 
Developer ID: naver, Project: hadoop, Lines: 19, Source: TestHadoopArchives.java

Example 6: resolveDotInodesPath

private static String resolveDotInodesPath(String src,
    byte[][] pathComponents, FSDirectory fsd)
    throws FileNotFoundException {
  final String inodeId = DFSUtil.bytes2String(pathComponents[3]);
  final long id;
  try {
    id = Long.parseLong(inodeId);
  } catch (NumberFormatException e) {
    throw new FileNotFoundException("Invalid inode path: " + src);
  }
  if (id == INodeId.ROOT_INODE_ID && pathComponents.length == 4) {
    return Path.SEPARATOR;
  }
  INode inode = fsd.getInode(id);
  if (inode == null) {
    throw new FileNotFoundException(
        "File for given inode path does not exist: " + src);
  }
  
  // Handle single ".." for NFS lookup support.
  if ((pathComponents.length > 4)
      && DFSUtil.bytes2String(pathComponents[4]).equals("..")) {
    INode parent = inode.getParent();
    if (parent == null || parent.getId() == INodeId.ROOT_INODE_ID) {
      // inode is root, or its parent is root.
      return Path.SEPARATOR;
    } else {
      return parent.getFullPathName();
    }
  }

  String path = "";
  if (id != INodeId.ROOT_INODE_ID) {
    path = inode.getFullPathName();
  }
  return constructRemainingPath(path, pathComponents, 4);
}
 
Developer ID: naver, Project: hadoop, Lines: 37, Source: FSDirectory.java

Example 7: testFallback

/**
 * Test that sync returns false in the following scenarios:
 * 1. the source/target dir is not a snapshottable dir
 * 2. the source/target does not have the given snapshots
 * 3. changes have been made in the target
 */
@Test
public void testFallback() throws Exception {
  // the source/target dir are not snapshottable dir
  Assert.assertFalse(DistCpSync.sync(options, conf));
  // make sure the source path has been updated to the snapshot path
  final Path spath = new Path(source,
      HdfsConstants.DOT_SNAPSHOT_DIR + Path.SEPARATOR + "s2");
  Assert.assertEquals(spath, options.getSourcePaths().get(0));

  // reset source path in options
  options.setSourcePaths(Arrays.asList(source));
  // the source/target does not have the given snapshots
  dfs.allowSnapshot(source);
  dfs.allowSnapshot(target);
  Assert.assertFalse(DistCpSync.sync(options, conf));
  Assert.assertEquals(spath, options.getSourcePaths().get(0));

  // reset source path in options
  options.setSourcePaths(Arrays.asList(source));
  dfs.createSnapshot(source, "s1");
  dfs.createSnapshot(source, "s2");
  dfs.createSnapshot(target, "s1");
  Assert.assertTrue(DistCpSync.sync(options, conf));

  // reset source paths in options
  options.setSourcePaths(Arrays.asList(source));
  // changes have been made in target
  final Path subTarget = new Path(target, "sub");
  dfs.mkdirs(subTarget);
  Assert.assertFalse(DistCpSync.sync(options, conf));
  // make sure the source path has been updated to the snapshot path
  Assert.assertEquals(spath, options.getSourcePaths().get(0));

  // reset source paths in options
  options.setSourcePaths(Arrays.asList(source));
  dfs.delete(subTarget, true);
  Assert.assertTrue(DistCpSync.sync(options, conf));
}
 
Developer ID: naver, Project: hadoop, Lines: 44, Source: TestDistCpSync.java

Example 8: getUserLocalDirs

protected List<String> getUserLocalDirs(List<String> localDirs) {
  List<String> userLocalDirs = new ArrayList<>(localDirs.size());
  String user = container.getUser();

  for (String localDir : localDirs) {
    String userLocalDir = localDir + Path.SEPARATOR +
        ContainerLocalizer.USERCACHE + Path.SEPARATOR + user
        + Path.SEPARATOR;

    userLocalDirs.add(userLocalDir);
  }

  return userLocalDirs;
}
 
Developer ID: naver, Project: hadoop, Lines: 14, Source: ContainerLaunch.java

Example 9: getNMFilecacheDirs

protected List<String> getNMFilecacheDirs(List<String> localDirs) {
  List<String> filecacheDirs = new ArrayList<>(localDirs.size());

  for (String localDir : localDirs) {
    String filecacheDir = localDir + Path.SEPARATOR +
        ContainerLocalizer.FILECACHE;

    filecacheDirs.add(filecacheDir);
  }

  return filecacheDirs;
}
 
Developer ID: naver, Project: hadoop, Lines: 12, Source: ContainerLaunch.java

Example 10: getSkipOutputPath

/**
 * Get the directory to which skipped records are written. By default it is 
 * a subdirectory of the output _logs directory.
 * Writing of skipped records can be disabled by setting the value to "none".
 * 
 * @param conf the configuration.
 * @return the skip output directory, or null if neither this value nor the
 * output directory is set.
 */
public static Path getSkipOutputPath(Configuration conf) {
  String name =  conf.get(OUT_PATH);
  if(name!=null) {
    if("none".equals(name)) {
      return null;
    }
    return new Path(name);
  }
  Path outPath = FileOutputFormat.getOutputPath(new JobConf(conf));
  return outPath==null ? null : new Path(outPath, 
      "_logs"+Path.SEPARATOR+"skip");
}
 
Developer ID: naver, Project: hadoop, Lines: 21, Source: SkipBadRecords.java

Example 11: parsePath

public static String parsePath(Path p) {
  // p = file://xxxx/xxx/xxxx; convert it to /xxxx/xxx/xxxx
  int depth = p.depth();
  String str = "";
  while (depth > 0) {
    str = Path.SEPARATOR + p.getName() + str;
    p = p.getParent();
    --depth;
  }
  return str;
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 11, Source: RemoteJobQueue.java

Example 12: writeFile

private void writeFile(final DistributedFileSystem dfs,
    Path dir, String fileName) throws IOException {
  Path filePath = new Path(dir.toString() + Path.SEPARATOR + fileName);
  final FSDataOutputStream out = dfs.create(filePath);
  out.writeChars("teststring");
  out.close();
}
 
Developer ID: naver, Project: hadoop, Lines: 7, Source: TestFsck.java

Example 13: createPath

private Path createPath(FileContext fc, Path root, int year, int month,
                        int day, String id) throws IOException {
  Path path = new Path(root, year + Path.SEPARATOR + month + Path.SEPARATOR +
          day + Path.SEPARATOR + id);
  fc.mkdir(path, FsPermission.getDirDefault(), true);
  return path;
}
 
Developer ID: naver, Project: hadoop, Lines: 7, Source: TestJobHistoryUtils.java

Example 14: handleWildCard

private static Path handleWildCard(final String root) {
  if (root.contains(WILD_CARD)) {
    int idx = root.indexOf(WILD_CARD); // first wild card in the path
    idx = root.lastIndexOf(PATH_SEPARATOR, idx); // file separator right before the first wild card
    final String newRoot = root.substring(0, idx);
    return newRoot.isEmpty() ? new Path(Path.SEPARATOR) : new Path(newRoot);
  } else {
    return new Path(root);
  }
}
 
Developer ID: dremio, Project: dremio-oss, Lines: 10, Source: FileSelection.java

Example 15: copyLocalFileToDfs

public static Path copyLocalFileToDfs(FileSystem fs, String appId,
    Path srcPath, String dstFileName) throws IOException {
  Path dstPath = new Path(fs.getHomeDirectory(),
      Constants.DEFAULT_APP_NAME + Path.SEPARATOR + appId + Path.SEPARATOR + dstFileName);
  LOG.info("Copying " + srcPath + " to " + dstPath);
  fs.copyFromLocalFile(srcPath, dstPath);
  return dstPath;
}
 
Developer ID: Intel-bigdata, Project: TensorFlowOnYARN, Lines: 8, Source: Utils.java


Note: The org.apache.hadoop.fs.Path.SEPARATOR examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers, and copyright remains with the original authors. See each project's License for the terms of distribution and use; do not reproduce this article without permission.