

Java FileUtil.listFiles Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.fs.FileUtil.listFiles. If you are wondering exactly what FileUtil.listFiles does, how to call it, or what example code looks like, the curated snippets below should help. You can also look further into usage examples of the enclosing class, org.apache.hadoop.fs.FileUtil.


The following sections present 6 code examples of the FileUtil.listFiles method, ordered by popularity by default.
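Before the individual examples, here is a minimal sketch of the method itself. FileUtil.listFiles(File) behaves like java.io.File#listFiles, except that it throws an IOException when the directory cannot be listed instead of returning null. The path used below is only a placeholder.

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.fs.FileUtil;

public class ListFilesDemo {
  public static void main(String[] args) throws IOException {
    // Unlike File#listFiles, FileUtil.listFiles never returns null:
    // an unreadable or missing directory results in an IOException.
    File dir = new File("/tmp/demo");   // placeholder path
    for (File f : FileUtil.listFiles(dir)) {
      System.out.println(f.getName());
    }
  }
}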

Example 1: assertGlobEquals

import org.apache.hadoop.fs.FileUtil; // import the package/class the method depends on
/**
 * List all of the files in 'dir' that match the regex 'pattern'.
 * Then check that this list is identical to 'expectedMatches'.
 * @throws IOException if the dir is inaccessible
 */
public static void assertGlobEquals(File dir, String pattern,
    String ... expectedMatches) throws IOException {
  
  Set<String> found = Sets.newTreeSet();
  for (File f : FileUtil.listFiles(dir)) {
    if (f.getName().matches(pattern)) {
      found.add(f.getName());
    }
  }
  Set<String> expectedSet = Sets.newTreeSet(
      Arrays.asList(expectedMatches));
  Assert.assertEquals("Bad files matching " + pattern + " in " + dir,
      Joiner.on(",").join(expectedSet),
      Joiner.on(",").join(found));
}
 
Developer ID: nucypher, Project: hadoop-oss, Lines of code: 21, Source file: GenericTestUtils.java
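A possible JUnit usage of the helper above. The directory, the regex, and the file names are hypothetical, and the package org.apache.hadoop.test for GenericTestUtils is assumed.

import java.io.File;
import org.junit.Test;
import org.apache.hadoop.test.GenericTestUtils;

public class GlobAssertExample {
  @Test
  public void expectedEditFilesArePresent() throws Exception {
    // Hypothetical directory and file names, purely for illustration.
    File dir = new File("/tmp/storage-test");
    GenericTestUtils.assertGlobEquals(dir, "edits_\\d+",
        "edits_0000001", "edits_0000002");
  }
}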

Example 2: hasSomeData

import org.apache.hadoop.fs.FileUtil; // import the package/class the method depends on
/**
 * @return true if the storage directory should prompt the user prior
 * to formatting (i.e if the directory appears to contain some data)
 * @throws IOException if the SD cannot be accessed due to an IO error
 */
@Override
public boolean hasSomeData() throws IOException {
  // It's alright for a dir not to exist, or to exist (properly accessible)
  // and be completely empty.
  if (!root.exists()) return false;
  
  if (!root.isDirectory()) {
    // a file where you expect a directory should not cause silent
    // formatting
    return true;
  }
  
  if (FileUtil.listFiles(root).length == 0) {
    // Empty dir can format without prompt.
    return false;
  }
  
  return true;
}
 
Developer ID: naver, Project: hadoop, Lines of code: 25, Source file: Storage.java
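A hedged sketch of the prompt-before-format flow that the Javadoc above describes. Formattable is a made-up stand-in for the real storage-directory type, and the console prompt is only illustrative.

import java.io.IOException;
import java.util.Scanner;

public class FormatPrompt {
  // Made-up interface standing in for the storage-directory abstraction.
  interface Formattable {
    boolean hasSomeData() throws IOException;
    void format() throws IOException;
  }

  static void maybeFormat(Formattable sd, Scanner console) throws IOException {
    if (sd.hasSomeData()) {
      // Only prompt when the directory appears to contain data.
      System.out.print("Storage directory appears to contain data. Re-format? (y/N) ");
      if (!console.nextLine().trim().equalsIgnoreCase("y")) {
        throw new IOException("Format aborted by user");
      }
    }
    sd.format();
  }
}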

Example 3: purgeMatching

import org.apache.hadoop.fs.FileUtil; // import the package/class the method depends on
/**
 * Purge files in the given directory which match any of the set of patterns.
 * The patterns must have a single numeric capture group which determines
 * the associated transaction ID of the file. Only those files for which
 * the transaction ID is less than the <code>minTxIdToKeep</code> parameter
 * are removed.
 */
private static void purgeMatching(File dir, List<Pattern> patterns,
    long minTxIdToKeep) throws IOException {

  for (File f : FileUtil.listFiles(dir)) {
    if (!f.isFile()) continue;
    
    for (Pattern p : patterns) {
      Matcher matcher = p.matcher(f.getName());
      if (matcher.matches()) {
        // This parsing will always succeed since the group(1) is
        // /\d+/ in the regex itself.
        long txid = Long.parseLong(matcher.group(1));
        if (txid < minTxIdToKeep) {
          LOG.info("Purging no-longer needed file " + txid);
          if (!f.delete()) {
            LOG.warn("Unable to delete no-longer-needed data " +
                f);
          }
          break;
        }
      }
    }
  }
}
 
Developer ID: naver, Project: hadoop, Lines of code: 32, Source file: JNStorage.java
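The "single numeric capture group" mentioned in the Javadoc could look like the patterns below; the file-name prefixes are illustrative placeholders, not necessarily the exact patterns JNStorage registers.

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class PurgePatterns {
  // Each pattern carries exactly one numeric group, so matcher.group(1)
  // always parses as the transaction id.
  static final List<Pattern> PURGE_PATTERNS = Arrays.asList(
      Pattern.compile("edits_inprogress_(\\d+)"),
      Pattern.compile("paxos-(\\d+)"));
  // purgeMatching(dir, PURGE_PATTERNS, minTxIdToKeep) would then delete any
  // matching file whose transaction id is below minTxIdToKeep.
}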

Example 4: delete

import org.apache.hadoop.fs.FileUtil; // import the package/class the method depends on
@Override
public boolean delete(String path, boolean recursive) throws IOException {
  File f = new File(path);
  if (f.isFile()) {
    return f.delete();
  } else if (!recursive && f.isDirectory() && (FileUtil.listFiles(f).length != 0)) {
    throw new IOException("Directory " + f.toString() + " is not empty");
  }
  return FileUtil.fullyDelete(f);
}
 
Developer ID: intel-hpdd, Project: lustre-connector-for-hadoop, Lines of code: 11, Source file: LustreFsJavaImpl.java

Example 5: purgeLogsOlderThan

import org.apache.hadoop.fs.FileUtil; // import the package/class the method depends on
@Override
public void purgeLogsOlderThan(long minTxIdToKeep)
    throws IOException {
  LOG.info("Purging logs older than " + minTxIdToKeep);
  File[] files = FileUtil.listFiles(sd.getCurrentDir());
  List<EditLogFile> editLogs = matchEditLogs(files, true);
  for (EditLogFile log : editLogs) {
    if (log.getFirstTxId() < minTxIdToKeep &&
        log.getLastTxId() < minTxIdToKeep) {
      purger.purgeLog(log);
    }
  }
}
 
Developer ID: naver, Project: hadoop, Lines of code: 14, Source file: FileJournalManager.java

Example 6: delete

import org.apache.hadoop.fs.FileUtil; // import the package/class the method depends on
@Override
public boolean delete(Path p, boolean recursive) throws IOException {
  if (LOG.isDebugEnabled()) {
    LOG.debug(String.format("EFS:delete: %s %b", p, recursive));
  }
  
  // The superclass delete uses FileUtil.fullyDelete, but we cannot rely on
  // that here because we need the elevated operations to remove the files.
  //
  File f = pathToFile(p);
  if (!f.exists()) {
    //no path, return false "nothing to delete"
    return false;
  }
  else if (f.isFile()) {
    return Native.Elevated.deleteFile(p);
  } 
  else if (f.isDirectory()) {
    
    // This is a best-effort attempt. There are race conditions in that
    // child files can be created/deleted after we snapped the list. 
    // No need to protect against that case.
    File[] files = FileUtil.listFiles(f);
    int childCount = files.length;
    
    if (recursive) {
      for (File child : files) {
        if (delete(new Path(child.getPath()), recursive)) {
          --childCount;
        }
      }
    }
    if (childCount == 0) {
      return Native.Elevated.deleteDirectory(p);
    } 
    else {
      throw new IOException("Directory " + f.toString() + " is not empty");
    }
  }
  else {
    // This can happen under race conditions if an external agent 
    // is messing with the file type between IFs
    throw new IOException("Path " + f.toString() + 
        " exists, but is neither a file nor a directory");
  }
}
 
Developer ID: naver, Project: hadoop, Lines of code: 48, Source file: WindowsSecureContainerExecutor.java


Note: The org.apache.hadoop.fs.FileUtil.listFiles examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are excerpted from open-source projects contributed by their respective developers; copyright in the source code remains with the original authors, and any use or redistribution should follow the corresponding project's License. Do not reproduce without permission.