

Java LocatedFileStatus Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.fs.LocatedFileStatus. If you are wondering what the LocatedFileStatus class is for, how to use it, or what real-world usage looks like, the curated code examples below may help.


The LocatedFileStatus class belongs to the org.apache.hadoop.fs package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
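Before the examples, here is a minimal sketch of the pattern most of them share: listing files with FileSystem#listFiles and iterating the returned RemoteIterator of LocatedFileStatus. It assumes a Hadoop configuration is available on the classpath, and the path "/tmp/data" is a placeholder:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListFilesDemo {
  public static void main(String[] args) throws IOException {
    // Placeholder path; substitute a directory that exists in your filesystem.
    FileSystem fs = FileSystem.get(new Configuration());
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(new Path("/tmp/data"), true);
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      // LocatedFileStatus extends FileStatus with pre-fetched block locations,
      // so no extra getFileBlockLocations() call is needed per file.
      System.out.println(status.getPath() + " len=" + status.getLen()
          + " blocks=" + status.getBlockLocations().length);
    }
  }
}
```

RemoteIterator declares hasNext()/next() to throw IOException, which is why it is used here instead of java.util.Iterator; each example below handles that exception in its own way.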

Example 1: assertFileCount

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
/**
 * Assert that the number of log files in the target directory is as expected.
 * @param fs the target FileSystem
 * @param dir the target directory path
 * @param expected the expected number of files
 * @throws IOException thrown if listing files fails
 */
public void assertFileCount(FileSystem fs, Path dir, int expected)
    throws IOException {
  RemoteIterator<LocatedFileStatus> i = fs.listFiles(dir, true);
  int count = 0;

  while (i.hasNext()) {
    i.next();
    count++;
  }

  assertTrue("The sink created additional unexpected log files. " + count
      + " files were created", expected >= count);
  assertTrue("The sink created too few log files. " + count + " files were "
      + "created", expected <= count);
}
 
Author: nucypher, Project: hadoop-oss, Lines: 23, Source: RollingFileSystemSinkTestBase.java

Example 2: getNextIdToTry

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
/**
 * Return the next ID suffix to use when creating the log file. This method
 * will look at the files in the directory, find the one with the highest
 * ID suffix, add 1 to that suffix, and return it. This approach saves a full
 * linear probe, which matters in the case where there are a large number of
 * log files.
 *
 * @param initial the base file path
 * @param lastId the last ID value that was used
 * @return the next ID to try
 * @throws IOException thrown if there's an issue querying the files in the
 * directory
 */
private int getNextIdToTry(Path initial, int lastId)
    throws IOException {
  RemoteIterator<LocatedFileStatus> files =
      fileSystem.listFiles(currentDirPath, true);
  String base = initial.toString();
  int id = lastId;

  while (files.hasNext()) {
    String file = files.next().getPath().getName();

    if (file.startsWith(base)) {
      int fileId = extractId(file);

      if (fileId > id) {
        id = fileId;
      }
    }
  }

  // Return either 1 more than the highest we found or 1 more than the last
  // ID used (if no ID was found).
  return id + 1;
}
 
Author: nucypher, Project: hadoop-oss, Lines: 37, Source: RollingFileSystemSink.java

Example 3: listLocatedStatus

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
@Override
public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f,
    final PathFilter filter) throws FileNotFoundException, IOException {
  final InodeTree.ResolveResult<FileSystem> res = fsState
      .resolve(getUriPath(f), true);
  final RemoteIterator<LocatedFileStatus> statusIter = res.targetFileSystem
      .listLocatedStatus(res.remainingPath);

  if (res.isInternalDir()) {
    return statusIter;
  }

  return new RemoteIterator<LocatedFileStatus>() {
    @Override
    public boolean hasNext() throws IOException {
      return statusIter.hasNext();
    }

    @Override
    public LocatedFileStatus next() throws IOException {
      final LocatedFileStatus status = statusIter.next();
      return (LocatedFileStatus)fixFileStatus(status,
          getChrootedPath(res, status, f));
    }
  };
}
 
Author: nucypher, Project: hadoop-oss, Lines: 27, Source: ViewFileSystem.java

Example 4: testCopyRecursive

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
@Test
public void testCopyRecursive() throws Throwable {
  int expected = createTestFiles(sourceDir, 64);

  expectSuccess(
      "-s", sourceDir.toURI().toString(),
      "-d", destDir.toURI().toString(),
      "-t", "4",
      "-l", "3");

  LocalFileSystem local = FileSystem.getLocal(new Configuration());
  Set<String> entries = new TreeSet<>();
  RemoteIterator<LocatedFileStatus> iterator
      = local.listFiles(new Path(destDir.toURI()), true);
  int count = 0;
  while (iterator.hasNext()) {
    LocatedFileStatus next = iterator.next();
    entries.add(next.getPath().toUri().toString());
    LOG.info("Entry {} size = {}", next.getPath(), next.getLen());
    count++;
  }
  assertEquals("Mismatch in files found", expected, count);

}
 
Author: steveloughran, Project: cloudup, Lines: 25, Source: TestLocalCloudup.java

Example 5: getOrcFiles

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
/**
 * Get all ORC files present in directory for the specified table and partition/bucket. The ORC
 * files returned are in ascending order of the (insertion) time-partition and sequence-id within
 * the time-partition.
 *
 * @param orcDir the ORC store directory
 * @param fileExt the file extension to exclude from the results (e.g. the CRC suffix)
 * @param args the arguments in order: table-name, bucket-id, time-partition-id
 * @return the list of all ORC files
 */
private String[] getOrcFiles(final String orcDir, final String fileExt, final String... args) {
  try {
    FileSystem fileSystem = FileSystem.get(conf);
    Path distributedPath = new Path(Paths.get(orcDir, args).toString());
    ArrayList<String> filePathStrings = new ArrayList<>();
    if (fileSystem.exists(distributedPath)) {
      RemoteIterator<LocatedFileStatus> fileListItr = fileSystem.listFiles(distributedPath, true);
      while (fileListItr != null && fileListItr.hasNext()) {
        LocatedFileStatus file = fileListItr.next();
        if (!file.getPath().getName().endsWith(fileExt)) {
          // exclude CRC files
          filePathStrings.add(file.getPath().toUri().toString());
        }
      }

      Collections.sort(filePathStrings);
    }
    String[] retArray = new String[filePathStrings.size()];
    filePathStrings.toArray(retArray);
    return retArray;
  } catch (IOException e) {
    e.printStackTrace();
  }
  return new String[0];
}
 
Author: ampool, Project: monarch, Lines: 35, Source: AbstractTierStoreReader.java

Example 6: getFilesCount

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
public int getFilesCount(String storeBaseDir, String tableName) {
  int filesCount = 0;
  try {
    FileSystem fs = FileSystem.get(conf);
    Path storeBasePath = new Path(fs.getHomeDirectory(), storeBaseDir);
    Path tablePath = new Path(storeBasePath, tableName);
    if (fs.exists(tablePath)) {
      RemoteIterator<LocatedFileStatus> locatedFileStatusRemoteIterator =
          fs.listFiles(tablePath, false);
      while (locatedFileStatusRemoteIterator.hasNext()) {
        filesCount++;
        LocatedFileStatus next = locatedFileStatusRemoteIterator.next();
        System.out.println("File name is " + next.getPath());
      }
    }
  } catch (IOException e) {
    e.printStackTrace();
  }
  return filesCount;
}
 
Author: ampool, Project: monarch, Lines: 21, Source: HDFSQuasiService.java

Example 7: getORCRecords

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
public List<OrcStruct> getORCRecords(String storeBaseDir, String tableName) throws IOException {
  List<OrcStruct> orcrecords = new ArrayList<>();
  try {
    FileSystem fs = FileSystem.get(conf);
    Path storeBasePath = new Path(fs.getHomeDirectory(), storeBaseDir);
    Path tablePath = new Path(storeBasePath, tableName);
    if (fs.exists(tablePath)) {
      RemoteIterator<LocatedFileStatus> locatedFileStatusRemoteIterator =
          fs.listFiles(tablePath, false);
      while (locatedFileStatusRemoteIterator.hasNext()) {
        LocatedFileStatus next = locatedFileStatusRemoteIterator.next();
        final org.apache.hadoop.hive.ql.io.orc.Reader fis =
            OrcFile.createReader(next.getPath(), OrcFile.readerOptions(conf));
        RecordReader rows = fis.rows();
        while (rows.hasNext()) {
          orcrecords.add((OrcStruct) rows.next(null));
        }
        System.out.println("File name is " + next.getPath());
      }
    }
  } catch (IOException e) {
    e.printStackTrace();
  }
  return orcrecords;
}
 
Author: ampool, Project: monarch, Lines: 26, Source: HDFSQuasiService.java

Example 8: addInputPathRecursively

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
/**
 * Add files in the input path recursively into the results.
 * @param result the List to store all files
 * @param fs the FileSystem
 * @param path the input path
 * @param inputFilter the input filter that can be used to filter files/dirs
 * @throws IOException thrown if listing the path fails
 */
protected void addInputPathRecursively(List<FileStatus> result,
    FileSystem fs, Path path, PathFilter inputFilter) 
    throws IOException {
  RemoteIterator<LocatedFileStatus> iter = fs.listLocatedStatus(path);
  while (iter.hasNext()) {
    LocatedFileStatus stat = iter.next();
    if (inputFilter.accept(stat.getPath())) {
      if (stat.isDirectory()) {
        addInputPathRecursively(result, fs, stat.getPath(), inputFilter);
      } else {
        result.add(stat);
      }
    }
  }
}
 
Author: naver, Project: hadoop, Lines: 28, Source: FileInputFormat.java

Example 9: call

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
@Override
public Result call() throws Exception {
  Result result = new Result();
  result.fs = fs;

  if (fileStatus.isDirectory()) {
    RemoteIterator<LocatedFileStatus> iter = fs
        .listLocatedStatus(fileStatus.getPath());
    while (iter.hasNext()) {
      LocatedFileStatus stat = iter.next();
      if (inputFilter.accept(stat.getPath())) {
        if (recursive && stat.isDirectory()) {
          result.dirsNeedingRecursiveCalls.add(stat);
        } else {
          result.locatedFileStatuses.add(stat);
        }
      }
    }
  } else {
    result.locatedFileStatuses.add(fileStatus);
  }
  return result;
}
 
Author: naver, Project: hadoop, Lines: 24, Source: LocatedFileStatusFetcher.java

Example 10: testListFiles

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
@Test(timeout=60000)
public void testListFiles() throws IOException {
  Configuration conf = new HdfsConfiguration();
  MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
  
  try {
    DistributedFileSystem fs = cluster.getFileSystem();

    final Path relative = new Path("relative");
    fs.create(new Path(relative, "foo")).close();

    final List<LocatedFileStatus> retVal = new ArrayList<LocatedFileStatus>();
    final RemoteIterator<LocatedFileStatus> iter = fs.listFiles(relative, true);
    while (iter.hasNext()) {
      retVal.add(iter.next());
    }
    System.out.println("retVal = " + retVal);
  } finally {
    cluster.shutdown();
  }
}
 
Author: naver, Project: hadoop, Lines: 22, Source: TestDistributedFileSystem.java

Example 11: testFile

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
/** Test when input path is a file */
@Test
public void testFile() throws IOException {
  fc.mkdir(TEST_DIR, FsPermission.getDefault(), true);
  writeFile(fc, FILE1, FILE_LEN);

  RemoteIterator<LocatedFileStatus> itor = fc.util().listFiles(
      FILE1, true);
  LocatedFileStatus stat = itor.next();
  assertFalse(itor.hasNext());
  assertTrue(stat.isFile());
  assertEquals(FILE_LEN, stat.getLen());
  assertEquals(fc.makeQualified(FILE1), stat.getPath());
  assertEquals(1, stat.getBlockLocations().length);
  
  itor = fc.util().listFiles(FILE1, false);
  stat = itor.next();
  assertFalse(itor.hasNext());
  assertTrue(stat.isFile());
  assertEquals(FILE_LEN, stat.getLen());
  assertEquals(fc.makeQualified(FILE1), stat.getPath());
  assertEquals(1, stat.getBlockLocations().length);
}
 
Author: naver, Project: hadoop, Lines: 24, Source: TestListFilesInFileContext.java

Example 12: checkEquals

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
private static void checkEquals(RemoteIterator<LocatedFileStatus> i1,
    RemoteIterator<LocatedFileStatus> i2) throws IOException {
  while (i1.hasNext()) {
    assertTrue(i2.hasNext());
    
    // Compare all the fields but the path name, which is relative
    // to the original path from listFiles.
    LocatedFileStatus l1 = i1.next();
    LocatedFileStatus l2 = i2.next();
    assertEquals(l1.getAccessTime(), l2.getAccessTime());
    assertEquals(l1.getBlockSize(), l2.getBlockSize());
    assertEquals(l1.getGroup(), l2.getGroup());
    assertEquals(l1.getLen(), l2.getLen());
    assertEquals(l1.getModificationTime(), l2.getModificationTime());
    assertEquals(l1.getOwner(), l2.getOwner());
    assertEquals(l1.getPermission(), l2.getPermission());
    assertEquals(l1.getReplication(), l2.getReplication());
  }
  assertFalse(i2.hasNext());
}
 
Author: naver, Project: hadoop, Lines: 21, Source: TestINodeFile.java

Example 13: assertListFilesFinds

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
/**
 * To get this project to compile under Hadoop 1, this code needs to be
 * commented out
 *
 *
 * @param fs filesystem
 * @param dir dir
 * @param subdir subdir
 * @param recursive recurse?
 * @throws IOException IO problems
 */
public static void assertListFilesFinds(FileSystem fs,
                                        Path dir,
                                        Path subdir,
                                        boolean recursive) throws IOException {
  RemoteIterator<LocatedFileStatus> iterator =
    fs.listFiles(dir, recursive);
  boolean found = false;
  int entries = 0;
  StringBuilder builder = new StringBuilder();
  while (iterator.hasNext()) {
    LocatedFileStatus next = iterator.next();
    entries++;
    builder.append(next.toString()).append('\n');
    if (next.getPath().equals(subdir)) {
      found = true;
    }
  }
  assertTrue("Path " + subdir
             + " not found in directory " + dir + " : "
             + " entries=" + entries
             + " content"
             + builder.toString(),
             found);
}
 
Author: naver, Project: hadoop, Lines: 36, Source: TestV2LsOperations.java

Example 14: publishPlainDataStatistics

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
static DataStatistics publishPlainDataStatistics(Configuration conf, 
                                                 Path inputDir) 
throws IOException {
  FileSystem fs = inputDir.getFileSystem(conf);

  // obtain input data file statuses
  long dataSize = 0;
  long fileCount = 0;
  RemoteIterator<LocatedFileStatus> iter = fs.listFiles(inputDir, true);
  PathFilter filter = new Utils.OutputFileUtils.OutputFilesFilter();
  while (iter.hasNext()) {
    LocatedFileStatus lStatus = iter.next();
    if (filter.accept(lStatus.getPath())) {
      dataSize += lStatus.getLen();
      ++fileCount;
    }
  }

  // publish the plain data statistics
  LOG.info("Total size of input data : " 
           + StringUtils.humanReadableInt(dataSize));
  LOG.info("Total number of input data files : " + fileCount);
  
  return new DataStatistics(dataSize, fileCount, false);
}
 
Author: naver, Project: hadoop, Lines: 26, Source: GenerateData.java

Example 15: basicClientReadWrite

import org.apache.hadoop.fs.LocatedFileStatus; // import the required package/class
@Test
public void basicClientReadWrite() throws Exception {
  Path basePath = new Path(temporaryFolder.newFolder().getAbsolutePath());
  Path path = ((PathCanonicalizer) clientFS).canonicalizePath(new Path(basePath, "testfile.bytes"));
  final byte[] randomBytesMoreThanBuffer = new byte[RemoteNodeFileSystem.REMOTE_WRITE_BUFFER_SIZE * 3];
  Random r = new Random();
  r.nextBytes(randomBytesMoreThanBuffer);

  try(FSDataOutputStream stream = clientFS.create(path, false)){
    stream.write(randomBytesMoreThanBuffer);
  }


  RemoteIterator<LocatedFileStatus> iter = client.fileSystem.listFiles(basePath, false);
  assertEquals(true, iter.hasNext());
  LocatedFileStatus status = iter.next();

  try(FSDataInputStream in = clientFS.open(status.getPath())){
    byte[] back = new byte[randomBytesMoreThanBuffer.length];
    int dataRead = in.read(back);
    assertEquals(back.length, dataRead);
    assertTrue(Arrays.equals(randomBytesMoreThanBuffer, back));
  }
  client.fileSystem.delete(status.getPath(), false);
}
 
Author: dremio, Project: dremio-oss, Lines: 26, Source: TestRemoteNodeFileSystemDual.java


Note: The org.apache.hadoop.fs.LocatedFileStatus class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers; copyright of the source code remains with the original authors. For distribution and use, please refer to the corresponding project's license; do not reproduce without permission.