

Java FileStatus.getLen Method Code Examples

This article compiles typical usage examples of the Java method org.apache.hadoop.fs.FileStatus.getLen. If you are wondering what FileStatus.getLen does, how to call it, or what it looks like in real code, the curated method examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.fs.FileStatus.


The following presents 15 code examples of the FileStatus.getLen method, sorted by popularity by default.
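
Before diving into the examples, here is a minimal, hypothetical sketch of the typical call pattern: obtain a FileStatus from a FileSystem and read the file length in bytes with getLen(). The configuration and path below are placeholder assumptions, not taken from any of the examples.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GetLenSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);                                 // default file system from the configuration
    FileStatus status = fs.getFileStatus(new Path("/tmp/example.txt"));   // placeholder path
    long lengthInBytes = status.getLen();                                 // file length in bytes
    System.out.println(status.getPath() + " -> " + lengthInBytes + " bytes");
  }
}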

Example 1: getFileStatus

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
/**
 * Convert the file information in an LsEntry to a {@link FileStatus} object.
 *
 * @param channel the open SFTP channel
 * @param sftpFile the directory entry returned by the SFTP channel
 * @param parentPath the parent directory of the entry
 * @return file status
 * @throws IOException if a symlink cannot be resolved
 */
private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile,
    Path parentPath) throws IOException {

  SftpATTRS attr = sftpFile.getAttrs();
  long length = attr.getSize();
  boolean isDir = attr.isDir();
  boolean isLink = attr.isLink();
  if (isLink) {
    String link = parentPath.toUri().getPath() + "/" + sftpFile.getFilename();
    try {
      link = channel.realpath(link);

      Path linkParent = new Path("/", link);

      FileStatus fstat = getFileStatus(channel, linkParent);
      isDir = fstat.isDirectory();
      length = fstat.getLen();
    } catch (Exception e) {
      throw new IOException(e);
    }
  }
  int blockReplication = 1;
  // Using default block size since there is no way in SFTP channel to know of
  // block sizes on server. The assumption could be less than ideal.
  long blockSize = DEFAULT_BLOCK_SIZE;
  long modTime = attr.getMTime() * 1000L; // getMTime() returns seconds as an int; multiply as long to avoid overflow when converting to milliseconds
  long accessTime = 0;
  FsPermission permission = getPermissions(sftpFile);
  // We cannot resolve the real user/group names over SFTP, so use the numeric
  // user and group IDs instead
  String user = Integer.toString(attr.getUId());
  String group = Integer.toString(attr.getGId());
  Path filePath = new Path(parentPath, sftpFile.getFilename());

  return new FileStatus(length, isDir, blockReplication, blockSize, modTime,
      accessTime, permission, user, group, filePath.makeQualified(
          this.getUri(), this.getWorkingDirectory()));
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 47, Source: SFTPFileSystem.java

Example 2: size

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
public long size()
{
    if (this.fileLength != -1)
        return fileLength;

    try
    {
        if (fs != null && filePath != null) {
            FileStatus fileStatus = fs.getFileStatus(filePath);
            fileLength = fileStatus.getLen();
            return fileLength;
        }

        return -1;
    } catch (IOException e)
    {
        throw new FSReadError(e, filePath.getName());
    }
}
 
Developer: Netflix, Project: sstable-adaptor, Lines: 20, Source: ChannelProxy.java

Example 3: isValid

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
/**
 * Return whether the specified file is a valid store file.
 *
 * @param fileStatus The {@link FileStatus} of the file
 * @return <tt>true</tt> if the file is valid
 */
public static boolean isValid(final FileStatus fileStatus) throws IOException {
  final Path p = fileStatus.getPath();

  if (fileStatus.isDirectory()) return false;

  // Check for empty hfile. Should never be the case but can happen
  // after data loss in hdfs for whatever reason (upgrade, etc.): HBASE-646
  // NOTE: that the HFileLink is just a name, so it's an empty file.
  if (!HFileLink.isHFileLink(p) && fileStatus.getLen() <= 0) {
    LOG.warn("Skipping " + p + " because it is empty. HBASE-646 DATA LOSS?");
    return false;
  }

  return validateStoreFileName(p.getName());
}
 
Developer: fengchen8086, Project: ditb, Lines: 22, Source: StoreFileInfo.java

Example 4: computeRefFileHDFSBlockDistribution

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
/**
 * Helper function to compute the HDFS block distribution of a given reference file. For a
 * reference file we don't compute the exact value; we use an estimate instead, since it should be
 * good enough. We assume the bottom part takes the first half of the reference file and the top
 * part takes the second half. This is just an estimate, given that the midkey of the region !=
 * the midkey of the HFile, and that the number and size of keys vary. If this estimate isn't good
 * enough, we can improve it later.
 *
 * @param fs        The FileSystem
 * @param reference The reference
 * @param status    The reference FileStatus
 * @return HDFS blocks distribution
 */
private static HDFSBlocksDistribution computeRefFileHDFSBlockDistribution(final FileSystem fs,
    final Reference reference, final FileStatus status) throws IOException {
  if (status == null) {
    return null;
  }

  long start = 0;
  long length = 0;

  if (Reference.isTopFileRegion(reference.getFileRegion())) {
    start = status.getLen() / 2;
    length = status.getLen() - status.getLen() / 2;
  } else {
    start = 0;
    length = status.getLen() / 2;
  }
  return FSUtils.computeHDFSBlocksDistribution(fs, status, start, length);
}
 
Developer: fengchen8086, Project: ditb, Lines: 32, Source: StoreFileInfo.java

Example 5: createTableDescriptorForTableDirectory

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
/**
 * Create a new HTableDescriptor in HDFS in the specified table directory. Happens when we create
 * a new table or snapshot a table.
 * @param tableDir table directory under which we should write the file
 * @param htd description of the table to write
 * @param forceCreation if <tt>true</tt>, then even if a previous table descriptor is present it will
 *          be overwritten
 * @return <tt>true</tt> if we successfully created the file, <tt>false</tt> if the file
 *         already exists and we weren't forcing the descriptor creation.
 * @throws IOException if a filesystem error occurs
 */
public boolean createTableDescriptorForTableDirectory(Path tableDir,
    HTableDescriptor htd, boolean forceCreation) throws IOException {
  if (fsreadonly) {
    throw new NotImplementedException("Cannot create a table descriptor - in read only mode");
  }
  FileStatus status = getTableInfoPath(fs, tableDir);
  if (status != null) {
    LOG.debug("Current tableInfoPath = " + status.getPath());
    if (!forceCreation) {
      if (fs.exists(status.getPath()) && status.getLen() > 0) {
        if (readTableDescriptor(fs, status, false).equals(htd)) {
          LOG.debug("TableInfo already exists.. Skipping creation");
          return false;
        }
      }
    }
  }
  Path p = writeTableDescriptor(fs, htd, tableDir, status);
  return p != null;
}
 
Developer: fengchen8086, Project: ditb, Lines: 32, Source: FSTableDescriptors.java

Example 6: sameFile

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
/**
 * Check whether the two files are equal by looking at the file length,
 * and at the checksum (if the user has specified the verifyChecksum flag).
 */
private boolean sameFile(final FileStatus inputStat, final FileStatus outputStat) {
  // Not matching length
  if (inputStat.getLen() != outputStat.getLen()) return false;

  // Mark the files as equal, since the user asked for no checksum verification
  if (!verifyChecksum) return true;

  // If checksums are not available, files are not the same.
  FileChecksum inChecksum = getFileChecksum(inputFs, inputStat.getPath());
  if (inChecksum == null) return false;

  FileChecksum outChecksum = getFileChecksum(outputFs, outputStat.getPath());
  if (outChecksum == null) return false;

  return inChecksum.equals(outChecksum);
}
 
Developer: fengchen8086, Project: ditb, Lines: 21, Source: ExportSnapshot.java

Example 7: handle

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
private Response handle(PhysicalConnection connection, DFS.WriteDataRequest request, ByteBuf buf) throws IOException {

    final Path path = new Path(request.getPath());
    if(request.getLastOffset() == 0){
      // initial creation and write.
      return writeData(path, buf, true);
    }

    // Append: first check the last update time and offset. (There is a concurrency window between
    // the check and the write, but that doesn't seem important in this use case: home file uploads.)
    FileStatus fs = localFS.getFileStatus(path);
    if(fs.getModificationTime() != request.getLastUpdate()){
      throw new IOException(String.format("Unexpected last modification time. Expected time: %d, Actual time: %d.",
        request.getLastUpdate(), fs.getModificationTime()));
    }

    if(fs.getLen() != request.getLastOffset()) {
      throw new IOException(String.format("Unexpected last offset. Remote offset: %d, Actual offset: %d.",
        request.getLastOffset(), fs.getLen()));
    }

    return writeData(path, buf, false);

  }
 
Developer: dremio, Project: dremio-oss, Lines: 25, Source: PDFSProtocol.java

Example 8: getSplits

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
@Override
public List<InputSplit> getSplits(JobContext jobCtxt) throws IOException {
  final JobConf jobConf = new JobConf(jobCtxt.getConfiguration());
  final JobClient client = new JobClient(jobConf);
  ClusterStatus stat = client.getClusterStatus(true);
  int numTrackers = stat.getTaskTrackers();
  final int fileCount = jobConf.getInt(GRIDMIX_DISTCACHE_FILE_COUNT, -1);

  // Total size of distributed cache files to be generated
  final long totalSize = jobConf.getLong(GRIDMIX_DISTCACHE_BYTE_COUNT, -1);
  // Get the path of the special file
  String distCacheFileList = jobConf.get(GRIDMIX_DISTCACHE_FILE_LIST);
  if (fileCount < 0 || totalSize < 0 || distCacheFileList == null) {
    throw new RuntimeException("Invalid metadata: #files (" + fileCount
        + "), total_size (" + totalSize + "), filelisturi ("
        + distCacheFileList + ")");
  }

  Path sequenceFile = new Path(distCacheFileList);
  FileSystem fs = sequenceFile.getFileSystem(jobConf);
  FileStatus srcst = fs.getFileStatus(sequenceFile);
  // Consider the number of TTs * mapSlotsPerTracker as number of mappers.
  int numMapSlotsPerTracker = jobConf.getInt(TTConfig.TT_MAP_SLOTS, 2);
  int numSplits = numTrackers * numMapSlotsPerTracker;

  List<InputSplit> splits = new ArrayList<InputSplit>(numSplits);
  LongWritable key = new LongWritable();
  BytesWritable value = new BytesWritable();

  // Average size of data to be generated by each map task
  final long targetSize = Math.max(totalSize / numSplits,
                            DistributedCacheEmulator.AVG_BYTES_PER_MAP);
  long splitStartPosition = 0L;
  long splitEndPosition = 0L;
  long acc = 0L;
  long bytesRemaining = srcst.getLen();
  SequenceFile.Reader reader = null;
  try {
    reader = new SequenceFile.Reader(fs, sequenceFile, jobConf);
    while (reader.next(key, value)) {

      // If adding this file would put this split past the target size,
      // cut the last split and put this file in the next split.
      if (acc + key.get() > targetSize && acc != 0) {
        long splitSize = splitEndPosition - splitStartPosition;
        splits.add(new FileSplit(
            sequenceFile, splitStartPosition, splitSize, (String[])null));
        bytesRemaining -= splitSize;
        splitStartPosition = splitEndPosition;
        acc = 0L;
      }
      acc += key.get();
      splitEndPosition = reader.getPosition();
    }
  } finally {
    if (reader != null) {
      reader.close();
    }
  }
  if (bytesRemaining != 0) {
    splits.add(new FileSplit(
        sequenceFile, splitStartPosition, bytesRemaining, (String[])null));
  }

  return splits;
}
 
Developer: naver, Project: hadoop, Lines: 67, Source: GenerateDistCacheData.java

Example 9: processEndOfFile

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
/**
 * If the queue isn't empty, switch to the next log file. Otherwise, if this is a recovered
 * queue, it means we're done! Else we'll just continue trying to read from the current log file.
 * @return true if we're done with the current file, false if we should continue trying to read
 *         from it
 */
@edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "DE_MIGHT_IGNORE",
    justification = "Yeah, this is how it works")
protected boolean processEndOfFile() {
  if (this.queue.size() != 0) {
    if (LOG.isTraceEnabled()) {
      String filesize = "N/A";
      try {
        FileStatus stat = fs.getFileStatus(this.currentPath);
        filesize = stat.getLen() + "";
      } catch (IOException ex) {
      }
      LOG.trace("Reached the end of log " + this.currentPath + ", stats: " + getStats()
          + ", and the length of the file is " + filesize);
    }
    this.currentPath = null;
    this.repLogReader.finishCurrentFile();
    this.reader = null;
    return true;
  } else if (this.replicationQueueInfo.isQueueRecovered()) {
    LOG.debug("Finished recovering queue for group " + walGroupId + " of peer "
        + peerClusterZnode);
    workerRunning = false;
    return true;
  }
  return false;
}
 
Developer: fengchen8086, Project: ditb, Lines: 33, Source: ReplicationSource.java

Example 10: compareFiles

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
public static boolean compareFiles(FileStatus f1, FileStatus f2) throws Exception {
  byte[] original = new byte[(int)f1.getLen()];
  byte[] withDict = new byte[(int)f2.getLen()];

  try (FSDataInputStream in1 = localFs.open(f1.getPath()); FSDataInputStream in2 = localFs.open(f2.getPath());) {
    IOUtils.readFully(in1, original, 0, original.length);
    IOUtils.readFully(in2, withDict, 0, withDict.length);
  }

  return Arrays.equals(original, withDict);
}
 
Developer: dremio, Project: dremio-oss, Lines: 12, Source: BaseTestQuery.java

Example 11: getFileBlockLocations

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
/**
 * Return an array containing hostnames, offset and size of
 * portions of the given file. For WASB we'll just lie and give
 * fake hosts to make sure we get many splits in MR jobs.
 */
@Override
public BlockLocation[] getFileBlockLocations(FileStatus file,
    long start, long len) throws IOException {
  if (file == null) {
    return null;
  }

  if ((start < 0) || (len < 0)) {
    throw new IllegalArgumentException("Invalid start or len parameter");
  }

  if (file.getLen() < start) {
    return new BlockLocation[0];
  }
  final String blobLocationHost = getConf().get(
      AZURE_BLOCK_LOCATION_HOST_PROPERTY_NAME,
      AZURE_BLOCK_LOCATION_HOST_DEFAULT);
  final String[] name = { blobLocationHost };
  final String[] host = { blobLocationHost };
  long blockSize = file.getBlockSize();
  if (blockSize <= 0) {
    throw new IllegalArgumentException(
        "The block size for the given file is not a positive number: "
            + blockSize);
  }
  int numberOfLocations = (int) (len / blockSize)
      + ((len % blockSize == 0) ? 0 : 1);
  BlockLocation[] locations = new BlockLocation[numberOfLocations];
  for (int i = 0; i < locations.length; i++) {
    long currentOffset = start + (i * blockSize);
    long currentLength = Math.min(blockSize, start + len - currentOffset);
    locations[i] = new BlockLocation(name, host, currentOffset, currentLength);
  }
  return locations;
}
 
Developer: naver, Project: hadoop, Lines: 41, Source: NativeAzureFileSystem.java

Example 12: getJobSize

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
/**
 * @return the number of bytes across all files in the job.
 */
private long getJobSize(JobContext job) throws IOException {
  List<FileStatus> stats = listStatus(job);
  long count = 0;
  for (FileStatus stat : stats) {
    count += stat.getLen();
  }

  return count;
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 13, Source: ExportInputFormat.java

Example 13: visitRegionRecoveredEdits

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
/**
 * Iterate over recovered.edits of the specified region
 *
 * @param fs {@link FileSystem}
 * @param regionDir {@link Path} to the Region directory
 * @param visitor callback object to get the recovered.edits files
 * @throws IOException if an error occurred while scanning the directory
 */
public static void visitRegionRecoveredEdits(final FileSystem fs, final Path regionDir,
    final FSVisitor.RecoveredEditsVisitor visitor) throws IOException {
  NavigableSet<Path> files = WALSplitter.getSplitEditFilesSorted(fs, regionDir);
  if (files == null || files.size() == 0) return;

  for (Path source: files) {
    // check to see if the file is zero length, in which case we can skip it
    FileStatus stat = fs.getFileStatus(source);
    if (stat.getLen() <= 0) continue;

    visitor.recoveredEdits(regionDir.getName(), source.getName());
  }
}
 
Developer: fengchen8086, Project: ditb, Lines: 22, Source: FSVisitor.java

Example 14: createLocalResource

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
/**
 * Create a {@link LocalResource} record with all the given parameters.
 */
private static LocalResource createLocalResource(FileSystem fc, Path file,
    LocalResourceType type, LocalResourceVisibility visibility)
    throws IOException {
  FileStatus fstat = fc.getFileStatus(file);
  URL resourceURL = ConverterUtils.getYarnUrlFromPath(fc.resolvePath(fstat
      .getPath()));
  long resourceSize = fstat.getLen();
  long resourceModificationTime = fstat.getModificationTime();

  return LocalResource.newInstance(resourceURL, type, visibility,
    resourceSize, resourceModificationTime);
}
 
Developer: naver, Project: hadoop, Lines: 16, Source: TaskAttemptImpl.java

Example 15: compare

import org.apache.hadoop.fs.FileStatus; // import the class this method depends on
public int compare(FileStatus a, FileStatus b) {
  if (a.getLen() < b.getLen())
    return -1;
  else if (a.getLen() == b.getLen())
    if (a.getPath().toString().equals(b.getPath().toString()))
      return 0;
    else
      return -1; 
  else
    return 1;
}
 
Developer: naver, Project: hadoop, Lines: 12, Source: ReduceTask.java
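
As a hedged usage sketch (not part of ReduceTask itself), a length-based ordering like the one above can also be expressed with a standard comparator to sort the FileStatus entries of a directory; the directory path below is a placeholder assumption.

import java.util.Arrays;
import java.util.Comparator;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SortByLenSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus[] statuses = fs.listStatus(new Path("/tmp/data")); // placeholder directory
    // Order the entries from smallest to largest by their length in bytes.
    Arrays.sort(statuses, Comparator.comparingLong(FileStatus::getLen));
    for (FileStatus s : statuses) {
      System.out.println(s.getPath() + " -> " + s.getLen() + " bytes");
    }
  }
}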


Note: The org.apache.hadoop.fs.FileStatus.getLen method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. Please consult the corresponding project's license before redistributing or using the code, and do not republish without permission.