

Java FileStatus.getReplication Method Code Examples

This article compiles typical usage examples of the org.apache.hadoop.fs.FileStatus.getReplication method in Java. If you are wondering what FileStatus.getReplication does, how to call it, or what real-world uses look like, the hand-picked code examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.fs.FileStatus.


The following shows 5 code examples of the FileStatus.getReplication method, sorted by popularity by default.
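Before looking at the examples, here is a minimal, self-contained sketch of the most common way to read a file's replication factor; the path and class name are hypothetical placeholders and do not come from the examples below.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GetReplicationExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical path: replace with a file that exists on your cluster.
    Path file = new Path("/tmp/example.txt");
    Configuration conf = new Configuration();
    FileSystem fs = file.getFileSystem(conf);

    // getReplication() returns the file's replication factor as a short.
    FileStatus status = fs.getFileStatus(file);
    short replication = status.getReplication();
    System.out.println(file + " has replication factor " + replication);
  }
}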

Example 1: create

import org.apache.hadoop.fs.FileStatus; // import the package/class this method depends on
private FSDataOutputStream create(Path f, Reporter reporter,
    FileStatus srcstat) throws IOException {
  if (destFileSys.exists(f)) {
    destFileSys.delete(f, false);
  }
  if (!preserve_status) {
    return destFileSys.create(f, true, sizeBuf, reporter);
  }

  FsPermission permission = preseved.contains(FileAttribute.PERMISSION)?
      srcstat.getPermission(): null;
  short replication = preseved.contains(FileAttribute.REPLICATION)?
      srcstat.getReplication(): destFileSys.getDefaultReplication(f);
  long blockSize = preseved.contains(FileAttribute.BLOCK_SIZE)?
      srcstat.getBlockSize(): destFileSys.getDefaultBlockSize(f);
  return destFileSys.create(f, permission, true, sizeBuf, replication,
      blockSize, reporter);
}
 
Author: naver | Project: hadoop | Lines of code: 19 | Source: DistCpV1.java

Example 2: setReplication

import org.apache.hadoop.fs.FileStatus; // import the package/class this method depends on
/**
 * Increase the replication factor of _distcp_src_files to
 * sqrt(min(maxMapsOnCluster, numMaps)). This reduces the chance of distcp
 * failing because a replica of _distcp_src_files is not available for reading
 * by some maps.
 */
private static void setReplication(Configuration conf, JobConf jobConf,
                       Path srcfilelist, int numMaps) throws IOException {
  int numMaxMaps = new JobClient(jobConf).getClusterStatus().getMaxMapTasks();
  short replication = (short) Math.ceil(
                              Math.sqrt(Math.min(numMaxMaps, numMaps)));
  FileSystem fs = srcfilelist.getFileSystem(conf);
  FileStatus srcStatus = fs.getFileStatus(srcfilelist);

  if (srcStatus.getReplication() < replication) {
    if (!fs.setReplication(srcfilelist, replication)) {
      throw new IOException("Unable to increase the replication of file " +
                            srcfilelist);
    }
  }
}
 
Author: naver | Project: hadoop | Lines of code: 22 | Source: DistCpV1.java
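To make the formula above concrete, here is a quick worked illustration with hypothetical numbers (not taken from the example):

// Illustration only: the replication target computed by setReplication above,
// for hypothetical values numMaxMaps = 100 and numMaps = 400.
int numMaxMaps = 100;
int numMaps = 400;
short replication = (short) Math.ceil(Math.sqrt(Math.min(numMaxMaps, numMaps)));
// ceil(sqrt(min(100, 400))) = ceil(sqrt(100)) = 10
System.out.println(replication);  // prints 10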

Example 3: transform

import org.apache.hadoop.fs.FileStatus; // import the package/class this method depends on
private static FileStatus transform(FileStatus input, String bucket) {
  String relativePath = removeLeadingSlash(Path.getPathWithoutSchemeAndAuthority(input.getPath()).toString());
  Path bucketPath  = new Path(Path.SEPARATOR + bucket);
  Path fullPath = Strings.isEmpty(relativePath) ? bucketPath : new Path(bucketPath, relativePath);
  return new FileStatus(input.getLen(),
          input.isDirectory(),
          input.getReplication(),
          input.getBlockSize(),
          input.getModificationTime(),
          input.getAccessTime(),
          input.getPermission(),
          input.getOwner(),
          input.getGroup(),
          fullPath);
}
 
Author: dremio | Project: dremio-oss | Lines of code: 16 | Source: S3FileSystem.java

Example 4: getFileLists

import org.apache.hadoop.fs.FileStatus; // import the package/class this method depends on
/**
 * @return <expected, gotten, backup>, where each is sorted
 */
private static List<List<String>> getFileLists(FileStatus[] previous, FileStatus[] archived) {
  List<List<String>> files = new ArrayList<List<String>>();

  // copy over the original files
  List<String> originalFileNames = convertToString(previous);
  files.add(originalFileNames);

  List<String> currentFiles = new ArrayList<String>(previous.length);
  List<FileStatus> backedupFiles = new ArrayList<FileStatus>(previous.length);
  for (FileStatus f : archived) {
    String name = f.getPath().getName();
    // if the file has been backed up
    if (name.contains(".")) {
      Path parent = f.getPath().getParent();
      String shortName = name.split("[.]")[0];
      Path modPath = new Path(parent, shortName);
      FileStatus file = new FileStatus(f.getLen(), f.isDirectory(), f.getReplication(),
          f.getBlockSize(), f.getModificationTime(), modPath);
      backedupFiles.add(file);
    } else {
      // otherwise, add it to the list to compare to the original store files
      currentFiles.add(name);
    }
  }

  files.add(currentFiles);
  files.add(convertToString(backedupFiles));
  return files;
}
 
Author: fengchen8086 | Project: ditb | Lines of code: 33 | Source: HFileArchiveTestingUtil.java

Example 5: getReplicationFactor

import org.apache.hadoop.fs.FileStatus; // import the package/class this method depends on
private static short getReplicationFactor(
        EnumSet<FileAttribute> fileAttributes,
        FileStatus sourceFile, FileSystem targetFS, Path tmpTargetPath) {
  return fileAttributes.contains(FileAttribute.REPLICATION)?
          sourceFile.getReplication() : targetFS.getDefaultReplication(tmpTargetPath);
}
 
Author: naver | Project: hadoop | Lines of code: 7 | Source: RetriableFileCopyCommand.java
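For context, the replication factor chosen here is typically passed on to the target FileSystem's create call. The following is a hedged sketch of that pattern; the helper name, buffer size, and default permission are assumptions for illustration and do not appear in the original class.

import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Hypothetical helper: create the target file using either the source file's
// replication factor or the target FileSystem's default, mirroring the choice
// made by getReplicationFactor above.
static FSDataOutputStream createWithReplication(FileSystem targetFS, Path target,
    FileStatus sourceFile, boolean preserveReplication) throws IOException {
  short replication = preserveReplication
      ? sourceFile.getReplication()
      : targetFS.getDefaultReplication(target);
  return targetFS.create(target,
      FsPermission.getFileDefault(),   // placeholder permission
      true,                            // overwrite if the file exists
      4096,                            // placeholder buffer size
      replication,
      targetFS.getDefaultBlockSize(target),
      null);                           // no progress reporting
}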


Note: The org.apache.hadoop.fs.FileStatus.getReplication examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their authors; copyright of the source code remains with the original authors, and distribution and use must follow each project's license. Do not reproduce without permission.