

Java FileSystem.Statistics Code Examples

This article collects typical usage examples of org.apache.hadoop.fs.FileSystem.Statistics in Java (Statistics is a nested class of FileSystem). If you are wondering what FileSystem.Statistics is for or how to use it in practice, the hand-picked code examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.fs.FileSystem.


The 15 code examples below show how FileSystem.Statistics is used, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
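Before the individual examples, here is a minimal, self-contained sketch of what FileSystem.Statistics represents: a per-scheme set of I/O counters that every FileSystem implementation updates as a side effect of reads and writes. The class name StatisticsDemo and the local path are illustrative, not taken from any Hadoop source; the aggregation loop follows the same pattern as the getFsStatistics example further down this page.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StatisticsDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);

    // Write a few bytes; the file system updates its Statistics as a side effect.
    Path p = new Path("/tmp/stats-demo.txt"); // illustrative path
    try (FSDataOutputStream out = fs.create(p, true)) {
      out.writeBytes("hello statistics");
    }

    // Sum the counters registered for the "file" scheme.
    long bytesWritten = 0;
    long writeOps = 0;
    for (FileSystem.Statistics stats : FileSystem.getAllStatistics()) {
      if ("file".equals(stats.getScheme())) {
        bytesWritten += stats.getBytesWritten();
        writeOps += stats.getWriteOps();
      }
    }
    System.out.println("bytes written: " + bytesWritten);
    System.out.println("write ops: " + writeOps);
  }
}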

Example 1: SFTPInputStream

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
SFTPInputStream(InputStream stream, ChannelSftp channel,
    FileSystem.Statistics stats) {

  if (stream == null) {
    throw new IllegalArgumentException(E_NULL_INPUTSTREAM);
  }
  if (channel == null || !channel.isConnected()) {
    throw new IllegalArgumentException(E_CLIENT_NULL);
  }
  this.wrappedStream = stream;
  this.channel = channel;
  this.stats = stats;

  this.pos = 0;
  this.closed = false;
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 17, Source: SFTPInputStream.java
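The constructor above only stores the Statistics handle; the byte accounting happens in the stream's read methods. The following is a hedged sketch, not the actual Hadoop source, of how such a wrapper typically reports bytes through Statistics.incrementBytesRead:

@Override
public synchronized int read() throws IOException {
  if (closed) {
    throw new IOException("Stream closed"); // hypothetical error message
  }
  int byteRead = wrappedStream.read();
  if (byteRead >= 0) {
    pos++; // keep the logical position in sync with the wrapped stream
    if (stats != null) {
      stats.incrementBytesRead(1); // report one byte to the per-scheme counters
    }
  }
  return byteRead;
}

The null check on stats is defensive: the constructor above accepts a null Statistics without complaint.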

Example 2: getEvents

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
private byte[] getEvents() throws Exception {
  ByteArrayOutputStream output = new ByteArrayOutputStream();
  FSDataOutputStream fsOutput = new FSDataOutputStream(output,
      new FileSystem.Statistics("scheme"));
  EventWriter writer = new EventWriter(fsOutput);
  writer.write(getJobPriorityChangedEvent());
  writer.write(getJobStatusChangedEvent());
  writer.write(getTaskUpdatedEvent());
  writer.write(getReduceAttemptKilledEvent());
  writer.write(getJobKilledEvent());
  writer.write(getSetupAttemptStartedEvent());
  writer.write(getTaskAttemptFinishedEvent());
  writer.write(getSetupAttemptFieledEvent());
  writer.write(getSetupAttemptKilledEvent());
  writer.write(getCleanupAttemptStartedEvent());
  writer.write(getCleanupAttemptFinishedEvent());
  writer.write(getCleanupAttemptFiledEvent());
  writer.write(getCleanupAttemptKilledEvent());

  writer.flush();
  writer.close();

  return output.toByteArray();
}
 
Developer: naver, Project: hadoop, Lines: 25, Source: TestEvents.java

Example 3: createWrappedOutputStream

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
/**
 * Wraps the stream in a CryptoOutputStream if the underlying file is
 * encrypted.
 */
public HdfsDataOutputStream createWrappedOutputStream(DFSOutputStream dfsos,
    FileSystem.Statistics statistics, long startPos) throws IOException {
  final FileEncryptionInfo feInfo = dfsos.getFileEncryptionInfo();
  if (feInfo != null) {
    // File is encrypted, wrap the stream in a crypto stream.
    // Currently only one version, so no special logic based on the version #
    getCryptoProtocolVersion(feInfo);
    final CryptoCodec codec = getCryptoCodec(conf, feInfo);
    KeyVersion decrypted = decryptEncryptedDataEncryptionKey(feInfo);
    final CryptoOutputStream cryptoOut =
        new CryptoOutputStream(dfsos, codec,
            decrypted.getMaterial(), feInfo.getIV(), startPos);
    return new HdfsDataOutputStream(cryptoOut, statistics, startPos);
  } else {
    // No FileEncryptionInfo present so no encryption.
    return new HdfsDataOutputStream(dfsos, statistics, startPos);
  }
}
 
Developer: naver, Project: hadoop, Lines: 23, Source: DFSClient.java

Example 4: testStatistics

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
@Test
public void testStatistics() throws Exception {
  FileSystem.clearStatistics();
  FileSystem.Statistics stats = FileSystem.getStatistics("wasb",
      NativeAzureFileSystem.class);
  assertEquals(0, stats.getBytesRead());
  assertEquals(0, stats.getBytesWritten());
  Path newFile = new Path("testStats");
  writeString(newFile, "12345678");
  assertEquals(8, stats.getBytesWritten());
  assertEquals(0, stats.getBytesRead());
  String readBack = readString(newFile);
  assertEquals("12345678", readBack);
  assertEquals(8, stats.getBytesRead());
  assertEquals(8, stats.getBytesWritten());
  assertTrue(fs.delete(newFile, true));
  assertEquals(8, stats.getBytesRead());
  assertEquals(8, stats.getBytesWritten());
}
 
Developer: naver, Project: hadoop, Lines: 20, Source: NativeAzureFileSystemBaseTest.java

Example 5: S3AInputStream

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
public S3AInputStream(String bucket, String key, long contentLength, AmazonS3Client client,
                      FileSystem.Statistics stats) {
  this.bucket = bucket;
  this.key = key;
  this.contentLength = contentLength;
  this.client = client;
  this.stats = stats;
  this.pos = 0;
  this.closed = false;
  this.wrappedStream = null;
}
 
Developer: naver, Project: hadoop, Lines: 12, Source: S3AInputStream.java

Example 6: FTPInputStream

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
public FTPInputStream(InputStream stream, FTPClient client,
    FileSystem.Statistics stats) {
  if (stream == null) {
    throw new IllegalArgumentException("Null InputStream");
  }
  if (client == null || !client.isConnected()) {
    throw new IllegalArgumentException("FTP client null or not connected");
  }
  this.wrappedStream = stream;
  this.client = client;
  this.stats = stats;
  this.pos = 0;
  this.closed = false;
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 15, Source: FTPInputStream.java

Example 7: SwiftNativeInputStream

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
public SwiftNativeInputStream(SwiftNativeFileSystemStore storeNative,
    FileSystem.Statistics statistics, Path path, long bufferSize)
        throws IOException {
  this.nativeStore = storeNative;
  this.statistics = statistics;
  this.path = path;
  if (bufferSize <= 0) {
    throw new IllegalArgumentException("Invalid buffer size");
  }
  this.bufferSize = bufferSize;
  //initial buffer fill
  this.httpStream = storeNative.getObject(path).getInputStream();
  //fillBuffer(0);
}
 
Developer: naver, Project: hadoop, Lines: 15, Source: SwiftNativeInputStream.java

Example 8: getFsStatistics

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
/**
 * Gets a handle to the Statistics instance based on the scheme associated
 * with path.
 * 
 * @param path the path.
 * @param conf the configuration to extract the scheme from if not part of 
 *   the path.
 * @return a Statistics instance, or null if none is found for the scheme.
 */
protected static List<Statistics> getFsStatistics(Path path, Configuration conf) throws IOException {
  List<Statistics> matchedStats = new ArrayList<FileSystem.Statistics>();
  path = path.getFileSystem(conf).makeQualified(path);
  String scheme = path.toUri().getScheme();
  for (Statistics stats : FileSystem.getAllStatistics()) {
    if (stats.getScheme().equals(scheme)) {
      matchedStats.add(stats);
    }
  }
  return matchedStats;
}
 
Developer: naver, Project: hadoop, Lines: 21, Source: Task.java
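A hedged usage sketch for this helper (the output path below is hypothetical): in the real Task class, the returned list feeds the counter updater shown in the next example.

// Hypothetical call site inside a Task subclass:
Configuration conf = new Configuration();
Path outputPath = new Path("hdfs://namenode:8020/user/demo/part-00000");
List<FileSystem.Statistics> taskStats = getFsStatistics(outputPath, conf);
for (FileSystem.Statistics stats : taskStats) {
  System.out.println(stats.getScheme() + ": " + stats.getBytesWritten() + " bytes written");
}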

Example 9: updateCounters

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
void updateCounters() {
  if (readBytesCounter == null) {
    readBytesCounter = counters.findCounter(scheme,
        FileSystemCounter.BYTES_READ);
  }
  if (writeBytesCounter == null) {
    writeBytesCounter = counters.findCounter(scheme,
        FileSystemCounter.BYTES_WRITTEN);
  }
  if (readOpsCounter == null) {
    readOpsCounter = counters.findCounter(scheme,
        FileSystemCounter.READ_OPS);
  }
  if (largeReadOpsCounter == null) {
    largeReadOpsCounter = counters.findCounter(scheme,
        FileSystemCounter.LARGE_READ_OPS);
  }
  if (writeOpsCounter == null) {
    writeOpsCounter = counters.findCounter(scheme,
        FileSystemCounter.WRITE_OPS);
  }
  long readBytes = 0;
  long writeBytes = 0;
  long readOps = 0;
  long largeReadOps = 0;
  long writeOps = 0;
  for (FileSystem.Statistics stat: stats) {
    readBytes = readBytes + stat.getBytesRead();
    writeBytes = writeBytes + stat.getBytesWritten();
    readOps = readOps + stat.getReadOps();
    largeReadOps = largeReadOps + stat.getLargeReadOps();
    writeOps = writeOps + stat.getWriteOps();
  }
  readBytesCounter.setValue(readBytes);
  writeBytesCounter.setValue(writeBytes);
  readOpsCounter.setValue(readOps);
  largeReadOpsCounter.setValue(largeReadOps);
  writeOpsCounter.setValue(writeOps);
}
 
Developer: naver, Project: hadoop, Lines: 40, Source: Task.java

Example 10: FSDataOutputStreamWrapper

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
public FSDataOutputStreamWrapper(FSDataOutputStream os, FileSystem.Statistics stats,
    long startPosition) throws IOException {
  super(os, stats, startPosition);
  underlyingOS = os;
}
 
Developer: dremio, Project: dremio-oss, Lines: 6, Source: FSDataOutputStreamWrapper.java

Example 11: PositionCache

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
public PositionCache(OutputStream out, FileSystem.Statistics stats, long pos)
    throws IOException {
  super(out);
  statistics = stats;
  position = pos;
}
 
Developer: ampool, Project: monarch, Lines: 7, Source: ADataOutputStream.java

Example 12: ADataOutputStream

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
public ADataOutputStream(OutputStream out, FileSystem.Statistics stats, long startPosition)
    throws IOException {
  super(new ADataOutputStream.PositionCache(out, stats, startPosition));
  wrappedStream = out;
}
 
Developer: ampool, Project: monarch, Lines: 6, Source: ADataOutputStream.java

Example 13: FileSystemStatisticUpdater

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
FileSystemStatisticUpdater(List<FileSystem.Statistics> stats, String scheme) {
  this.stats = stats;
  this.scheme = scheme;
}
 
Developer: naver, Project: hadoop, Lines: 5, Source: Task.java
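Examples 8, 9 and 13 all come from the same Task.java: getFsStatistics collects the per-scheme list of FileSystem.Statistics, FileSystemStatisticUpdater holds that list together with its scheme, and updateCounters folds the accumulated byte and operation counts into the MapReduce FileSystemCounter values.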

Example 14: HdfsDataOutputStream

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
public HdfsDataOutputStream(DFSOutputStream out, FileSystem.Statistics stats,
    long startPosition) throws IOException {
  super(out, stats, startPosition);
}
 
Developer: naver, Project: hadoop, Lines: 5, Source: HdfsDataOutputStream.java

Example 15: S3AFastOutputStream

import org.apache.hadoop.fs.FileSystem; // import the package/class this example depends on
/**
 * Creates a fast OutputStream that uploads to S3 from memory.
 * For MultiPartUploads, as soon as sufficient bytes have been written to
 * the stream a part is uploaded immediately (by using the low-level
 * multi-part upload API on the AmazonS3Client).
 *
 * @param client AmazonS3Client used for S3 calls
 * @param fs S3AFilesystem
 * @param bucket S3 bucket name
 * @param key S3 key name
 * @param progress report progress in order to prevent timeouts
 * @param statistics track FileSystem.Statistics on the performed operations
 * @param cannedACL used CannedAccessControlList
 * @param serverSideEncryptionAlgorithm algorithm for server side encryption
 * @param partSize size of a single part in a multi-part upload (except
 * last part)
 * @param multiPartThreshold files at least this size use multi-part upload
 * @throws IOException on any failure during initialization
 */
public S3AFastOutputStream(AmazonS3Client client, S3AFileSystem fs,
    String bucket, String key, Progressable progress,
    FileSystem.Statistics statistics, CannedAccessControlList cannedACL,
    String serverSideEncryptionAlgorithm, long partSize,
    long multiPartThreshold, ThreadPoolExecutor threadPoolExecutor)
    throws IOException {
  this.bucket = bucket;
  this.key = key;
  this.client = client;
  this.fs = fs;
  this.cannedACL = cannedACL;
  this.statistics = statistics;
  this.serverSideEncryptionAlgorithm = serverSideEncryptionAlgorithm;
  //Ensure limit as ByteArrayOutputStream size cannot exceed Integer.MAX_VALUE
  if (partSize > Integer.MAX_VALUE) {
    this.partSize = Integer.MAX_VALUE;
    LOG.warn("s3a: MULTIPART_SIZE capped to ~2.14GB (maximum allowed size " +
        "when using 'FAST_UPLOAD = true')");
  } else {
    this.partSize = (int) partSize;
  }
  if (multiPartThreshold > Integer.MAX_VALUE) {
    this.multiPartThreshold = Integer.MAX_VALUE;
    LOG.warn("s3a: MIN_MULTIPART_THRESHOLD capped to ~2.14GB (maximum " +
        "allowed size when using 'FAST_UPLOAD = true')");
  } else {
    this.multiPartThreshold = (int) multiPartThreshold;
  }
  this.bufferLimit = this.multiPartThreshold;
  this.closed = false;
  int initialBufferSize = this.fs.getConf()
      .getInt(Constants.FAST_BUFFER_SIZE, Constants.DEFAULT_FAST_BUFFER_SIZE);
  if (initialBufferSize < 0) {
    LOG.warn("s3a: FAST_BUFFER_SIZE should be a positive number. Using " +
        "default value");
    initialBufferSize = Constants.DEFAULT_FAST_BUFFER_SIZE;
  } else if (initialBufferSize > this.bufferLimit) {
    LOG.warn("s3a: automatically adjusting FAST_BUFFER_SIZE to not " +
        "exceed MIN_MULTIPART_THRESHOLD");
    initialBufferSize = this.bufferLimit;
  }
  this.buffer = new ByteArrayOutputStream(initialBufferSize);
  this.executorService = MoreExecutors.listeningDecorator(threadPoolExecutor);
  this.multiPartUpload = null;
  this.progressListener = new ProgressableListener(progress);
  if (LOG.isDebugEnabled()){
    LOG.debug("Initialized S3AFastOutputStream for bucket '{}' key '{}'",
        bucket, key);
  }
}
 
Developer: naver, Project: hadoop, Lines: 70, Source: S3AFastOutputStream.java


Note: The org.apache.hadoop.fs.FileSystem.Statistics examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers; copyright remains with the original authors, and distribution and use are subject to each project's license. Please do not repost without permission.