

Java FSDataInputStream.getWrappedStream method code examples

This article collects typical usages of the Java method org.apache.hadoop.fs.FSDataInputStream.getWrappedStream. If you are wondering what FSDataInputStream.getWrappedStream does and how to use it, the selected code examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.fs.FSDataInputStream.


Three code examples of FSDataInputStream.getWrappedStream are shown below, sorted by popularity by default.
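Before the examples, the core pattern they all rely on can be sketched with plain java.io types. FsLikeStream below is a hypothetical stand-in for FSDataInputStream: a decorator that wraps another InputStream and exposes the wrapped instance via getWrappedStream(), so callers can test its concrete type and downcast.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.InputStream;

// Hypothetical stand-in for org.apache.hadoop.fs.FSDataInputStream:
// a decorator that exposes its wrapped stream for downcasting.
class FsLikeStream extends FilterInputStream {
    FsLikeStream(InputStream wrapped) {
        super(wrapped);
    }

    // Mirrors FSDataInputStream.getWrappedStream(): returns the
    // underlying stream unchanged.
    public InputStream getWrappedStream() {
        return in; // 'in' is the protected field of FilterInputStream
    }
}

public class WrappedStreamDemo {
    public static void main(String[] args) {
        InputStream raw = new ByteArrayInputStream(new byte[] {1, 2, 3});
        FsLikeStream fs = new FsLikeStream(raw);

        // The idiom used throughout the examples below: test the concrete
        // type of the wrapped stream, then downcast to reach extra methods.
        if (fs.getWrappedStream() instanceof ByteArrayInputStream) {
            ByteArrayInputStream bais = (ByteArrayInputStream) fs.getWrappedStream();
            System.out.println("available = " + bais.available()); // prints "available = 3"
        }
    }
}
```

In the real API the downcast targets are Hadoop-specific types such as DFSInputStream, which expose methods (getFileLength, getReadStatistics) that the generic FSDataInputStream interface does not.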

Example 1: check

import org.apache.hadoop.fs.FSDataInputStream; // import the package/class this method depends on
public static void check(FileSystem fs, Path p, long length) throws IOException {
  int i = -1;
  try {
    final FileStatus status = fs.getFileStatus(p);
    FSDataInputStream in = fs.open(p);
    // If the wrapped stream is a DFSInputStream, verify the length it
    // reports directly; otherwise fall back to the FileStatus length.
    if (in.getWrappedStream() instanceof DFSInputStream) {
      long len = ((DFSInputStream)in.getWrappedStream()).getFileLength();
      assertEquals(length, len);
    } else {
      assertEquals(length, status.getLen());
    }

    // The test file is expected to hold the value (byte)i at position i.
    for(i++; i < length; i++) {
      assertEquals((byte)i, (byte)in.read());
    }
    i = -(int)length; // sentinel so the error message below shows the phase
    assertEquals(-1, in.read()); //EOF
    in.close();
  } catch(IOException ioe) {
    throw new IOException("p=" + p + ", length=" + length + ", i=" + i, ioe);
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 23, Source: AppendTestUtil.java

Example 2: readDiskRanges

import org.apache.hadoop.fs.FSDataInputStream; // import the package/class this method depends on
static DiskRangeList readDiskRanges(FSDataInputStream file, HadoopShims.ZeroCopyReaderShim zcr,
    long base, DiskRangeList range, boolean doForceDirect) throws IOException {
  if (range == null)
    return null;
  DiskRangeList prev = range.prev;
  if (prev == null) {
    prev = new DiskRangeList.MutateHelper(range);
  }
  while (range != null) {
    if (range.hasData()) {
      range = range.next;
      continue;
    }
    int len = (int) (range.getEnd() - range.getOffset());
    long off = range.getOffset();
    ByteBuffer bb = null;
    if (file.getWrappedStream() instanceof ADataInputStream) {
      // The wrapped stream is backed by an in-memory buffer, so wrap it
      // directly instead of copying.
      ADataInputStream ads = (ADataInputStream) file.getWrappedStream();
      bb = ByteBuffer.wrap(ads.getBuffer(), (int) (base + off), len);
    } else {
      // Don't use HDFS ByteBuffer API because it has no readFully, and is buggy and pointless.
      byte[] buffer = new byte[len];
      file.readFully((base + off), buffer, 0, buffer.length);
      if (doForceDirect) {
        // Copy the heap buffer into a direct buffer and rewind it.
        bb = ByteBuffer.allocateDirect(len);
        bb.put(buffer);
        bb.position(0);
        bb.limit(len);
      } else {
        bb = ByteBuffer.wrap(buffer);
      }
    }
    range = range.replaceSelfWith(new BufferChunk(bb, range.getOffset()));
    range = range.next;
  }
  return prev.next;
}
 
Developer: ampool, Project: monarch, Lines of code: 38, Source: ADataReader.java
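The fallback branch in example 2 copies a heap byte[] into a direct ByteBuffer by hand. Stripped of the Hadoop types, that copy step looks like the minimal sketch below, which uses only java.nio (the class and method names here are illustrative, not from the original project):

```java
import java.nio.ByteBuffer;

public class DirectCopyDemo {
    // Copy a heap array into a freshly allocated direct buffer and
    // reset position/limit so the buffer is ready for reading, as the
    // doForceDirect branch in readDiskRanges does.
    static ByteBuffer toDirect(byte[] buffer) {
        ByteBuffer bb = ByteBuffer.allocateDirect(buffer.length);
        bb.put(buffer);          // put() advances position to buffer.length
        bb.position(0);          // rewind so consumers read from the start
        bb.limit(buffer.length);
        return bb;
    }

    public static void main(String[] args) {
        ByteBuffer bb = toDirect(new byte[] {10, 20, 30});
        System.out.println(bb.isDirect() + " " + bb.remaining() + " " + bb.get(0));
        // prints "true 3 10"
    }
}
```

Resetting position and limit after put() matters: without the rewind, remaining() would be 0 and a consumer reading relative to the position would see no data.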

Example 3: doPread

import org.apache.hadoop.fs.FSDataInputStream; // import the package/class this method depends on
private void doPread(FSDataInputStream stm, long position, byte[] buffer,
                     int offset, int length) throws IOException {
  int nread = 0;
  long totalRead = 0;
  DFSInputStream dfstm = null;

  if (stm.getWrappedStream() instanceof DFSInputStream) {
    dfstm = (DFSInputStream) (stm.getWrappedStream());
    totalRead = dfstm.getReadStatistics().getTotalBytesRead();
  }

  while (nread < length) {
    int nbytes =
        stm.read(position + nread, buffer, offset + nread, length - nread);
    assertTrue("Error in pread", nbytes > 0);
    nread += nbytes;
  }

  if (dfstm != null) {
    if (isHedgedRead) {
      // Hedged reads may fetch more bytes than requested, so only a
      // lower bound on the read statistic is guaranteed.
      assertTrue("Expected read statistic to be incremented", length <= dfstm
          .getReadStatistics().getTotalBytesRead() - totalRead);
    } else {
      assertEquals("Expected read statistic to be incremented", length, dfstm
          .getReadStatistics().getTotalBytesRead() - totalRead);
    }
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 29, Source: TestPread.java
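The read loop in example 3 (keep issuing positional reads until the requested length has arrived, because short reads are legal) can be sketched against a plain in-memory source. Here readAt is a hypothetical stand-in for FSDataInputStream.read(position, buffer, offset, length), deliberately limited to 2 bytes per call to force the loop to iterate:

```java
import java.util.Arrays;

public class PreadLoopDemo {
    static byte[] data = new byte[] {0, 1, 2, 3, 4, 5, 6, 7};

    // Hypothetical stand-in for FSDataInputStream.read(position, buffer,
    // offset, length): copies at most 2 bytes per call (a short read).
    static int readAt(long position, byte[] buffer, int offset, int length) {
        if (position >= data.length) return -1;
        int n = Math.min(Math.min(length, 2), data.length - (int) position);
        System.arraycopy(data, (int) position, buffer, offset, n);
        return n;
    }

    public static void main(String[] args) {
        byte[] buffer = new byte[4];
        int nread = 0;
        // Same structure as doPread: short reads are legal, so loop
        // until the requested byte count has been satisfied.
        while (nread < buffer.length) {
            int nbytes = readAt(2 + nread, buffer, nread, buffer.length - nread);
            if (nbytes <= 0) throw new IllegalStateException("Error in pread");
            nread += nbytes;
        }
        System.out.println(Arrays.toString(buffer)); // prints "[2, 3, 4, 5]"
    }
}
```

The statistics check in doPread itself depends on DFSInputStream.getReadStatistics() and is omitted here, since it has no stdlib equivalent.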


Note: The org.apache.hadoop.fs.FSDataInputStream.getWrappedStream examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Refer to each project's license before distributing or using the code, and do not republish without permission.