

Java BlockReader Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.BlockReader. If you are wondering what the BlockReader class does, how to use it, or want to see it in real code, the curated examples below should help.


The BlockReader class belongs to the org.apache.hadoop.hdfs package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code samples.

Example 1: readFromBlock

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
private Callable<Void> readFromBlock(final BlockReader reader,
    final ByteBuffer buf) {
  return new Callable<Void>() {

    @Override
    public Void call() throws Exception {
      try {
        actualReadFromBlock(reader, buf);
        return null;
      } catch (IOException e) {
        LOG.info(e.getMessage());
        throw e;
      }
    }

  };
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 18, Source: ErasureCodingWorker.java
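Example 1 wraps a blocking read in a `Callable<Void>` so it can be submitted to an executor and awaited through a `Future`. The same pattern can be sketched without Hadoop; `blockingRead` and the class name below are hypothetical stand-ins, not Hadoop APIs:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableReadSketch {
    // Hypothetical stand-in for a blocking read; pretends to read len bytes.
    static int blockingRead(int len) {
        return len;
    }

    // Wrap the blocking call in a Callable<Void>, as Example 1 does,
    // rethrowing any exception so the Future surfaces it to the caller.
    static Callable<Void> readTask(final int len, final int[] out) {
        return new Callable<Void>() {
            @Override
            public Void call() throws Exception {
                out[0] = blockingRead(len);
                return null;
            }
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        int[] out = new int[1];
        Future<Void> f = pool.submit(readTask(128, out));
        f.get(); // block until the read task completes
        pool.shutdown();
        System.out.println(out[0]); // 128
    }
}
```

Returning `Void` keeps the task's result channel free for exceptions: a failed read propagates out of `Future.get()` as an `ExecutionException`.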

Example 2: newBlockReader

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
private BlockReader newBlockReader(final ExtendedBlock block, 
    long offsetInBlock, DatanodeInfo dnInfo) {
  if (offsetInBlock >= block.getNumBytes()) {
    return null;
  }
  try {
    InetSocketAddress dnAddr = getSocketAddress4Transfer(dnInfo);
    Token<BlockTokenIdentifier> blockToken = datanode.getBlockAccessToken(
        block, EnumSet.of(BlockTokenIdentifier.AccessMode.READ));
    /*
     * This can be further improved if the replica is local, then we can
     * read directly from DN and need to check the replica is FINALIZED
     * state, notice we should not use short-circuit local read which
     * requires config for domain-socket in UNIX or legacy config in Windows.
     *
     * TODO: add proper tracer
     */
    return RemoteBlockReader2.newBlockReader(
        "dummy", block, blockToken, offsetInBlock, 
        block.getNumBytes() - offsetInBlock, true,
        "", newConnectedPeer(block, dnAddr, blockToken, dnInfo), dnInfo,
        null, cachingStrategy, datanode.getTracer());
  } catch (IOException e) {
    return null;
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 27, Source: ErasureCodingWorker.java

Example 3: tryGetLocalFile

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
private void tryGetLocalFile() {
    if (tryGetLocalFileTimes >= TRY_GET_LOCAL_FILE_LIMIT) {
        return;
    }
    if (isSingleBlock && HDFS_READ_HACK_ENABLE) {
        try {
            InputStream is = input.getWrappedStream();
            if (is instanceof DFSInputStream) {
                BlockReader blockReader = MemoryUtil.getDFSInputStream_blockReader(is);
                if (blockReader != null && blockReader.isShortCircuit()) {
                    localFile = MemoryUtil.getBlockReaderLocal_dataIn(blockReader);
                }
            }
        } catch (Throwable e) {
            logger.debug("HDFS READ HACK failed.", e);
        }
    }
    tryGetLocalFileTimes++;
}
 
Developer: shunfei, Project: indexr, Lines: 20, Source: DFSByteBufferReader.java

Example 4: closeBlockReader

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
/**
 * Close the given BlockReader and cache its socket.
 */
private void closeBlockReader(BlockReader reader, boolean reuseConnection) 
    throws IOException {
  if (reader.hasSentStatusCode()) {
    Socket oldSock = reader.takeSocket();
    if (dfsClient.getDataTransferProtocolVersion() < 
        DataTransferProtocol.READ_REUSE_CONNECTION_VERSION ||
        !reuseConnection) {
        // close the sock for old datanode.
      if (oldSock != null) {
        IOUtils.closeSocket(oldSock);
      }
    } else {
      socketCache.put(oldSock);
    }
  }
  reader.close();
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 21, Source: DFSInputStream.java

Example 5: accessBlock

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
/**
 * Try to access a block on a data node; throws an exception on failure.
 * @param datanode
 * @param lblock
 * @throws IOException
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());
    
  s = NetUtils.getDefaultSocketFactory(conf).createSocket();
  s.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

  String file = BlockReaderFactory.getFileName(targetAddr, 
      "test-blockpoolid",
      block.getBlockId());
  BlockReader blockReader =
    BlockReaderFactory.newBlockReader(new DFSClient.Conf(conf), file, block,
      lblock.getBlockToken(), 0, -1, true, "TestDataNodeVolumeFailure",
      TcpPeerServer.peerFromSocket(s), datanode, null, null, null, false);
  blockReader.close();
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 28, Source: TestDataNodeVolumeFailure.java

Example 6: accessBlock

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
/**
 * Try to access a block on a data node; throws an exception on failure.
 * @param datanode
 * @param lblock
 * @throws IOException
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());
    
  s = NetUtils.getDefaultSocketFactory(conf).createSocket();
  s.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

  String file = BlockReaderFactory.getFileName(targetAddr, 
      "test-blockpoolid",
      block.getBlockId());
  BlockReader blockReader =
    BlockReaderFactory.newBlockReader(new DFSClient.Conf(conf), file, block,
      lblock.getBlockToken(), 0, -1, true, "TestDataNodeVolumeFailure",
      TcpPeerServer.peerFromSocket(s), datanode, null, null, null, false,
      CachingStrategy.newDefaultStrategy());
  blockReader.close();
}
 
Developer: chendave, Project: hadoop-TCP, Lines: 29, Source: TestDataNodeVolumeFailure.java

Example 7: accessBlock

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
/**
 * Try to access a block on a data node; throws an exception on failure.
 * @param datanode
 * @param lblock
 * @throws IOException
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  BlockReader blockReader = null; 
  Block block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getName());
    
  s = new Socket();
  s.connect(targetAddr, HdfsConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsConstants.READ_TIMEOUT);

  String file = BlockReader.getFileName(targetAddr, block.getBlockId());
  blockReader = 
    BlockReader.newBlockReader(s, file, block, lblock
      .getBlockToken(), 0, -1, 4096);

  // nothing to do here - if the read fails, it will throw an exception
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 27, Source: TestDataNodeVolumeFailure.java

Example 8: initChecksumAndBufferSizeIfNeeded

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
private void initChecksumAndBufferSizeIfNeeded(BlockReader blockReader) {
  if (checksum == null) {
    checksum = blockReader.getDataChecksum();
    bytesPerChecksum = checksum.getBytesPerChecksum();
    // Round the buffer size down to a multiple of bytesPerChecksum
    int readBufferSize = STRIPED_READ_BUFFER_SIZE;
    bufferSize = readBufferSize < bytesPerChecksum ? bytesPerChecksum :
      readBufferSize - readBufferSize % bytesPerChecksum;
  } else {
    assert blockReader.getDataChecksum().equals(checksum);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 13, Source: ErasureCodingWorker.java
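The buffer-size alignment in Example 8 is a small arithmetic rule: round the striped-read buffer size down to a multiple of bytesPerChecksum, but never below one checksum chunk. It can be isolated as a pure function (the class and method names below are illustrative, not Hadoop's):

```java
public class BufferAlign {
    // Mirrors Example 8: keep the buffer a whole number of checksum
    // chunks so reads never straddle a partial checksum window.
    static int alignBufferSize(int readBufferSize, int bytesPerChecksum) {
        return readBufferSize < bytesPerChecksum
            ? bytesPerChecksum
            : readBufferSize - readBufferSize % bytesPerChecksum;
    }

    public static void main(String[] args) {
        System.out.println(alignBufferSize(65536, 512)); // 65536 (already aligned)
        System.out.println(alignBufferSize(1000, 512));  // 512 (rounded down)
        System.out.println(alignBufferSize(100, 512));   // 512 (at least one chunk)
    }
}
```

Aligning on checksum boundaries lets each read be verified without buffering leftover bytes between calls.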

Example 9: actualReadFromBlock

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
/**
 * Read bytes from block
 */
private void actualReadFromBlock(BlockReader reader, ByteBuffer buf)
    throws IOException {
  int len = buf.remaining();
  int n = 0;
  while (n < len) {
    int nread = reader.read(buf);
    if (nread <= 0) {
      break;
    }
    n += nread;
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 16, Source: ErasureCodingWorker.java
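Example 9's loop shape - keep calling read until the buffer is full or the source returns <= 0 - is the standard "read fully" idiom. Here is a self-contained version of the same loop over a plain `ReadableByteChannel` instead of a Hadoop `BlockReader` (names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

public class ReadFully {
    // Same loop as Example 9's actualReadFromBlock: a single read()
    // may return fewer bytes than requested, so loop until the buffer
    // is full or the source signals EOF (read returns <= 0 here).
    static int readFully(ReadableByteChannel ch, ByteBuffer buf) throws IOException {
        int total = 0;
        while (buf.hasRemaining()) {
            int n = ch.read(buf);
            if (n <= 0) {
                break; // EOF before the buffer filled
            }
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        ReadableByteChannel ch =
            Channels.newChannel(new ByteArrayInputStream(new byte[10]));
        ByteBuffer buf = ByteBuffer.allocate(16); // larger than the source
        System.out.println(readFully(ch, buf)); // 10: loop stops at EOF
    }
}
```

Note `ReadableByteChannel.read` returns -1 at end-of-stream, so the `n <= 0` check covers both EOF and a stalled source, matching the `nread <= 0` break in Example 9.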

Example 10: closeBlockReader

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
private void closeBlockReader(BlockReader blockReader) {
  try {
    if (blockReader != null) {
      blockReader.close();
    }
  } catch (IOException e) {
    // ignore
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 10, Source: ErasureCodingWorker.java

Example 11: getBlockReader

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
protected BlockReader getBlockReader(int protocolVersion, int namespaceId,
    InetSocketAddress dnAddr, String file, long blockId,
    long generationStamp, long startOffset, long len, int bufferSize,
    boolean verifyChecksum, String clientName, long bytesToCheckReadSpeed,
    long minReadSpeedBps, boolean reuseConnection,
    FSClientReadProfilingData cliData)
    throws IOException {
  return getBlockReader(protocolVersion, namespaceId, dnAddr, file, blockId,
      generationStamp, startOffset, len, bufferSize, verifyChecksum,
      clientName, bytesToCheckReadSpeed, minReadSpeedBps, reuseConnection,
      cliData, options);
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 13, Source: DFSInputStream.java

Example 12: streamBlockInAscii

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
public static void streamBlockInAscii(InetSocketAddress addr, 
    long blockId, Token<BlockTokenIdentifier> blockToken, long genStamp,
    long blockSize, long offsetIntoBlock, long chunkSizeToView,
    JspWriter out, Configuration conf) throws IOException {
  if (chunkSizeToView == 0) return;
  Socket s = new Socket();
  s.connect(addr, HdfsConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsConstants.READ_TIMEOUT);
    
  long amtToRead = Math.min(chunkSizeToView, blockSize - offsetIntoBlock);

  // Use the block name for file name.
  String file = BlockReader.getFileName(addr, blockId);
  BlockReader blockReader = BlockReader.newBlockReader(s, file,
      new Block(blockId, 0, genStamp), blockToken,
      offsetIntoBlock, amtToRead, conf.getInt("io.file.buffer.size", 4096));

  byte[] buf = new byte[(int)amtToRead];
  int readOffset = 0;
  int retries = 2;
  while ( amtToRead > 0 ) {
    int numRead;
    try {
      numRead = blockReader.readAll(buf, readOffset, (int)amtToRead);
    }
    catch (IOException e) {
      retries--;
      if (retries == 0)
        throw new IOException("Could not read data from datanode");
      continue;
    }
    amtToRead -= numRead;
    readOffset += numRead;
  }
  blockReader = null;
  s.close();
  out.print(HtmlQuoting.quoteHtmlChars(new String(buf)));
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 39, Source: JspHelper.java
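The retry loop inside Example 12 - catch the `IOException`, decrement a retry budget, give up once it hits zero - is reusable on its own. A minimal sketch under an invented `ChunkReader` interface (not a Hadoop type) looks like this:

```java
import java.io.IOException;

public class RetryRead {
    // Hypothetical source with Example 12's readAll(buf, off, len) shape.
    interface ChunkReader {
        int readAll(byte[] buf, int off, int len) throws IOException;
    }

    // Same control flow as Example 12: on IOException, spend one retry
    // and loop again; throw once the budget is exhausted.
    static int readWithRetries(ChunkReader r, byte[] buf, int retries)
            throws IOException {
        int readOffset = 0;
        int amtToRead = buf.length;
        while (amtToRead > 0) {
            int numRead;
            try {
                numRead = r.readAll(buf, readOffset, amtToRead);
            } catch (IOException e) {
                retries--;
                if (retries == 0) {
                    throw new IOException("Could not read data");
                }
                continue;
            }
            amtToRead -= numRead;
            readOffset += numRead;
        }
        return readOffset;
    }

    public static void main(String[] args) throws IOException {
        final int[] calls = {0};
        ChunkReader flaky = (buf, off, len) -> {
            if (calls[0]++ == 0) {
                throw new IOException("transient"); // first attempt fails
            }
            for (int i = 0; i < len; i++) buf[off + i] = 1;
            return len;
        };
        System.out.println(readWithRetries(flaky, new byte[8], 2)); // 8
    }
}
```

One caveat inherited from Example 12: if `readAll` ever returned a negative count at EOF, the loop would not terminate, so a production version should also break when `numRead <= 0`.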

Example 13: testCompletePartialRead

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
/**
 * Test that we don't call verifiedByClient() when the client only
 * reads a partial block.
 */
@Test
public void testCompletePartialRead() throws Exception {
  // Ask for half the file
  BlockReader reader = util.getBlockReader(testBlock, 0, FILE_SIZE_K * 1024 / 2);
  DataNode dn = util.getDataNode(testBlock);
  DataBlockScanner scanner = spy(dn.blockScanner);
  dn.blockScanner = scanner;

  // And read half the file
  util.readAndCheckEOS(reader, FILE_SIZE_K * 1024 / 2, true);
  verify(scanner, never()).verifiedByClient(Mockito.isA(Block.class));
  reader.close();
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 18, Source: TestDataXceiver.java

Example 14: tryRead

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
private static void tryRead(final Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  IOException ioe = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());

    blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
        setFileName(BlockReaderFactory.getFileName(targetAddr, 
                      "test-blockpoolid", block.getBlockId())).
        setBlock(block).
        setBlockToken(lblock.getBlockToken()).
        setInetSocketAddress(targetAddr).
        setStartOffset(0).
        setLength(-1).
        setVerifyChecksum(true).
        setClientName("TestBlockTokenWithDFS").
        setDatanodeInfo(nodes[0]).
        setCachingStrategy(CachingStrategy.newDefaultStrategy()).
        setClientCacheContext(ClientContext.getFromConf(conf)).
        setConfiguration(conf).
        setRemotePeerFactory(new RemotePeerFactory() {
          @Override
          public Peer newConnectedPeer(InetSocketAddress addr,
              Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
              throws IOException {
            Peer peer = null;
            Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
            try {
              sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
              sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
              peer = TcpPeerServer.peerFromSocket(sock);
            } finally {
              if (peer == null) {
                IOUtils.closeSocket(sock);
              }
            }
            return peer;
          }
        }).
        build();
  } catch (IOException ex) {
    ioe = ex;
  } finally {
    if (blockReader != null) {
      try {
        blockReader.close();
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  }
  if (shouldSucceed) {
    Assert.assertNotNull("OP_READ_BLOCK: access token is invalid, "
          + "when it is expected to be valid", blockReader);
  } else {
    Assert.assertNotNull("OP_READ_BLOCK: access token is valid, "
        + "when it is expected to be invalid", ioe);
    Assert.assertTrue(
        "OP_READ_BLOCK failed due to reasons other than access token: ",
        ioe instanceof InvalidBlockTokenException);
  }
}
 
Developer: naver, Project: hadoop, Lines: 67, Source: TestBlockTokenWithDFS.java

Example 15: accessBlock

import org.apache.hadoop.hdfs.BlockReader; // import the required package/class
/**
 * Try to access a block on a data node; throws an exception on failure.
 * @param datanode
 * @param lblock
 * @throws IOException
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());

  BlockReader blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
    setInetSocketAddress(targetAddr).
    setBlock(block).
    setFileName(BlockReaderFactory.getFileName(targetAddr,
                  "test-blockpoolid", block.getBlockId())).
    setBlockToken(lblock.getBlockToken()).
    setStartOffset(0).
    setLength(-1).
    setVerifyChecksum(true).
    setClientName("TestDataNodeVolumeFailure").
    setDatanodeInfo(datanode).
    setCachingStrategy(CachingStrategy.newDefaultStrategy()).
    setClientCacheContext(ClientContext.getFromConf(conf)).
    setConfiguration(conf).
    setRemotePeerFactory(new RemotePeerFactory() {
      @Override
      public Peer newConnectedPeer(InetSocketAddress addr,
          Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
          throws IOException {
        Peer peer = null;
        Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
        try {
          sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
          sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
          peer = TcpPeerServer.peerFromSocket(sock);
        } finally {
          if (peer == null) {
            IOUtils.closeSocket(sock);
          }
        }
        return peer;
      }
    }).
    build();
  blockReader.close();
}
 
Developer: naver, Project: hadoop, Lines: 50, Source: TestDataNodeVolumeFailure.java


Note: The org.apache.hadoop.hdfs.BlockReader examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. Before redistributing or using the code, please consult the corresponding project's license; do not reproduce without permission.