

Java DataNodeTestUtils.fetchReplicaInfo Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils.fetchReplicaInfo. If you are unsure what DataNodeTestUtils.fetchReplicaInfo does or how to call it, the selected code examples below should help. You can also explore further usage examples of org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils, the class that provides this method.


The following shows 2 code examples of the DataNodeTestUtils.fetchReplicaInfo method, sorted by popularity.
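Before the full test examples, here is a minimal, self-contained sketch of the call pattern they both follow: look up a replica on a DataNode by block id, then assert it is non-null and in the RBW (replica-being-written) state. The Hadoop types (Replica, the replica state enum, and the fetchReplicaInfo lookup) are stubbed here so the sketch compiles on its own; in the real tests they come from org.apache.hadoop.hdfs.server.datanode.*, and fetchReplicaInfo takes a live DataNode and a block pool id rather than a plain map.

```java
import java.util.HashMap;
import java.util.Map;

public class FetchReplicaInfoSketch {

  // Stub of HdfsServerConstants.ReplicaState; RBW = "replica being written".
  enum ReplicaState { FINALIZED, RBW, RWR, RUR, TEMPORARY }

  // Stub of the Replica interface: a replica exposes its block id and state.
  interface Replica {
    long getBlockId();
    ReplicaState getState();
  }

  // Stand-in for DataNodeTestUtils.fetchReplicaInfo(dn, bpid, blockId):
  // look up a replica by block id, returning null when the DataNode
  // holds no replica for that block.
  static Replica fetchReplicaInfo(Map<Long, Replica> volumeMap, long blockId) {
    return volumeMap.get(blockId);
  }

  public static void main(String[] args) {
    Map<Long, Replica> volumeMap = new HashMap<>();
    // Simulate a block that is mid-pipeline after append()/write()/hflush().
    volumeMap.put(42L, new Replica() {
      public long getBlockId() { return 42L; }
      public ReplicaState getState() { return ReplicaState.RBW; }
    });

    Replica r = fetchReplicaInfo(volumeMap, 42L);
    // The tests below assert exactly this pair of conditions.
    System.out.println(r != null && r.getState() == ReplicaState.RBW);
  }
}
```

The real examples wrap these two checks in JUnit's assertTrue/assertEquals and obtain the block id from the NameNode's getBlockLocations response.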

Example 1: pipeline_01

import org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils; // import the class that provides the method
/**
 * Creates and closes a file of a certain length.
 * Calls append() so that the next write() operation adds to the end of it.
 * After the write() invocation, calls hflush() to make sure the data has sunk
 * through the pipeline, then checks the state of the last block's replica,
 * which is expected to be in the RBW state.
 *
 * @throws IOException in case of an error
 */
@Test
public void pipeline_01() throws IOException {
  final String METHOD_NAME = GenericTestUtils.getMethodName();
  if(LOG.isDebugEnabled()) {
    LOG.debug("Running " + METHOD_NAME);
  }
  Path filePath = new Path("/" + METHOD_NAME + ".dat");

  DFSTestUtil.createFile(fs, filePath, FILE_SIZE, REPL_FACTOR, rand.nextLong());
  if(LOG.isDebugEnabled()) {
    LOG.debug("Invoking append but doing nothing otherwise...");
  }
  FSDataOutputStream ofs = fs.append(filePath);
  ofs.writeBytes("Some more stuff to write");
  ((DFSOutputStream) ofs.getWrappedStream()).hflush();

  List<LocatedBlock> lb = cluster.getNameNodeRpc().getBlockLocations(
    filePath.toString(), FILE_SIZE - 1, FILE_SIZE).getLocatedBlocks();

  String bpid = cluster.getNamesystem().getBlockPoolId();
  for (DataNode dn : cluster.getDataNodes()) {
    Replica r = DataNodeTestUtils.fetchReplicaInfo(dn, bpid, lb.get(0)
        .getBlock().getBlockId());

    assertTrue("Replica on DN " + dn + " shouldn't be null", r != null);
    assertEquals("Should be RBW replica on " + dn
        + " after sequence of calls append()/write()/hflush()",
        HdfsServerConstants.ReplicaState.RBW, r.getState());
  }
  ofs.close();
}
 
Developer: naver, Project: hadoop, Lines: 41, Source: TestPipelines.java

Example 2: pipeline_01

import org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils; // import the class that provides the method
/**
 * Creates and closes a file of a certain length.
 * Calls append() so that the next write() operation adds to the end of it.
 * After the write() invocation, calls hflush() to make sure the data has
 * sunk through the pipeline, then checks the state of the last block's
 * replica, which is expected to be in the RBW state.
 *
 * @throws IOException
 *     in case of an error
 */
@Test
public void pipeline_01() throws IOException {
  final String METHOD_NAME = GenericTestUtils.getMethodName();
  if (LOG.isDebugEnabled()) {
    LOG.debug("Running " + METHOD_NAME);
  }
  Path filePath = new Path("/" + METHOD_NAME + ".dat");

  DFSTestUtil
      .createFile(fs, filePath, FILE_SIZE, REPL_FACTOR, rand.nextLong());
  if (LOG.isDebugEnabled()) {
    LOG.debug("Invoking append but doing nothing otherwise...");
  }
  FSDataOutputStream ofs = fs.append(filePath);
  ofs.writeBytes("Some more stuff to write");
  ((DFSOutputStream) ofs.getWrappedStream()).hflush();

  List<LocatedBlock> lb = cluster.getNameNodeRpc()
      .getBlockLocations(filePath.toString(), FILE_SIZE - 1, FILE_SIZE)
      .getLocatedBlocks();

  String bpid = cluster.getNamesystem().getBlockPoolId();
  for (DataNode dn : cluster.getDataNodes()) {
    Replica r = DataNodeTestUtils
        .fetchReplicaInfo(dn, bpid, lb.get(0).getBlock().getBlockId());

    assertTrue("Replica on DN " + dn + " shouldn't be null", r != null);
    assertEquals("Should be RBW replica on " + dn +
            " after sequence of calls append()/write()/hflush()",
        HdfsServerConstants.ReplicaState.RBW, r.getState());
  }
  ofs.close();
}
 
Developer: hopshadoop, Project: hops, Lines: 45, Source: TestPipelines.java


Note: the org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils.fetchReplicaInfo method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are selected from open-source projects contributed by their respective authors, and copyright remains with the original authors; please refer to each project's license before using or redistributing the code. Do not reproduce this article without permission.