

Java HdfsDataInputStream.getAllBlocks Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hdfs.client.HdfsDataInputStream.getAllBlocks. If you have been wondering what HdfsDataInputStream.getAllBlocks does, how to call it, or what it looks like in real code, the curated examples below should help. You can also explore further usage examples of its enclosing class, org.apache.hadoop.hdfs.client.HdfsDataInputStream.


Five code examples of the HdfsDataInputStream.getAllBlocks method are shown below, sorted by popularity by default.
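
Before diving into the examples, here is a minimal, self-contained usage sketch. It is not taken from any of the projects below: the NameNode URI and file path are placeholders, and it assumes the target FileSystem is HDFS-backed, since the cast to HdfsDataInputStream throws a ClassCastException otherwise.

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

public class GetAllBlocksSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder NameNode URI
    try (FileSystem fs = FileSystem.get(conf);
         // The cast only succeeds when fs is a DistributedFileSystem.
         HdfsDataInputStream in =
             (HdfsDataInputStream) fs.open(new Path("/tmp/example.txt"))) { // placeholder path
      List<LocatedBlock> blocks = in.getAllBlocks();
      for (LocatedBlock blk : blocks) {
        System.out.println(blk.getBlock() + " has "
            + blk.getLocations().length + " replica location(s)");
      }
    }
  }
}

Unlike Example 1 below, this sketch closes the stream with try-with-resources; the test helpers quoted in this article do not close the raw stream.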

Example 1: getAllBlocks

// imports this helper depends on
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
public static List<LocatedBlock> getAllBlocks(FileSystem fs, Path path)
    throws IOException {
  HdfsDataInputStream in = (HdfsDataInputStream) fs.open(path);
  return in.getAllBlocks();
}
 
Developer: naver, Project: hadoop, Lines: 6, Source: DFSTestUtil.java

Example 2: checkFile

import org.apache.hadoop.hdfs.client.HdfsDataInputStream; // import the package/class this method depends on
/**
 * Verify that the number of replicas is as expected for each block in
 * the given file.
 * For blocks with a decommissioned node, verify that their replication
 * is 1 more than what is specified.
 * For blocks without decommissioned nodes, verify their replication is
 * equal to what is specified.
 * 
 * @param downnode - if null, there is no decommissioned node for this file.
 * @return - null if no failure found, else an error message string.
 */
private static String checkFile(FileSystem fileSys, Path name, int repl,
  String downnode, int numDatanodes) throws IOException {
  boolean isNodeDown = (downnode != null);
  // need a raw stream
  assertTrue("Not HDFS:"+fileSys.getUri(),
      fileSys instanceof DistributedFileSystem);
  HdfsDataInputStream dis = (HdfsDataInputStream)
      fileSys.open(name);
  Collection<LocatedBlock> dinfo = dis.getAllBlocks();
  for (LocatedBlock blk : dinfo) { // for each block
    int hasdown = 0;
    DatanodeInfo[] nodes = blk.getLocations();
    for (int j = 0; j < nodes.length; j++) { // for each replica
      if (isNodeDown && nodes[j].getXferAddr().equals(downnode)) {
        hasdown++;
        //Downnode must actually be decommissioned
        if (!nodes[j].isDecommissioned()) {
          return "For block " + blk.getBlock() + " replica on " +
            nodes[j] + " is given as downnode, " +
            "but is not decommissioned";
        }
        //Decommissioned node (if any) should only be last node in list.
        if (j != nodes.length - 1) {
          return "For block " + blk.getBlock() + " decommissioned node "
            + nodes[j] + " was not last node in list: "
            + (j + 1) + " of " + nodes.length;
        }
        LOG.info("Block " + blk.getBlock() + " replica on " +
          nodes[j] + " is decommissioned.");
      } else {
        //Non-downnodes must not be decommissioned
        if (nodes[j].isDecommissioned()) {
          return "For block " + blk.getBlock() + " replica on " +
            nodes[j] + " is unexpectedly decommissioned";
        }
      }
    }

    LOG.info("Block " + blk.getBlock() + " has " + hasdown
      + " decommissioned replica.");
    if(Math.min(numDatanodes, repl+hasdown) != nodes.length) {
      return "Wrong number of replicas for block " + blk.getBlock() +
        ": " + nodes.length + ", expected " +
        Math.min(numDatanodes, repl+hasdown);
    }
  }
  return null;
}
 
Developer: naver, Project: hadoop, Lines: 60, Source: TestDecommission.java
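
The final check in checkFile encodes the rule stated in its javadoc: a block whose replica list includes a decommissioned node should carry one extra replica per such node, capped by the cluster size. As a worked instance (values are illustrative, not from the source): with repl = 3, hasdown = 1, and numDatanodes = 4, the expected location count is min(4, 3 + 1) = 4; with only numDatanodes = 3, it is min(3, 4) = 3, so the helper tolerates clusters too small to hold the extra replica.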

Example 3: checkFile

import org.apache.hadoop.hdfs.client.HdfsDataInputStream; // import the package/class this method depends on
/**
 * Verify that the number of replicas is as expected for each block in
 * the given file.
 * For blocks with a decommissioned node, verify that their replication
 * is 1 more than what is specified.
 * For blocks without decommissioned nodes, verify their replication is
 * equal to what is specified.
 * 
 * @param downnode - if null, there is no decommissioned node for this file.
 * @return - null if no failure found, else an error message string.
 */
private String checkFile(FileSystem fileSys, Path name, int repl,
  String downnode, int numDatanodes) throws IOException {
  boolean isNodeDown = (downnode != null);
  // need a raw stream
  assertTrue("Not HDFS:"+fileSys.getUri(),
      fileSys instanceof DistributedFileSystem);
  HdfsDataInputStream dis = (HdfsDataInputStream)
      fileSys.open(name);
  Collection<LocatedBlock> dinfo = dis.getAllBlocks();
  for (LocatedBlock blk : dinfo) { // for each block
    int hasdown = 0;
    DatanodeInfo[] nodes = blk.getLocations();
    for (int j = 0; j < nodes.length; j++) { // for each replica
      if (isNodeDown && nodes[j].getXferAddr().equals(downnode)) {
        hasdown++;
        //Downnode must actually be decommissioned
        if (!nodes[j].isDecommissioned()) {
          return "For block " + blk.getBlock() + " replica on " +
            nodes[j] + " is given as downnode, " +
            "but is not decommissioned";
        }
        //Decommissioned node (if any) should only be last node in list.
        if (j != nodes.length - 1) {
          return "For block " + blk.getBlock() + " decommissioned node "
            + nodes[j] + " was not last node in list: "
            + (j + 1) + " of " + nodes.length;
        }
        LOG.info("Block " + blk.getBlock() + " replica on " +
          nodes[j] + " is decommissioned.");
      } else {
        //Non-downnodes must not be decommissioned
        if (nodes[j].isDecommissioned()) {
          return "For block " + blk.getBlock() + " replica on " +
            nodes[j] + " is unexpectedly decommissioned";
        }
      }
    }

    LOG.info("Block " + blk.getBlock() + " has " + hasdown
      + " decommissioned replica.");
    if(Math.min(numDatanodes, repl+hasdown) != nodes.length) {
      return "Wrong number of replicas for block " + blk.getBlock() +
        ": " + nodes.length + ", expected " +
        Math.min(numDatanodes, repl+hasdown);
    }
  }
  return null;
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 60, Source: TestDecommission.java

Example 4: checkFile

import org.apache.hadoop.hdfs.client.HdfsDataInputStream; // import the package/class this method depends on
/**
 * Verify that the number of replicas is as expected for each block in
 * the given file.
 * For blocks with a decommissioned node, verify that their replication
 * is 1 more than what is specified.
 * For blocks without decommissioned nodes, verify their replication is
 * equal to what is specified.
 * 
 * @param downnode - if null, there is no decommissioned node for this file.
 * @return - null if no failure found, else an error message string.
 */
private String checkFile(FileSystem fileSys, Path name, int repl,
  String downnode, int numDatanodes) throws IOException {
  boolean isNodeDown = (downnode != null);
  // need a raw stream
  assertTrue("Not HDFS:"+fileSys.getUri(),
      fileSys instanceof DistributedFileSystem);
  HdfsDataInputStream dis = (HdfsDataInputStream)
      ((DistributedFileSystem)fileSys).open(name);
  Collection<LocatedBlock> dinfo = dis.getAllBlocks();
  for (LocatedBlock blk : dinfo) { // for each block
    int hasdown = 0;
    DatanodeInfo[] nodes = blk.getLocations();
    for (int j = 0; j < nodes.length; j++) { // for each replica
      if (isNodeDown && nodes[j].getXferAddr().equals(downnode)) {
        hasdown++;
        //Downnode must actually be decommissioned
        if (!nodes[j].isDecommissioned()) {
          return "For block " + blk.getBlock() + " replica on " +
            nodes[j] + " is given as downnode, " +
            "but is not decommissioned";
        }
        //Decommissioned node (if any) should only be last node in list.
        if (j != nodes.length - 1) {
          return "For block " + blk.getBlock() + " decommissioned node "
            + nodes[j] + " was not last node in list: "
            + (j + 1) + " of " + nodes.length;
        }
        LOG.info("Block " + blk.getBlock() + " replica on " +
          nodes[j] + " is decommissioned.");
      } else {
        //Non-downnodes must not be decommissioned
        if (nodes[j].isDecommissioned()) {
          return "For block " + blk.getBlock() + " replica on " +
            nodes[j] + " is unexpectedly decommissioned";
        }
      }
    }

    LOG.info("Block " + blk.getBlock() + " has " + hasdown
      + " decommissioned replica.");
    if(Math.min(numDatanodes, repl+hasdown) != nodes.length) {
      return "Wrong number of replicas for block " + blk.getBlock() +
        ": " + nodes.length + ", expected " +
        Math.min(numDatanodes, repl+hasdown);
    }
  }
  return null;
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 60, Source: TestDecommission.java

Example 5: checkFile

import org.apache.hadoop.hdfs.client.HdfsDataInputStream; // import the package/class this method depends on
/**
 * Verify that the number of replicas is as expected for each block in
 * the given file.
 * For blocks with a decommissioned node, verify that their replication
 * is 1 more than what is specified.
 * For blocks without decommissioned nodes, verify their replication is
 * equal to what is specified.
 *
 * @param downnode
 *     - if null, there is no decommissioned node for this file.
 * @return - null if no failure found, else an error message string.
 */
private String checkFile(FileSystem fileSys, Path name, int repl,
    String downnode, int numDatanodes) throws IOException {
  boolean isNodeDown = (downnode != null);
  // need a raw stream
  assertTrue("Not HDFS:" + fileSys.getUri(),
      fileSys instanceof DistributedFileSystem);
  HdfsDataInputStream dis =
      (HdfsDataInputStream) ((DistributedFileSystem) fileSys).open(name);
  Collection<LocatedBlock> dinfo = dis.getAllBlocks();
  for (LocatedBlock blk : dinfo) { // for each block
    int hasdown = 0;
    DatanodeInfo[] nodes = blk.getLocations();
    for (int j = 0; j < nodes.length; j++) { // for each replica
      if (isNodeDown && nodes[j].getXferAddr().equals(downnode)) {
        hasdown++;
        //Downnode must actually be decommissioned
        if (!nodes[j].isDecommissioned()) {
          return "For block " + blk.getBlock() + " replica on " +
              nodes[j] + " is given as downnode, " +
              "but is not decommissioned";
        }
        //Decommissioned node (if any) should only be last node in list.
        if (j != nodes.length - 1) {
          return "For block " + blk.getBlock() + " decommissioned node " +
              nodes[j] + " was not last node in list: " + (j + 1) + " of " +
              nodes.length;
        }
        LOG.info("Block " + blk.getBlock() + " replica on " +
            nodes[j] + " is decommissioned.");
      } else {
        //Non-downnodes must not be decommissioned
        if (nodes[j].isDecommissioned()) {
          return "For block " + blk.getBlock() + " replica on " +
              nodes[j] + " is unexpectedly decommissioned";
        }
      }
    }

    LOG.info("Block " + blk.getBlock() + " has " + hasdown +
        " decommissioned replica.");
    if (Math.min(numDatanodes, repl + hasdown) != nodes.length) {
      return "Wrong number of replicas for block " + blk.getBlock() +
          ": " + nodes.length + ", expected " +
          Math.min(numDatanodes, repl + hasdown);
    }
  }
  return null;
}
 
Developer: hopshadoop, Project: hops, Lines: 61, Source: TestDecommission.java


Note: The org.apache.hadoop.hdfs.client.HdfsDataInputStream.getAllBlocks examples in this article were collected by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers; copyright in the source code remains with its original authors, and any redistribution or use should follow the corresponding project's license. Do not reproduce this article without permission.