

Java LocatedBlocks.isUnderConstruction Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hdfs.protocol.LocatedBlocks.isUnderConstruction. If you are wondering what LocatedBlocks.isUnderConstruction does and how to use it, the curated code examples below may help. You can also explore further usage examples of the containing class, org.apache.hadoop.hdfs.protocol.LocatedBlocks.


The following shows 3 code examples of the LocatedBlocks.isUnderConstruction method, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.

Example 1: checkBlockRecovery

import org.apache.hadoop.hdfs.protocol.LocatedBlocks; // import the package/class this method depends on
public static void checkBlockRecovery(Path p, DistributedFileSystem dfs,
    int attempts, long sleepMs) throws IOException {
  boolean success = false;
  for(int i = 0; i < attempts; i++) {
    LocatedBlocks blocks = getLocatedBlocks(p, dfs);
    boolean noLastBlock = blocks.getLastLocatedBlock() == null;
    if(!blocks.isUnderConstruction() &&
        (noLastBlock || blocks.isLastBlockComplete())) {
      success = true;
      break;
    }
    try { Thread.sleep(sleepMs); } catch (InterruptedException ignored) {}
  }
  assertThat("inode should complete in ~" + sleepMs * attempts + " ms.",
      success, is(true));
}
 
Developer: naver, Project: hadoop, Lines: 17, Source: TestFileTruncate.java
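The bounded poll-then-assert pattern used by checkBlockRecovery above (retry a condition up to N times with a fixed sleep) is general-purpose. Below is a minimal, self-contained sketch of that pattern; the PollUtil class and its poll method are hypothetical helpers written for illustration, not part of Hadoop:

```java
import java.util.function.BooleanSupplier;

public class PollUtil {
    /**
     * Polls the condition up to {@code attempts} times, sleeping
     * {@code sleepMs} between failed tries. Returns true as soon as
     * the condition holds, false if every attempt fails.
     */
    public static boolean poll(BooleanSupplier condition, int attempts, long sleepMs) {
        for (int i = 0; i < attempts; i++) {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(sleepMs);
            } catch (InterruptedException e) {
                // Restore the interrupt flag and give up, rather than swallowing it.
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulate a condition that becomes true on the third check.
        int[] calls = {0};
        boolean ok = poll(() -> ++calls[0] >= 3, 5, 1L);
        System.out.println("succeeded=" + ok + " after " + calls[0] + " tries");
    }
}
```

In checkBlockRecovery the condition would be "not under construction and last block complete"; structuring it this way keeps the retry bookkeeping separate from the HDFS-specific check.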

Example 2: collectFileSummary

import org.apache.hadoop.hdfs.protocol.LocatedBlocks; // import the package/class this method depends on
private void collectFileSummary(String path, HdfsFileStatus file, Result res,
    LocatedBlocks blocks) throws IOException {
  long fileLen = file.getLen();
  boolean isOpen = blocks.isUnderConstruction();
  if (isOpen && !showOpenFiles) {
    // We collect these stats about open files to report with default options
    res.totalOpenFilesSize += fileLen;
    res.totalOpenFilesBlocks += blocks.locatedBlockCount();
    res.totalOpenFiles++;
    return;
  }
  res.totalFiles++;
  res.totalSize += fileLen;
  res.totalBlocks += blocks.locatedBlockCount();
  if (showOpenFiles && isOpen) {
    out.print(path + " " + fileLen + " bytes, " +
      blocks.locatedBlockCount() + " block(s), OPENFORWRITE: ");
  } else if (showFiles) {
    out.print(path + " " + fileLen + " bytes, " +
      blocks.locatedBlockCount() + " block(s): ");
  } else if (showprogress) {
    out.print('.');
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 25, Source: NamenodeFsck.java
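The core of Example 2 is the branching: a file that isUnderConstruction reports true for is accounted in separate open-file totals unless the fsck run explicitly asked to show open files. That counter logic can be isolated into a small, self-contained class; FsckStats below is a simplified, hypothetical mirror of the Result fields used by NamenodeFsck, written only to illustrate the branching:

```java
public class FsckStats {
    // Regular totals (closed files, or open files when showOpenFiles is set).
    public long totalFiles, totalSize, totalBlocks;
    // Separate totals for open (under-construction) files.
    public long totalOpenFiles, totalOpenFilesSize, totalOpenFilesBlocks;

    /**
     * Accounts one file. Open files are tracked separately unless the
     * caller asked for them to be reported alongside regular files.
     */
    public void account(long fileLen, int blockCount,
                        boolean isOpen, boolean showOpenFiles) {
        if (isOpen && !showOpenFiles) {
            totalOpenFilesSize += fileLen;
            totalOpenFilesBlocks += blockCount;
            totalOpenFiles++;
            return; // open files are excluded from the regular totals
        }
        totalFiles++;
        totalSize += fileLen;
        totalBlocks += blockCount;
    }

    public static void main(String[] args) {
        FsckStats stats = new FsckStats();
        stats.account(1024L, 2, true, false);  // open file, default options
        stats.account(512L, 1, false, false);  // closed file
        System.out.println("open=" + stats.totalOpenFiles
            + " regular=" + stats.totalFiles);
    }
}
```

Keeping the two sets of counters disjoint is what lets fsck report "N open files, not checked" without skewing the regular size and block totals.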

Example 3: testHardLeaseRecovery

import org.apache.hadoop.hdfs.protocol.LocatedBlocks; // import the package/class this method depends on
/**
 * This test makes the client stop renewing its lease and also
 * sets the hard lease expiration period to a short 1 second, thus
 * triggering lease expiration while the client is still alive.
 * 
 * The test makes sure that lease recovery completes and that the client
 * fails if it continues to write to the file.
 * 
 * @throws Exception
 */
@Test
public void testHardLeaseRecovery() throws Exception {
  //create a file
  String filestr = "/hardLeaseRecovery";
  AppendTestUtil.LOG.info("filestr=" + filestr);
  Path filepath = new Path(filestr);
  FSDataOutputStream stm = dfs.create(filepath, true,
      BUF_SIZE, REPLICATION_NUM, BLOCK_SIZE);
  assertTrue(dfs.dfs.exists(filestr));

  // write bytes into the file.
  int size = AppendTestUtil.nextInt(FILE_SIZE);
  AppendTestUtil.LOG.info("size=" + size);
  stm.write(buffer, 0, size);

  // hflush file
  AppendTestUtil.LOG.info("hflush");
  stm.hflush();
  
  // kill the lease renewal thread
  AppendTestUtil.LOG.info("leasechecker.interruptAndJoin()");
  dfs.dfs.getLeaseRenewer().interruptAndJoin();

  // set the hard limit to be 1 second 
  cluster.setLeasePeriod(LONG_LEASE_PERIOD, SHORT_LEASE_PERIOD);
  
  // wait for lease recovery to complete
  LocatedBlocks locatedBlocks;
  do {
    Thread.sleep(SHORT_LEASE_PERIOD);
    locatedBlocks = dfs.dfs.getLocatedBlocks(filestr, 0L, size);
  } while (locatedBlocks.isUnderConstruction());
  assertEquals(size, locatedBlocks.getFileLength());

  // make sure that the writer thread gets killed
  try {
    stm.write('b');
    stm.close();
    fail("Writer thread should have been killed");
  } catch (IOException e) {
    e.printStackTrace();
  }      

  // verify data
  AppendTestUtil.LOG.info(
      "File size is good. Now validating sizes from datanodes...");
  AppendTestUtil.checkFullFile(dfs, filepath, size, buffer, filestr);
}
 
Developer: naver, Project: hadoop, Lines: 59, Source: TestLeaseRecovery2.java


Note: The org.apache.hadoop.hdfs.protocol.LocatedBlocks.isUnderConstruction examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by various developers; copyright remains with the original authors. For distribution and use, refer to each project's license. Do not reproduce without permission.