

Java AppendTestUtil.checkFullFile Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hdfs.AppendTestUtil.checkFullFile. If you are wondering what AppendTestUtil.checkFullFile does or how to call it, the curated examples below should help. You can also browse further usage examples of the enclosing class, org.apache.hadoop.hdfs.AppendTestUtil.


Below are 4 code examples of AppendTestUtil.checkFullFile, sorted by popularity by default.
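For reference, all four examples call the same five-argument form, AppendTestUtil.checkFullFile(FileSystem fs, Path name, int len, byte[] compareContent, String message): it re-reads the file at name and asserts that its first len bytes match compareContent, using message to label any assertion failure. Here is a minimal stand-alone sketch; the path and buffer size are arbitrary choices, and initBuffer is AppendTestUtil's test-data generator:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.AppendTestUtil;

static void demoCheckFullFile() throws Exception {
  FileSystem fs = FileSystem.get(new Configuration());
  Path file = new Path("/tmp/check-full-file-demo"); // arbitrary test path
  byte[] expected = AppendTestUtil.initBuffer(4096); // deterministic test bytes

  // write the buffer to the file
  try (FSDataOutputStream out = fs.create(file)) {
    out.write(expected);
  }
  // re-read the file; fails the surrounding test if the length or any byte
  // differs from `expected`
  AppendTestUtil.checkFullFile(fs, file, expected.length, expected,
      file.toString());
}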

Example 1: testTruncate

import org.apache.hadoop.hdfs.AppendTestUtil; // import the package/class the method depends on
private void testTruncate() throws Exception {
  if (!isLocalFS()) {
    final short repl = 3;
    final int blockSize = 1024;
    final int numOfBlocks = 2;
    FileSystem fs = FileSystem.get(getProxiedFSConf());
    fs.mkdirs(getProxiedFSTestDir());
    Path file = new Path(getProxiedFSTestDir(), "foo.txt");
    final byte[] data = FileSystemTestHelper.getFileData(
        numOfBlocks, blockSize);
    FileSystemTestHelper.createFile(fs, file, data, blockSize, repl);

    final int newLength = blockSize;

    // truncate to exactly one block; true means no block recovery is needed
    boolean isReady = fs.truncate(file, newLength);
    Assert.assertTrue("Recovery is not expected.", isReady);

    // the file must now report the new length, and its contents must match
    // the first newLength bytes of the original data
    FileStatus fileStatus = fs.getFileStatus(file);
    Assert.assertEquals(fileStatus.getLen(), newLength);
    AppendTestUtil.checkFullFile(fs, file, newLength, data, file.toString());

    fs.close();
  }
}
 
Developer ID: naver, Project: hadoop, Lines: 25, Source file: BaseTestHttpFSWith.java

Example 2: testTruncate

import org.apache.hadoop.hdfs.AppendTestUtil; // import the package/class the method depends on
@Test
public void testTruncate() throws Exception {
  final short repl = 3;
  final int blockSize = 1024;
  final int numOfBlocks = 2;
  Path dir = getTestRootPath(fSys, "test/hadoop");
  Path file = getTestRootPath(fSys, "test/hadoop/file");

  final byte[] data = getFileData(numOfBlocks, blockSize);
  createFile(fSys, file, data, blockSize, repl);

  final int newLength = blockSize;

  boolean isReady = fSys.truncate(file, newLength);

  Assert.assertTrue("Recovery is not expected.", isReady);

  FileStatus fileStatus = fSys.getFileStatus(file);
  Assert.assertEquals(fileStatus.getLen(), newLength);
  AppendTestUtil.checkFullFile(fSys, file, newLength, data, file.toString());

  // space consumed accounts for replication: newLength bytes × repl replicas
  ContentSummary cs = fSys.getContentSummary(dir);
  Assert.assertEquals("Bad disk space usage", cs.getSpaceConsumed(),
      newLength * repl);
  Assert.assertTrue("Deleted", fSys.delete(dir, true));
}
 
Developer ID: naver, Project: hadoop, Lines: 27, Source file: TestFSMainOperationsWebHdfs.java
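A note on the isReady flag seen in these tests: truncate returns false when the new length falls in the middle of the last block, in which case that block must first go through recovery. The examples here truncate exactly on a block boundary (newLength == blockSize), so recovery is never expected. When it is, one way to wait is to poll DistributedFileSystem.isFileClosed; this is a sketch only, with an arbitrary polling interval:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

static void truncateAndWait(DistributedFileSystem dfs, Path file,
    long newLength) throws Exception {
  boolean isReady = dfs.truncate(file, newLength);
  // with a mid-block cut the file stays open while the last block is
  // recovered; isFileClosed() turns true once recovery completes
  while (!isReady && !dfs.isFileClosed(file)) {
    Thread.sleep(100); // arbitrary polling interval
  }
}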

Example 3: testTruncate

import org.apache.hadoop.hdfs.AppendTestUtil; // import the package/class the method depends on
@Test
public void testTruncate() throws Exception {
  final short repl = 3;
  final int blockSize = 1024;
  final int numOfBlocks = 2;
  DistributedFileSystem fs = cluster.getFileSystem();
  Path dir = getTestRootPath(fc, "test/hadoop");
  Path file = getTestRootPath(fc, "test/hadoop/file");

  final byte[] data = FileSystemTestHelper.getFileData(
      numOfBlocks, blockSize);
  FileSystemTestHelper.createFile(fs, file, data, blockSize, repl);

  final int newLength = blockSize;

  // truncate through the FileContext API; verification below goes through
  // the DistributedFileSystem handle
  boolean isReady = fc.truncate(file, newLength);

  Assert.assertTrue("Recovery is not expected.", isReady);

  FileStatus fileStatus = fc.getFileStatus(file);
  Assert.assertEquals(fileStatus.getLen(), newLength);
  AppendTestUtil.checkFullFile(fs, file, newLength, data, file.toString());

  ContentSummary cs = fs.getContentSummary(dir);
  Assert.assertEquals("Bad disk space usage", cs.getSpaceConsumed(),
      newLength * repl);
  Assert.assertTrue(fs.delete(dir, true));
}
 
Developer ID: naver, Project: hadoop, Lines: 29, Source file: TestHDFSFileContextMainOperations.java
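The excerpt above references cluster and fc fields that are initialized outside the method. A plausible setup is sketched below; this is an assumption, and the actual test class may differ in datanode count and configuration:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.BeforeClass;

private static MiniDFSCluster cluster;
private static FileContext fc;

@BeforeClass
public static void clusterSetupAtBeginning() throws IOException {
  Configuration conf = new HdfsConfiguration();
  // spin up a small in-process HDFS cluster for the tests
  cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
  cluster.waitClusterUp();
  fc = FileContext.getFileContext(cluster.getURI(0), conf);
}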

Example 4: checkFullFile

import org.apache.hadoop.hdfs.AppendTestUtil; // import the package/class the method depends on
// Thin wrapper used throughout TestFileTruncate: delegates to
// AppendTestUtil.checkFullFile, using the path string as the assertion label.
static void checkFullFile(Path p, int newLength, byte[] contents)
    throws IOException {
  AppendTestUtil.checkFullFile(fs, p, newLength, contents, p.toString());
}
 
Developer ID: naver, Project: hadoop, Lines: 5, Source file: TestFileTruncate.java
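This wrapper presumes a static fs field (a DistributedFileSystem backed by a MiniDFSCluster) defined elsewhere in TestFileTruncate. A hypothetical call site mirroring the truncate tests above; the BLOCK_SIZE constant, the path, and the create parameters are assumed, not taken from the excerpt:

static final int BLOCK_SIZE = 1024; // assumed; matches the tests above

static void truncateAndVerify() throws Exception {
  byte[] contents = AppendTestUtil.initBuffer(2 * BLOCK_SIZE);
  Path p = new Path("/test/testTruncate"); // arbitrary test path
  try (FSDataOutputStream out =
      fs.create(p, true, 4096, (short) 3, BLOCK_SIZE)) {
    out.write(contents);
  }
  boolean isReady = fs.truncate(p, BLOCK_SIZE); // cut on a block boundary
  if (isReady) {
    // verify the surviving prefix byte-for-byte via the wrapper above
    checkFullFile(p, BLOCK_SIZE, contents);
  }
}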


Note: The org.apache.hadoop.hdfs.AppendTestUtil.checkFullFile method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from projects contributed by open-source developers; copyright in the source code belongs to the original authors, and distribution and use must follow each project's License. Do not reproduce without permission.