

Java DistributedFileSystem.append Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hdfs.DistributedFileSystem.append. If you are wondering how DistributedFileSystem.append is used in practice, what it does, or where to find examples of it, the curated code samples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.hdfs.DistributedFileSystem.


Two code examples of the DistributedFileSystem.append method are shown below, sorted by popularity by default.
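Before the full test cases, here is a minimal sketch of the basic call pattern: obtain a DistributedFileSystem, append to an existing file, and flush. It is not taken from the examples below; the NameNode URI hdfs://localhost:8020 and the path /tmp/example.txt are assumptions for illustration only.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class AppendSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // hypothetical NameNode address; adjust for your cluster
    FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:8020"), conf);
    DistributedFileSystem dfs = (DistributedFileSystem) fs;

    Path file = new Path("/tmp/example.txt"); // the file must already exist
    FSDataOutputStream out = dfs.append(file);
    try {
      out.writeBytes("appended line\n");
      out.hflush(); // make the appended data visible to new readers
    } finally {
      out.close();
    }
  }
}

append() returns a regular FSDataOutputStream positioned at the end of the file; the two examples below exercise it against a MiniDFSCluster inside Hadoop's own test suite.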

Example 1: testAddBlockUC

import org.apache.hadoop.hdfs.DistributedFileSystem; // import the package/class this method depends on
/**
 * Test adding new blocks without closing the corresponding file
 */
@Test
public void testAddBlockUC() throws Exception {
  DistributedFileSystem fs = cluster.getFileSystem();
  final Path file1 = new Path("/file1");
  DFSTestUtil.createFile(fs, file1, BLOCKSIZE - 1, REPLICATION, 0L);
  
  FSDataOutputStream out = null;
  try {
    // append files without closing the streams
    out = fs.append(file1);
    String appendContent = "appending-content";
    out.writeBytes(appendContent);
    ((DFSOutputStream) out.getWrappedStream()).hsync(
        EnumSet.of(SyncFlag.UPDATE_LENGTH));
    
    // restart NN
    cluster.restartNameNode(true);
    FSDirectory fsdir = cluster.getNamesystem().getFSDirectory();
    
    INodeFile fileNode = fsdir.getINode4Write(file1.toString()).asFile();
    BlockInfoContiguous[] fileBlocks = fileNode.getBlocks();
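    // one byte of the appended data tops off the first block to BLOCKSIZE,
    // so the remaining appendContent.length() - 1 bytes land in a second,
    // still-under-construction block, which the assertions below verify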
    assertEquals(2, fileBlocks.length);
    assertEquals(BLOCKSIZE, fileBlocks[0].getNumBytes());
    assertEquals(BlockUCState.COMPLETE, fileBlocks[0].getBlockUCState());
    assertEquals(appendContent.length() - 1, fileBlocks[1].getNumBytes());
    assertEquals(BlockUCState.UNDER_CONSTRUCTION,
        fileBlocks[1].getBlockUCState());
  } finally {
    if (out != null) {
      out.close();
    }
  }
}
 
Developer ID: naver, Project: hadoop, Lines: 37, Source: TestAddBlock.java

Example 2: testHSyncOperation

import org.apache.hadoop.hdfs.DistributedFileSystem; // import the package/class this method depends on
private void testHSyncOperation(boolean testWithAppend) throws IOException {
  Configuration conf = new HdfsConfiguration();
  MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
  final DistributedFileSystem fs = cluster.getFileSystem();

  final Path p = new Path("/testHSync/foo");
  final int len = 1 << 16;
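  // SYNC_BLOCK makes close() sync the last block to disk; the second file
  // below, created without SYNC_BLOCK, shows that close() then no longer syncs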
  FSDataOutputStream out = fs.create(p, FsPermission.getDefault(),
      EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE, CreateFlag.SYNC_BLOCK),
      4096, (short) 1, len, null);
  if (testWithAppend) {
    // re-open the file with append call
    out.close();
    out = fs.append(p, EnumSet.of(CreateFlag.APPEND, CreateFlag.SYNC_BLOCK),
        4096, null);
  }
  out.hflush();
  // hflush does not sync
  checkSyncMetric(cluster, 0);
  out.hsync();
  // hsync on empty file does nothing
  checkSyncMetric(cluster, 0);
  out.write(1);
  checkSyncMetric(cluster, 0);
  out.hsync();
  checkSyncMetric(cluster, 1);
  // avoiding repeated hsyncs is a potential future optimization
  out.hsync();
  checkSyncMetric(cluster, 2);
  out.hflush();
  // hflush still does not sync
  checkSyncMetric(cluster, 2);
  out.close();
  // close is sync'ing
  checkSyncMetric(cluster, 3);

  // same with a file created without SYNC_BLOCK
  out = fs.create(p, FsPermission.getDefault(),
      EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
      4096, (short) 1, len, null);
  out.hsync();
  checkSyncMetric(cluster, 3);
  out.write(1);
  checkSyncMetric(cluster, 3);
  out.hsync();
  checkSyncMetric(cluster, 4);
  // repeated hsyncs
  out.hsync();
  checkSyncMetric(cluster, 5);
  out.close();
  // close does not sync (not opened with SYNC_BLOCK)
  checkSyncMetric(cluster, 5);
  cluster.shutdown();
}
 
Developer ID: naver, Project: hadoop, Lines: 55, Source: TestHSync.java


Note: The org.apache.hadoop.hdfs.DistributedFileSystem.append method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective developers; copyright of the source code remains with the original authors, and distribution and use are governed by each project's license. Do not reproduce without permission.