

Java FileSystem.setVerifyChecksum Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.fs.FileSystem.setVerifyChecksum. If you are wondering what FileSystem.setVerifyChecksum does, or how to use it in practice, the curated examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.fs.FileSystem.


Below are 4 code examples of the FileSystem.setVerifyChecksum method, sorted by popularity by default.
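For context: setVerifyChecksum(boolean) toggles client-side checksum verification when reading through a Hadoop FileSystem. The underlying idea, comparing a stored checksum against one recomputed from the bytes actually read, can be sketched without a Hadoop cluster using the JDK's CRC32 class. This is a simplified illustration only; Hadoop itself uses per-chunk checksums (CRC32C by default) stored in sidecar .crc files or block metadata, not a single whole-file CRC32.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class ChecksumDemo {
    // Compute a CRC32 checksum over a byte array.
    static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data);
        return crc.getValue();
    }

    // Verify data against a previously stored checksum; conceptually,
    // this is what a checksum-verifying read does on each chunk.
    static boolean verify(byte[] data, long stored) {
        return checksum(data) == stored;
    }

    public static void main(String[] args) {
        byte[] original = "try.dat contents".getBytes(StandardCharsets.UTF_8);
        long stored = checksum(original); // "written" alongside the data

        // An intact read passes verification.
        System.out.println(verify(original, stored));   // true

        // A corrupted read fails verification.
        byte[] corrupted = original.clone();
        corrupted[0] ^= 0x01; // flip one bit
        System.out.println(verify(corrupted, stored));  // false
    }
}
```

Calling setVerifyChecksum(false), as several examples below do, skips this comparison entirely, which is useful when deliberately reading partial or in-progress files whose checksums are not yet complete.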

Example 1: testChecker

import org.apache.hadoop.fs.FileSystem; // import the package/class this method depends on
/**
 * Tests read/seek/getPos/skip operations on the input stream.
 */
private void testChecker(FileSystem fileSys, boolean readCS)
    throws Exception {
  Path file = new Path("try.dat");
  writeFile(fileSys, file);

  try {
    if (!readCS) {
      fileSys.setVerifyChecksum(false);
    }

    stm = fileSys.open(file);
    checkReadAndGetPos();
    checkSeek();
    checkSkip();
    //checkMark
    assertFalse(stm.markSupported());
    stm.close();
  } finally {
    if (!readCS) {
      fileSys.setVerifyChecksum(true);
    }
    cleanupFile(fileSys, file);
  }
}
 
Developer ID: naver, Project: hadoop, Lines of code: 28, Source: TestFSInputChecker.java

Example 2: dfsPreadTest

import org.apache.hadoop.fs.FileSystem; // import the package/class this method depends on
private void dfsPreadTest(Configuration conf, boolean disableTransferTo, boolean verifyChecksum)
    throws IOException {
  conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 4096);
  conf.setLong(DFSConfigKeys.DFS_CLIENT_READ_PREFETCH_SIZE_KEY, 4096);
  // Set short retry timeouts so this test runs faster
  conf.setInt(DFSConfigKeys.DFS_CLIENT_RETRY_WINDOW_BASE, 0);
  if (simulatedStorage) {
    SimulatedFSDataset.setFactory(conf);
  }
  if (disableTransferTo) {
    conf.setBoolean("dfs.datanode.transferTo.allowed", false);
  }
  MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
  FileSystem fileSys = cluster.getFileSystem();
  fileSys.setVerifyChecksum(verifyChecksum);
  try {
    Path file1 = new Path("preadtest.dat");
    writeFile(fileSys, file1);
    pReadFile(fileSys, file1);
    datanodeRestartTest(cluster, fileSys, file1);
    cleanupFile(fileSys, file1);
  } finally {
    fileSys.close();
    cluster.shutdown();
  }
}
 
Developer ID: naver, Project: hadoop, Lines of code: 27, Source: TestPread.java

Example 3: run

import org.apache.hadoop.fs.FileSystem; // import the package/class this method depends on
/**
 * The main driver for <code>DumpTypedBytes</code>.
 */
public int run(String[] args) throws Exception {
  if (args.length == 0) {
    System.err.println("Too few arguments!");
    printUsage();
    return 1;
  }
  Path pattern = new Path(args[0]);
  FileSystem fs = pattern.getFileSystem(getConf());
  fs.setVerifyChecksum(true);
  for (Path p : FileUtil.stat2Paths(fs.globStatus(pattern), pattern)) {
    List<FileStatus> inputFiles = new ArrayList<FileStatus>();
    FileStatus status = fs.getFileStatus(p);
    if (status.isDirectory()) {
      FileStatus[] files = fs.listStatus(p);
      Collections.addAll(inputFiles, files);
    } else {
      inputFiles.add(status);
    }
    return dumpTypedBytes(inputFiles);
  }
  return -1;
}
 
Developer ID: naver, Project: hadoop, Lines of code: 26, Source: DumpTypedBytes.java

Example 4: testBlockCompressSequenceFileWriterSync

import org.apache.hadoop.fs.FileSystem; // import the package/class this method depends on
/**
 * This test simulates what happens when a batch of events is written to a compressed sequence
 * file (and thus hsync'd to HDFS) but the file is not yet closed.
 *
 * When this happens, the data that we wrote should still be readable.
 */
@Test
public void testBlockCompressSequenceFileWriterSync() throws IOException, EventDeliveryException {
  String hdfsPath = testPath + "/sequenceFileWriterSync";
  FileSystem fs = FileSystem.get(new Configuration());
  // Since we are reading a partial file we don't want to use checksums
  fs.setVerifyChecksum(false);
  fs.setWriteChecksum(false);

  // Compression codecs that don't require native hadoop libraries
  String [] codecs = {"BZip2Codec", "DeflateCodec"};

  for (String codec : codecs) {
    sequenceFileWriteAndVerifyEvents(fs, hdfsPath, codec, Collections.singletonList(
        "single-event"
    ));

    sequenceFileWriteAndVerifyEvents(fs, hdfsPath, codec, Arrays.asList(
        "multiple-events-1",
        "multiple-events-2",
        "multiple-events-3",
        "multiple-events-4",
        "multiple-events-5"
    ));
  }

  fs.close();
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines of code: 34, Source: TestHDFSEventSink.java


Note: The org.apache.hadoop.fs.FileSystem.setVerifyChecksum examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their original authors, and copyright remains with them; when distributing or using this code, please refer to the corresponding project's license. Do not reproduce without permission.