

Java FSDataOutputStream.writeDouble Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.fs.FSDataOutputStream.writeDouble. If you are wondering exactly how FSDataOutputStream.writeDouble is used in Java, or are looking for examples of it in practice, the curated code samples here may help. You can also explore further usage examples of its enclosing class, org.apache.hadoop.fs.FSDataOutputStream.


Two code examples of the FSDataOutputStream.writeDouble method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
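As a quick orientation before the collected examples: FSDataOutputStream inherits writeDouble from java.io.DataOutputStream, so the call writes the 8-byte big-endian IEEE 754 representation of the value. The following is a minimal sketch of a write-then-read round trip; the path /tmp/writeDouble-demo.bin, the class name WriteDoubleSketch, and the default Configuration are illustrative assumptions, not taken from the examples below.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteDoubleSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);            // local FS unless a cluster is configured
        Path p = new Path("/tmp/writeDouble-demo.bin");  // hypothetical path, for illustration only

        // Write a single double; create(path, overwrite) replaces any existing file
        try (FSDataOutputStream out = fs.create(p, true)) {
            out.writeDouble(875.5613);
        }

        // Read it back with the matching readDouble call
        try (FSDataInputStream in = fs.open(p)) {
            System.out.println(in.readDouble());
        }

        fs.delete(p, false);
    }
}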

Example 1: txt2dat

import org.apache.hadoop.fs.FSDataOutputStream; // import the package/class the method depends on
public static void txt2dat(Path dir, String inputFile, String outputFile)
        throws IOException {

    FileSystem fileSystem = dir.getFileSystem(new Configuration());

    Path in = new Path(dir, inputFile);
    Path out = new Path(dir, outputFile);

    FSDataInputStream fsDataInputStream = fileSystem.open(in);
    InputStreamReader inputStreamReader = new InputStreamReader(fsDataInputStream);
    BufferedReader reader = new BufferedReader(inputStreamReader);

    FSDataOutputStream writer = fileSystem.create(out);

    try {
        // Each input line has the form "<rowId>\t<v1>,<v2>,...": the row id is
        // written as a long, followed by each comma-separated value as a double.
        String line = reader.readLine();
        while (line != null){

            String[] keyVal = line.split("\\t");
            writer.writeLong(Long.parseLong(keyVal[0]));

            for (String aij : keyVal[1].split(",")) {
                writer.writeDouble(Double.parseDouble(aij));
            }

            line = reader.readLine();
        }
    } finally {
        // Closing the BufferedReader also closes the wrapped streams,
        // so the two extra close() calls below are redundant but harmless.
        reader.close();
        inputStreamReader.close();
        fsDataInputStream.close();
        writer.flush();
        writer.close();
    }
}
 
Developer: Romm17, Project: MRNMF, Lines of code: 37, Source file: MatrixByteConverter.java
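For completeness, here is a hedged sketch of how the binary file produced by txt2dat could be read back. It is not part of the MRNMF project: the method name dat2print and the nCols parameter are hypothetical, and the reader must be told how many double values follow each long key, because txt2dat does not record that count.

import java.io.EOFException;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical counterpart of txt2dat: prints each "<long key><nCols doubles>" record.
public static void dat2print(Path dir, String inputFile, int nCols) throws IOException {
    FileSystem fileSystem = dir.getFileSystem(new Configuration());
    Path in = new Path(dir, inputFile);

    try (FSDataInputStream reader = fileSystem.open(in)) {
        while (true) {
            long key;
            try {
                key = reader.readLong();      // row id written by writeLong
            } catch (EOFException eof) {
                break;                        // no more records
            }
            StringBuilder row = new StringBuilder(Long.toString(key));
            for (int j = 0; j < nCols; j++) {
                // values written by writeDouble, in the same order
                row.append(j == 0 ? '\t' : ',').append(reader.readDouble());
            }
            System.out.println(row);
        }
    }
}

Reading stops at the end of the file, which DataInputStream signals by throwing EOFException from readLong.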

Example 2: testBlockLocation

import org.apache.hadoop.fs.FSDataOutputStream; // import the package/class the method depends on
/**
 * Test that the reorder algo works as we expect.
 */
@Test
public void testBlockLocation() throws Exception {
  // We need to start HBase to get HConstants.HBASE_DIR set in conf
  htu.startMiniZKCluster();
  MiniHBaseCluster hbm = htu.startMiniHBaseCluster(1, 1);
  conf = hbm.getConfiguration();


  // The "/" is mandatory, without it we've got a null pointer exception on the namenode
  final String fileName = "/helloWorld";
  Path p = new Path(fileName);

  final int repCount = 3;
  Assert.assertTrue((short) cluster.getDataNodes().size() >= repCount);

  // Let's write the file
  FSDataOutputStream fop = dfs.create(p, (short) repCount);
  final double toWrite = 875.5613;
  fop.writeDouble(toWrite);
  fop.close();

  for (int i = 0; i < 10; i++) {
    // The interceptor is not set in this test, so we get the raw list at this point
    LocatedBlocks l;
    final long max = System.currentTimeMillis() + 10000;
    do {
      l = getNamenode(dfs.getClient()).getBlockLocations(fileName, 0, 1);
      Assert.assertNotNull(l.getLocatedBlocks());
      Assert.assertEquals(l.getLocatedBlocks().size(), 1);
      Assert.assertTrue("Expecting " + repCount + " , got " + l.get(0).getLocations().length,
          System.currentTimeMillis() < max);
    } while (l.get(0).getLocations().length != repCount);

    // Should be filtered, the name is different => The order won't change
    Object[] originalList = l.getLocatedBlocks().toArray();
    HFileSystem.ReorderWALBlocks lrb = new HFileSystem.ReorderWALBlocks();
    lrb.reorderBlocks(conf, l, fileName);
    Assert.assertArrayEquals(originalList, l.getLocatedBlocks().toArray());

    // Should be reordered, as we use a pseudo WAL file name that matches the expected layout
    Assert.assertNotNull(conf.get(HConstants.HBASE_DIR));
    Assert.assertFalse(conf.get(HConstants.HBASE_DIR).isEmpty());
    String pseudoLogFile = conf.get(HConstants.HBASE_DIR) + "/" +
        HConstants.HREGION_LOGDIR_NAME + "/" + host1 + ",6977,6576" + "/mylogfile";

    // Check that it will be possible to extract a ServerName from our construction
    Assert.assertNotNull("log= " + pseudoLogFile,
        DefaultWALProvider.getServerNameFromWALDirectoryName(dfs.getConf(), pseudoLogFile));

    // And check we're doing the right reorder.
    lrb.reorderBlocks(conf, l, pseudoLogFile);
    Assert.assertEquals(host1, l.get(0).getLocations()[2].getHostName());

    // Check again, it should remain the same.
    lrb.reorderBlocks(conf, l, pseudoLogFile);
    Assert.assertEquals(host1, l.get(0).getLocations()[2].getHostName());
  }
}
 
Developer: fengchen8086, Project: ditb, Lines of code: 62, Source file: TestBlockReorder.java


Note: The org.apache.hadoop.fs.FSDataOutputStream.writeDouble examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code belongs to the original authors. Please refer to the corresponding project's license before distributing or using the code. Do not reproduce without permission.