

Java DistributedFileSystem.listStatus Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hdfs.DistributedFileSystem.listStatus. If you are wondering what DistributedFileSystem.listStatus does, or how to use it in practice, the curated code examples below may help. You can also explore further usage examples of the containing class, org.apache.hadoop.hdfs.DistributedFileSystem.


Seven code examples of DistributedFileSystem.listStatus are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps recommend better Java code examples.

Example 1: print

import org.apache.hadoop.hdfs.DistributedFileSystem; // import the class the method depends on
/**
 * Print the MapReduce output files under the given path.
 *
 * @param path output directory on HDFS
 */
public void print(String path) {
    log.info("MapReduce output:...................................................");
    DistributedFileSystem distributedFileSystem = distributedFileSystem();
    try {
        FileStatus[] fileStatuses = distributedFileSystem.listStatus(new Path(path));
        for (FileStatus fs : fileStatuses) {
            log.info(fs);
            // Size the buffer from the file's length rather than available(),
            // which is only an estimate, and read the stream fully.
            byte[] bs = new byte[(int) fs.getLen()];
            try (FSDataInputStream fsDataInputStream = distributedFileSystem.open(fs.getPath())) {
                fsDataInputStream.readFully(bs);
            }
            log.info("\n" + new String(bs) + "\n");
        }
    } catch (IOException e) {
        log.error(e);
    } finally {
        close(distributedFileSystem);
    }
}
 
Developer ID: mumuhadoop, Project: mumu-mapreduce, Lines: 24, Source: MapReduceConfiguration.java
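A caveat on the read pattern in examples like the one above: `InputStream.available()` only returns an estimate of the bytes readable without blocking, not the file's length, so sizing a buffer from it can silently truncate the data. A minimal, Hadoop-free sketch of a safe drain loop (plain `java.io`; the `ReadFully` class name is ours, not from the original project):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadFully {
    // Drain an InputStream completely instead of trusting available(),
    // which is only an estimate of bytes readable without blocking.
    static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello hdfs".getBytes("UTF-8");
        try (InputStream in = new ByteArrayInputStream(data)) {
            System.out.println(new String(readFully(in), "UTF-8")); // prints "hello hdfs"
        }
    }
}
```

With HDFS specifically, `FileStatus.getLen()` plus `FSDataInputStream.readFully` achieves the same guarantee without a growable buffer.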

Example 2: printMessage

import org.apache.hadoop.hdfs.DistributedFileSystem; // import the class the method depends on
public void printMessage(String path) {
    System.out.println("\nprint result:");
    DistributedFileSystem distributedFileSystem = distributedFileSystem();
    try {
        FileStatus[] fileStatuses = distributedFileSystem.listStatus(new Path(path));
        for (FileStatus fileStatus : fileStatuses) {
            System.out.println(fileStatus);
            if (fileStatus.isFile()) {
                // Size the buffer from the file's length rather than available(),
                // and let try-with-resources close the stream.
                byte[] bs = new byte[(int) fileStatus.getLen()];
                try (FSDataInputStream fsDataInputStream = distributedFileSystem.open(fileStatus.getPath())) {
                    fsDataInputStream.readFully(bs);
                }
                System.out.println(new String(bs));
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        close(distributedFileSystem);
    }
}
 
Developer ID: mumuhadoop, Project: mumu-pig, Lines: 22, Source: MumuPigConfiguration.java

Example 3: mapreduce

import org.apache.hadoop.hdfs.DistributedFileSystem; // import the class the method depends on
@Test
public void mapreduce() {
    String inputPath = ParquetConfiguration.HDFS_URI + "//parquet/mapreduce/input";
    String outputPath = ParquetConfiguration.HDFS_URI + "//parquet/mapreduce/output" + DateFormatUtils.format(new Date(), "yyyyMMddHHmmss");
    try {
        MapReduceParquetMapReducer.main(new String[]{inputPath, outputPath});
        DistributedFileSystem distributedFileSystem = new ParquetConfiguration().distributedFileSystem();
        FileStatus[] fileStatuses = distributedFileSystem.listStatus(new Path(outputPath));
        for (FileStatus fileStatus : fileStatuses) {
            System.out.println(fileStatus);
        }
        distributedFileSystem.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
 
Developer ID: mumuhadoop, Project: mumu-parquet, Lines: 17, Source: MapReduceParquetMapReducerTest.java

Example 4: collectFileNames

import org.apache.hadoop.hdfs.DistributedFileSystem; // import the class the method depends on
private void collectFileNames(DistributedFileSystem fs, String zonepath, List<String> names)
        throws IOException {
    FileStatus[] statuses = fs.listStatus(new Path(zonepath));
    // System.out.println("## checking path " + new Path(zonepath).toString() + " iter " + statuses.length);
    for (FileStatus status : statuses) {
        String fname = zonepath + "/" + status.getPath().getName();
        if (status.isDirectory())
            collectFileNames(fs, fname, names);
        else
            names.add(fname);
    }
}
 
Developer ID: nucypher, Project: hadoop-oss, Lines: 14, Source: ApplicationMasterKMS.java

Example 5: collectFileNames

import org.apache.hadoop.hdfs.DistributedFileSystem; // import the class the method depends on
public static void collectFileNames(DistributedFileSystem fs, String zonepath, List<String> names)
        throws IOException {
    FileStatus[] statuses = fs.listStatus(new Path(zonepath));
    // System.out.println("## checking path " + new Path(zonepath).toString() + " iter " + statuses.length);
    for (FileStatus status : statuses) {
        String fname = zonepath + "/" + status.getPath().getName();
        if (status.isDirectory())
            collectFileNames(fs, fname, names);
        else
            names.add(fname);
    }
}
 
Developer ID: nucypher, Project: hadoop-oss, Lines: 14, Source: KeyRotationBC.java
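The recursive listing pattern shared by examples 4 and 5 (list a directory, recurse into subdirectories, collect file names) maps onto any hierarchical file API. As a local, Hadoop-free analogue that can run anywhere, here is the same recursion sketched with `java.nio.file` in place of `DistributedFileSystem` (the `CollectNames` class is ours, not from the original projects):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class CollectNames {
    // Recursively walk a directory, collecting file paths, mirroring the
    // collectFileNames(fs, zonepath, names) pattern in the examples above.
    static void collectFileNames(Path dir, List<String> names) throws IOException {
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path entry : stream) {
                if (Files.isDirectory(entry)) {
                    collectFileNames(entry, names); // recurse into subdirectory
                } else {
                    names.add(entry.toString());    // leaf: record the file
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("zone");
        Files.createFile(root.resolve("a.txt"));
        Path sub = Files.createDirectory(root.resolve("sub"));
        Files.createFile(sub.resolve("b.txt"));
        List<String> names = new ArrayList<>();
        collectFileNames(root, names);
        System.out.println(names.size()); // prints 2
    }
}
```

On a real cluster, `FileSystem.listFiles(path, true)` offers a built-in recursive alternative that avoids loading each directory's full `FileStatus[]` at once.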

Example 6: concat

import org.apache.hadoop.hdfs.DistributedFileSystem; // import the class the method depends on
public static void concat(String dir) throws IOException {
    String directory = NodeConfig.HDFS_PATH + dir;
    Configuration conf = new Configuration();
    DistributedFileSystem fs = (DistributedFileSystem) FileSystem.get(URI.create(directory), conf);
    FileStatus[] fileList = fs.listStatus(new Path(directory));

    if (fileList.length >= 2) {
        ArrayList<Path> srcs = new ArrayList<Path>(fileList.length);
        for (FileStatus fileStatus : fileList) {
            // Heuristic: select files whose size beyond full blocks is under
            // half a block (len % blockSize expresses this remainder more directly).
            if (fileStatus.isFile() &&
                    (fileStatus.getLen() & ~fileStatus.getBlockSize()) < fileStatus.getBlockSize() / 2) {
                srcs.add(fileStatus.getPath());
            }
        }

        if (srcs.size() >= 2) {
            Logger.println("come to here");
            Path appended = srcs.get(0);
            Path[] sources = new Path[srcs.size() - 1];
            for (int i = 0; i < srcs.size() - 1; i++) {
                sources[i] = srcs.get(i + 1);
            }
            fs.concat(appended, sources);
            Logger.println("concat to : " + appended.getName());
            Logger.println(Arrays.toString(sources));
        }
    }

    fs.close(); // close even when no merge happened
}
 
Developer ID: cuiods, Project: WIFIProbe, Lines: 39, Source: HDFSTool.java
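The selection test in `concat` tries to compute the size of a file's last partial block with a bitmask, but `len & ~blockSize` only clears the single bit corresponding to the block size, so it does not equal the remainder for arbitrary lengths. A plainer way to write the "small tail" check is an explicit modulo. A sketch under the assumption that `blockSize` is positive (the `TailCheck` class name is ours):

```java
public class TailCheck {
    // Size of the last, partial block of a file: the remainder after
    // dividing the length into full blocks.
    static long tailSize(long len, long blockSize) {
        return len % blockSize;
    }

    // The heuristic from the concat example: a file is a merge candidate
    // when its partial tail block is under half a block.
    static boolean isSmallTail(long len, long blockSize) {
        return tailSize(len, blockSize) < blockSize / 2;
    }

    public static void main(String[] args) {
        long block = 128L * 1024 * 1024;                           // 128 MB, a common HDFS default
        System.out.println(isSmallTail(block + 1, block));         // tail = 1 byte -> true
        System.out.println(isSmallTail(block + block / 2, block)); // tail = half a block -> false
    }
}
```

Note also that `DistributedFileSystem.concat` imposes its own constraints on the source files (they must live in the same directory as the target), so the heuristic alone does not guarantee the call succeeds.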

Example 7: checkSnapshotCreation

import org.apache.hadoop.hdfs.DistributedFileSystem; // import the class the method depends on
/**
 * Check the functionality of a snapshot.
 * 
 * @param hdfs DistributedFileSystem instance
 * @param snapshotRoot The root of the snapshot
 * @param snapshottedDir The snapshotted directory
 */
public static void checkSnapshotCreation(DistributedFileSystem hdfs,
    Path snapshotRoot, Path snapshottedDir) throws Exception {
  // Currently we only check if the snapshot was created successfully
  assertTrue(hdfs.exists(snapshotRoot));
  // Compare the snapshot with the current dir
  FileStatus[] currentFiles = hdfs.listStatus(snapshottedDir);
  FileStatus[] snapshotFiles = hdfs.listStatus(snapshotRoot);
  assertEquals("snapshottedDir=" + snapshottedDir
      + ", snapshotRoot=" + snapshotRoot,
      currentFiles.length, snapshotFiles.length);
}
 
Developer ID: naver, Project: hadoop, Lines: 19, Source: SnapshotTestHelper.java
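`checkSnapshotCreation` compares only the entry counts of two `listStatus` results, not their contents. The same count-and-compare step can be sketched locally with `java.nio.file`, where two temp directories stand in for the snapshot root and the live directory (the `ListingCompare` class is ours, not Hadoop's):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ListingCompare {
    // Count the direct children of a directory, like listStatus(path).length.
    static long entryCount(Path dir) throws IOException {
        try (DirectoryStream<Path> s = Files.newDirectoryStream(dir)) {
            long n = 0;
            for (Path ignored : s) n++;
            return n;
        }
    }

    public static void main(String[] args) throws IOException {
        Path live = Files.createTempDirectory("live");
        Path snap = Files.createTempDirectory("snap");
        Files.createFile(live.resolve("f1"));
        Files.createFile(snap.resolve("f1"));
        // The snapshot check above asserts the two listings are the same size.
        System.out.println(entryCount(live) == entryCount(snap)); // prints true
    }
}
```

A stricter check would compare sorted path names or file lengths, since equal counts alone do not prove the listings match.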


Note: the org.apache.hadoop.hdfs.DistributedFileSystem.listStatus examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright in the source code remains with the original authors. Refer to each project's license before distributing or using the code; do not republish without permission.