

Java LocalFileSystem.copyToLocalFile Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.fs.LocalFileSystem.copyToLocalFile. If you are unsure how LocalFileSystem.copyToLocalFile works or how to call it, the curated code examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.fs.LocalFileSystem.


The following presents 9 code examples of the LocalFileSystem.copyToLocalFile method, sorted by popularity by default.
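Before the examples, here is a minimal, hypothetical usage sketch (paths are illustrative and hadoop-common must be on the classpath). It obtains the local filesystem via `FileSystem.getLocal` and copies a source directory to a destination, as all the examples below do; in the three-argument overload inherited from ChecksumFileSystem, the trailing boolean controls whether accompanying CRC checksum files are copied as well.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;

public class CopyToLocalExample {
    public static void main(String[] args) throws Exception {
        // Obtain the local filesystem implementation for this configuration.
        LocalFileSystem localFS = FileSystem.getLocal(new Configuration());

        // Hypothetical source and destination paths.
        Path src = new Path("/tmp/source-dir/current");
        Path dst = new Path("/tmp/dest-dir");

        // Recursively copy src into dst; the final boolean controls
        // whether the accompanying .crc checksum files are copied too.
        localFS.copyToLocalFile(src, dst, false);
    }
}
```

Note that on a plain LocalFileSystem both source and destination live on local disk, which is exactly how the test utilities below use it: to clone pre-built storage directories into fresh locations.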

Example 1: createBlockPoolStorageDirs

import org.apache.hadoop.fs.LocalFileSystem; // import the package/class this method depends on
/**
 * Simulate the {@link DFSConfigKeys#DFS_DATANODE_DATA_DIR_KEY} of a 
 * populated DFS filesystem.
 * For each parent directory, this method populates <code>parent/dirName</code>
 * with the content of a block pool storage directory that comes from a
 * singleton datanode master (containing version and block files). If the
 * destination directory does not exist, it will be created. If the directory
 * already exists, it will first be deleted.
 * 
 * @param parents parent directory where {@code dirName} is created
 * @param dirName directory under which storage directory is created
 * @param bpid block pool id for which the storage directory is created.
 * @return the array of created directories
 */
public static File[] createBlockPoolStorageDirs(String[] parents,
    String dirName, String bpid) throws Exception {
  File[] retVal = new File[parents.length];
  Path bpCurDir = new Path(MiniDFSCluster.getBPDir(datanodeStorage,
      bpid, Storage.STORAGE_DIR_CURRENT));
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i] + "/current/" + bpid, dirName);
    createEmptyDirs(new String[] {newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new HdfsConfiguration());
    localFS.copyToLocalFile(bpCurDir,
                            new Path(newDir.toString()),
                            false);
    retVal[i] = newDir;
  }
  return retVal;
}
 
Developer ID: naver, Project: hadoop, Lines of code: 31, Source: UpgradeUtilities.java

Example 2: createFederatedDatanodeDirs

import org.apache.hadoop.fs.LocalFileSystem; // import the package/class this method depends on
public static File[] createFederatedDatanodeDirs(String[] parents,
    String dirName, int namespaceId) throws IOException {
  File[] retVal = new File[parents.length];
  for (int i = 0; i < parents.length; i++) {
    File nsDir = new File(new File(parents[i], "current"), "NS-"
        + namespaceId);
    File newDir = new File(nsDir, dirName);
    File srcDir = new File(new File(datanodeStorage, "current"), "NS-"
        + namespaceId);

    LocalFileSystem localFS = FileSystem.getLocal(new Configuration());
    localFS.copyToLocalFile(new Path(srcDir.toString(), "current"), new Path(
        newDir.toString()), false);
    retVal[i] = new File(parents[i], "current");
  }
  return retVal;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 18, Source: UpgradeUtilities.java

Example 3: createNameNodeStorageDirs

import org.apache.hadoop.fs.LocalFileSystem; // import the package/class this method depends on
/**
 * Simulate the {@link DFSConfigKeys#DFS_NAMENODE_NAME_DIR_KEY} of a populated 
 * DFS filesystem.
 * For each parent directory, this method populates <code>parent/dirName</code>
 * with the content of a namenode storage directory that comes from a
 * singleton namenode master (containing edits, fsimage, version and time
 * files). If the destination directory does not exist, it will be created.
 * If the directory already exists, it will first be deleted.
 *
 * @param parents parent directory where {@code dirName} is created
 * @param dirName directory under which storage directory is created
 * @return the array of created directories
 */
public static File[] createNameNodeStorageDirs(String[] parents,
    String dirName) throws Exception {
  File[] retVal = new File[parents.length];
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i], dirName);
    createEmptyDirs(new String[] {newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new HdfsConfiguration());
    localFS.copyToLocalFile(new Path(namenodeStorage.toString(), "current"),
                            new Path(newDir.toString()),
                            false);
    retVal[i] = newDir;
  }
  return retVal;
}
 
Developer ID: naver, Project: hadoop, Lines of code: 28, Source: UpgradeUtilities.java

Example 4: createDataNodeStorageDirs

import org.apache.hadoop.fs.LocalFileSystem; // import the package/class this method depends on
/**
 * Simulate the {@link DFSConfigKeys#DFS_DATANODE_DATA_DIR_KEY} of a 
 * populated DFS filesystem.
 * For each parent directory, this method populates <code>parent/dirName</code>
 * with the content of a datanode storage directory that comes from a
 * singleton datanode master (containing version and block files). If the
 * destination directory does not exist, it will be created. If the directory
 * already exists, it will first be deleted.
 * 
 * @param parents parent directory where {@code dirName} is created
 * @param dirName directory under which storage directory is created
 * @return the array of created directories
 */
public static File[] createDataNodeStorageDirs(String[] parents,
    String dirName) throws Exception {
  File[] retVal = new File[parents.length];
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i], dirName);
    createEmptyDirs(new String[] {newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new HdfsConfiguration());
    localFS.copyToLocalFile(new Path(datanodeStorage.toString(), "current"),
                            new Path(newDir.toString()),
                            false);
    retVal[i] = newDir;
  }
  return retVal;
}
 
Developer ID: naver, Project: hadoop, Lines of code: 28, Source: UpgradeUtilities.java

Example 5: createFederatedNameNodeStorageDirs

import org.apache.hadoop.fs.LocalFileSystem; // import the package/class this method depends on
public static void createFederatedNameNodeStorageDirs(String[] parents) 
    throws Exception {
  LocalFileSystem localFS = FileSystem.getLocal(new Configuration());
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i]);
    createEmptyDirs(new String[] {newDir.toString()});
    localFS.copyToLocalFile(new Path(namenodeStorage.toString()),
        new Path(newDir.toString()),
        false);
  }
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 12, Source: UpgradeUtilities.java

Example 6: createStorageDirs

import org.apache.hadoop.fs.LocalFileSystem; // import the package/class this method depends on
public static File[] createStorageDirs(NodeType nodeType, String[] parents, String dirName,
    File srcFile) throws Exception {
  File[] retVal = new File[parents.length];
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i], dirName);
    createEmptyDirs(new String[] {newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new Configuration());
    switch (nodeType) {
    case NAME_NODE:
      localFS.copyToLocalFile(new Path(srcFile.toString(), "current"),
                              new Path(newDir.toString()),
                              false);
      Path newImgDir = new Path(newDir.getParent(), "image");
      if (!localFS.exists(newImgDir))
        localFS.copyToLocalFile(
            new Path(srcFile.toString(), "image"),
            newImgDir,
            false);
      break;
    case DATA_NODE:
      localFS.copyToLocalFile(new Path(srcFile.toString(), "current"),
                              new Path(newDir.toString()),
                              false);
      Path newStorageFile = new Path(newDir.getParent(), "storage");
      if (!localFS.exists(newStorageFile))
        localFS.copyToLocalFile(
            new Path(srcFile.toString(), "storage"),
            newStorageFile,
            false);
      break;
    }
    retVal[i] = newDir;
  }
  return retVal;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 36, Source: UpgradeUtilities.java

Example 7: createNameNodeStorageDirs

import org.apache.hadoop.fs.LocalFileSystem; // import the package/class this method depends on
/**
 * Simulate the {@link DFSConfigKeys#DFS_NAMENODE_NAME_DIR_KEY} of a
 * populated DFS filesystem.
 * For each parent directory, this method populates <code>parent/dirName</code>
 * with the content of a namenode storage directory that comes from a
 * singleton namenode master (containing edits, fsimage, version and time
 * files). If the destination directory does not exist, it will be created.
 * If the directory already exists, it will first be deleted.
 *
 * @param parents
 *     parent directory where {@code dirName} is created
 * @param dirName
 *     directory under which storage directory is created
 * @return the array of created directories
 */
public static File[] createNameNodeStorageDirs(String[] parents,
    String dirName) throws Exception {
  File[] retVal = new File[parents.length];
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i], dirName);
    createEmptyDirs(new String[]{newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new HdfsConfiguration());
    localFS.copyToLocalFile(new Path(namenodeStorage.toString(), "current"),
        new Path(newDir.toString()), false);
    retVal[i] = newDir;
  }
  return retVal;
}
 
Developer ID: hopshadoop, Project: hops, Lines of code: 30, Source: UpgradeUtilities.java

Example 8: createDataNodeStorageDirs

import org.apache.hadoop.fs.LocalFileSystem; // import the package/class this method depends on
/**
 * Simulate the {@link DFSConfigKeys#DFS_DATANODE_DATA_DIR_KEY} of a
 * populated DFS filesystem.
 * For each parent directory, this method populates <code>parent/dirName</code>
 * with the content of a datanode storage directory that comes from a
 * singleton datanode master (containing version and block files). If the
 * destination directory does not exist, it will be created. If the directory
 * already exists, it will first be deleted.
 *
 * @param parents
 *     parent directory where {@code dirName} is created
 * @param dirName
 *     directory under which storage directory is created
 * @return the array of created directories
 */
public static File[] createDataNodeStorageDirs(String[] parents,
    String dirName) throws Exception {
  File[] retVal = new File[parents.length];
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i], dirName);
    createEmptyDirs(new String[]{newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new HdfsConfiguration());
    localFS.copyToLocalFile(new Path(datanodeStorage.toString(), "current"),
        new Path(newDir.toString()), false);
    retVal[i] = newDir;
  }
  return retVal;
}
 
Developer ID: hopshadoop, Project: hops, Lines of code: 30, Source: UpgradeUtilities.java

Example 9: createBlockPoolStorageDirs

import org.apache.hadoop.fs.LocalFileSystem; // import the package/class this method depends on
/**
 * Simulate the {@link DFSConfigKeys#DFS_DATANODE_DATA_DIR_KEY} of a
 * populated DFS filesystem.
 * For each parent directory, this method populates <code>parent/dirName</code>
 * with the content of a block pool storage directory that comes from a
 * singleton datanode master (containing version and block files). If the
 * destination directory does not exist, it will be created. If the directory
 * already exists, it will first be deleted.
 *
 * @param parents
 *     parent directory where {@code dirName} is created
 * @param dirName
 *     directory under which storage directory is created
 * @param bpid
 *     block pool id for which the storage directory is created.
 * @return the array of created directories
 */
public static File[] createBlockPoolStorageDirs(String[] parents,
    String dirName, String bpid) throws Exception {
  File[] retVal = new File[parents.length];
  Path bpCurDir = new Path(MiniDFSCluster
      .getBPDir(datanodeStorage, bpid, Storage.STORAGE_DIR_CURRENT));
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i] + "/current/" + bpid, dirName);
    createEmptyDirs(new String[]{newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new HdfsConfiguration());
    localFS.copyToLocalFile(bpCurDir, new Path(newDir.toString()), false);
    retVal[i] = newDir;
  }
  return retVal;
}
 
Developer ID: hopshadoop, Project: hops, Lines of code: 34, Source: UpgradeUtilities.java


Note: The org.apache.hadoop.fs.LocalFileSystem.copyToLocalFile examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by various developers; copyright of the source code remains with the original authors. Consult each project's license before distributing or using the code, and do not republish without permission.