

Java LocalFileSystem.create Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.fs.LocalFileSystem.create. If you are wondering what LocalFileSystem.create does in practice, or how to call it, the curated code examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.fs.LocalFileSystem.


Below are 3 code examples of the LocalFileSystem.create method, sorted by popularity by default.
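Before the project examples, here is a minimal, self-contained sketch of the method's basic usage. It is not taken from the projects below; the class name and the output path /tmp/local-create-demo.txt are placeholders chosen for illustration.

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;

public class LocalFileSystemCreateDemo {
  public static void main(String[] args) throws IOException {
    // Obtain the local (non-HDFS) FileSystem implementation for this configuration.
    LocalFileSystem fs = FileSystem.getLocal(new Configuration());

    // create(Path) opens an output stream to a new file, overwriting an existing one by default.
    Path path = new Path("/tmp/local-create-demo.txt"); // placeholder path
    try (OutputStream out = fs.create(path)) {
      out.write("hello from LocalFileSystem.create".getBytes(StandardCharsets.UTF_8));
    }
  }
}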

Example 1: setUpSchedulerConfigFile

import org.apache.hadoop.fs.LocalFileSystem; // import the class that the method depends on
private void setUpSchedulerConfigFile(Properties schedulerConfProps)
    throws IOException {
  LocalFileSystem fs = FileSystem.getLocal(new Configuration());

  String myResourcePath = System.getProperty("test.build.data");
  Path schedulerConfigFilePath =
      new Path(myResourcePath, CapacitySchedulerConf.SCHEDULER_CONF_FILE);
  OutputStream out = fs.create(schedulerConfigFilePath);

  Configuration config = new Configuration(false);
  for (Enumeration<?> e = schedulerConfProps.propertyNames(); e
      .hasMoreElements();) {
    String key = (String) e.nextElement();
    LOG.debug("Adding " + key + schedulerConfProps.getProperty(key));
    config.set(key, schedulerConfProps.getProperty(key));
  }

  config.writeXml(out);
  out.close();

  LOG.info("setting resource path where capacity-scheduler's config file "
      + "is placed to " + myResourcePath);
  System.setProperty(MY_SCHEDULER_CONF_PATH_PROPERTY, myResourcePath);
}
 
Developer: Nextzero; Project: hadoop-2.6.0-cdh5.4.3; Lines: 25; Source: ClusterWithCapacityScheduler.java

Example 2: writeToTextFile

import org.apache.hadoop.fs.LocalFileSystem; // import the class that the method depends on
private static void writeToTextFile(LocalFileSystem local, Path path) throws IOException {
    try (BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(local.create(path)))) {
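        // ENTRIES is a collection of test strings defined elsewhere in the enclosing test class.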
        for (String value : ENTRIES) {
            writer.write(value);
            writer.flush();
        }
    }
}
 
Developer: hazelcast; Project: hazelcast-jet; Lines: 9; Source: ReadHdfsPTest.java

Example 3: truncateLogsAsUser

import org.apache.hadoop.fs.LocalFileSystem; // import the class that the method depends on
@Override
public void truncateLogsAsUser(String user, List<Task> allAttempts)
  throws IOException {
  
  Task firstTask = allAttempts.get(0);
  String taskid = firstTask.getTaskID().toString();
  
  LocalDirAllocator ldirAlloc =
      new LocalDirAllocator(JobConf.MAPRED_LOCAL_DIR_PROPERTY);
  String taskRanFile = TaskTracker.TT_LOG_TMP_DIR + Path.SEPARATOR + taskid;
  Configuration conf = getConf();
  
  // Write the serialized task information to a file to pass to the truncater.
  Path taskRanFilePath = 
    ldirAlloc.getLocalPathForWrite(taskRanFile, conf);
  LocalFileSystem lfs = FileSystem.getLocal(conf);
  FSDataOutputStream out = lfs.create(taskRanFilePath);
  out.writeInt(allAttempts.size());
  for (Task t : allAttempts) {
    out.writeBoolean(t.isMapTask());
    t.write(out);
  }
  out.close();
  lfs.setPermission(taskRanFilePath, 
                    FsPermission.createImmutable((short)0755));
  
  List<String> command = new ArrayList<String>();
  File jvm =                                  // use same jvm as parent
    new File(new File(System.getProperty("java.home"), "bin"), "java");
  command.add(jvm.toString());
  command.add("-Djava.library.path=" + 
              System.getProperty("java.library.path"));
  command.add("-Dhadoop.log.dir=" + TaskLog.getBaseLogDir());
  command.add("-Dhadoop.root.logger=INFO,console");
  command.add("-classpath");
  command.add(System.getProperty("java.class.path"));
  // main of TaskLogsTruncater
  command.add(TaskLogsTruncater.class.getName()); 
  command.add(taskRanFilePath.toString());
  String[] taskControllerCmd = new String[4 + command.size()];
  taskControllerCmd[0] = taskControllerExe;
  taskControllerCmd[1] = user;
  taskControllerCmd[2] = localStorage.getDirsString();
  taskControllerCmd[3] = Integer.toString(
      Commands.RUN_COMMAND_AS_USER.getValue());

  int i = 4;
  for (String cmdArg : command) {
    taskControllerCmd[i++] = cmdArg;
  }
  if (LOG.isDebugEnabled()) {
    for (String cmd : taskControllerCmd) {
      LOG.debug("taskctrl command = " + cmd);
    }
  }
  ShellCommandExecutor shExec = new ShellCommandExecutor(taskControllerCmd);
  try {
    shExec.execute();
  } catch (Exception e) {
    LOG.warn("Exit code from " + taskControllerExe.toString() + " is : "
        + shExec.getExitCode() + " for truncateLogs");
    LOG.warn("Exception thrown by " + taskControllerExe.toString() + " : "
        + StringUtils.stringifyException(e));
    LOG.info("Output from LinuxTaskController's "
             + taskControllerExe.toString() + " follows:");
    logOutput(shExec.getOutput());
    lfs.delete(taskRanFilePath, false);
    throw new IOException(e);
  }
  lfs.delete(taskRanFilePath, false);
  if (LOG.isDebugEnabled()) {
    LOG.info("Output from LinuxTaskController's "
             + taskControllerExe.toString() + " follows:");
    logOutput(shExec.getOutput());
  }
}
 
Developer: Nextzero; Project: hadoop-2.6.0-cdh5.4.3; Lines: 77; Source: LinuxTaskController.java


Note: The org.apache.hadoop.fs.LocalFileSystem.create examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers, and the copyright of the source code belongs to the original authors. Please consult the corresponding project's license before distributing or using the code; do not reproduce without permission.