

Java ApplicationId.getId Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.yarn.api.records.ApplicationId.getId. If you are wondering what ApplicationId.getId does or how to call it, the selected examples below should help. You can also explore further usage of the enclosing class, org.apache.hadoop.yarn.api.records.ApplicationId.


Seven code examples of ApplicationId.getId are shown below, ordered by popularity.
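All seven examples share one pattern: ApplicationId.getClusterTimestamp() identifies the ResourceManager start time and ApplicationId.getId() is the application's sequence number within it, and the two are combined into a MapReduce JobID. A minimal, dependency-free sketch of that mapping (the helper name formatJobId is ours; the real code constructs org.apache.hadoop.mapred.JobID):

```java
// Dependency-free sketch of the pattern the examples below share:
// ApplicationId.getClusterTimestamp() and ApplicationId.getId() combined
// into a MapReduce JobID string. formatJobId is a hypothetical helper;
// the real code uses new JobID(Long.toString(timestamp), id).
class JobIdSketch {

    // JobID.toString() pads the sequence number to at least four digits.
    static String formatJobId(long clusterTimestamp, int id) {
        return String.format("job_%d_%04d", clusterTimestamp, id);
    }

    public static void main(String[] args) {
        System.out.println(formatJobId(1449544736000L, 2)); // job_1449544736000_0002
    }
}
```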

Example 1: initializeApplication

import org.apache.hadoop.yarn.api.records.ApplicationId; // import the package/class the method depends on
@Override
public void initializeApplication(ApplicationInitializationContext context) {

  String user = context.getUser();
  ApplicationId appId = context.getApplicationId();
  ByteBuffer secret = context.getApplicationDataForService();
  // TODO these bytes should be versioned
  try {
    Token<JobTokenIdentifier> jt = deserializeServiceData(secret);
    // TODO: Once Shuffle is out of NM, this can use MR APIs
    JobID jobId = new JobID(Long.toString(appId.getClusterTimestamp()), appId.getId());
    recordJobShuffleInfo(jobId, user, jt);
  } catch (IOException e) {
    LOG.error("Error during initApp", e);
    // TODO add API to AuxiliaryServices to report failures
  }
}
 
Developer: naver, Project: hadoop, Lines: 18, Source: ShuffleHandler.java

Example 2: findHistoryFilePath

import org.apache.hadoop.yarn.api.records.ApplicationId; // import the package/class the method depends on
public static Optional<String> findHistoryFilePath(Iterator<LocatedFileStatus> listing,
    ApplicationId applicationId) {

  JobID jobId = new JobID(String.valueOf(applicationId.getClusterTimestamp()), applicationId.getId());

  List<LocatedFileStatus> jhistFiles = Lists.newArrayList();
  // maybe this could work more nicely with some recursive glob and a filter
  try {
    jhistFiles = StreamSupport
        .stream(Spliterators.spliteratorUnknownSize(listing, Spliterator.NONNULL), false)
        .filter(fstatus -> fstatus.getPath().toString()
            .matches(".*" + jobId.toString() + ".*\\.jhist"))
        .collect(Collectors.toList());
  } catch (RemoteIteratorAdaptor.WrappedRemoteIteratorException wrie) {
    // We can't really do overly much at this point, as this is an error from the
    // underlying hadoop filesystem implementation. But we want to at least log this
    // separately from other conditions.
    logger.error("Retrieving remote listing failed", wrie);
  }

  if (jhistFiles.size() < 1) {
    logger.error("Could not locate a history file for parameters");
    return Optional.empty();
  } else if (jhistFiles.size() > 1) {
    logger.error("Found two or more matching files, will dump first");
  }

  return jhistFiles.stream()
      .findFirst()
      .map(x -> x.getPath().toString());
}
 
Developer: spotify, Project: spydra, Lines: 32, Source: HistoryLogUtils.java
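One subtlety in Example 2 is worth noting: in the original pattern `".*" + jobId + ".*.jhist"`, the dot before `jhist` is an unescaped regex metacharacter, so the filter would also accept names like `...Xjhist`. A self-contained sketch of the intended filter, with the dot escaped (the class and method names here are ours):

```java
import java.util.List;
import java.util.stream.Collectors;

// Self-contained sketch of the .jhist filename filter from Example 2.
// Escaping the dot ("\\.jhist") restricts the match to a literal ".jhist"
// suffix, which an unescaped dot does not guarantee.
class JhistFilterSketch {

    static List<String> matching(List<String> paths, String jobId) {
        String pattern = ".*" + jobId + ".*\\.jhist";
        return paths.stream()
                .filter(p -> p.matches(pattern))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> paths = List.of(
                "/history/job_1449544736000_0002-host.jhist",
                "/history/job_1449544736000_0002.summary");
        System.out.println(matching(paths, "job_1449544736000_0002"));
    }
}
```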

Example 3: stopApplication

import org.apache.hadoop.yarn.api.records.ApplicationId; // import the package/class the method depends on
@Override
public void stopApplication(ApplicationTerminationContext context) {
  ApplicationId appId = context.getApplicationId();
  JobID jobId = new JobID(Long.toString(appId.getClusterTimestamp()), appId.getId());
  try {
    removeJobShuffleInfo(jobId);
  } catch (IOException e) {
    LOG.error("Error during stopApp", e);
    // TODO add API to AuxiliaryServices to report failures
  }
}
 
Developer: naver, Project: hadoop, Lines: 12, Source: ShuffleHandler.java

Example 4: forceKillApplication

import org.apache.hadoop.yarn.api.records.ApplicationId; // import the package/class the method depends on
@Override
public boolean forceKillApplication(ApplicationId applicationId)
    throws IOException {
  int jobid = applicationId.getId();
  String scancelCmd = conf.get(
      HPCConfiguration.YARN_APPLICATION_HPC_COMMAND_SLURM_SCANCEL,
      HPCConfiguration.DEFAULT_YARN_APPLICATION_HPC_COMMAND_SLURM_SCANCEL);
  Shell.execCommand(scancelCmd, String.valueOf(jobid));
  return true;
}
 
Developer: intel-hpdd, Project: scheduling-connector-for-hadoop, Lines: 11, Source: SlurmApplicationClient.java
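Examples 4 and 6 share the same shape: resolve the external kill command from configuration (falling back to a default such as "scancel" or "qdel"), then invoke it with the numeric id from ApplicationId.getId(). A sketch of that lookup, with a plain Map standing in for org.apache.hadoop.conf.Configuration (the class and method names are ours):

```java
import java.util.Map;

// Sketch of the configuration-driven kill command in Examples 4 and 6:
// the command name comes from configuration with a default, and the sole
// argument is ApplicationId.getId() rendered as a string. A Map stands in
// for org.apache.hadoop.conf.Configuration here.
class KillCommandSketch {

    static String[] killCommand(Map<String, String> conf, String key,
                                String defaultCmd, int appId) {
        String cmd = conf.getOrDefault(key, defaultCmd);
        return new String[] { cmd, String.valueOf(appId) };
    }

    public static void main(String[] args) {
        String[] argv = killCommand(Map.of(), "hpc.slurm.scancel", "scancel", 7);
        System.out.println(String.join(" ", argv)); // scancel 7
    }
}
```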

Example 5: submitApplication

import org.apache.hadoop.yarn.api.records.ApplicationId; // import the package/class the method depends on
@Override
public void submitApplication(ApplicationSubmissionContext context)
    throws IOException {
  int waitingTime = conf.getInt(
      HPCConfiguration.YARN_APPLICATION_HPC_CLIENT_RS_MAX_WAIT_MS,
      HPCConfiguration.DEFAULT_YARN_APPLICATION_HPC_CLIENT_RS_MAX_WAIT_MS);
  int noOfTimes = conf.getInt(
      HPCConfiguration.YARN_APPLICATION_HPC_CLIENT_RS_RETRIES_MAX,
      HPCConfiguration.DEFAULT_YARN_APPLICATION_HPC_CLIENT_RS_RETRIES_MAX);
  ApplicationId applicationId = context.getApplicationId();

  String applicationName = context.getApplicationName();
  SocketWrapper socket = SocketCache.getSocket(applicationId.getId());
  if (socket.waitForReady(waitingTime * noOfTimes)) {
    PBSCommandExecutor.launchContainer(
        context.getAMContainerSpec(),
        ContainerId.newContainerId(
            ApplicationAttemptId.newInstance(applicationId, 1), 1L)
            .toString(), applicationName, conf, applicationId.getId(), true,
        socket.getContainerHostName());
  }

  // Set the Job Name
  int jobid = applicationId.getId();
  String pbsJobName = applicationName.replaceAll("\\s", "");
  if (pbsJobName.length() > 13) {
    pbsJobName = pbsJobName.substring(0, 12);
  }

  String qalterCmd = conf.get(
      HPCConfiguration.YARN_APPLICATION_HPC_COMMAND_PBS_QALTER,
      HPCConfiguration.DEFAULT_YARN_APPLICATION_HPC_COMMAND_PBS_QALTER);
  Shell.execCommand(qalterCmd, String.valueOf(jobid), "-N", "Y#" + pbsJobName);
}
 
Developer: intel-hpdd, Project: scheduling-connector-for-hadoop, Lines: 36, Source: PBSApplicationClient.java
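The job-name handling in Example 5 is easy to miss: whitespace is stripped and the name is capped so that, with the "Y#" prefix passed to qalter, it fits PBS's traditionally short job-name limit. Note the asymmetry in the original: the guard checks length() > 13, but substring(0, 12) keeps only 12 characters. A self-contained sketch of that logic (the class and method names are ours):

```java
// Sketch of the PBS job-name sanitization from Example 5: remove whitespace,
// truncate, and prepend the "Y#" marker the connector uses. As in the
// original, names longer than 13 characters are cut to 12.
class PbsJobNameSketch {

    static String pbsJobName(String applicationName) {
        String name = applicationName.replaceAll("\\s", "");
        if (name.length() > 13) {
            name = name.substring(0, 12);
        }
        return "Y#" + name;
    }

    public static void main(String[] args) {
        System.out.println(pbsJobName("my long application name")); // Y#mylongapplic
    }
}
```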

Example 6: forceKillApplication

import org.apache.hadoop.yarn.api.records.ApplicationId; // import the package/class the method depends on
@Override
public boolean forceKillApplication(ApplicationId applicationId)
    throws IOException {
  int jobid = applicationId.getId();
  String qdelCmd = conf.get(
      HPCConfiguration.YARN_APPLICATION_HPC_COMMAND_PBS_QDEL,
      HPCConfiguration.DEFAULT_YARN_APPLICATION_HPC_COMMAND_PBS_QDEL);
  Shell.execCommand(qdelCmd, String.valueOf(jobid));
  return true;
}
 
Developer: intel-hpdd, Project: scheduling-connector-for-hadoop, Lines: 11, Source: PBSApplicationClient.java

Example 7: fromYarn

import org.apache.hadoop.yarn.api.records.ApplicationId; // import the package/class the method depends on
public static org.apache.hadoop.mapreduce.JobID fromYarn(ApplicationId appID) {
  String identifier = fromClusterTimeStamp(appID.getClusterTimestamp());
  return new org.apache.hadoop.mapred.JobID(identifier, appID.getId());
}
 
Developer: naver, Project: hadoop, Lines: 5, Source: TypeConverter.java


Note: The org.apache.hadoop.yarn.api.records.ApplicationId.getId examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from community open-source projects, and copyright remains with their original authors; consult each project's License before distributing or reusing the code. Do not reproduce without permission.