

Java JobSubmissionFiles.getJobConfPath Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.mapreduce.JobSubmissionFiles.getJobConfPath. If you are wondering what JobSubmissionFiles.getJobConfPath does or how to use it, the curated examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.mapreduce.JobSubmissionFiles.


Three code examples of the JobSubmissionFiles.getJobConfPath method are shown below, ordered by popularity by default.
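Before the examples, it helps to know what the method actually does: in Hadoop, `getJobConfPath(jobSubmitDir)` simply returns the path of the `job.xml` file inside the job-submission (staging) directory. The following is a minimal pure-JDK sketch of that convention, not the Hadoop implementation itself; the class and method names here are hypothetical stand-ins.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class JobConfPathSketch {
    // Illustrative mirror of JobSubmissionFiles.getJobConfPath:
    // the job configuration is always written to "job.xml"
    // directly under the job submit directory.
    static Path getJobConfPath(Path jobSubmitDir) {
        return jobSubmitDir.resolve("job.xml");
    }

    public static void main(String[] args) {
        Path submitDir = Paths.get("/tmp/staging/job_0001");
        // Prints /tmp/staging/job_0001/job.xml
        System.out.println(getJobConfPath(submitDir));
    }
}
```

All three examples below follow this pattern: obtain the `job.xml` location from the submit directory, then either write the configuration to it or read it back.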

Example 1: uploadJobFiles

import org.apache.hadoop.mapreduce.JobSubmissionFiles; // import the class this method depends on
private void uploadJobFiles(JobID id, InputSplit[] splits,
                            Path jobSubmitDir, UserGroupInformation ugi,
                            final JobConf conf) throws Exception {
  final Path confLocation = JobSubmissionFiles.getJobConfPath(jobSubmitDir);
  FileSystem fs = ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
    public FileSystem run() throws IOException {
      return confLocation.getFileSystem(conf);
    }
  });
  JobSplitWriter.createSplitFiles(jobSubmitDir, conf, fs, splits);
  FsPermission perm = new FsPermission((short)0700);
 
  // localize conf
  DataOutputStream confOut = FileSystem.create(fs, confLocation, perm);
  conf.writeXml(confOut);
  confOut.close();
}
 
Developer: Nextzero; Project: hadoop-2.6.0-cdh5.4.3; Lines: 19; Source file: TestMiniMRWithDFSWithDistinctUsers.java
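Note the `new FsPermission((short) 0700)` in the example above: the job configuration is written so that only the submitting user can read, write, or execute it. As an aside, the octal mode can be decoded with a small pure-JDK sketch (this helper is illustrative only, not part of Hadoop):

```java
public class PermSketch {
    // Decode a Unix-style octal mode (e.g. the 0700 passed to
    // FsPermission above) into the familiar rwx string.
    static String toRwx(int mode) {
        char[] bits = {'r', 'w', 'x'};
        StringBuilder sb = new StringBuilder();
        for (int shift = 8; shift >= 0; shift--) {
            sb.append(((mode >> shift) & 1) == 1 ? bits[(8 - shift) % 3] : '-');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toRwx(0700)); // rwx------
        System.out.println(toRwx(0644)); // rw-r--r--
    }
}
```

So `0700` means: owner has full access, group and others have none, which is appropriate for a staging file that may carry job credentials.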

Example 2: startPSServer

import org.apache.hadoop.mapreduce.JobSubmissionFiles; // import the class this method depends on
@Override
public void startPSServer() throws AngelException {
  try {
    setUser();
    setLocalAddr();
    Path stagingDir = AngelApps.getStagingDir(conf, userName);

    // 2.get job id
    yarnClient = YarnClient.createYarnClient();
    YarnConfiguration yarnConf = new YarnConfiguration(conf);
    yarnClient.init(yarnConf);
    yarnClient.start();
    YarnClientApplication newApp;

    newApp = yarnClient.createApplication();
    GetNewApplicationResponse newAppResponse = newApp.getNewApplicationResponse();
    appId = newAppResponse.getApplicationId();
    JobID jobId = TypeConverter.fromYarn(appId);

    Path submitJobDir = new Path(stagingDir, appId.toString());
    jtFs = submitJobDir.getFileSystem(conf);

    conf.set("hadoop.http.filter.initializers",
        "org.apache.hadoop.yarn.server.webproxy.amfilter.AmFilterInitializer");
    conf.set(AngelConf.ANGEL_JOB_DIR, submitJobDir.toString());
    conf.set(AngelConf.ANGEL_JOB_ID, jobId.toString());

    setInputDirectory();
    setOutputDirectory();

    // Credentials credentials = new Credentials();
    credentials.addAll(UserGroupInformation.getCurrentUser().getCredentials());
    TokenCache.obtainTokensForNamenodes(credentials, new Path[] {submitJobDir}, conf);
    checkParameters(conf);
    handleDeprecatedParameters(conf);

    // 4.copy resource files to hdfs
    copyAndConfigureFiles(conf, submitJobDir, (short) 10);

    // 5.write configuration to a xml file
    Path submitJobFile = JobSubmissionFiles.getJobConfPath(submitJobDir);
    TokenCache.cleanUpTokenReferral(conf);
    writeConf(conf, submitJobFile);

    // 6.create am container context
    ApplicationSubmissionContext appContext =
        createApplicationSubmissionContext(conf, submitJobDir, credentials, appId);

    conf.set(AngelConf.ANGEL_JOB_LIBJARS, "");

    // 7.Submit to ResourceManager
    appId = yarnClient.submitApplication(appContext);

    // 8.get app master client
    updateMaster(10 * 60);
    
    waitForAllPS(conf.getInt(AngelConf.ANGEL_PS_NUMBER, AngelConf.DEFAULT_ANGEL_PS_NUMBER));
    LOG.info("start pss success");
  } catch (Exception x) {
    LOG.error("submit application to yarn failed.", x);
    throw new AngelException(x);
  }
}
 
Developer: Tencent; Project: angel; Lines: 64; Source file: AngelYarnClient.java

Example 3: SubmittedJob

import org.apache.hadoop.mapreduce.JobSubmissionFiles; // import the class this method depends on
SubmittedJob(JobID jobID, String jobSubmitDirectory, Credentials credentials, Configuration configuration) throws IOException, InterruptedException {
    this.jobID = jobID;
    this.configuration = configuration;
    this.jobSubmitDirectoryPath = new Path(jobSubmitDirectory);
    this.fileSystem = FileSystem.get(configuration);

    JobSplit.TaskSplitMetaInfo[] splitInfo = SplitMetaInfoReader.readSplitMetaInfo(jobID, fileSystem, configuration, jobSubmitDirectoryPath);

    Path jobSplitFile = JobSubmissionFiles.getJobSplitFile(jobSubmitDirectoryPath);
    FSDataInputStream stream = fileSystem.open(jobSplitFile);

    for (JobSplit.TaskSplitMetaInfo info : splitInfo) {
        Object split = getSplitDetails(stream, info.getStartOffset(), configuration);
        inputSplits.add(split);
        splitLocations.put(split, info.getLocations());
        LOG.info("Adding split for execution. Split = " + split + " Locations: " + Arrays.toString(splitLocations.get(split)));
    }

    stream.close();

    jobConfPath = JobSubmissionFiles.getJobConfPath(jobSubmitDirectoryPath);

    if (!fileSystem.exists(jobConfPath)) {
        throw new IOException("Cannot find job.xml. Path = " + jobConfPath);
    }

    //We cannot just use JobConf(Path) constructor,
    //because it does not work for HDFS locations.
    //The comment in Configuration#loadResource() states,
    //for the case when the Path to the resource is provided:
    //"Can't use FileSystem API or we get an infinite loop
    //since FileSystem uses Configuration API.  Use java.io.File instead."
    //
    //Workaround: construct empty Configuration, provide it with
    //input stream and give it to JobConf constructor.
    FSDataInputStream jobConfStream = fileSystem.open(jobConfPath);
    Configuration jobXML = new Configuration(false);
    jobXML.addResource(jobConfStream);

    //The configuration does not actually get read until we access a
    //property. Calling #size() forces Configuration to read the
    //input stream.
    jobXML.size();

    //We are done with input stream, can close it now.
    jobConfStream.close();

    jobConf = new JobConf(jobXML);

    newApi = jobConf.getUseNewMapper();


    jobStatus = new JobStatus(jobID, 0f, 0f, 0f, 0f,
            JobStatus.State.RUNNING,
            JobPriority.NORMAL,
            UserGroupInformation.getCurrentUser().getUserName(),
            jobID.toString(),
            jobConfPath.toString(), "");
}
 
Developer: scaleoutsoftware; Project: hServer; Lines: 60; Source file: SubmittedJob.java
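The comment block in Example 3 explains why the code loads `job.xml` through an input stream rather than `JobConf(Path)`: `Configuration#loadResource()` cannot use the FileSystem API for an HDFS path without recursing into Configuration itself. To make the underlying idea concrete, here is a pure-JDK sketch that parses a Hadoop-style `job.xml` (`<configuration><property><name>…</name><value>…</value></property>…`) from a stream. It is a simplified illustration with a hypothetical class name, not Hadoop's actual parser.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class JobXmlSketch {
    // Parse name/value pairs from a Hadoop-style configuration XML,
    // reading from an InputStream (the same workaround Example 3 uses
    // instead of handing Configuration a Path).
    static Map<String, String> readProps(InputStream in) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(in);
        Map<String, String> props = new LinkedHashMap<>();
        NodeList list = doc.getElementsByTagName("property");
        for (int i = 0; i < list.getLength(); i++) {
            Element p = (Element) list.item(i);
            String name = p.getElementsByTagName("name").item(0).getTextContent();
            String value = p.getElementsByTagName("value").item(0).getTextContent();
            props.put(name, value);
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<configuration><property>"
                + "<name>mapreduce.job.name</name><value>demo</value>"
                + "</property></configuration>";
        Map<String, String> props =
                readProps(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        System.out.println(props.get("mapreduce.job.name")); // demo
    }
}
```

In real code you would of course keep using `Configuration#addResource(InputStream)` as Example 3 does; the sketch only shows what that call conceptually performs on the `job.xml` written by `getJobConfPath`.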


Note: The org.apache.hadoop.mapreduce.JobSubmissionFiles.getJobConfPath method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from community-contributed open-source projects; copyright remains with the original authors, and distribution and use are subject to each project's license. Do not reproduce without permission.