This article collects typical usage examples of the Java field org.apache.hadoop.mapreduce.JobPriority.NORMAL. If you have been wondering what JobPriority.NORMAL is for and how it is used in practice, the curated code examples below may help. You can also explore other usages of the enclosing class, org.apache.hadoop.mapreduce.JobPriority.
Four code examples of JobPriority.NORMAL are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
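Before the examples, some context: JobPriority is an enum whose values include VERY_HIGH, HIGH, NORMAL, LOW, and VERY_LOW, with NORMAL being the usual default. Client code typically does not hard-code the priority into a JobStatus as the stubs below do; instead it is set on the job configuration. A minimal sketch, assuming the stock Hadoop 2.x configuration key mapreduce.job.priority (this key comes from Hadoop's defaults, not from the examples below):

```xml
<!-- mapred-site.xml or per-job configuration: run this job at NORMAL priority -->
<property>
  <name>mapreduce.job.priority</name>
  <value>NORMAL</value>
</property>
```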
Example 1: submitJob

@Override
public JobStatus submitJob(
    JobID jobId, String jobSubmitDir, Credentials ts) throws IOException {
  JobStatus status = new JobStatus(jobId, 0.0f, 0.0f, 0.0f, 0.0f,
      JobStatus.State.RUNNING, JobPriority.NORMAL, "", "", "", "");
  return status;
}
Example 2: submitJob

@Override
public JobStatus submitJob(JobID jobId) throws IOException {
  JobStatus status = new JobStatus(jobId, 0.0f, 0.0f, 0.0f, 0.0f,
      JobStatus.State.RUNNING, JobPriority.NORMAL, "", "", "", "");
  return status;
}
Example 3: SubmittedJob

SubmittedJob(JobID jobID, String jobSubmitDirectory, Credentials credentials,
    Configuration configuration) throws IOException, InterruptedException {
  this.jobID = jobID;
  this.configuration = configuration;
  this.jobSubmitDirectoryPath = new Path(jobSubmitDirectory);
  this.fileSystem = FileSystem.get(configuration);

  JobSplit.TaskSplitMetaInfo[] splitInfo = SplitMetaInfoReader.readSplitMetaInfo(
      jobID, fileSystem, configuration, jobSubmitDirectoryPath);
  Path jobSplitFile = JobSubmissionFiles.getJobSplitFile(jobSubmitDirectoryPath);
  FSDataInputStream stream = fileSystem.open(jobSplitFile);
  for (JobSplit.TaskSplitMetaInfo info : splitInfo) {
    Object split = getSplitDetails(stream, info.getStartOffset(), configuration);
    inputSplits.add(split);
    splitLocations.put(split, info.getLocations());
    LOG.info("Adding split for execution. Split = " + split
        + " Locations: " + Arrays.toString(splitLocations.get(split)));
  }
  stream.close();

  jobConfPath = JobSubmissionFiles.getJobConfPath(jobSubmitDirectoryPath);
  if (!fileSystem.exists(jobConfPath)) {
    throw new IOException("Cannot find job.xml. Path = " + jobConfPath);
  }
  // We cannot simply use the JobConf(Path) constructor, because it does not
  // work for HDFS locations. The comment in Configuration#loadResource()
  // states, for the case when a Path to the resource is provided:
  // "Can't use FileSystem API or we get an infinite loop
  // since FileSystem uses Configuration API. Use java.io.File instead."
  //
  // Workaround: construct an empty Configuration, feed it the input stream,
  // and hand it to the JobConf constructor.
  FSDataInputStream jobConfStream = fileSystem.open(jobConfPath);
  Configuration jobXML = new Configuration(false);
  jobXML.addResource(jobConfStream);
  // The configuration is not actually read until some property is accessed;
  // calling #size() forces the Configuration to consume the input stream.
  jobXML.size();
  // We are done with the input stream and can close it now.
  jobConfStream.close();

  jobConf = new JobConf(jobXML);
  newApi = jobConf.getUseNewMapper();
  jobStatus = new JobStatus(jobID, 0f, 0f, 0f, 0f,
      JobStatus.State.RUNNING,
      JobPriority.NORMAL,
      UserGroupInformation.getCurrentUser().getUserName(),
      jobID.toString(),
      jobConfPath.toString(), "");
}
Example 4: getPriority

@Override
public synchronized JobPriority getPriority() {
  // TEX-147: return the real priority
  return JobPriority.NORMAL;
}
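The stub in Example 4 hard-codes NORMAL (the TEX-147 comment notes that the real priority should eventually be returned). On an actual cluster, the priority of an already-submitted job can also be changed with the stock Hadoop CLI; a usage sketch, where the job ID is a made-up placeholder:

```
# Change the priority of a running job (job ID below is a placeholder).
# Accepted values: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW
mapred job -set-priority job_1400000000000_0001 NORMAL
```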