

Java JobID Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.mapred.JobID. If you are wondering what the JobID class is for and how it is used in practice, the curated class examples below may help.


The JobID class belongs to the org.apache.hadoop.mapred package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
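Before diving into the examples: a JobID pairs a JobTracker identifier with a job sequence number, and its string form is `job_<jtIdentifier>_<id>` with the id zero-padded to four digits (Example 1 below asserts exactly `job_test_0000`). The following standalone sketch, written without the Hadoop dependency and using a hypothetical class name `JobIdFormatDemo`, illustrates that round trip:

```java
/**
 * Minimal sketch (no Hadoop dependency) of how org.apache.hadoop.mapred.JobID
 * renders and parses its string form: "job_<jtIdentifier>_<4-digit id>".
 * The class and method names here are illustrative, not part of Hadoop.
 */
public class JobIdFormatDemo {

    /** Mirrors JobID#toString(): zero-pads the sequence number to four digits. */
    public static String format(String jtIdentifier, int id) {
        return String.format("job_%s_%04d", jtIdentifier, id);
    }

    /** Mirrors the spirit of JobID.forName(): splits "job_<jt>_<id>" back into its parts. */
    public static String[] parse(String jobIdString) {
        String[] parts = jobIdString.split("_");
        if (parts.length != 3 || !"job".equals(parts[0])) {
            throw new IllegalArgumentException("Not a job id: " + jobIdString);
        }
        // Return the jtIdentifier and the numeric id (padding stripped).
        return new String[] { parts[1], String.valueOf(Integer.parseInt(parts[2])) };
    }

    public static void main(String[] args) {
        System.out.println(format("test", 0));   // the value Example 1 asserts
        String[] parts = parse("job_test_0001");
        System.out.println(parts[0] + " / " + parts[1]);
    }
}
```

Note that the real JobID.forName() performs stricter validation and throws IllegalArgumentException on malformed input; this sketch only mimics the shape of the API.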

Example 1: testJobState

import org.apache.hadoop.mapred.JobID; // import the required package/class
@SuppressWarnings("deprecation")
@Test(timeout = 30000)
public void testJobState() throws Exception {
  Job job_1 = getCopyJob();
  JobControl jc = new JobControl("Test");
  jc.addJob(job_1);
  Assert.assertEquals(Job.WAITING, job_1.getState());
  job_1.setState(Job.SUCCESS);
  Assert.assertEquals(Job.WAITING, job_1.getState());

  org.apache.hadoop.mapreduce.Job mockjob =
      mock(org.apache.hadoop.mapreduce.Job.class);
  org.apache.hadoop.mapreduce.JobID jid =
      new org.apache.hadoop.mapreduce.JobID("test", 0);
  when(mockjob.getJobID()).thenReturn(jid);
  job_1.setJob(mockjob);
  Assert.assertEquals("job_test_0000", job_1.getMapredJobID());
  job_1.setMapredJobID("job_test_0001");
  Assert.assertEquals("job_test_0000", job_1.getMapredJobID());
  jc.stop();
}
 
Developer: naver, Project: hadoop, Lines: 22, Source: TestJobControl.java

Example 2: runJob

import org.apache.hadoop.mapred.JobID; // import the required package/class
/**
 * Submit/run a map/reduce job.
 * 
 * @param job
 * @return true for success
 * @throws IOException
 */
public static boolean runJob(JobConf job) throws IOException {
  JobClient jc = new JobClient(job);
  boolean success = true;
  RunningJob running = null;
  try {
    running = jc.submitJob(job);
    JobID jobId = running.getID();
    System.out.println("Job " + jobId + " is submitted");
    while (!running.isComplete()) {
      System.out.println("Job " + jobId + " is still running.");
      try {
        Thread.sleep(60000);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt status
      }
      running = jc.getJob(jobId);
    }
    success = running.isSuccessful();
  } finally {
    if (!success && (running != null)) {
      running.killJob();
    }
    jc.close();
  }
  return success;
}
 
Developer: naver, Project: hadoop, Lines: 33, Source: DataJoinJob.java

Example 3: testRemoveTaskDistributedCacheManager

import org.apache.hadoop.mapred.JobID; // import the required package/class
public void testRemoveTaskDistributedCacheManager() throws Exception {
  if (!canRun()) {
    return;
  }
  TrackerDistributedCacheManager manager = new TrackerDistributedCacheManager(
      conf, taskController);
  JobID jobId = new JobID("jobtracker", 1);
  manager.newTaskDistributedCacheManager(jobId, conf);

  TaskDistributedCacheManager taskDistributedCacheManager = manager
      .getTaskDistributedCacheManager(jobId);
  assertNotNull(taskDistributedCacheManager);

  manager.removeTaskDistributedCacheManager(jobId);

  taskDistributedCacheManager = manager.getTaskDistributedCacheManager(jobId);
  assertNull(taskDistributedCacheManager);
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 19, Source: TestTrackerDistributedCacheManager.java

Example 4: signalAllTasks

import org.apache.hadoop.mapred.JobID; // import the required package/class
/**
 * Allows the job to continue through the MR control job.
 * @param id the id of the job
 * @throws IOException if task info could not be retrieved
 */
public void signalAllTasks(JobID id) throws IOException{
  TaskInfo[] taskInfos = getJTClient().getProxy().getTaskInfo(id);
  if(taskInfos !=null) {
    for (TaskInfo taskInfoRemaining : taskInfos) {
      if(taskInfoRemaining != null) {
        FinishTaskControlAction action = new FinishTaskControlAction(TaskID
            .downgrade(taskInfoRemaining.getTaskID()));
        Collection<TTClient> tts = getTTClients();
        for (TTClient cli : tts) {
          cli.getProxy().sendAction(action);
        }
      }
    }  
  }
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 21, Source: MRCluster.java

Example 5: submitAndValidateJob

import org.apache.hadoop.mapred.JobID; // import the required package/class
private Job submitAndValidateJob(JobConf conf, int numMaps, int numReds)
    throws IOException, InterruptedException, ClassNotFoundException {
  conf.setJobSetupCleanupNeeded(false);
  Job job = MapReduceTestUtil.createJob(conf, inDir, outDir,
              numMaps, numReds);

  job.setOutputFormatClass(MyOutputFormat.class);
  job.waitForCompletion(true);
  assertTrue(job.isSuccessful());
  JobID jobid = (org.apache.hadoop.mapred.JobID)job.getID();

  JobClient jc = new JobClient(conf);
  assertTrue(jc.getSetupTaskReports(jobid).length == 0);
  assertTrue(jc.getCleanupTaskReports(jobid).length == 0);
  assertTrue(jc.getMapTaskReports(jobid).length == numMaps);
  assertTrue(jc.getReduceTaskReports(jobid).length == numReds);
  FileSystem fs = FileSystem.get(conf);
  assertTrue("Job output directory doesn't exist!", fs.exists(outDir));
  FileStatus[] list = fs.listStatus(outDir, new OutputFilter());
  int numPartFiles = numReds == 0 ? numMaps : numReds;
  assertTrue("Number of part-files is " + list.length + " and not "
      + numPartFiles, list.length == numPartFiles);
  return job;
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 25, Source: TestNoJobSetupCleanup.java

Example 6: testNewTag

import org.apache.hadoop.mapred.JobID; // import the required package/class
public void testNewTag() throws IOException {
  LOG.info("Start testNewTag");
  JobID jobid = new JobID("TestJob", 1);
  long oldTimeStamp = releaseTimeStamp;
  long currentTimeStamp = System.currentTimeMillis();
  try {
    Thread.sleep(1000);
  } catch(InterruptedException e) {
  }
  String workingPath = getRelease(releaseTimeStamp, jobid);
  String workingTag = workingPath + "/RELEASE_COPY_DONE";
  FileStatus tagStatus = fs.getFileStatus(new Path(workingTag));
  long newTimeStamp = tagStatus.getModificationTime();
  LOG.info("Before getRelease, " + workingTag + " timestamp is " + oldTimeStamp);
  LOG.info("After getRelease, the timestamp is " + newTimeStamp);
  assertTrue(newTimeStamp > currentTimeStamp);
  assertTrue(newTimeStamp > oldTimeStamp);
  LOG.info("Done with the testing for testNewTag");
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 20, Source: TestReleaseManager.java

Example 7: submitJob

import org.apache.hadoop.mapred.JobID; // import the required package/class
@Override
public RunningJob submitJob(final JobConf job) throws IOException
{
    ensureInvocationGridPresent();
    ExecutorService async = Executors.newSingleThreadExecutor();
    final JobID jobID = JobID.forName("job_"+job.getJobName()+"_0");

    Future jobSubmitted = async.submit(new Callable<Object>() {
        @Override
        public Object call() throws Exception {
            try {
                JobScheduler.getInstance().runOldApiJob(job, jobID, sortEnabled, null, grid);
            } finally {
                if (unloadGrid) {
                    grid.unload();
                }
            }
            return null;
        }
    });
    async.shutdown(); //Will shut down after task is done

    return new HServerRunningJob(jobID, jobSubmitted);
}
 
Developer: scaleoutsoftware, Project: hServer, Lines: 25, Source: HServerJobClient.java

Example 8: cleanRecovery

import org.apache.hadoop.mapred.JobID; // import the required package/class
private void cleanRecovery() throws IOException {
  new LightWeightRequestHandler(HDFSOperationType.DELETE_ENCODING_JOBS) {
    @Override
    public Object performTask() throws IOException {
      EncodingJobsDataAccess da = (EncodingJobsDataAccess)
          HdfsStorageFactory.getDataAccess(EncodingJobsDataAccess.class);
      Iterator<MapReduceEncoder> it = completedJobs.iterator();
      while (it.hasNext()) {
        MapReduceEncoder job = it.next();
        JobID jobId = job.getJobID();
        da.delete(new EncodingJob(jobId.getJtIdentifier(), jobId.getId()));
        it.remove();
      }
      return null;
    }
  }.handle();
}
 
Developer: hopshadoop, Project: hops, Lines: 18, Source: MapReduceEncodingManager.java

Example 9: abortJob

import org.apache.hadoop.mapred.JobID; // import the required package/class
@Override
public void abortJob(JobContext context, JobStatus.State runState) throws java.io.IOException {
	super.abortJob(context, runState);

	final JobClient jobClient = new JobClient(new JobConf(context.getConfiguration()));
	final RunningJob job = jobClient.getJob((org.apache.hadoop.mapred.JobID) JobID.forName(context.getConfiguration().get("mapred.job.id")));
	String diag = "";
	for (final TaskCompletionEvent event : job.getTaskCompletionEvents(0))
		switch (event.getTaskStatus()) {
			case SUCCEEDED:
				break;
			default:
				diag += "Diagnostics for: " + event.getTaskTrackerHttp() + "\n";
				for (final String s : job.getTaskDiagnostics(event.getTaskAttemptId()))
					diag += s + "\n";
				diag += "\n";
				break;
		}
	updateStatus(diag, context.getConfiguration().getInt("boa.hadoop.jobid", 0));
}
 
Developer: boalang, Project: compiler, Lines: 21, Source: BoaOutputCommitter.java

Example 10: addJobStats

import org.apache.hadoop.mapred.JobID; // import the required package/class
@SuppressWarnings("deprecation")
JobStats addJobStats(Job job) {
    MapReduceOper mro = jobMroMap.get(job);
     
    if (mro == null) {
        LOG.warn("unable to get MR oper for job: " + job.toString());
        return null;
    }
    JobStats js = mroJobMap.get(mro);
    
    JobID jobId = job.getAssignedJobID();
    js.setId(jobId);
    js.setAlias(mro);
    js.setConf(job.getJobConf());
    return js;
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 17, Source: SimplePigStats.java

Example 11: processKill

import org.apache.hadoop.mapred.JobID; // import the required package/class
@Override
protected void processKill(String jobid) throws IOException
{
    if (mJobConf != null) {
        JobClient jc = new JobClient(mJobConf);
        JobID id = JobID.forName(jobid);
        RunningJob job = jc.getJob(id);
        if (job == null)
            System.out.println("Job with id " + jobid + " is not active");
        else
        {
            job.killJob();
            log.info("Kill " + id + " submitted.");
        }
    }
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 17, Source: GruntParser.java

Example 12: testMedianMapReduceTime

import org.apache.hadoop.mapred.JobID; // import the required package/class
@Test
public void testMedianMapReduceTime() throws Exception {

	JobConf jobConf = new JobConf();
	JobClient jobClient = Mockito.mock(JobClient.class);
	
	// mock methods to return the predefined map and reduce task reports
	Mockito.when(jobClient.getMapTaskReports(jobID)).thenReturn(mapTaskReports);
	Mockito.when(jobClient.getReduceTaskReports(jobID)).thenReturn(reduceTaskReports);

	PigStats.JobGraph jobGraph = new PigStats.JobGraph();
	JobStats jobStats = createJobStats("JobStatsTest", jobGraph);
	getJobStatsMethod("setId", JobID.class).invoke(jobStats, jobID);
	getJobStatsMethod("setSuccessful", boolean.class).invoke(jobStats, true);

	getJobStatsMethod("addMapReduceStatistics", JobClient.class, Configuration.class)
	    .invoke(jobStats, jobClient, jobConf);
	String msg = (String)getJobStatsMethod("getDisplayString", boolean.class)
	    .invoke(jobStats, false);
	
	System.out.println(JobStats.SUCCESS_HEADER);
	System.out.println(msg);
	
	assertTrue(msg.startsWith(ASSERT_STRING));
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 26, Source: TestJobStats.java

Example 13: killJob

import org.apache.hadoop.mapred.JobID; // import the required package/class
@Override
public void killJob(String jobID, Configuration conf) throws BackendException {
    try {
        if (conf != null) {
            JobConf jobConf = new JobConf(conf);
            JobClient jc = new JobClient(jobConf);
            JobID id = JobID.forName(jobID);
            RunningJob job = jc.getJob(id);
            if (job == null)
                System.out.println("Job with id " + jobID + " is not active");
            else
            {
                job.killJob();
                log.info("Kill " + id + " submitted.");
            }
        }
    } catch (IOException e) {
        throw new BackendException(e);
    }
}
 
Developer: sigmoidanalytics, Project: spork, Lines: 21, Source: MapReduceLauncher.java

Example 14: testMedianMapReduceTime

import org.apache.hadoop.mapred.JobID; // import the required package/class
@Test
public void testMedianMapReduceTime() throws Exception {
    JobClient jobClient = Mockito.mock(JobClient.class);

    // mock methods to return the predefined map and reduce task reports
    Mockito.when(jobClient.getMapTaskReports(jobID)).thenReturn(mapTaskReports);
    Mockito.when(jobClient.getReduceTaskReports(jobID)).thenReturn(reduceTaskReports);

    PigStats.JobGraph jobGraph = new PigStats.JobGraph();
    MRJobStats jobStats = createJobStats("JobStatsTest", jobGraph);
    getJobStatsMethod("setId", JobID.class).invoke(jobStats, jobID);
    jobStats.setSuccessful(true);

    getJobStatsMethod("addMapReduceStatistics", Iterator.class, Iterator.class)
        .invoke(jobStats, Arrays.asList(mapTaskReports).iterator(), Arrays.asList(reduceTaskReports).iterator());
    String msg = (String)getJobStatsMethod("getDisplayString")
        .invoke(jobStats);

    System.out.println(JobStats.SUCCESS_HEADER);
    System.out.println(msg);

    assertTrue(msg.startsWith(ASSERT_STRING));
}
 
Developer: sigmoidanalytics, Project: spork, Lines: 24, Source: TestMRJobStats.java

Example 15: testBadUpdate

import org.apache.hadoop.mapred.JobID; // import the required package/class
@SuppressWarnings("deprecation")
@Test
public void testBadUpdate() throws Exception {
  JobStatus mockStatus = mock(JobStatus.class);
  JobProfile mockProf = mock(JobProfile.class);
  JobSubmissionProtocol mockClient = mock(JobSubmissionProtocol.class);
  
  JobID id = new JobID("test",0);
  
  RunningJob rj = new JobClient.NetworkedJob(mockStatus, mockProf, mockClient);
  
  when(mockProf.getJobID()).thenReturn(id);
  when(mockClient.getJobStatus(id)).thenReturn(null);
  
  boolean caught = false;
  try {
    rj.isSuccessful();
  } catch(IOException e) {
    caught = true;
  }
  assertTrue("Expected updateStatus to throw an IOException but it did not", caught);
  
  //verification
  verify(mockProf).getJobID();
  verify(mockClient).getJobStatus(id);
}
 
Developer: Seagate, Project: hadoop-on-lustre, Lines: 27, Source: TestNetworkedJob.java


Note: The org.apache.hadoop.mapred.JobID class examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets come from open-source projects contributed by many developers; copyright belongs to the original authors. Consult the corresponding project's license before distributing or using the code; do not reproduce without permission.