

Java TaskCompletionEvent Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.mapreduce.TaskCompletionEvent. If you are wondering what TaskCompletionEvent is for, or how to use it in practice, the curated code examples below may help.


The TaskCompletionEvent class belongs to the org.apache.hadoop.mapreduce package. Twelve code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
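Several of the snippets below share one contract: `Job.getTaskCompletionEvents(fromEventId)` returns events in batches starting at an offset, and an empty batch means all events have been consumed, so callers loop until they receive nothing. The following is a minimal, self-contained sketch of that paging loop; `EventPager` and `fetchFrom` are illustrative names invented for this sketch and are not part of the Hadoop API (a stand-in fetch function replaces the real `Job` object so the sketch runs without a Hadoop cluster).

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.IntFunction;

/**
 * Stand-in for the paging pattern used when draining task completion
 * events: keep requesting batches starting at the current offset until
 * an empty (or null) batch signals that everything has been fetched.
 */
public class EventPager {
    public static <T> List<T> fetchAll(IntFunction<T[]> fetchFrom) {
        List<T> all = new ArrayList<>();
        while (true) {
            // Ask for the next batch, starting where the last one ended.
            T[] batch = fetchFrom.apply(all.size());
            if (batch == null || batch.length == 0) {
                break; // no more events
            }
            all.addAll(Arrays.asList(batch));
        }
        return all;
    }
}
```

With the real API, `fetchFrom` would be `from -> completedJob.getTaskCompletionEvents(from)`; the loop terminates because a completed job has a finite event list.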

Example 1: getAllTaskCompletionEvent

import org.apache.hadoop.mapreduce.TaskCompletionEvent; // import the required package/class
private static List<TaskCompletionEvent> getAllTaskCompletionEvent(Job completedJob) {
  List<TaskCompletionEvent> completionEvents = new LinkedList<>();

  // Page through the completion events until an empty batch signals the end.
  while (true) {
    try {
      TaskCompletionEvent[] bunchOfEvents =
          completedJob.getTaskCompletionEvents(completionEvents.size());
      if (bunchOfEvents == null || bunchOfEvents.length == 0) {
        break;
      }
      completionEvents.addAll(Arrays.asList(bunchOfEvents));
    } catch (IOException e) {
      // Stop paging on I/O errors; any events fetched so far are returned.
      break;
    }
  }

  return completionEvents;
}
 
Developer: apache, Project: incubator-gobblin, Lines of code: 19, Source: CompactionAvroJobConfigurator.java

Example 2: removeFailedPaths

import org.apache.hadoop.mapreduce.TaskCompletionEvent; // import the required package/class
/**
 * Remove all bad paths caused by speculative execution.
 * The problem arises when a speculative task attempt is initialized but then killed mid-processing:
 * a partial file is left at {tmp_output}/_temporary/1/_temporary/attempt_xxx_xxx/part-m-xxxx.avro
 * without ever being committed to its final destination at {tmp_output}/part-m-xxxx.avro.
 *
 * @param job completed MR job
 * @param tmpPath temporary output path to scan
 * @param fs file system holding the temporary output
 * @return all successful paths
 */
public static List<Path> removeFailedPaths(Job job, Path tmpPath, FileSystem fs) throws IOException {
  List<TaskCompletionEvent> failedEvents = CompactionAvroJobConfigurator.getUnsuccessfulTaskCompletionEvent(job);

  List<Path> allFilePaths = DatasetHelper.getApplicableFilePaths(fs, tmpPath, Lists.newArrayList("avro"));
  List<Path> goodPaths = new ArrayList<>();
  for (Path filePath: allFilePaths) {
    if (CompactionAvroJobConfigurator.isFailedPath(filePath, failedEvents)) {
      fs.delete(filePath, false);
      log.error("{} is a bad path so it was deleted", filePath);
    } else {
      goodPaths.add(filePath);
    }
  }

  return goodPaths;
}
 
Developer: apache, Project: incubator-gobblin, Lines of code: 27, Source: CompactionAvroJobConfigurator.java

Example 3: listEvents

import org.apache.hadoop.mapreduce.TaskCompletionEvent; // import the required package/class
/**
 * List the events for the given job
 * @param job the job whose events to list
 * @param fromEventId event id to start listing from
 * @param numEvents number of events to list
 * @throws IOException
 */
private void listEvents(Job job, int fromEventId, int numEvents)
    throws IOException, InterruptedException {
  TaskCompletionEvent[] events = job.
    getTaskCompletionEvents(fromEventId, numEvents);
  System.out.println("Task completion events for " + job.getJobID());
  System.out.println("Number of events (from " + fromEventId + ") are: " 
    + events.length);
  for(TaskCompletionEvent event: events) {
    System.out.println(event.getStatus() + " " + 
      event.getTaskAttemptId() + " " + 
      getTaskLogURL(event.getTaskAttemptId(), event.getTaskTrackerHttp()));
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 19, Source: CLI.java

Example 4: getTaskCompletionEvents

import org.apache.hadoop.mapreduce.TaskCompletionEvent; // import the required package/class
@Override
public TaskCompletionEvent[] getTaskCompletionEvents(JobID jobID, int i, int i2) throws IOException, InterruptedException {
    if (submittedJobs.containsKey(org.apache.hadoop.mapred.JobID.downgrade(jobID))) {
        return new TaskCompletionEvent[0];
    } else {
        return backupRunner.getTaskCompletionEvents(jobID, i, i2);
    }
}
 
Developer: scaleoutsoftware, Project: hServer, Lines of code: 9, Source: HServerClientProtocol.java

Example 5: listEvents

import org.apache.hadoop.mapreduce.TaskCompletionEvent; // import the required package/class
/**
 * List the events for the given job
 * @param job the job to list
 * @param fromEventId event id for the job's events to list from
 * @param numEvents number of events we want to list
 * @throws IOException
 */
private void listEvents(Job job, int fromEventId, int numEvents)
    throws IOException, InterruptedException {
  TaskCompletionEvent[] events = job.
    getTaskCompletionEvents(fromEventId, numEvents);
  System.out.println("Task completion events for " + job.getJobID());
  System.out.println("Number of events (from " + fromEventId + ") are: " 
    + events.length);
  for(TaskCompletionEvent event: events) {
    System.out.println(event.getStatus() + " " + 
      event.getTaskAttemptId() + " " + 
      getTaskLogURL(event.getTaskAttemptId(), event.getTaskTrackerHttp()));
  }
}
 
Developer: hopshadoop, Project: hops, Lines of code: 21, Source: CLI.java

Example 6: getTaskCompletionEvents

import org.apache.hadoop.mapreduce.TaskCompletionEvent; // import the required package/class
public TaskCompletionEvent[] getTaskCompletionEvents(JobID jobId,
    int fromEventId, int maxEvents)
    throws IOException, InterruptedException {
  // FIXME seems like there is support in client to query task failure
  // related information
  // However, api does not make sense for DAG
  return new TaskCompletionEvent[0];
}
 
Developer: apache, Project: incubator-tez, Lines of code: 9, Source: ClientServiceDelegate.java

Example 7: getTaskCompletionEvents

import org.apache.hadoop.mapreduce.TaskCompletionEvent; // import the required package/class
@Override
public TaskCompletionEvent[] getTaskCompletionEvents(JobID arg0, int arg1, int arg2)
    throws IOException, InterruptedException {
  return clientCache.getClient(arg0).getTaskCompletionEvents(arg0, arg1, arg2);
}
 
Developer: liuhaozzu, Project: big_data, Lines of code: 6, Source: YARNRunner.java

Example 8: getTaskCompletionEvents

import org.apache.hadoop.mapreduce.TaskCompletionEvent; // import the required package/class
@Override
public TaskCompletionEvent[] getTaskCompletionEvents(JobID arg0, int arg1,
    int arg2) throws IOException, InterruptedException {
  return clientCache.getClient(arg0).getTaskCompletionEvents(arg0, arg1, arg2);
}
 
Developer: naver, Project: hadoop, Lines of code: 6, Source: YARNRunner.java

Example 9: getMapCompletionEvents

import org.apache.hadoop.mapreduce.TaskCompletionEvent; // import the required package/class
public MapTaskCompletionEventsUpdate getMapCompletionEvents(JobID jobId, 
    int fromEventId, int maxLocs, TaskAttemptID id) throws IOException {
  return new MapTaskCompletionEventsUpdate(
    org.apache.hadoop.mapred.TaskCompletionEvent.EMPTY_ARRAY, false);
}
 
Developer: naver, Project: hadoop, Lines of code: 6, Source: LocalJobRunner.java

Example 10: getTaskCompletionEvents

import org.apache.hadoop.mapreduce.TaskCompletionEvent; // import the required package/class
public TaskCompletionEvent[] getTaskCompletionEvents(
    org.apache.hadoop.mapreduce.JobID jobid, int fromEventId, int maxEvents)
    throws IOException {
  return TaskCompletionEvent.EMPTY_ARRAY;
}
 
Developer: naver, Project: hadoop, Lines of code: 6, Source: LocalJobRunner.java

Example 11: getMapCompletionEvents

import org.apache.hadoop.mapreduce.TaskCompletionEvent; // import the required package/class
@Override
public MapTaskCompletionEventsUpdate getMapCompletionEvents(JobID jobId, 
    int fromEventId, int maxLocs, TaskAttemptID id) throws IOException {
  return new MapTaskCompletionEventsUpdate(
    org.apache.hadoop.mapred.TaskCompletionEvent.EMPTY_ARRAY, false);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 7, Source: LocalJobRunner.java

Example 12: getTaskCompletionEvents

import org.apache.hadoop.mapreduce.TaskCompletionEvent; // import the required package/class
/** {@inheritDoc} */
@Override public TaskCompletionEvent[] getTaskCompletionEvents(JobID jobid, int fromEventId, int maxEvents)
    throws IOException, InterruptedException {
    return new TaskCompletionEvent[0];
}
 
Developer: apache, Project: ignite, Lines of code: 6, Source: HadoopClientProtocol.java


Note: The org.apache.hadoop.mapreduce.TaskCompletionEvent class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright for the source code belongs to the original authors. Refer to each project's license before distributing or using the code; do not reproduce without permission.