

Java TaskAttemptStateInternal.SUCCEEDED Code Examples

This article collects typical code examples of the Java enum constant org.apache.hadoop.mapreduce.v2.app.job.TaskAttemptStateInternal.SUCCEEDED. If you are wondering what TaskAttemptStateInternal.SUCCEEDED does and how to use it, the selected examples below should help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.mapreduce.v2.app.job.TaskAttemptStateInternal.


Three code examples of TaskAttemptStateInternal.SUCCEEDED are shown below, ordered by popularity.
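Before the full examples, here is a minimal, self-contained sketch of the typical pattern (not taken from either project below): an attempt's internal state is compared against TaskAttemptStateInternal.SUCCEEDED together with the other terminal states. The class name TerminalStateCheck and the helper isTerminal are hypothetical names introduced here purely for illustration.

import org.apache.hadoop.mapreduce.v2.app.job.TaskAttemptStateInternal;

public class TerminalStateCheck {

  // Example 1 below treats SUCCEEDED, FAILED and KILLED as the
  // finished states of a task attempt; this helper mirrors that check.
  static boolean isTerminal(TaskAttemptStateInternal state) {
    return state == TaskAttemptStateInternal.SUCCEEDED
        || state == TaskAttemptStateInternal.FAILED
        || state == TaskAttemptStateInternal.KILLED;
  }

  public static void main(String[] args) {
    // prints "true"
    System.out.println(isTerminal(TaskAttemptStateInternal.SUCCEEDED));
  }
}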

Example 1: isFinished

@Override
public boolean isFinished() {
  readLock.lock();
  try {
    // TODO: Use stateMachine level method?
    return (getInternalState() == TaskAttemptStateInternal.SUCCEEDED || 
            getInternalState() == TaskAttemptStateInternal.FAILED ||
            getInternalState() == TaskAttemptStateInternal.KILLED);
  } finally {
    readLock.unlock();
  }
}
 
Developer: naver, Project: hadoop, Lines: 12, Source: TaskAttemptImpl.java

Example 2: transition

@SuppressWarnings("unchecked")
@Override
public TaskAttemptStateInternal transition(TaskAttemptImpl taskAttempt, 
    TaskAttemptEvent event) {
  if(taskAttempt.getID().getTaskId().getTaskType() == TaskType.REDUCE) {
    // after a reduce task has succeeded, its outputs are safe in HDFS.
    // logically such a task should not be killed. we only come here when
    // there is a race condition in the event queue. E.g. some logic sends
    // a kill request to this attempt when the successful completion event
    // for this task is already in the event queue. so the kill event will
    // get executed immediately after the attempt is marked successful and 
    // result in this transition being exercised.
    // ignore this for reduce tasks
    LOG.info("Ignoring killed event for successful reduce task attempt" +
              taskAttempt.getID().toString());
    return TaskAttemptStateInternal.SUCCEEDED;
  }
  if(event instanceof TaskAttemptKillEvent) {
    TaskAttemptKillEvent msgEvent = (TaskAttemptKillEvent) event;
    //add to diagnostic
    taskAttempt.addDiagnosticInfo(msgEvent.getMessage());
  }

  // not setting a finish time since it was set on success
  assert (taskAttempt.getFinishTime() != 0);

  assert (taskAttempt.getLaunchTime() != 0);
  taskAttempt.eventHandler
      .handle(createJobCounterUpdateEventTAKilled(taskAttempt, true));
  TaskAttemptUnsuccessfulCompletionEvent tauce = createTaskAttemptUnsuccessfulCompletionEvent(
      taskAttempt, TaskAttemptStateInternal.KILLED);
  taskAttempt.eventHandler.handle(new JobHistoryEvent(taskAttempt.attemptId
      .getTaskId().getJobId(), tauce));
  taskAttempt.eventHandler.handle(new TaskTAttemptEvent(
      taskAttempt.attemptId, TaskEventType.T_ATTEMPT_KILLED));
  return TaskAttemptStateInternal.KILLED;
}
 
Developer: naver, Project: hadoop, Lines: 37, Source: TaskAttemptImpl.java

Example 3: transition

@SuppressWarnings("unchecked")
@Override
public TaskAttemptStateInternal transition(TaskAttemptImpl taskAttempt, 
    TaskAttemptEvent event) {
  if(taskAttempt.getID().getTaskId().getTaskType() == TaskType.REDUCE) {
    // after a reduce task has succeeded, its outputs are safe in HDFS.
    // logically such a task should not be killed. we only come here when
    // there is a race condition in the event queue. E.g. some logic sends
    // a kill request to this attempt when the successful completion event
    // for this task is already in the event queue. so the kill event will
    // get executed immediately after the attempt is marked successful and 
    // result in this transition being exercised.
    // ignore this for reduce tasks
    LOG.info("Ignoring killed event for successful reduce task attempt" +
              taskAttempt.getID().toString());
    return TaskAttemptStateInternal.SUCCEEDED;
  }
  if(event instanceof TaskAttemptKillEvent) {
    TaskAttemptKillEvent msgEvent = (TaskAttemptKillEvent) event;
    //add to diagnostic
    taskAttempt.addDiagnosticInfo(msgEvent.getMessage());
  }

  // not setting a finish time since it was set on success
  assert (taskAttempt.getFinishTime() != 0);

  assert (taskAttempt.getLaunchTime() != 0);
  taskAttempt.eventHandler
      .handle(createJobCounterUpdateEventTAKilled(taskAttempt, true));
  TaskAttemptUnsuccessfulCompletionEvent tauce = createTaskAttemptUnsuccessfulCompletionEvent(
      taskAttempt, TaskAttemptStateInternal.KILLED);
  taskAttempt.eventHandler.handle(new JobHistoryEvent(taskAttempt.attemptId
      .getTaskId().getJobId(), tauce));
  boolean rescheduleNextTaskAttempt = false;
  if (event instanceof TaskAttemptKillEvent) {
    rescheduleNextTaskAttempt =
        ((TaskAttemptKillEvent)event).getRescheduleAttempt();
  }
  taskAttempt.eventHandler.handle(new TaskTAttemptKilledEvent(
      taskAttempt.attemptId, rescheduleNextTaskAttempt));
  return TaskAttemptStateInternal.KILLED;
}
 
Developer: hopshadoop, Project: hops, Lines: 42, Source: TaskAttemptImpl.java


Note: The org.apache.hadoop.mapreduce.v2.app.job.TaskAttemptStateInternal.SUCCEEDED examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects and remain copyrighted by their original authors; use and redistribution are subject to each project's license. Do not reproduce without permission.