Java Context.getTaskAttemptID Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.mapreduce.Mapper.Context.getTaskAttemptID. If you are wondering what Context.getTaskAttemptID does, how to call it, or where to find working examples, the curated code samples below should help. You can also browse further usage examples of org.apache.hadoop.mapreduce.Mapper.Context, the class that declares this method.


Two code examples of the Context.getTaskAttemptID method are shown below, ordered by popularity.
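
Before the collected examples, here is a minimal sketch of the typical call site: getTaskAttemptID() returns a TaskAttemptID that uniquely identifies the running task attempt, and its components (job ID, task ID, attempt number) can be read individually. The IdProbeMapper class below is hypothetical and only illustrates the call; it is not one of the two collected examples.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.TaskAttemptID;

// Hypothetical mapper that only demonstrates reading the task attempt ID.
public class IdProbeMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

  @Override
  protected void setup(Context context) throws IOException, InterruptedException {
    TaskAttemptID attemptId = context.getTaskAttemptID();
    // The string form looks roughly like attempt_<jobid>_m_000000_0 and is unique per attempt.
    System.out.println("attempt id: " + attemptId);
    System.out.println("job id:     " + attemptId.getJobID());
    System.out.println("task id:    " + attemptId.getTaskID());
    System.out.println("attempt #:  " + attemptId.getId());
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // Tag every output record with the attempt that produced it.
    context.write(new Text(context.getTaskAttemptID() + "\t" + value), NullWritable.get());
  }
}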

Example 1: preApplication

import org.apache.hadoop.mapreduce.Mapper.Context; // import the package/class the method depends on
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.IOException;
@Override
public void preApplication() {
  Context context = getContext();
  FileSystem fs;

  try {
    fs = FileSystem.get(context.getConfiguration());

    String p = context.getConfiguration()
        .get(SimpleVertexWithWorkerContext.OUTPUTDIR);
    if (p == null) {
      throw new IllegalArgumentException(
          SimpleVertexWithWorkerContext.OUTPUTDIR +
          " undefined!");
    }

    Path path = new Path(p);
    if (!fs.exists(path)) {
      throw new IllegalArgumentException(path +
          " doesn't exist");
    }

    // Append the task attempt ID so every attempt writes to its own file
    Path outF = new Path(path, FILENAME +
        context.getTaskAttemptID());
    if (fs.exists(outF)) {
      throw new IllegalArgumentException(outF +
          " already exists");
    }

    out = fs.create(outF);
  } catch (IOException e) {
    throw new RuntimeException(
        "can't initialize WorkerContext", e);
  }
}
 
Developer: renato2099, Project: giraph-gora, Lines of code: 36, Source file: SimpleVertexWithWorkerContext.java
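
In Example 1, appending context.getTaskAttemptID() to the file name gives every task attempt its own output file, so concurrently running workers (and re-executed attempts of the same task) never overwrite each other's output. Giraph's WorkerContext also exposes a postApplication() hook where such a stream would normally be closed; the body below is a hedged sketch (only the hook name comes from the Giraph WorkerContext API; the implementation shown is an assumption, not the project's actual code).

// Hedged sketch of the matching cleanup hook; the body is an assumption.
@Override
public void postApplication() {
  try {
    if (out != null) {
      out.flush();
      out.close();   // release the per-attempt output file created in preApplication()
    }
  } catch (IOException e) {
    throw new RuntimeException("can't shut down WorkerContext", e);
  }
}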

Example 2: map

import org.apache.hadoop.mapreduce.Mapper.Context; // import the package/class the method depends on
import java.io.IOException;
import java.util.Random;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.TaskAttemptID;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
import org.apache.hadoop.yarn.client.api.TimelineClient;
import org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl;
public void map(IntWritable key, IntWritable val, Context context) throws IOException {
  TimelineClient tlc = new TimelineClientImpl();
  Configuration conf = context.getConfiguration();

  final int kbs = conf.getInt(KBS_SENT, KBS_SENT_DEFAULT);

  long totalTime = 0;
  final int testtimes = conf.getInt(TEST_TIMES, TEST_TIMES_DEFAULT);
  final Random rand = new Random();
  final TaskAttemptID taskAttemptId = context.getTaskAttemptID();
  final char[] payLoad = new char[kbs * 1024];

  for (int i = 0; i < testtimes; i++) {
    // Generate a fixed length random payload
    for (int xx = 0; xx < kbs * 1024; xx++) {
      int alphaNumIdx =
          rand.nextInt(ALPHA_NUMS.length);
      payLoad[xx] = ALPHA_NUMS[alphaNumIdx];
    }
    String entId = taskAttemptId + "_" + Integer.toString(i);
    final TimelineEntity entity = new TimelineEntity();
    entity.setEntityId(entId);
    entity.setEntityType("FOO_ATTEMPT");
    entity.addOtherInfo("PERF_TEST", payLoad);
    // add an event
    TimelineEvent event = new TimelineEvent();
    event.setTimestamp(System.currentTimeMillis());
    event.setEventType("foo_event");
    entity.addEvent(event);

    // use the current user for this purpose
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    long startWrite = System.nanoTime();
    try {
      tlc.putEntities(entity);
    } catch (Exception e) {
      context.getCounter(PerfCounters.TIMELINE_SERVICE_WRITE_FAILURES).
          increment(1);
      LOG.error("writing to the timeline service failed", e);
    }
    long endWrite = System.nanoTime();
    totalTime += TimeUnit.NANOSECONDS.toMillis(endWrite - startWrite);
  }
  LOG.info("wrote " + testtimes + " entities (" + kbs*testtimes +
      " kB) in " + totalTime + " ms");
  context.getCounter(PerfCounters.TIMELINE_SERVICE_WRITE_TIME).
      increment(totalTime);
  context.getCounter(PerfCounters.TIMELINE_SERVICE_WRITE_COUNTER).
      increment(testtimes);
  context.getCounter(PerfCounters.TIMELINE_SERVICE_WRITE_KBS).
      increment(kbs * testtimes);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 53, Source file: SimpleEntityWriterV1.java
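
The map method above references several members declared elsewhere in SimpleEntityWriterV1: the configuration keys, the ALPHA_NUMS alphabet, the PerfCounters enum, and the LOG field. The sketch below shows one plausible set of declarations so the snippet can be read on its own; only the identifier names are taken from the code above, while the concrete key strings, default values, alphabet, and logger choice are assumptions.

// Hedged sketch of the supporting declarations referenced by map();
// names come from the snippet above, concrete values are assumptions.
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

private static final Log LOG = LogFactory.getLog(SimpleEntityWriterV1.class);

// configuration keys and fallback values (key strings and defaults are assumed)
public static final String KBS_SENT = "kbs sent";
public static final int KBS_SENT_DEFAULT = 1;
public static final String TEST_TIMES = "testtimes";
public static final int TEST_TIMES_DEFAULT = 100;

// characters used to build the random payload
private static final char[] ALPHA_NUMS = new char[] {
    'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
    'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z',
    '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' };

// counters the mapper increments while writing to the timeline service
enum PerfCounters {
  TIMELINE_SERVICE_WRITE_TIME,
  TIMELINE_SERVICE_WRITE_COUNTER,
  TIMELINE_SERVICE_WRITE_KBS,
  TIMELINE_SERVICE_WRITE_FAILURES
}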


Note: The examples of the org.apache.hadoop.mapreduce.Mapper.Context.getTaskAttemptID method in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by their respective authors, and the source code copyright remains with the original authors; please follow the corresponding project's License when distributing or using the code, and do not reproduce this article without permission.