

Java LocalDirAllocator.getLocalPathToRead Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead. If you are unsure what LocalDirAllocator.getLocalPathToRead does, how to call it, or what it looks like in real code, the curated examples below should help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.fs.LocalDirAllocator.


The sections below present 6 code examples of LocalDirAllocator.getLocalPathToRead, ordered by popularity.
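Before the examples, here is a minimal sketch of the usual write-then-read round trip with LocalDirAllocator. It assumes a hypothetical configuration key "demo.local.dirs" and scratch directories under /tmp (everything named "demo" is invented for illustration); the real examples that follow use project-specific keys such as TezRuntimeFrameworkConfigs.LOCAL_DIRS.

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.LocalDirAllocator;
import org.apache.hadoop.fs.Path;

public class LocalDirAllocatorReadDemo {
  public static void main(String[] args) throws IOException {
    // Hypothetical config key listing the comma-separated local directories.
    Configuration conf = new Configuration();
    conf.set("demo.local.dirs", "/tmp/demo-local-1,/tmp/demo-local-2");

    LocalDirAllocator allocator = new LocalDirAllocator("demo.local.dirs");

    // Picks a local dir with enough free space and creates the parent directories.
    Path writePath = allocator.getLocalPathForWrite("job_0001/attempt_0/file.out", conf);
    new File(writePath.toString()).createNewFile(); // materialize the file

    // Later (possibly in another component), resolve the same relative path;
    // throws DiskErrorException (an IOException) if no local dir contains it.
    Path readPath = allocator.getLocalPathToRead("job_0001/attempt_0/file.out", conf);
    System.out.println("wrote " + writePath + ", found " + readPath);
  }
}

The key point is that getLocalPathToRead takes the same relative path that was used for the write and searches every configured local directory for it, which is exactly how the shuffle-related examples below locate map output files.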

Example 1: testProviderApi

import org.apache.hadoop.fs.LocalDirAllocator; // import the package/class this method depends on
/**
 * A testing method verifying availability and accessibility of API needed for
 * AuxiliaryService(s) which are "Shuffle-Providers" (ShuffleHandler and 3rd party plugins).
 */
@Test
public void testProviderApi() {
  LocalDirAllocator mockLocalDirAllocator = mock(LocalDirAllocator.class);
  JobConf mockJobConf = mock(JobConf.class);
  try {
    mockLocalDirAllocator.getLocalPathToRead("", mockJobConf);
  }
  catch (Exception e) {
    assertTrue("Threw exception:" + e, false);
  }
}
 
Developer ID: naver, Project: hadoop, Lines: 16, Source file: TestShufflePlugin.java

Example 2: formWorkDir

import org.apache.hadoop.fs.LocalDirAllocator; // import the package/class this method depends on
/** Creates the working directory pathname for a task attempt. */ 
static Path formWorkDir(LocalDirAllocator lDirAlloc, JobConf conf) 
    throws IOException {
  Path workDir =
      lDirAlloc.getLocalPathToRead(MRConstants.WORKDIR, conf);
  return workDir;
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 8, Source file: TaskRunner.java

Example 3: formWorkDir

import org.apache.hadoop.fs.LocalDirAllocator; // import the package/class this method depends on
/** Creates the working directory pathname for a task attempt. */ 
static File formWorkDir(LocalDirAllocator lDirAlloc, 
    TaskAttemptID task, boolean isCleanup, JobConf conf) 
    throws IOException {
  Path workDir =
      lDirAlloc.getLocalPathToRead(TaskTracker.getTaskWorkDir(
          conf.getUser(), task.getJobID().toString(), task.toString(),
          isCleanup), conf);

  return new File(workDir.toString());
}
 
Developer ID: rekhajoshm, Project: mapreduce-fork, Lines: 12, Source file: TaskRunner.java

Example 4: getShuffleInputFileName

import org.apache.hadoop.fs.LocalDirAllocator; // import the package/class this method depends on
@VisibleForTesting
//TODO: Refactor following to make use of methods from TezTaskOutputFiles to be consistent.
protected Path getShuffleInputFileName(String pathComponent, String suffix)
    throws IOException {
  LocalDirAllocator localDirAllocator = new LocalDirAllocator(TezRuntimeFrameworkConfigs.LOCAL_DIRS);
  suffix = suffix != null ? suffix : "";
  String outputPath = Constants.TEZ_RUNTIME_TASK_OUTPUT_DIR + Path.SEPARATOR +
      pathComponent + Path.SEPARATOR +
      Constants.TEZ_RUNTIME_TASK_OUTPUT_FILENAME_STRING + suffix;
  String pathFromLocalDir = getPathForLocalDir(outputPath);

  return localDirAllocator.getLocalPathToRead(pathFromLocalDir.toString(), conf);
}
 
Developer ID: apache, Project: tez, Lines: 14, Source file: FetcherOrderedGrouped.java
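In the two Tez examples (this one and the next), note the division of labour: the caller only builds the relative path of the expected output (task output directory, unique identifier, output filename) and hands it to getLocalPathToRead, which searches every directory listed under TezRuntimeFrameworkConfigs.LOCAL_DIRS for a matching file. The fetcher therefore does not need to know which physical disk the producing task chose when it obtained its write path.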

Example 5: getMapOutputFile

import org.apache.hadoop.fs.LocalDirAllocator; // import the package/class this method depends on
private Path getMapOutputFile(Configuration jobConf, OutputContext outputContext)
    throws IOException {
  LocalDirAllocator lDirAlloc = new LocalDirAllocator(TezRuntimeFrameworkConfigs.LOCAL_DIRS);
  Path attemptOutput = new Path(new Path(Constants.TEZ_RUNTIME_TASK_OUTPUT_DIR, outputContext.getUniqueIdentifier()),
      Constants.TEZ_RUNTIME_TASK_OUTPUT_FILENAME_STRING);
  Path mapOutputFile = lDirAlloc.getLocalPathToRead(attemptOutput.toString(), jobConf);
  return mapOutputFile;
}
 
Developer ID: apache, Project: tez, Lines: 9, Source file: TestMapProcessor.java

Example 6: sendMapOutput

import org.apache.hadoop.fs.LocalDirAllocator; // import the package/class this method depends on
protected ChannelFuture sendMapOutput(ChannelHandlerContext ctx, Channel ch,
    String jobId, String mapId, int reduce) throws IOException {
  LocalDirAllocator lDirAlloc = attributes.getLocalDirAllocator();
  FileSystem rfs = ((LocalFileSystem) attributes.getLocalFS()).getRaw();

  ShuffleServerMetrics shuffleMetrics = attributes.getShuffleServerMetrics();
  TaskTracker tracker = attributes.getTaskTracker();

  // Index file
  Path indexFileName = lDirAlloc.getLocalPathToRead(
      TaskTracker.getIntermediateOutputDir(jobId, mapId)
      + "/file.out.index", attributes.getJobConf());
  // Map-output file
  Path mapOutputFileName = lDirAlloc.getLocalPathToRead(
      TaskTracker.getIntermediateOutputDir(jobId, mapId)
      + "/file.out", attributes.getJobConf());

  /**
   * Read the index file to get the information about where
   * the map-output for the given reducer is available.
   */
  IndexRecord info = tracker.getIndexInformation(mapId, reduce, indexFileName);

  HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);

  //set the custom "from-map-task" http header to the map task from which
  //the map output data is being transferred
  response.setHeader(MRConstants.FROM_MAP_TASK, mapId);

  //set the custom "Raw-Map-Output-Length" http header to
  //the raw (decompressed) length
  response.setHeader(MRConstants.RAW_MAP_OUTPUT_LENGTH,
      Long.toString(info.rawLength));

  //set the custom "Map-Output-Length" http header to
  //the actual number of bytes being transferred
  response.setHeader(MRConstants.MAP_OUTPUT_LENGTH,
      Long.toString(info.partLength));

  //set the custom "for-reduce-task" http header to the reduce task number
  //for which this map output is being transferred
  response.setHeader(MRConstants.FOR_REDUCE_TASK, Integer.toString(reduce));

  ch.write(response);
  File spillfile = new File(mapOutputFileName.toString());
  RandomAccessFile spill;
  try {
    spill = new RandomAccessFile(spillfile, "r");
  } catch (FileNotFoundException e) {
    LOG.info(spillfile + " not found");
    return null;
  }
  final FileRegion partition = new DefaultFileRegion(
    spill.getChannel(), info.startOffset, info.partLength);
  ChannelFuture writeFuture = ch.write(partition);
  writeFuture.addListener(new ChanneFutureListenerMetrics(partition));
  shuffleMetrics.outputBytes(info.partLength); // optimistic
  LOG.info("Sending out " + info.partLength + " bytes for reduce: " +
           reduce + " from map: " + mapId + " given " +
           info.partLength + "/" + info.rawLength);
  return writeFuture;
}
 
Developer ID: iVCE, Project: RDFS, Lines: 63, Source file: ShuffleHandler.java
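A brief note on example 6: getLocalPathToRead is called twice, once for the index file (file.out.index) and once for the data file (file.out). The index record looked up for the requested reduce gives the start offset and length of that reducer's partition inside the map output file, and DefaultFileRegion then serves exactly that byte range over the channel, following the usual Netty FileRegion zero-copy transfer path rather than reading the whole spill file into memory.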


Note: The org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors; copyright remains with the original authors, and any use or distribution should follow the corresponding project's License. Please do not reproduce without permission.