

Java TaskInputOutputContext Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.mapreduce.TaskInputOutputContext. If you are wondering what TaskInputOutputContext is, what it is used for, or how to use it, the curated class code examples below may help.


The TaskInputOutputContext class belongs to the org.apache.hadoop.mapreduce package. Seven code examples of the class are presented below, sorted by popularity by default.
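Before the examples, a minimal sketch of where the class sits in the API: in the new MapReduce API, Mapper.Context and Reducer.Context both extend TaskInputOutputContext, so the write(), setStatus(), and getCounter() calls seen throughout the examples below are available inside any map() or reduce() method. The class and counter names here are hypothetical, not taken from the examples.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class StatusReportingMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // setStatus(), getCounter(), and write() are all declared on
    // TaskInputOutputContext or its parent TaskAttemptContext.
    context.setStatus("processing offset " + key.get());
    context.getCounter("demo", "records").increment(1);
    context.write(new Text(value.toString().trim()), ONE);
  }
}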

Example 1: addMapper

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the required package/class
/**
 * Add a mapper (the first mapper in the chain) that reads input from the
 * input context and writes to the queue.
 */
@SuppressWarnings("unchecked")
void addMapper(TaskInputOutputContext inputContext,
    ChainBlockingQueue<KeyValuePair<?, ?>> output, int index)
    throws IOException, InterruptedException {
  Configuration conf = getConf(index);
  Class<?> keyOutClass = conf.getClass(MAPPER_OUTPUT_KEY_CLASS, Object.class);
  Class<?> valueOutClass = conf.getClass(MAPPER_OUTPUT_VALUE_CLASS,
      Object.class);

  RecordReader rr = new ChainRecordReader(inputContext);
  RecordWriter rw = new ChainRecordWriter(keyOutClass, valueOutClass, output,
      conf);
  Mapper.Context mapperContext = createMapContext(rr, rw,
      (MapContext) inputContext, getConf(index));
  MapRunner runner = new MapRunner(mappers.get(index), mapperContext, rr, rw);
  threads.add(runner);
}
 
Developer: naver, Project: hadoop, Lines: 22, Source: Chain.java
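For orientation, here is a simplified stand-in for the queue plumbing that addMapper wires up (ChainBlockingQueue, ChainRecordReader, and ChainRecordWriter are package-private internals of Chain): each chained stage is a thread that drains one blocking queue and feeds the next. All names below are hypothetical.

import java.util.AbstractMap.SimpleEntry;
import java.util.Map.Entry;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ChainStageSketch {
  public static void main(String[] args) throws InterruptedException {
    BlockingQueue<Entry<String, Integer>> input = new ArrayBlockingQueue<>(16);
    BlockingQueue<Entry<String, Integer>> output = new ArrayBlockingQueue<>(16);

    Thread stage = new Thread(() -> {
      try {
        Entry<String, Integer> kv;
        // Drain the upstream queue, transform each record, push downstream.
        while ((kv = input.take()).getKey() != null) {
          output.put(new SimpleEntry<>(kv.getKey().toUpperCase(),
                                       kv.getValue() + 1));
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    stage.start();

    input.put(new SimpleEntry<>("a", 1));
    input.put(new SimpleEntry<>(null, 0)); // end-of-stream marker
    System.out.println(output.take());     // prints A=2
  }
}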

Example 2: compute

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the required package/class
/** Compute sigma */
static void compute(Summation sigma,
    TaskInputOutputContext<?, ?, NullWritable, TaskResult> context
    ) throws IOException, InterruptedException {
  String s;
  LOG.info(s = "sigma=" + sigma);
  context.setStatus(s);

  final long start = System.currentTimeMillis();
  sigma.compute();
  final long duration = System.currentTimeMillis() - start;
  final TaskResult result = new TaskResult(sigma, duration);

  LOG.info(s = "result=" + result);
  context.setStatus(s);
  context.write(NullWritable.get(), result);
}
 
Developer: naver, Project: hadoop, Lines: 18, Source: DistSum.java
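The compute() helper above follows a common shape: report status, time a computation, and write a single record per task. A self-contained reducer with the same shape, using a plain Text value in place of DistSum's TaskResult (all names hypothetical):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class TimedSumReducer
    extends Reducer<Text, LongWritable, NullWritable, Text> {

  @Override
  protected void reduce(Text key, Iterable<LongWritable> values, Context ctx)
      throws IOException, InterruptedException {
    ctx.setStatus("summing " + key);  // visible in the task UI, as above

    final long start = System.currentTimeMillis();
    long sum = 0;
    for (LongWritable v : values) {
      sum += v.get();
    }
    final long duration = System.currentTimeMillis() - start;

    // One record per task, exactly like compute() above.
    ctx.write(NullWritable.get(),
              new Text(key + "=" + sum + " (" + duration + " ms)"));
  }
}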

Example 3: ResourceUsageMatcherRunner

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the required package/class
ResourceUsageMatcherRunner(final TaskInputOutputContext context, 
                           ResourceUsageMetrics metrics) {
  Configuration conf = context.getConfiguration();
  
  // set the resource calculator plugin
  Class<? extends ResourceCalculatorPlugin> clazz =
    conf.getClass(TTConfig.TT_RESOURCE_CALCULATOR_PLUGIN,
                  null, ResourceCalculatorPlugin.class);
  ResourceCalculatorPlugin plugin = 
    ResourceCalculatorPlugin.getResourceCalculatorPlugin(clazz, conf);
  
  // set the other parameters
  this.sleepTime = conf.getLong(SLEEP_CONFIG, DEFAULT_SLEEP_TIME);
  progress = new BoostingProgress(context);
  
  // instantiate a resource-usage-matcher
  matcher = new ResourceUsageMatcher();
  matcher.configure(conf, plugin, metrics, progress);
}
 
Developer: naver, Project: hadoop, Lines: 20, Source: LoadJob.java
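The plugin-loading idiom in this constructor is worth isolating: Configuration.getClass() resolves a class name from the job configuration (with a null default here) while enforcing an interface, and the instance is then created reflectively. A minimal sketch with a hypothetical MyPlugin interface and config key:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

public class PluginLoaderSketch {
  public interface MyPlugin { void run(); }

  public static MyPlugin load(Configuration conf) {
    // getClass() returns null when the key is unset, as the matcher
    // tolerates for the resource-calculator plugin.
    Class<? extends MyPlugin> clazz =
        conf.getClass("demo.plugin.class", null, MyPlugin.class);
    return clazz == null ? null : ReflectionUtils.newInstance(clazz, conf);
  }
}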

Example 4: downloadGFF

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the required package/class
public static String downloadGFF(TaskInputOutputContext context) throws IOException, URISyntaxException, InterruptedException {
    Configuration conf = context.getConfiguration();
    String gff = HalvadeConf.getGff(conf);
    if(gff == null)
        return null;
    Boolean refIsLocal = HalvadeConf.getRefIsLocal(conf);
    if(refIsLocal)
        return gff;
    String refDir = HalvadeConf.getScratchTempDir(conf);
    if(!refDir.endsWith("/")) refDir = refDir + "/";
    String gffSuffix = null;
    int si = gff.lastIndexOf('.');
    if (si > 0)
        gffSuffix = gff.substring(si);
    else
        throw new InterruptedException("Illegal filename for gff file: " + gff);
    Logger.DEBUG("suffix: " + gffSuffix);
    HalvadeFileLock lock = new HalvadeFileLock(refDir, HalvadeFileConstants.GFF_LOCK);
    String filebase = gff.substring(gff.lastIndexOf("/")+1).replace(gffSuffix, "");

    FileSystem fs = FileSystem.get(new URI(gff), conf);
    downloadFileWithLock(fs, lock, gff, refDir + filebase + gffSuffix, conf);
    return refDir + filebase + gffSuffix;
}
 
Developer: biointec, Project: halvade, Lines: 26, Source: HalvadeFileUtils.java
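Stripped of Halvade's locking and file-naming logic, the core of the download step is the standard Hadoop pattern of resolving a FileSystem from the source URI and copying the file to node-local scratch space. A minimal sketch with hypothetical names:

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GffDownloadSketch {
  public static String fetch(String gff, String scratchDir, Configuration conf)
      throws IOException, URISyntaxException {
    // Resolve the FileSystem (HDFS, S3, local, ...) from the source URI.
    FileSystem fs = FileSystem.get(new URI(gff), conf);
    Path local = new Path(scratchDir, new Path(gff).getName());
    // copyToLocalFile leaves the source untouched and writes to the
    // node-local filesystem.
    fs.copyToLocalFile(new Path(gff), local);
    return local.toString();
  }
}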

Example 5: rebuildStarGenome

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the required package/class
public static long rebuildStarGenome(TaskInputOutputContext context, String bin, String newGenomeDir, 
        String ref, String SJouttab, int sjoverhang, int threads, long mem, String stargtf) throws InterruptedException {
    Logger.DEBUG("Creating new genome in " + newGenomeDir);
    String[] command = 
            CommandGenerator.starRebuildGenome(bin, newGenomeDir, ref, SJouttab, 
                    sjoverhang, threads, mem, sparseGenome, stargtf);
    
    ProcessBuilderWrapper starbuild = new ProcessBuilderWrapper(command, bin);
    starbuild.startProcess(System.out, System.err);
    if(!starbuild.isAlive())
        throw new ProcessException("STAR rebuild genome", starbuild.getExitState());
    int error = starbuild.waitForCompletion();
    if(error != 0)
        throw new ProcessException("STAR aligner load", error);
    return starbuild.getExecutionTime();        
}
 
Developer: biointec, Project: halvade, Lines: 17, Source: STARInstance.java
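ProcessBuilderWrapper is a Halvade class, but what it abstracts is plain JDK ProcessBuilder usage: launch an external command, stream its output, turn a non-zero exit code into an exception, and report the elapsed time, as rebuildStarGenome does. A self-contained sketch (the command is a placeholder, not a real STAR invocation):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ExternalToolSketch {
  public static long timedRun(String... command)
      throws IOException, InterruptedException {
    long start = System.currentTimeMillis();
    Process p = new ProcessBuilder(command)
        .redirectErrorStream(true)   // merge stderr into stdout
        .start();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream()))) {
      String line;
      while ((line = r.readLine()) != null) {
        System.out.println(line);    // stream tool output as it arrives
      }
    }
    int exit = p.waitFor();
    if (exit != 0) {
      throw new IOException(command[0] + " failed with exit code " + exit);
    }
    return System.currentTimeMillis() - start;  // execution time, as above
  }

  public static void main(String[] args) throws Exception {
    System.out.println("took " + timedRun("echo", "hello") + " ms");
  }
}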

Example 6: addReducer

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the required package/class
/**
 * Add a reducer that reads from the context and writes to a queue,
 * mirroring the addMapper wiring in Example 1.
 */
@SuppressWarnings("unchecked")
void addReducer(TaskInputOutputContext inputContext,
    ChainBlockingQueue<KeyValuePair<?, ?>> outputQueue) throws IOException,
    InterruptedException {

  Class<?> keyOutClass = rConf.getClass(REDUCER_OUTPUT_KEY_CLASS,
      Object.class);
  Class<?> valueOutClass = rConf.getClass(REDUCER_OUTPUT_VALUE_CLASS,
      Object.class);
  RecordWriter rw = new ChainRecordWriter(keyOutClass, valueOutClass,
      outputQueue, rConf);
  Reducer.Context reducerContext = createReduceContext(rw,
      (ReduceContext) inputContext, rConf);
  ReduceRunner runner = new ReduceRunner(reducerContext, reducer, rw);
  threads.add(runner);
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 20, Source: Chain.java

Example 7: configureLogging

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the required package/class
/**
 * <p>
 * Configures logging for MapReduce (new API)
 * </p>
 * 
 * @param logFileDir
 *            Directory on the slave node where log files will be created
 * @param context
 *            Context
 * @param isMapper
 *            true if called from a mapper
 * @throws IOException
 */
@SuppressWarnings("rawtypes")
public static void configureLogging(String logFileDir,
    TaskInputOutputContext context, boolean isMapper)
    throws IOException {
  // Combiner logs are not required; they are already captured in the
  // mapper log files. A || (!A && B) simplifies to A || B.
  if (isMapper
      || !context.getConfiguration().getBoolean("mapred.task.is.map", true)) {
    initializeJumbuneLog();
    try {
      LoggerUtil.loadLogger(logFileDir, context.getTaskAttemptID()
          .toString());
    } catch (Exception e) {
      LOGGER.debug(
          "Error occurred while loading logger while running instrumented jar",
          e);
    }
  }
}
 
Developer: Impetus, Project: jumbune, Lines: 33, Source: MapReduceExecutionUtil.java
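The example reads the legacy "mapred.task.is.map" configuration key to tell reducers apart from combiners. On newer Hadoop releases the task type is also available directly from the attempt ID on any TaskInputOutputContext; a hedged alternative sketch:

import org.apache.hadoop.mapreduce.TaskInputOutputContext;
import org.apache.hadoop.mapreduce.TaskType;

public class TaskPhaseSketch {
  @SuppressWarnings("rawtypes")
  static boolean isMapTask(TaskInputOutputContext context) {
    // True for map tasks (including combiners running in the map phase),
    // false for reduce tasks.
    return context.getTaskAttemptID().getTaskID().getTaskType()
        == TaskType.MAP;
  }
}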


Note: The org.apache.hadoop.mapreduce.TaskInputOutputContext class examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by many developers; copyright in the source code remains with the original authors. Consult each project's license before using or redistributing the code; please do not reproduce this article without permission.