

Java TaskInputOutputContext.getConfiguration Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.mapreduce.TaskInputOutputContext.getConfiguration. If you are wondering what TaskInputOutputContext.getConfiguration does and how to use it in practice, the curated examples below should help. You can also explore further usage of its declaring class, org.apache.hadoop.mapreduce.TaskInputOutputContext.


The following presents 8 code examples of the TaskInputOutputContext.getConfiguration method, sorted by popularity by default.
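
Before diving into the examples, here is a minimal sketch of the typical call pattern. Mapper.Context and Reducer.Context both implement TaskInputOutputContext, so getConfiguration() is usually called in setup() to read job parameters. The mapper class and the "example.threshold" key below are illustrative, not taken from the examples that follow.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Minimal sketch: reading a job parameter from the task context in setup().
// The class name and the "example.threshold" key are illustrative only.
public class ThresholdMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private long threshold;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // Mapper.Context implements TaskInputOutputContext, so this is the
        // same getConfiguration() call shown in the examples below.
        Configuration conf = context.getConfiguration();
        threshold = conf.getLong("example.threshold", 0L);
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Emit only lines longer than the configured threshold.
        if (value.getLength() > threshold) {
            context.write(value, key);
        }
    }
}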

Example 1: ResourceUsageMatcherRunner

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the package/class this method depends on
ResourceUsageMatcherRunner(final TaskInputOutputContext context, 
                           ResourceUsageMetrics metrics) {
  Configuration conf = context.getConfiguration();
  
  // set the resource calculator plugin
  Class<? extends ResourceCalculatorPlugin> clazz =
    conf.getClass(TTConfig.TT_RESOURCE_CALCULATOR_PLUGIN,
                  null, ResourceCalculatorPlugin.class);
  ResourceCalculatorPlugin plugin = 
    ResourceCalculatorPlugin.getResourceCalculatorPlugin(clazz, conf);
  
  // set the other parameters
  this.sleepTime = conf.getLong(SLEEP_CONFIG, DEFAULT_SLEEP_TIME);
  progress = new BoostingProgress(context);
  
  // instantiate a resource-usage-matcher
  matcher = new ResourceUsageMatcher();
  matcher.configure(conf, plugin, metrics, progress);
}
 
Developer: naver | Project: hadoop | Lines: 20 | Source: LoadJob.java
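
Example 1 resolves a pluggable implementation via Configuration.getClass, which looks up a class under a config key and falls back to a default. Below is a minimal, hedged illustration of the same pattern; the "example.plugin.class" key and the Plugin interface are invented for this sketch and are not part of Hadoop.

import org.apache.hadoop.conf.Configuration;

// Illustrates the plugin-loading pattern from Example 1: resolve a class
// under a config key, then instantiate it reflectively.
public class PluginLookupDemo {
    public interface Plugin { String name(); }

    public static class DefaultPlugin implements Plugin {
        public String name() { return "default"; }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setClass("example.plugin.class", DefaultPlugin.class, Plugin.class);

        Class<? extends Plugin> clazz =
            conf.getClass("example.plugin.class", null, Plugin.class);
        Plugin plugin = clazz.getDeclaredConstructor().newInstance();
        System.out.println(plugin.name()); // prints "default"
    }
}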

Example 2: downloadGFF

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the package/class this method depends on
public static String downloadGFF(TaskInputOutputContext context) throws IOException, URISyntaxException, InterruptedException {
    Configuration conf = context.getConfiguration();
    String gff = HalvadeConf.getGff(context.getConfiguration());  
    if(gff == null) 
        return null;
    Boolean refIsLocal = HalvadeConf.getRefIsLocal(context.getConfiguration()); 
    if(refIsLocal) 
        return gff;
    String refDir = HalvadeConf.getScratchTempDir(conf);  
    if(!refDir.endsWith("/")) refDir = refDir + "/";
    String gffSuffix = null;
    int si = gff.lastIndexOf('.');
    if (si > 0)
        gffSuffix = gff.substring(si);
    else 
        throw new InterruptedException("Illegal filename for gff file: " + gff);
    Logger.DEBUG("suffix: " + gffSuffix);
    HalvadeFileLock lock = new HalvadeFileLock(refDir, HalvadeFileConstants.GFF_LOCK);
    String filebase = gff.substring(gff.lastIndexOf("/")+1).replace(gffSuffix, "");
    
    
    FileSystem fs = FileSystem.get(new URI(gff), conf);
    downloadFileWithLock(fs, lock, gff, refDir + filebase + gffSuffix, context.getConfiguration()); 
    return refDir + filebase + gffSuffix;
}
 
Developer: biointec | Project: halvade | Lines: 26 | Source: HalvadeFileUtils.java
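
A hedged sketch of how a helper like downloadGFF would typically be invoked from a task. Only the call pattern is grounded in the example above; the reducer class itself is hypothetical, and the HalvadeFileUtils import depends on the project's package layout.

import java.io.IOException;
import java.net.URISyntaxException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
// plus the project-specific import for HalvadeFileUtils

// Hypothetical call site: localize the GFF annotation file once per task.
public abstract class GffAwareReducer extends Reducer<Text, Text, Text, Text> {
    protected String localGff;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        try {
            // Returns the configured path directly when the reference is local;
            // otherwise downloads from HDFS to the scratch dir under a file lock.
            localGff = HalvadeFileUtils.downloadGFF(context);
        } catch (URISyntaxException ex) {
            throw new IOException(ex);
        }
    }
}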

Example 3: MapReducePOStoreImpl

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the package/class this method depends on
public MapReducePOStoreImpl(TaskInputOutputContext context) {
    // get a copy of the Configuration so that changes to the
    // configuration below (like setting the output location) do
    // not affect the caller's copy
    Configuration outputConf = new Configuration(context.getConfiguration());
    PigStatusReporter.setContext(context);
    reporter = PigStatusReporter.getInstance();
   
    // make a copy of the Context to use here - since in the same
    // task (map or reduce) we could have multiple stores, we should
    // make this copy so that the same context does not get over-written
    // by the different stores.
    
    this.context = HadoopShims.createTaskAttemptContext(outputConf, 
            context.getTaskAttemptID());
}
 
Developer: sigmoidanalytics | Project: spork-streaming | Lines: 17 | Source: MapReducePOStoreImpl.java

Example 4: MapReducePOStoreImpl

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the package/class this method depends on
public MapReducePOStoreImpl(TaskInputOutputContext<?,?,?,?> context) {
    // get a copy of the Configuration so that changes to the
    // configuration below (like setting the output location) do
    // not affect the caller's copy
    Configuration outputConf = new Configuration(context.getConfiguration());
    reporter = PigStatusReporter.getInstance();
    reporter.setContext(new MRTaskContext(context));

    // make a copy of the Context to use here - since in the same
    // task (map or reduce) we could have multiple stores, we should
    // make this copy so that the same context does not get over-written
    // by the different stores.

    this.context = HadoopShims.createTaskAttemptContext(outputConf,
            context.getTaskAttemptID());
}
 
Developer: sigmoidanalytics | Project: spork | Lines: 17 | Source: MapReducePOStoreImpl.java
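
The defensive copy in Examples 3 and 4 works because Configuration's copy constructor clones the property set, so per-store changes never leak back to the caller. A small demonstration of that behavior follows; the "demo.output.dir" key is illustrative.

import org.apache.hadoop.conf.Configuration;

// Shows why the constructors above copy the Configuration: mutations on
// the copy do not affect the original, so multiple stores in one task
// can each set their own output location safely.
public class ConfCopyDemo {
    public static void main(String[] args) {
        Configuration original = new Configuration();
        original.set("demo.output.dir", "/out/shared");   // illustrative key

        Configuration copy = new Configuration(original); // independent copy
        copy.set("demo.output.dir", "/out/store-1");      // per-store override

        System.out.println(original.get("demo.output.dir")); // /out/shared
        System.out.println(copy.get("demo.output.dir"));     // /out/store-1
    }
}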

Example 5: downloadAlignerIndex

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the package/class this method depends on
protected static String downloadAlignerIndex(TaskInputOutputContext context, String[] refFiles) throws IOException, URISyntaxException {
    Configuration conf = context.getConfiguration();
    Boolean refIsLocal = HalvadeConf.getRefIsLocal(context.getConfiguration()); 
    String ref = HalvadeConf.getRef(conf);
    if(refIsLocal) 
        return ref;
    String HDFSRef = ref;
    String refDir = HalvadeConf.getScratchTempDir(conf);
    if(!refDir.endsWith("/")) refDir = refDir + "/";
    HalvadeFileLock lock = new HalvadeFileLock(refDir, HalvadeFileConstants.REF_LOCK);
    FileSystem fs = FileSystem.get(new URI(HDFSRef), conf);
    String suffix = HDFSRef.endsWith(HalvadeFileConstants.FASTA_SUFFIX) ? HalvadeFileConstants.FASTA_SUFFIX : HalvadeFileConstants.FA_SUFFIX;
    String filebase = HDFSRef.substring(HDFSRef.lastIndexOf("/")+1).replace(suffix, "");
    try {
        for (String filesuffix : refFiles) { 
            String newsuffix = filesuffix.replace(HalvadeFileConstants.FASTA_SUFFIX, suffix);
            String newfile = HDFSRef.replace(suffix, newsuffix);
            downloadFileWithLock(fs, lock, newfile, refDir + filebase + newsuffix, context.getConfiguration());          
        }
        
    } catch (InterruptedException ex) {
        Logger.EXCEPTION(ex);
    } finally {
        lock.removeAndReleaseLock();
    }
    Logger.DEBUG("local fasta reference: " + refDir + filebase + suffix);
    return refDir + filebase + suffix; 
}
 
Developer: biointec | Project: halvade | Lines: 29 | Source: HalvadeFileUtils.java

Example 6: downloadGATKIndex

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the package/class this method depends on
public static String downloadGATKIndex(TaskInputOutputContext context) throws IOException, URISyntaxException {
    Configuration conf = context.getConfiguration();
    Boolean refIsLocal = HalvadeConf.getRefIsLocal(context.getConfiguration()); 
    String ref = HalvadeConf.getRef(conf);
    if(refIsLocal) 
        return ref;
    String HDFSRef = ref;
    String refDir = HalvadeConf.getScratchTempDir(conf);
    if(!refDir.endsWith("/")) refDir = refDir + "/";
    HalvadeFileLock lock = new HalvadeFileLock(refDir, HalvadeFileConstants.REF_LOCK);
    FileSystem fs = FileSystem.get(new URI(HDFSRef), conf);
    String suffix = HDFSRef.endsWith(HalvadeFileConstants.FASTA_SUFFIX) ? HalvadeFileConstants.FASTA_SUFFIX : HalvadeFileConstants.FA_SUFFIX;
    String filebase = HDFSRef.substring(HDFSRef.lastIndexOf("/")+1).replace(suffix, "");
    try {
        for (String filesuffix : HalvadeFileConstants.GATK_REF_FILES) {
            String newsuffix = filesuffix.replace(HalvadeFileConstants.FASTA_SUFFIX, suffix);
            String newfile = HDFSRef.replace(suffix, newsuffix);
            downloadFileWithLock(fs, lock, newfile, refDir + filebase + newsuffix, context.getConfiguration());       
        }
        
    } catch (InterruptedException ex) {
        Logger.EXCEPTION(ex);
    } finally {
        lock.removeAndReleaseLock();
    }
    Logger.DEBUG("local fasta reference: " + refDir + filebase + suffix);
    return refDir + filebase + suffix;
}
 
Developer: biointec | Project: halvade | Lines: 29 | Source: HalvadeFileUtils.java

Example 7: downloadSites

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the package/class this method depends on
public static String[] downloadSites(TaskInputOutputContext context, String id) throws IOException, URISyntaxException, InterruptedException {
    Configuration conf = context.getConfiguration();
    Boolean refIsLocal = HalvadeConf.getRefIsLocal(context.getConfiguration()); 
    String sites[] = HalvadeConf.getKnownSitesOnHDFS(conf);
    if(refIsLocal || sites == null || sites.length == 0)
        return sites;
    String HDFSsites[] = sites;
    String localSites[] = new String[sites.length];
    String refDir = HalvadeConf.getScratchTempDir(conf);
    if(!refDir.endsWith("/")) refDir = refDir + "/"; 
    HalvadeFileLock lock = new HalvadeFileLock(refDir, HalvadeFileConstants.REF_LOCK);
    FileSystem fs = FileSystem.get(new URI(sites[0]), conf);
    
    try {
        for (int i= 0; i < HDFSsites.length; i++) {
            String hdfssite = HDFSsites[i];
            String name = hdfssite.substring(hdfssite.lastIndexOf('/') + 1);
            downloadFileWithLock(fs, lock, hdfssite, refDir + name, context.getConfiguration());
            localSites[i] = refDir + name;
            // attempt to download .idx file
            if(fs.exists(new Path(hdfssite + ".idx")))
                downloadFileWithLock(fs, lock, hdfssite + ".idx", refDir + name + ".idx", context.getConfiguration());
        }
        
    } catch (InterruptedException ex) {
        Logger.EXCEPTION(ex);
    } finally {
        lock.removeAndReleaseLock();
    }
    Logger.DEBUG("local sires:");
    for (String site: localSites) {
        Logger.DEBUG(site);
    }
    return localSites;
}
 
Developer: biointec | Project: halvade | Lines: 36 | Source: HalvadeFileUtils.java

Example 8: setup

import org.apache.hadoop.mapreduce.TaskInputOutputContext; // import the package/class this method depends on
public void setup(TaskInputOutputContext<?, ?, ?, ?> context)
    throws IOException {
  Configuration conf = context.getConfiguration();
  Path[] localFiles = context.getLocalCacheFiles();
  URI[] files = context.getCacheFiles();
  Path[] localArchives = context.getLocalCacheArchives();
  URI[] archives = context.getCacheArchives();
  FileSystem fs = LocalFileSystem.get(conf);

  // Check that 2 files and 2 archives are present
  TestCase.assertEquals(2, localFiles.length);
  TestCase.assertEquals(2, localArchives.length);
  TestCase.assertEquals(2, files.length);
  TestCase.assertEquals(2, archives.length);

  // Check the file name
  TestCase.assertTrue(files[0].getPath().endsWith("distributed.first"));
  TestCase.assertTrue(files[1].getPath().endsWith("distributed.second.jar"));
  
  // Check lengths of the files
  TestCase.assertEquals(1, fs.getFileStatus(localFiles[0]).getLen());
  TestCase.assertTrue(fs.getFileStatus(localFiles[1]).getLen() > 1);

  // Check extraction of the archive
  TestCase.assertTrue(fs.exists(new Path(localArchives[0],
      "distributed.jar.inside3")));
  TestCase.assertTrue(fs.exists(new Path(localArchives[1],
      "distributed.jar.inside4")));

  // Check the class loaders
  LOG.info("Java Classpath: " + System.getProperty("java.class.path"));
  ClassLoader cl = Thread.currentThread().getContextClassLoader();
  // Both the file and the archive were added to classpath, so both
  // should be reachable via the class loader.
  TestCase.assertNotNull(cl.getResource("distributed.jar.inside2"));
  TestCase.assertNotNull(cl.getResource("distributed.jar.inside3"));
  TestCase.assertNull(cl.getResource("distributed.jar.inside4"));

  // Check that the symlink for the renaming was created in the cwd;
  TestCase.assertTrue("symlink distributed.first.symlink doesn't exist",
      symlinkFile.exists());
  TestCase.assertEquals("symlink distributed.first.symlink length not 1", 1,
      symlinkFile.length());
  
  //This last one is a difference between MRv2 and MRv1
  TestCase.assertTrue("second file should be symlinked too",
      expectedAbsentSymlinkFile.exists());
}
 
Developer: naver | Project: hadoop | Lines: 49 | Source: TestMRWithDistributedCache.java
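
The assertions in Example 8 only hold if the submitting job registered two cache files and two archives up front. Here is a hedged sketch of that submission side using the standard Job cache API; the paths and the '#' symlink fragment are illustrative, not the test's actual setup code.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Sketch of the job-submission side that the test above assumes:
// two cache files and two cache archives registered before submission.
public class CacheSetupSketch {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "cache-demo");
        // '#' renames the localized file, producing the symlink the test checks.
        job.addCacheFile(new URI("/cache/distributed.first#distributed.first.symlink"));
        job.addCacheFile(new URI("/cache/distributed.second.jar"));
        job.addCacheArchive(new URI("/cache/distributed.third.jar"));
        job.addCacheArchive(new URI("/cache/distributed.fourth.jar"));
        // In the task, getCacheFiles()/getLocalCacheFiles() return these
        // entries, which is what setup() above asserts against.
    }
}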


Note: The org.apache.hadoop.mapreduce.TaskInputOutputContext.getConfiguration method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects and copyright remains with their original authors; consult each project's license before distributing or using the code. Do not reproduce without permission.