

Java DistributedCache.addCacheArchive Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.filecache.DistributedCache.addCacheArchive. If you are unsure how DistributedCache.addCacheArchive is used in practice, the curated examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.filecache.DistributedCache.


Two code examples of the DistributedCache.addCacheArchive method are shown below, ordered by popularity.
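Before the full examples, the core call pattern is worth noting: addCacheArchive takes a URI whose optional `#` fragment names the symlink that tasks see in their working directory. The fragment handling itself is plain java.net.URI behavior and can be sketched with the standard library alone (the hdfs:// authority and file names below are hypothetical placeholders, not from the examples):

```java
import java.net.URI;

public class CacheUriDemo {
    public static void main(String[] args) throws Exception {
        // A cache-archive URI: the archive's path on HDFS, plus a
        // '#' fragment giving the symlink name in the task directory.
        URI archive = new URI("hdfs://namenode:8020/tmp/solr-home.zip#solr-home");

        System.out.println(archive.getPath());     // /tmp/solr-home.zip
        System.out.println(archive.getFragment()); // solr-home
    }
}
```

Both examples below build such URIs before handing them to DistributedCache.addCacheArchive.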

Example 1: addSolrConfToDistributedCache

import org.apache.hadoop.filecache.DistributedCache; // import of the package/class the method depends on
public static void addSolrConfToDistributedCache(Job job, File solrHomeZip)
    throws IOException {
  // Make a reasonably unique name for the zip file in the distributed cache
  // to avoid collisions if multiple jobs are running.
  String hdfsZipName = UUID.randomUUID().toString() + '.'
      + ZIP_FILE_BASE_NAME;
  Configuration jobConf = job.getConfiguration();
  jobConf.set(ZIP_NAME, hdfsZipName);

  Path zipPath = new Path("/tmp", getZipName(jobConf));
  FileSystem fs = FileSystem.get(jobConf);
  fs.copyFromLocalFile(new Path(solrHomeZip.toString()), zipPath);
  final URI baseZipUrl = fs.getUri().resolve(
      zipPath.toString() + '#' + getZipName(jobConf));

  DistributedCache.addCacheArchive(baseZipUrl, jobConf);
  LOG.debug("Set Solr distributed cache: {}", Arrays.asList(job.getCacheArchives()));
  LOG.debug("Set zipPath: {}", zipPath);
  // Actually send the path for the configuration zip file
  jobConf.set(SETUP_OK, zipPath.toString());
}
 
Developer ID: europeana, Project: search, Lines of code: 22, Source: SolrOutputFormat.java

Example 2: testDuplicationsMinicluster

import org.apache.hadoop.filecache.DistributedCache; // import of the package/class the method depends on
public void testDuplicationsMinicluster() throws Exception {
  OutputStream os = getFileSystem().create(new Path(getInputDir(), "text.txt"));
  Writer wr = new OutputStreamWriter(os);
  wr.write("hello1\n");
  wr.write("hello2\n");
  wr.write("hello3\n");
  wr.write("hello4\n");
  wr.close();

  JobConf conf = createJobConf();
  conf.setJobName("counters");
  
  conf.setInputFormat(TextInputFormat.class);

  conf.setMapOutputKeyClass(LongWritable.class);
  conf.setMapOutputValueClass(Text.class);

  conf.setOutputFormat(TextOutputFormat.class);
  conf.setOutputKeyClass(LongWritable.class);
  conf.setOutputValueClass(Text.class);

  conf.setMapperClass(IdentityMapper.class);
  conf.setReducerClass(IdentityReducer.class);

  FileInputFormat.setInputPaths(conf, getInputDir());

  FileOutputFormat.setOutputPath(conf, getOutputDir());

  Path inputRoot = getInputDir().makeQualified(getFileSystem());
  Path unqualifiedInputRoot = getInputDir();
  System.out.println("The qualified input dir is " + inputRoot.toString());
  System.out.println("The unqualified input dir is " + unqualifiedInputRoot.toString());

  Path duplicatedPath = new Path(inputRoot, "text.txt");
  URI duplicatedURI = duplicatedPath.toUri();

  Path unqualifiedDuplicatedPath = new Path(unqualifiedInputRoot, "text.txt");
  URI unqualifiedDuplicatedURI = unqualifiedDuplicatedPath.toUri();

  System.out.println("The duplicated Path is " + duplicatedPath);
  System.out.println("The duplicated URI is " + duplicatedURI);
  System.out.println("The unqualified duplicated URI is " + unqualifiedDuplicatedURI);

  DistributedCache.addCacheArchive(duplicatedURI, conf);
  DistributedCache.addCacheFile(unqualifiedDuplicatedURI, conf);

  try {
    RunningJob runningJob = JobClient.runJob(conf);

    fail("The job completed, which is wrong since there's a duplication");
  } catch (InvalidJobConfException e) {
    System.out.println("We expect to see a stack trace here.");
    e.printStackTrace(System.out);
  }
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 56, Source: TestDuplicateArchiveFileCachedURLMinicluster.java
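Example 2 relies on the job client qualifying relative cache URIs against the default filesystem, so the unqualified `/input/text.txt` and the fully qualified `hdfs://…/input/text.txt` end up naming the same file, which is the duplication the test expects to be rejected. That qualification step can be sketched with java.net.URI alone (the namenode authority and path are hypothetical stand-ins for the test's values):

```java
import java.net.URI;

public class QualifyDemo {
    public static void main(String[] args) {
        // The default filesystem URI, as fs.defaultFS would supply it.
        URI defaultFs = URI.create("hdfs://namenode:8020/");

        // An unqualified path, like the one passed to addCacheFile above.
        URI unqualified = URI.create("/input/text.txt");

        // A fully qualified path, like the one passed to addCacheArchive.
        URI qualified = URI.create("hdfs://namenode:8020/input/text.txt");

        // Resolving the unqualified URI against the default filesystem
        // yields the same target, so both entries collide.
        System.out.println(defaultFs.resolve(unqualified).equals(qualified)); // true
    }
}
```

This is why JobClient.runJob throws InvalidJobConfException in the test rather than letting the job run with two cache entries for one file.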


Note: the org.apache.hadoop.filecache.DistributedCache.addCacheArchive examples in this article were collected by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from community open-source projects; copyright remains with the original authors, and distribution or use should follow each project's license. Do not reproduce without permission.