

Java DistributedCache.addCacheArchive Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.filecache.DistributedCache.addCacheArchive, drawn from open-source projects. If you are wondering how to call DistributedCache.addCacheArchive, or what it is used for, the selected examples below should help. You can also explore other usage examples of the enclosing class, org.apache.hadoop.filecache.DistributedCache.


Two code examples of the DistributedCache.addCacheArchive method are shown below, ordered by popularity.

Example 1: addSolrConfToDistributedCache

import org.apache.hadoop.filecache.DistributedCache; // import the package/class this method depends on
public static void addSolrConfToDistributedCache(Job job, File solrHomeZip)
    throws IOException {
  // Make a reasonably unique name for the zip file in the distributed cache
  // to avoid collisions if multiple jobs are running.
  String hdfsZipName = UUID.randomUUID().toString() + '.'
      + ZIP_FILE_BASE_NAME;
  Configuration jobConf = job.getConfiguration();
  jobConf.set(ZIP_NAME, hdfsZipName);

  Path zipPath = new Path("/tmp", getZipName(jobConf));
  FileSystem fs = FileSystem.get(jobConf);
  fs.copyFromLocalFile(new Path(solrHomeZip.toString()), zipPath);
  final URI baseZipUrl = fs.getUri().resolve(
      zipPath.toString() + '#' + getZipName(jobConf));

  DistributedCache.addCacheArchive(baseZipUrl, jobConf);
  LOG.debug("Set Solr distributed cache: {}", Arrays.asList(job.getCacheArchives()));
  LOG.debug("Set zipPath: {}", zipPath);
  // Actually send the path for the configuration zip file
  jobConf.set(SETUP_OK, zipPath.toString());
}
 
Contributor: europeana · Project: search · Lines: 22 · Source: SolrOutputFormat.java
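Note how Example 1 appends `'#' + getZipName(jobConf)` to the archive URI before calling addCacheArchive: Hadoop treats the URI fragment as the name of the symlink it creates in each task's working directory, so tasks can open the unpacked archive under a predictable local name. The sketch below illustrates just the URI mechanics with the standard library; the host name and paths are made-up placeholders, not values from the example above.

```java
import java.net.URI;

public class CacheArchiveUriDemo {
    public static void main(String[] args) {
        // Hypothetical archive location on HDFS, with a "#solr-home" fragment.
        // Hadoop would copy/unpack the archive and expose it in the task's
        // working directory under the symlink name given by the fragment.
        URI archive = URI.create("hdfs://namenode:8020/tmp/solr-home.zip#solr-home");

        // The path part is what the framework fetches from the filesystem...
        System.out.println(archive.getPath());     // /tmp/solr-home.zip

        // ...while the fragment is the local symlink name tasks see.
        System.out.println(archive.getFragment()); // solr-home
    }
}
```

This is why the example builds the fragment from a UUID-based zip name: each job gets a distinct cache entry, yet tasks can still locate it via the name stored in the job configuration.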

Example 2: testDuplicationsMinicluster

import org.apache.hadoop.filecache.DistributedCache; // import the package/class this method depends on
public void testDuplicationsMinicluster() throws Exception {
  OutputStream os = getFileSystem().create(new Path(getInputDir(), "text.txt"));
  Writer wr = new OutputStreamWriter(os);
  wr.write("hello1\n");
  wr.write("hello2\n");
  wr.write("hello3\n");
  wr.write("hello4\n");
  wr.close();

  JobConf conf = createJobConf();
  conf.setJobName("counters");
  
  conf.setInputFormat(TextInputFormat.class);

  conf.setMapOutputKeyClass(LongWritable.class);
  conf.setMapOutputValueClass(Text.class);

  conf.setOutputFormat(TextOutputFormat.class);
  conf.setOutputKeyClass(LongWritable.class);
  conf.setOutputValueClass(Text.class);

  conf.setMapperClass(IdentityMapper.class);
  conf.setReducerClass(IdentityReducer.class);

  FileInputFormat.setInputPaths(conf, getInputDir());

  FileOutputFormat.setOutputPath(conf, getOutputDir());

  Path inputRoot = getInputDir().makeQualified(getFileSystem());
  Path unqualifiedInputRoot = getInputDir();
  System.out.println("The qualified input dir is " + inputRoot.toString());
  System.out.println("The unqualified input dir is " + unqualifiedInputRoot.toString());

  Path duplicatedPath = new Path(inputRoot, "text.txt");
  URI duplicatedURI = duplicatedPath.toUri();

  Path unqualifiedDuplicatedPath = new Path(unqualifiedInputRoot, "text.txt");
  URI unqualifiedDuplicatedURI = unqualifiedDuplicatedPath.toUri();

  System.out.println("The duplicated Path is " + duplicatedPath);
  System.out.println("The duplicated URI is " + duplicatedURI);
  System.out.println("The unqualified duplicated URI is " + unqualifiedDuplicatedURI);

  DistributedCache.addCacheArchive(duplicatedURI, conf);
  DistributedCache.addCacheFile(unqualifiedDuplicatedURI, conf);

  try {
    RunningJob runningJob = JobClient.runJob(conf);

    fail("The job completed, which is wrong since there's a duplication");
  } catch (InvalidJobConfException e) {
    System.out.println("We expect to see a stack trace here.");
    e.printStackTrace(System.out);
  }
}
 
Contributor: Nextzero · Project: hadoop-2.6.0-cdh5.4.3 · Lines: 56 · Source: TestDuplicateArchiveFileCachedURLMinicluster.java
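Example 2 registers the same file twice, once as a cache archive with a fully qualified URI and once as a cache file with an unqualified URI, and expects job submission to fail with InvalidJobConfException. The duplication is detectable because, at submission time, an unqualified URI is resolved against the default filesystem and ends up identical to the qualified one. The stdlib sketch below shows only that resolution step; the namenode address and paths are illustrative assumptions, not Hadoop's actual internal code.

```java
import java.net.URI;

public class DuplicateUriDemo {
    public static void main(String[] args) {
        // Hypothetical default filesystem, as would come from fs.defaultFS.
        URI defaultFs = URI.create("hdfs://namenode:8020/");

        // The same file referenced two ways, as in the test above.
        URI qualified   = URI.create("hdfs://namenode:8020/user/test/input/text.txt");
        URI unqualified = URI.create("/user/test/input/text.txt");

        // Resolving the unqualified URI against the default FS yields the
        // qualified form, so both entries point at one and the same file.
        URI resolved = defaultFs.resolve(unqualified);
        System.out.println(resolved.equals(qualified)); // true
    }
}
```

Because the resolved URIs are equal, the framework can reject a configuration that lists one file as both an archive and a plain cache file, which is exactly the failure the test asserts.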


Note: The org.apache.hadoop.filecache.DistributedCache.addCacheArchive examples in this article were compiled from open-source code and documentation platforms such as GitHub and MSDocs. The snippets come from community-contributed open-source projects, and copyright remains with the original authors; consult each project's license before redistributing or reusing the code. Do not republish this article without permission.