

Java CompressAwarePath Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.CompressAwarePath. If you are wondering what the CompressAwarePath class does and how to use it, the selected code examples below should help.


CompressAwarePath is a nested class of org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl. Three code examples of the class are shown below, sorted by popularity.
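
Before the examples, here is a minimal sketch of what a CompressAwarePath carries. It is an illustration under an assumption: CompressAwarePath is a package-private nested class of MergeManagerImpl, so the snippet would only compile from inside the org.apache.hadoop.mapreduce.task.reduce package. The constructor arguments (path, raw length, compressed size) and the getCompressedSize() accessor are exactly the ones exercised by the examples below.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.CompressAwarePath;

// Hypothetical helper; assumes package-private access to CompressAwarePath.
class CompressAwarePathSketch {
  static CompressAwarePath wrap(Path onDiskFile, long rawLength, long compressedSize) {
    // Associates an on-disk map output Path with its raw and compressed sizes;
    // the on-disk merger sorts pending files by compressed size (see Example 3).
    return new CompressAwarePath(onDiskFile, rawLength, compressedSize);
  }

  static void demo() {
    CompressAwarePath cap = wrap(new Path("file.out"), 1024L, 512L);
    System.out.println(cap.getCompressedSize()); // prints 512
  }
}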

Example 1: commit

import org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.CompressAwarePath; // import the required package/class
@Override
public void commit() throws IOException {
  // Promote the temporary output file to its final path, then register the
  // result with the merge manager as a compression-aware on-disk file.
  fs.rename(tmpOutputPath, outputPath);
  CompressAwarePath compressAwarePath = new CompressAwarePath(outputPath,
      getSize(), this.compressedSize);
  merger.closeOnDiskFile(compressAwarePath);
}
 
Developer ID: naver, Project: hadoop, Lines of code: 8, Source: OnDiskMapOutput.java

Example 2: commit

import org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.CompressAwarePath; // import the required package/class
@Override
public void commit() throws IOException {
  // Same as Example 1, except the MergeManagerImpl is reached through the
  // getMerger() accessor rather than a direct field reference.
  fs.rename(tmpOutputPath, outputPath);
  CompressAwarePath compressAwarePath = new CompressAwarePath(outputPath,
      getSize(), this.compressedSize);
  getMerger().closeOnDiskFile(compressAwarePath);
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 8, Source: OnDiskMapOutput.java

Example 3: testOnDiskMerger

import org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.CompressAwarePath; // import the required package/class
@SuppressWarnings({ "unchecked", "deprecation" })
@Test(timeout=10000)
public void testOnDiskMerger() throws IOException, URISyntaxException,
  InterruptedException {
  JobConf jobConf = new JobConf();
  final int SORT_FACTOR = 5;
  jobConf.setInt(MRJobConfig.IO_SORT_FACTOR, SORT_FACTOR);

  MapOutputFile mapOutputFile = new MROutputFiles();
  FileSystem fs = FileSystem.getLocal(jobConf);
  MergeManagerImpl<IntWritable, IntWritable> manager =
    new MergeManagerImpl<IntWritable, IntWritable>(null, jobConf, fs, null
      , null, null, null, null, null, null, null, null, null, mapOutputFile);

  MergeThread<MapOutput<IntWritable, IntWritable>, IntWritable, IntWritable>
    onDiskMerger = (MergeThread<MapOutput<IntWritable, IntWritable>,
      IntWritable, IntWritable>) Whitebox.getInternalState(manager,
        "onDiskMerger");
  int mergeFactor = (Integer) Whitebox.getInternalState(onDiskMerger,
    "mergeFactor");

  // make sure the io.sort.factor is set properly
  assertEquals(mergeFactor, SORT_FACTOR);

  // Stop the onDiskMerger thread so that we can intercept the list of files
  // waiting to be merged.
  onDiskMerger.suspend();

  // Send a list of fake files waiting to be merged
  Random rand = new Random();
  for (int i = 0; i < 2 * SORT_FACTOR; ++i) {
    Path path = new Path("somePath");
    CompressAwarePath cap = new CompressAwarePath(path, 1L, rand.nextInt());
    manager.closeOnDiskFile(cap);
  }

  // Check that the files pending to be merged are in sorted order.
  LinkedList<List<CompressAwarePath>> pendingToBeMerged =
    (LinkedList<List<CompressAwarePath>>) Whitebox.getInternalState(
      onDiskMerger, "pendingToBeMerged");
  assertTrue("No inputs were added to list pending to merge",
    pendingToBeMerged.size() > 0);
  for(int i = 0; i < pendingToBeMerged.size(); ++i) {
    List<CompressAwarePath> inputs = pendingToBeMerged.get(i);
    for(int j = 1; j < inputs.size(); ++j) {
      assertTrue("Not enough / too many inputs were going to be merged",
        inputs.size() > 0 && inputs.size() <= SORT_FACTOR);
      assertTrue("Inputs to be merged were not sorted according to size: ",
        inputs.get(j).getCompressedSize()
        >= inputs.get(j-1).getCompressedSize());
    }
  }

}
 
Developer ID: naver, Project: hadoop, Lines of code: 55, Source: TestMergeManager.java


Note: The org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.CompressAwarePath class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors, and copyright of the source code remains with the original authors. For distribution and use, please follow the license of the corresponding project; do not reproduce without permission.