

Java Merger.writeFile Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.mapred.Merger.writeFile. If you are unsure what Merger.writeFile does or how to call it, the curated examples below should help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.mapred.Merger.


Three code examples of the Merger.writeFile method are shown below, ordered by popularity by default.
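All three examples share the same canonical shape: Merger.merge produces a single sorted RawKeyValueIterator over a set of segments, and Merger.writeFile drains that iterator into a Writer while reporting progress. The sketch below is a minimal, self-contained version of that pattern against the Hadoop 2.x mapred APIs; the Text key/value types, the input and output paths, the scratch directory, and the class name are illustrative assumptions, not part of the examples that follow.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.RawComparator;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.IFile;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Merger;
import org.apache.hadoop.mapred.Merger.Segment;
import org.apache.hadoop.mapred.RawKeyValueIterator;
import org.apache.hadoop.mapred.Reporter;

public class MergerWriteFileSketch {

  // Merge a set of sorted on-disk IFiles into one sorted output file.
  @SuppressWarnings("unchecked")
  public static void mergeToFile(JobConf conf, FileSystem fs,
                                 List<Path> inputs, Path output) throws IOException {
    // Wrap each sorted IFile in a Segment (no codec; preserve=false lets
    // the merge delete each input once it has been consumed).
    List<Segment<Text, Text>> segments = new ArrayList<Segment<Text, Text>>();
    for (Path input : inputs) {
      segments.add(new Segment<Text, Text>(conf, fs, input, null, false));
    }

    // Step 1: merge all segments into a single sorted iterator.
    RawKeyValueIterator rIter =
        Merger.merge(conf, fs, Text.class, Text.class,
                     segments, segments.size(),
                     new Path("/tmp/merge-scratch"),   // illustrative scratch dir
                     (RawComparator<Text>) conf.getOutputKeyComparator(),
                     Reporter.NULL, null, null, null);

    // Step 2: drain the iterator into the output file; writeFile calls
    // progress() on the supplied Progressable periodically while copying.
    IFile.Writer<Text, Text> writer =
        new IFile.Writer<Text, Text>(conf, fs.create(output),
                                     Text.class, Text.class, null, null);
    Merger.writeFile(rIter, writer, Reporter.NULL, conf);
    writer.close();
  }
}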

Example 1: merge

import org.apache.hadoop.mapred.Merger; // import the class that provides the method
@Override
public void merge(List<InMemoryMapOutput<K, V>> inputs) throws IOException {
  if (inputs == null || inputs.size() == 0) {
    return;
  }

  TaskAttemptID dummyMapId = inputs.get(0).getMapId(); 
  List<Segment<K, V>> inMemorySegments = new ArrayList<Segment<K, V>>();
  long mergeOutputSize = 
    createInMemorySegments(inputs, inMemorySegments, 0);
  int noInMemorySegments = inMemorySegments.size();
  
  InMemoryMapOutput<K, V> mergedMapOutputs = 
    unconditionalReserve(dummyMapId, mergeOutputSize, false);
  
  Writer<K, V> writer = 
    new InMemoryWriter<K, V>(mergedMapOutputs.getArrayStream());
  
  LOG.info("Initiating Memory-to-Memory merge with " + noInMemorySegments +
           " segments of total-size: " + mergeOutputSize);

  // Merge all in-memory segments into a single sorted iterator.
  RawKeyValueIterator rIter = 
    Merger.merge(jobConf, rfs,
                 (Class<K>)jobConf.getMapOutputKeyClass(),
                 (Class<V>)jobConf.getMapOutputValueClass(),
                 inMemorySegments, inMemorySegments.size(),
                 new Path(reduceId.toString()),
                 (RawComparator<K>)jobConf.getOutputKeyComparator(),
                 reporter, null, null, null);
  // Stream the merged records into the in-memory output via writeFile.
  Merger.writeFile(rIter, writer, reporter, jobConf);
  writer.close();

  LOG.info(reduceId +  
           " Memory-to-Memory merge of the " + noInMemorySegments +
           " files in-memory complete.");

  // Note the output of the merge
  closeInMemoryMergedFile(mergedMapOutputs);
}
 
Developer: naver, Project: hadoop, Lines: 40, Source file: MergeManagerImpl.java

Example 2: merge

import org.apache.hadoop.mapred.Merger; // import the class that provides the method
@Override
public void merge(List<MapOutput<K, V>> inputs) throws IOException {
  if (inputs == null || inputs.size() == 0) {
    return;
  }

  TaskAttemptID dummyMapId = inputs.get(0).getMapId(); 
  List<Segment<K, V>> inMemorySegments = new ArrayList<Segment<K, V>>();
  long mergeOutputSize = 
    createInMemorySegments(inputs, inMemorySegments, 0);
  int noInMemorySegments = inMemorySegments.size();
  
  MapOutput<K, V> mergedMapOutputs = 
    unconditionalReserve(dummyMapId, mergeOutputSize, false);
  
  Writer<K, V> writer = 
    new InMemoryWriter<K, V>(mergedMapOutputs.getArrayStream());
  
  LOG.info("Initiating Memory-to-Memory merge with " + noInMemorySegments +
           " segments of total-size: " + mergeOutputSize);

  RawKeyValueIterator rIter = 
    Merger.merge(jobConf, rfs,
                 (Class<K>)jobConf.getMapOutputKeyClass(),
                 (Class<V>)jobConf.getMapOutputValueClass(),
                 inMemorySegments, inMemorySegments.size(),
                 new Path(reduceId.toString()),
                 (RawComparator<K>)jobConf.getOutputKeyComparator(),
                 reporter, null, null, null);
  Merger.writeFile(rIter, writer, reporter, jobConf);
  writer.close();

  LOG.info(reduceId +  
           " Memory-to-Memory merge of the " + noInMemorySegments +
           " files in-memory complete.");

  // Note the output of the merge
  closeInMemoryMergedFile(mergedMapOutputs);
}
 
Developer: rekhajoshm, Project: mapreduce-fork, Lines: 40, Source file: MergeManager.java

Example 3: merge

import org.apache.hadoop.mapred.Merger; // import the class that provides the method
@SuppressWarnings("unchecked")
@Override
public void merge(List<Segment<K,V>> segments) throws IOException {
    // sanity check
    if (segments == null || segments.isEmpty()) {
        LOG.info("No ondisk files to merge...");
        return;
    }
    
    Class<K> keyClass = (Class<K>) jobConf.getMapOutputKeyClass();
    Class<V> valueClass = (Class<V>) jobConf.getMapOutputValueClass();
    final RawComparator<K> comparator = (RawComparator<K>) jobConf.getOutputKeyComparator();
    
    long approxOutputSize = 0;
    int bytesPerSum = jobConf.getInt("io.bytes.per.checksum", 512);
    
    LOG.info("OnDiskMerger: We have  " + segments.size()
             + " map outputs on disk. Triggering merge...");
    
    // 1. Estimate the merged output size by summing the segment lengths.
    for (Segment<K,V> segment : segments) {
        approxOutputSize += segment.getLength();
    }
    
    // add the checksum length
    approxOutputSize += ChecksumFileSystem.getChecksumLength(approxOutputSize, bytesPerSum);
    
    // 2. Start the on-disk merge process
    Path outputPath = new Path(reduceDir, "file-" + (numPasses++)).suffix(Task.MERGED_OUTPUT_PREFIX);
    
    Writer<K, V> writer = new Writer<K, V>(jobConf, lustrefs.create(outputPath),
                                           (Class<K>) jobConf.getMapOutputKeyClass(), 
                                           (Class<V>) jobConf.getMapOutputValueClass(),
                                           codec, null, true);
    RawKeyValueIterator iter = null;
    try {
        iter = Merger.merge(jobConf, lustrefs, keyClass, valueClass, segments, ioSortFactor, mergeTempDir,
                            comparator, reporter, spilledRecordsCounter, mergedMapOutputsCounter, null);
        Merger.writeFile(iter, writer, reporter, jobConf);
        writer.close();
    } catch (IOException e) {
        lustrefs.delete(outputPath, true);
        throw e;
    }
    addSegmentToMerge(new Segment<K, V>(jobConf, lustrefs, outputPath, codec, false, null));
    LOG.info(reduceId + " Finished merging " + segments.size()
             + " map output files on disk of total-size " + approxOutputSize + "."
             + " Local output file is " + outputPath + " of size "
             + lustrefs.getFileStatus(outputPath).getLen());
}
 
Developer: intel-hpdd, Project: lustre-connector-for-hadoop, Lines: 51, Source file: LustreFsShuffle.java
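The size estimate in example 3 is easy to sanity-check by hand: ChecksumFileSystem stores one 4-byte CRC for every io.bytes.per.checksum bytes of data (512 by default), plus a small header, so a 1 MiB merge output carries a little over 8 KiB of checksum overhead. A minimal sketch of that arithmetic, using the same stock Hadoop call; the sizes are illustrative:

import org.apache.hadoop.fs.ChecksumFileSystem;

public class ChecksumOverheadSketch {
  public static void main(String[] args) {
    long approxOutputSize = 1L << 20;  // 1 MiB of merged map output (illustrative)
    int bytesPerSum = 512;             // default io.bytes.per.checksum
    // One 4-byte CRC per 512-byte chunk: (1048576 / 512) * 4 = 8192 bytes,
    // plus a few header bytes, so the overhead is just over 8 KiB.
    long overhead = ChecksumFileSystem.getChecksumLength(approxOutputSize, bytesPerSum);
    System.out.println("checksum overhead = " + overhead + " bytes");
  }
}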


Note: The org.apache.hadoop.mapred.Merger.writeFile examples in this article were collected by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from community open-source projects, and copyright remains with the original authors; consult each project's license before distributing or reusing the code. Do not republish without permission.