

Java CompressionCodec Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.io.compress.CompressionCodec. If you are wondering what exactly CompressionCodec does, how to use it, or what real-world code that uses it looks like, the curated class examples here may help.


CompressionCodec belongs to the org.apache.hadoop.io.compress package. The sections below show 15 code examples of the class, sorted by popularity by default.
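
Before the examples, here is a minimal, self-contained sketch of the typical CompressionCodec workflow: instantiate a codec, write through a compressing stream, then read back through a decompressing stream. The GzipCodec choice, the local file system, and the /tmp path are illustrative assumptions, not taken from the examples below.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class CodecRoundTrip {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);

    // Instantiate via ReflectionUtils so the codec receives the Configuration.
    CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);

    // Append the codec's default extension (".gz" for gzip) to the path.
    Path file = new Path("/tmp/codec-demo" + codec.getDefaultExtension());

    // Write through a compressing stream.
    try (Writer out = new OutputStreamWriter(
        codec.createOutputStream(fs.create(file, true)), "UTF-8")) {
      out.write("hello, compressed world\n");
    }

    // Read back through a decompressing stream.
    try (BufferedReader in = new BufferedReader(new InputStreamReader(
        codec.createInputStream(fs.open(file)), "UTF-8"))) {
      System.out.println(in.readLine());
    }
  }
}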

Example 1: open

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
protected void open(Path dstPath, CompressionCodec codeC,
    CompressionType compType, Configuration conf, FileSystem hdfs)
        throws IOException {
  if (useRawLocalFileSystem) {
    if (hdfs instanceof LocalFileSystem) {
      hdfs = ((LocalFileSystem)hdfs).getRaw();
    } else {
      logger.warn("useRawLocalFileSystem is set to true but file system " +
          "is not of type LocalFileSystem: " + hdfs.getClass().getName());
    }
  }
  if (conf.getBoolean("hdfs.append.support", false) && hdfs.isFile(dstPath)) {
    outStream = hdfs.append(dstPath);
  } else {
    outStream = hdfs.create(dstPath);
  }
  writer = SequenceFile.createWriter(conf, outStream,
      serializer.getKeyClass(), serializer.getValueClass(), compType, codeC);

  registerCurrentStream(outStream, hdfs, dstPath);
}
 
Developer: Transwarp-DE, Project: Transwarp-Sample-Code, Lines: 23, Source: HDFSSequenceFile.java

Example 2: getPossiblyCompressedOutputStream

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
/**
 * Returns an {@link OutputStream} for a file that might need
 * compression.
 */
static OutputStream getPossiblyCompressedOutputStream(Path file, 
                                                      Configuration conf)
throws IOException {
  FileSystem fs = file.getFileSystem(conf);
  JobConf jConf = new JobConf(conf);
  if (org.apache.hadoop.mapred.FileOutputFormat.getCompressOutput(jConf)) {
    // get the codec class
    Class<? extends CompressionCodec> codecClass =
      org.apache.hadoop.mapred.FileOutputFormat
                              .getOutputCompressorClass(jConf, 
                                                        GzipCodec.class);
    // get the codec implementation
    CompressionCodec codec = ReflectionUtils.newInstance(codecClass, conf);

    // add the appropriate extension
    file = file.suffix(codec.getDefaultExtension());

    if (isCompressionEmulationEnabled(conf)) {
      FSDataOutputStream fileOut = fs.create(file, false);
      return new DataOutputStream(codec.createOutputStream(fileOut));
    }
  }
  return fs.create(file, false);
}
 
Developer: naver, Project: hadoop, Lines: 29, Source: CompressionEmulationUtil.java

Example 3: open

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
protected void open(Path dstPath, CompressionCodec codeC,
    CompressionType compType, Configuration conf, FileSystem hdfs)
        throws IOException {
  if (useRawLocalFileSystem) {
    if (hdfs instanceof LocalFileSystem) {
      hdfs = ((LocalFileSystem)hdfs).getRaw();
    } else {
      logger.warn("useRawLocalFileSystem is set to true but file system " +
          "is not of type LocalFileSystem: " + hdfs.getClass().getName());
    }
  }
  if (conf.getBoolean("hdfs.append.support", false) && hdfs.isFile(dstPath)) {
    outStream = hdfs.append(dstPath);
  } else {
    outStream = hdfs.create(dstPath);
  }
  writer = SequenceFile.createWriter(conf, outStream,
      serializer.getKeyClass(), serializer.getValueClass(), compType, codeC);

  registerCurrentStream(outStream, hdfs, dstPath);
}
 
Developer: moueimei, Project: flume-release-1.7.0, Lines: 22, Source: HDFSSequenceFile.java

Example 4: getCompressor

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
public Compressor getCompressor() {
  CompressionCodec codec = getCodec(conf);
  if (codec != null) {
    Compressor compressor = CodecPool.getCompressor(codec);
    if (LOG.isTraceEnabled()) LOG.trace("Retrieved compressor " + compressor + " from pool.");
    if (compressor != null) {
      if (compressor.finished()) {
        // Somebody returns the compressor to CodecPool but is still using it.
        LOG.warn("Compressor obtained from CodecPool is already finished()");
      }
      compressor.reset();
    }
    return compressor;
  }
  return null;
}
 
Developer: fengchen8086, Project: ditb, Lines: 17, Source: Compression.java
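
Example 4 only borrows a Compressor; callers must return it to the pool when finished, or pooling degrades into per-call allocation. Below is a minimal sketch of the full borrow/use/return cycle; the DefaultCodec choice and the helper class are assumptions, not part of the example above.

import java.io.ByteArrayOutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.io.compress.DefaultCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class PooledCompression {
  public static byte[] compress(byte[] data, Configuration conf) throws Exception {
    CompressionCodec codec = ReflectionUtils.newInstance(DefaultCodec.class, conf);
    Compressor compressor = CodecPool.getCompressor(codec);
    try {
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      // Pass the pooled compressor explicitly so the codec does not allocate its own.
      CompressionOutputStream out = codec.createOutputStream(bytes, compressor);
      try {
        out.write(data);
        out.finish(); // flush all remaining compressed bytes
      } finally {
        out.close();
      }
      return bytes.toByteArray();
    } finally {
      // Always return the compressor, otherwise the pool cannot reuse it.
      CodecPool.returnCompressor(compressor);
    }
  }
}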

Example 5: cloneFileAttributes

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
/**
 * Clones the attributes (like compression) of the input file and creates a
 * corresponding Writer.
 * @param inputFile the path of the input file whose attributes should be 
 * cloned
 * @param outputFile the path of the output file 
 * @param prog the Progressable to report status during the file write
 * @return Writer
 * @throws IOException
 */
public Writer cloneFileAttributes(Path inputFile, Path outputFile, 
                                  Progressable prog) throws IOException {
  Reader reader = new Reader(conf,
                             Reader.file(inputFile),
                             new Reader.OnlyHeaderOption());
  CompressionType compress = reader.getCompressionType();
  CompressionCodec codec = reader.getCompressionCodec();
  reader.close();

  Writer writer = createWriter(conf, 
                               Writer.file(outputFile), 
                               Writer.keyClass(keyClass), 
                               Writer.valueClass(valClass), 
                               Writer.compression(compress, codec), 
                               Writer.progressable(prog));
  return writer;
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 28, Source: SequenceFile.java

Example 6: getCompressor

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
public Compressor getCompressor() throws IOException {
  CompressionCodec codec = getCodec();
  if (codec != null) {
    Compressor compressor = CodecPool.getCompressor(codec);
    if (compressor != null) {
      if (compressor.finished()) {
        // Somebody returns the compressor to CodecPool but is still using
        // it.
        LOG.warn("Compressor obtained from CodecPool already finished()");
      } else {
        if (LOG.isDebugEnabled()) {
          LOG.debug("Got a compressor: " + compressor.hashCode());
        }
      }
      /**
       * Following statement is necessary to get around bugs in 0.18 where a
       * compressor is referenced after returned back to the codec pool.
       */
      compressor.reset();
    }
    return compressor;
  }
  return null;
}
 
Developer: naver, Project: hadoop, Lines: 25, Source: Compression.java

Example 7: getOutputCompressorClass

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
/**
 * Get the {@link CompressionCodec} for compressing the job outputs.
 * @param conf the {@link JobConf} to look in
 * @param defaultValue the {@link CompressionCodec} to return if not set
 * @return the {@link CompressionCodec} to be used to compress the 
 *         job outputs
 * @throws IllegalArgumentException if the class was specified, but not found
 */
public static Class<? extends CompressionCodec> 
getOutputCompressorClass(JobConf conf, 
                       Class<? extends CompressionCodec> defaultValue) {
  Class<? extends CompressionCodec> codecClass = defaultValue;
  
  String name = conf.get(org.apache.hadoop.mapreduce.lib.output.
    FileOutputFormat.COMPRESS_CODEC);
  if (name != null) {
    try {
      codecClass = conf.getClassByName(name).asSubclass(CompressionCodec.class);
    } catch (ClassNotFoundException e) {
      throw new IllegalArgumentException("Compression codec " + name + 
                                         " was not found.", e);
    }
  }
  return codecClass;
}
 
Developer: naver, Project: hadoop, Lines: 27, Source: FileOutputFormat.java
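
getOutputCompressorClass reads settings that are normally written by the matching setters on the same class. A short sketch of the producing side, assuming the classic org.apache.hadoop.mapred API used in Example 7:

import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;

public class OutputCompressionConfig {
  public static void enableGzipOutput(JobConf job) {
    // Turn on output compression; getCompressOutput(job) will now return true.
    FileOutputFormat.setCompressOutput(job, true);
    // Select the codec; getOutputCompressorClass(job, ...) will return GzipCodec.
    FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
  }
}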

Example 8: getDecompressor

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
public Decompressor getDecompressor() throws IOException {
  CompressionCodec codec = getCodec();
  if (codec != null) {
    Decompressor decompressor = CodecPool.getDecompressor(codec);
    if (decompressor != null) {
      if (decompressor.finished()) {
        // Somebody returns the decompressor to CodecPool but is still using
        // it.
        LOG.warn("Deompressor obtained from CodecPool already finished()");
      } else {
        if (LOG.isDebugEnabled()) {
          LOG.debug("Got a decompressor: " + decompressor.hashCode());
        }
      }
      /**
       * Following statement is necessary to get around bugs in 0.18 where a
       * decompressor is referenced after returned back to the codec pool.
       */
      decompressor.reset();
    }
    return decompressor;
  }

  return null;
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 26, Source: Compression.java

Example 9: createJsonGenerator

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
private JsonGenerator createJsonGenerator(Configuration conf, Path path) 
throws IOException {
  FileSystem outFS = path.getFileSystem(conf);
  CompressionCodec codec =
    new CompressionCodecFactory(conf).getCodec(path);
  OutputStream output;
  Compressor compressor = null;
  if (codec != null) {
    compressor = CodecPool.getCompressor(codec);
    output = codec.createOutputStream(outFS.create(path), compressor);
  } else {
    output = outFS.create(path);
  }

  JsonGenerator outGen = outFactory.createJsonGenerator(output, 
                                                        JsonEncoding.UTF8);
  outGen.useDefaultPrettyPrinter();
  
  return outGen;
}
 
Developer: naver, Project: hadoop, Lines: 21, Source: Anonymizer.java

Example 10: writeMetadataTest

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
@SuppressWarnings("deprecation")
private void writeMetadataTest(FileSystem fs, int count, int seed, Path file, 
                                      CompressionType compressionType, CompressionCodec codec, SequenceFile.Metadata metadata)
  throws IOException {
  fs.delete(file, true);
  LOG.info("creating " + count + " records with metadata and with " + compressionType +
           " compression");
  SequenceFile.Writer writer = 
    SequenceFile.createWriter(fs, conf, file, 
                              RandomDatum.class, RandomDatum.class, compressionType, codec, null, metadata);
  RandomDatum.Generator generator = new RandomDatum.Generator(seed);
  for (int i = 0; i < count; i++) {
    generator.next();
    RandomDatum key = generator.getKey();
    RandomDatum value = generator.getValue();

    writer.append(key, value);
  }
  writer.close();
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 21, Source: TestSequenceFile.java
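
The test above writes a SequenceFile with attached metadata; the sketch below reads the metadata back, assuming the option-based Reader constructor available in Hadoop 2.x (the class and method names here are illustrative):

import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class ReadSequenceFileMetadata {
  public static void printMetadata(Configuration conf, Path file) throws Exception {
    SequenceFile.Reader reader =
        new SequenceFile.Reader(conf, SequenceFile.Reader.file(file));
    try {
      // getMetadata() exposes the key/value pairs written at creation time.
      SequenceFile.Metadata meta = reader.getMetadata();
      for (Map.Entry<Text, Text> e : meta.getMetadata().entrySet()) {
        System.out.println(e.getKey() + " = " + e.getValue());
      }
    } finally {
      reader.close();
    }
  }
}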

Example 11: Reader

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
/**
 * Construct an IFile Reader.
 * 
 * @param conf Configuration File 
 * @param in   The input stream
 * @param length Length of the data in the stream, including the checksum
 *               bytes.
 * @param codec codec
 * @param readsCounter Counter for records read from disk
 * @throws IOException
 */
public Reader(Configuration conf, FSDataInputStream in, long length, 
              CompressionCodec codec,
              Counters.Counter readsCounter) throws IOException {
  readRecordsCounter = readsCounter;
  checksumIn = new IFileInputStream(in, length, conf);
  if (codec != null) {
    decompressor = CodecPool.getDecompressor(codec);
    if (decompressor != null) {
      this.in = codec.createInputStream(checksumIn, decompressor);
    } else {
      LOG.warn("Could not obtain decompressor from CodecPool");
      this.in = checksumIn;
    }
  } else {
    this.in = checksumIn;
  }
  this.dataIn = new DataInputStream(this.in);
  this.fileLength = length;
  
  if (conf != null) {
    bufferSize = conf.getInt("io.file.buffer.size", DEFAULT_BUFFER_SIZE);
  }
}
 
Developer: naver, Project: hadoop, Lines: 35, Source: IFile.java

Example 12: open

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
@Override
public void open(String filePath, CompressionCodec codeC, CompressionType compType)
    throws IOException {
  super.open(filePath, codeC, compType);
  if (closed) {
    opened = true;
  }
}
 
Developer: moueimei, Project: flume-release-1.7.0, Lines: 9, Source: HDFSTestSeqWriter.java

Example 13: getOutputCompressorClass

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
/**
 * Get the {@link CompressionCodec} for compressing the job outputs.
 * @param job the {@link Job} to look in
 * @param defaultValue the {@link CompressionCodec} to return if not set
 * @return the {@link CompressionCodec} to be used to compress the 
 *         job outputs
 * @throws IllegalArgumentException if the class was specified, but not found
 */
public static Class<? extends CompressionCodec> 
getOutputCompressorClass(JobContext job, 
                       Class<? extends CompressionCodec> defaultValue) {
  Class<? extends CompressionCodec> codecClass = defaultValue;
  Configuration conf = job.getConfiguration();
  String name = conf.get(FileOutputFormat.COMPRESS_CODEC);
  if (name != null) {
    try {
      codecClass = conf.getClassByName(name).asSubclass(CompressionCodec.class);
    } catch (ClassNotFoundException e) {
      throw new IllegalArgumentException("Compression codec " + name + 
                                         " was not found.", e);
    }
  }
  return codecClass;
}
 
Developer: naver, Project: hadoop, Lines: 26, Source: FileOutputFormat.java

Example 14: PossiblyDecompressedInputStream

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
public PossiblyDecompressedInputStream(Path inputPath, Configuration conf)
    throws IOException {
  CompressionCodecFactory codecs = new CompressionCodecFactory(conf);
  CompressionCodec inputCodec = codecs.getCodec(inputPath);

  FileSystem ifs = inputPath.getFileSystem(conf);
  FSDataInputStream fileIn = ifs.open(inputPath);

  if (inputCodec == null) {
    decompressor = null;
    coreInputStream = fileIn;
  } else {
    decompressor = CodecPool.getDecompressor(inputCodec);
    coreInputStream = inputCodec.createInputStream(fileIn, decompressor);
  }
}
 
Developer: naver, Project: hadoop, Lines: 17, Source: PossiblyDecompressedInputStream.java
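
PossiblyDecompressedInputStream pairs CompressionCodecFactory with CodecPool. When pooling is not needed, a simpler variant lets the codec allocate its own decompressor; the sketch below (class and method names are assumptions) copies a possibly-compressed file to stdout:

import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class TransparentRead {
  public static void dump(String pathString) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path(pathString);
    FileSystem fs = path.getFileSystem(conf);

    // The factory maps the file extension (.gz, .bz2, ...) to a codec,
    // or returns null for an uncompressed file.
    CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(path);

    InputStream in = fs.open(path);
    if (codec != null) {
      in = codec.createInputStream(in);
    }
    try {
      IOUtils.copyBytes(in, System.out, 4096, false);
    } finally {
      in.close();
    }
  }
}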

Example 15: getCodec

import org.apache.hadoop.io.compress.CompressionCodec; // import the required package/class
/**
 * Given a codec name, instantiate the concrete implementation
 * class that implements it.
 * @throws com.cloudera.sqoop.io.UnsupportedCodecException if a codec cannot
 * be found with the supplied name.
 */
public static CompressionCodec getCodec(String codecName,
  Configuration conf) throws com.cloudera.sqoop.io.UnsupportedCodecException {
  // Try standard Hadoop mechanism first
  CompressionCodec codec = getCodecByName(codecName, conf);
  if (codec != null) {
    return codec;
  }
  // Fall back to Sqoop mechanism
  String codecClassName = null;
  try {
    codecClassName = getCodecClassName(codecName);
    if (null == codecClassName) {
      return null;
    }
    Class<? extends CompressionCodec> codecClass =
        (Class<? extends CompressionCodec>)
        conf.getClassByName(codecClassName);
    return (CompressionCodec) ReflectionUtils.newInstance(
        codecClass, conf);
  } catch (ClassNotFoundException cnfe) {
    throw new com.cloudera.sqoop.io.UnsupportedCodecException(
        "Cannot find codec class "
        + codecClassName + " for codec " + codecName);
  }
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 32, Source: CodecMap.java


Note: The org.apache.hadoop.io.compress.CompressionCodec class examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are drawn from open-source projects contributed by many developers, and copyright remains with the original authors. Please consult each project's license before redistributing or using the code, and do not republish without permission.