Java TFile.Writer Code Examples

This article compiles typical usage examples of TFile.Writer from the Java class org.apache.hadoop.io.file.tfile.TFile. If you are unsure what TFile.Writer is for, or how and where to use it, the curated code examples below should help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.io.file.tfile.TFile.


The following presents eight TFile.Writer code examples, ordered by popularity by default.
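
Before diving into the project examples, here is a minimal, self-contained sketch of the constructor they all use, TFile.Writer(FSDataOutputStream, minBlockSize, compressName, comparator, Configuration). It is not taken from any of the projects below; the class name TFileWriterDemo, the local path tfile-demo, and the 64 KB block size are illustrative assumptions.

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.file.tfile.TFile;

public class TFileWriterDemo {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);
    Path path = new Path("tfile-demo"); // illustrative local path

    // Writer(stream, minBlockSize, compressName, comparator, conf):
    // "none" disables compression; "memcmp" sorts keys as raw bytes.
    FSDataOutputStream out = fs.create(path);
    TFile.Writer writer = new TFile.Writer(out, 64 * 1024, "none", "memcmp", conf);
    for (int i = 0; i < 3; i++) {
      writer.append(("key" + i).getBytes(StandardCharsets.UTF_8),
          ("value" + i).getBytes(StandardCharsets.UTF_8));
    }
    writer.close();
    out.close();

    // Read the entries back by scanning the whole file.
    FSDataInputStream in = fs.open(path);
    TFile.Reader reader =
        new TFile.Reader(in, fs.getFileStatus(path).getLen(), conf);
    TFile.Reader.Scanner scanner = reader.createScanner();
    while (!scanner.atEnd()) {
      TFile.Reader.Scanner.Entry entry = scanner.entry();
      byte[] key = new byte[entry.getKeyLength()];
      byte[] value = new byte[entry.getValueLength()];
      entry.getKey(key);
      entry.getValue(value);
      System.out.println(new String(key, StandardCharsets.UTF_8) + " -> "
          + new String(value, StandardCharsets.UTF_8));
      scanner.advance();
    }
    scanner.close();
    reader.close();
    in.close();
  }
}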

Example 1: writeTFile

import org.apache.hadoop.io.file.tfile.TFile; // import the class the method depends on
public void writeTFile(Path file, String cname) throws Exception
{
  FSDataOutputStream fos = hdfs.create(file);

  // cname names the compression codec; the "jclass:" prefix installs
  // BytesWritable's raw-byte comparator, so keys must be appended in order.
  TFile.Writer writer =
      new TFile.Writer(fos, blockSize, cname, "jclass:" +
      BytesWritable.Comparator.class.getName(), new Configuration());

  for (int i = 0; i < testSize; i++) {
    String k = getKey(i);
    String v = getValue();
    writer.append(k.getBytes(), v.getBytes());
  }

  writer.close();
  fos.close();
}
 
Author: DataTorrent, Project: Megh, Lines: 20, Source: HadoopFilePerformanceTest.java
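
Because writeTFile appends keys under the BytesWritable comparator order, the resulting file supports direct key lookups. A hedged companion sketch, not from the Megh project, reusing the hdfs field and the key encoding assumed above:

public String readValue(Path file, String key) throws Exception
{
  FSDataInputStream in = hdfs.open(file);
  TFile.Reader reader = new TFile.Reader(in, hdfs.getFileStatus(file).getLen(),
      new Configuration());
  TFile.Reader.Scanner scanner = reader.createScanner();
  String result = null;
  // seekTo returns true only when an entry with exactly this key exists.
  if (scanner.seekTo(key.getBytes())) {
    TFile.Reader.Scanner.Entry entry = scanner.entry();
    byte[] value = new byte[entry.getValueLength()];
    entry.getValue(value);
    result = new String(value);
  }
  scanner.close();
  reader.close();
  in.close();
  return result;
}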

Example 2: HistoryFileWriter

import org.apache.hadoop.io.file.tfile.TFile; // import the class the method depends on
public HistoryFileWriter(Path historyFile) throws IOException {
  // Append to an existing history file, otherwise create a new one.
  if (fs.exists(historyFile)) {
    fsdos = fs.append(historyFile);
  } else {
    fsdos = fs.create(historyFile);
  }
  try {
    fs.setPermission(historyFile, HISTORY_FILE_UMASK);
    // Compression codec comes from configuration; the null comparator
    // means keys are written unsorted.
    writer =
        new TFile.Writer(fsdos, MIN_BLOCK_SIZE, getConfig().get(
            YarnConfiguration.FS_APPLICATION_HISTORY_STORE_COMPRESSION_TYPE,
            YarnConfiguration.DEFAULT_FS_APPLICATION_HISTORY_STORE_COMPRESSION_TYPE), null,
            getConfig());
  } catch (IOException e) {
    // Close the stream before rethrowing so the file handle is not leaked.
    IOUtils.cleanup(LOG, fsdos);
    throw e;
  }
}
 
Author: hopshadoop, Project: hops, Lines: 19, Source: FileSystemApplicationHistoryStore.java

Example 3: LogWriter

import org.apache.hadoop.io.file.tfile.TFile; // import the class the method depends on
public LogWriter(final Configuration conf, final Path remoteAppLogFile,
    UserGroupInformation userUgi) throws IOException {
  try {
    this.fsDataOStream =
        userUgi.doAs(new PrivilegedExceptionAction<FSDataOutputStream>() {
          @Override
          public FSDataOutputStream run() throws Exception {
            fc = FileContext.getFileContext(remoteAppLogFile.toUri(), conf);
            fc.setUMask(APP_LOG_FILE_UMASK);
            return fc.create(
                remoteAppLogFile,
                EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
                new Options.CreateOpts[] {});
          }
        });
  } catch (InterruptedException e) {
    throw new IOException(e);
  }

  // Keys are not sorted: pass null for the comparator argument.
  // 256KB minBlockSize: roughly the expected log size per container.
  this.writer =
      new TFile.Writer(this.fsDataOStream, 256 * 1024, conf.get(
          YarnConfiguration.NM_LOG_AGG_COMPRESSION_TYPE,
          YarnConfiguration.DEFAULT_NM_LOG_AGG_COMPRESSION_TYPE), null, conf);
  // Write the version string.
  writeVersion();
}
 
Author: naver, Project: hadoop, Lines: 29, Source: AggregatedLogFormat.java
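
The codec name passed to TFile.Writer here comes from YARN configuration rather than being hard-coded. A hypothetical configuration sketch (the "gz" value is an illustrative choice; the default is "none"):

Configuration conf = new Configuration();
// yarn.nodemanager.log-aggregation.compression-type; TFile accepts
// the codec names "none", "lzo" and "gz".
conf.set(YarnConfiguration.NM_LOG_AGG_COMPRESSION_TYPE, "gz");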

Example 4: HistoryFileWriter

import org.apache.hadoop.io.file.tfile.TFile; // import the class the method depends on
public HistoryFileWriter(Path historyFile) throws IOException {
  if (fs.exists(historyFile)) {
    fsdos = fs.append(historyFile);
  } else {
    fsdos = fs.create(historyFile);
  }
  fs.setPermission(historyFile, HISTORY_FILE_UMASK);
  writer =
      new TFile.Writer(fsdos, MIN_BLOCK_SIZE, getConfig().get(
        YarnConfiguration.FS_APPLICATION_HISTORY_STORE_COMPRESSION_TYPE,
        YarnConfiguration.DEFAULT_FS_APPLICATION_HISTORY_STORE_COMPRESSION_TYPE), null,
        getConfig());
}
 
Author: naver, Project: hadoop, Lines: 14, Source: FileSystemApplicationHistoryStore.java

Example 5: LogWriter

import org.apache.hadoop.io.file.tfile.TFile; // import the class the method depends on
public LogWriter(final Configuration conf, final Path remoteAppLogFile,
    UserGroupInformation userUgi) throws IOException {
  try {
    this.fsDataOStream =
        userUgi.doAs(new PrivilegedExceptionAction<FSDataOutputStream>() {
          @Override
          public FSDataOutputStream run() throws Exception {
            fc = FileContext.getFileContext(conf);
            fc.setUMask(APP_LOG_FILE_UMASK);
            return fc.create(
                remoteAppLogFile,
                EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
                new Options.CreateOpts[] {});
          }
        });
  } catch (InterruptedException e) {
    throw new IOException(e);
  }

  // Keys are not sorted: pass null for the comparator argument.
  // 256KB minBlockSize: roughly the expected log size per container.
  this.writer =
      new TFile.Writer(this.fsDataOStream, 256 * 1024, conf.get(
          YarnConfiguration.NM_LOG_AGG_COMPRESSION_TYPE,
          YarnConfiguration.DEFAULT_NM_LOG_AGG_COMPRESSION_TYPE), null, conf);
  // Write the version string.
  writeVersion();
}
 
Author: yncxcw, Project: big-c, Lines: 29, Source: AggregatedLogFormat.java

Example 6: LogWriter

import org.apache.hadoop.io.file.tfile.TFile; // import the class the method depends on
public LogWriter(final Configuration conf, final Path remoteAppLogFile,
    UserGroupInformation userUgi) throws IOException {
  try {
    this.fsDataOStream =
        userUgi.doAs(new PrivilegedExceptionAction<FSDataOutputStream>() {
          @Override
          public FSDataOutputStream run() throws Exception {
            FileContext fc = FileContext.getFileContext(conf);
            fc.setUMask(APP_LOG_FILE_UMASK);
            return fc.create(
                remoteAppLogFile,
                EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
                new Options.CreateOpts[] {});
          }
        });
  } catch (InterruptedException e) {
    throw new IOException(e);
  }

  // Keys are not sorted: pass null for the comparator argument.
  // 256KB minBlockSize: roughly the expected log size per container.
  this.writer =
      new TFile.Writer(this.fsDataOStream, 256 * 1024, conf.get(
          YarnConfiguration.NM_LOG_AGG_COMPRESSION_TYPE,
          YarnConfiguration.DEFAULT_NM_LOG_AGG_COMPRESSION_TYPE), null, conf);
  // Write the version string.
  writeVersion();
}
 
Author: ict-carch, Project: hadoop-plus, Lines: 29, Source: AggregatedLogFormat.java

Example 7: initialize

import org.apache.hadoop.io.file.tfile.TFile; // import the class the method depends on
/**
 * Initialize the LogWriter.
 * Must be called just after the instance is created.
 * @param conf Configuration
 * @param remoteAppLogFile remote log file path
 * @param userUgi UGI of the user
 * @throws IOException Failed to initialize
 */
public void initialize(final Configuration conf,
                       final Path remoteAppLogFile,
                       UserGroupInformation userUgi) throws IOException {
  try {
    this.fsDataOStream =
        userUgi.doAs(new PrivilegedExceptionAction<FSDataOutputStream>() {
          @Override
          public FSDataOutputStream run() throws Exception {
            fc = FileContext.getFileContext(remoteAppLogFile.toUri(), conf);
            fc.setUMask(APP_LOG_FILE_UMASK);
            return fc.create(
                remoteAppLogFile,
                EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
                new Options.CreateOpts[] {});
          }
        });
  } catch (InterruptedException e) {
    throw new IOException(e);
  }

  // Keys are not sorted: pass null for the comparator argument.
  // 256KB minBlockSize: roughly the expected log size per container.
  this.writer =
      new TFile.Writer(this.fsDataOStream, 256 * 1024, conf.get(
          YarnConfiguration.NM_LOG_AGG_COMPRESSION_TYPE,
          YarnConfiguration.DEFAULT_NM_LOG_AGG_COMPRESSION_TYPE), null, conf);
  // Write the version string.
  writeVersion();
}
 
Author: hopshadoop, Project: hops, Lines: 38, Source: AggregatedLogFormat.java
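
Per the Javadoc, initialize must be called right after construction. A hedged usage sketch (assuming LogWriter's no-argument constructor is accessible in this Hadoop variant, and that conf, remoteAppLogFile and userUgi are already in scope):

// Construct first, then initialize; the TFile.Writer is created inside initialize.
AggregatedLogFormat.LogWriter logWriter = new AggregatedLogFormat.LogWriter();
logWriter.initialize(conf, remoteAppLogFile, userUgi);
// ... append container logs ...
logWriter.close();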

Example 8: getWriter

import org.apache.hadoop.io.file.tfile.TFile; // import the class the method depends on
@VisibleForTesting
public TFile.Writer getWriter() {
  return this.writer;
}
 
Author: naver, Project: hadoop, Lines: 5, Source: AggregatedLogFormat.java


Note: the org.apache.hadoop.io.file.tfile.TFile.Writer examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective authors, and the source code copyright remains with the original authors; consult the corresponding project's license before distributing or using the code. Do not reproduce without permission.