

Java SecureIOUtils.createForWrite Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.io.SecureIOUtils.createForWrite. If you are wondering how SecureIOUtils.createForWrite is used in practice, the curated examples below should help. You can also explore further usage examples of org.apache.hadoop.io.SecureIOUtils.


Six code examples of SecureIOUtils.createForWrite are shown below, ordered by popularity.
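Before the Hadoop-specific examples, here is a rough illustration of what `createForWrite(file, 0644)` guarantees: exclusive creation (failing if the file already exists) with fixed permissions. This sketch uses only the JDK's `java.nio` API and assumes a POSIX filesystem; the class name `SecureCreateSketch` is hypothetical, and this is an analogy for the behavior, not Hadoop's actual implementation (which additionally defends against symlink attacks).

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.nio.file.attribute.PosixFilePermissions;

public class SecureCreateSketch {

  /**
   * Creates the file exclusively (failing if it already exists, analogous
   * to SecureIOUtils.AlreadyExistsException) with rw-r--r-- permissions,
   * i.e. the octal mode 0644 passed in the examples below.
   */
  static OutputStream createForWrite(Path file) throws IOException {
    Files.createFile(file,
        PosixFilePermissions.asFileAttribute(
            PosixFilePermissions.fromString("rw-r--r--")));
    return Files.newOutputStream(file, StandardOpenOption.WRITE);
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempDirectory("secure-io").resolve("index.tmp");
    try (OutputStream out = createForWrite(tmp)) {
      out.write("LOG_DIR: /tmp/logs\n".getBytes());
    }
    System.out.println(Files.size(tmp)); // prints 19 (bytes just written)
    try {
      createForWrite(tmp); // a second exclusive create must fail
    } catch (FileAlreadyExistsException e) {
      System.out.println("already exists");
    }
  }
}
```

The exclusive-create semantics are what make the temporary-file-then-rename pattern in the examples below safe: no other process can have pre-created (or symlinked) the file the writer is about to fill.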

Example 1: writeToIndexFile

import org.apache.hadoop.io.SecureIOUtils; // import the class that provides the method
static void writeToIndexFile(String logLocation,
    TaskAttemptID currentTaskid, boolean isCleanup,
    Map<LogName, Long[]> lengths) throws IOException {
  // To ensure atomicity of updates to index file, write to temporary index
  // file first and then rename.
  File tmpIndexFile = getTmpIndexFile(currentTaskid, isCleanup);
  
  BufferedOutputStream bos = 
    new BufferedOutputStream(
      SecureIOUtils.createForWrite(tmpIndexFile, 0644));
  DataOutputStream dos = new DataOutputStream(bos);
  //the format of the index file is
  //LOG_DIR: <the dir where the task logs are really stored>
  //STDOUT: <start-offset in the stdout file> <length>
  //STDERR: <start-offset in the stderr file> <length>
  //SYSLOG: <start-offset in the syslog file> <length>    
  dos.writeBytes(LogFileDetail.LOCATION
      + logLocation
      + "\n");
  for (LogName logName : LOGS_TRACKED_BY_INDEX_FILES) {
    Long[] lens = lengths.get(logName);
    dos.writeBytes(logName.toString() + ":"
        + lens[0].toString() + " "
        + Long.toString(lens[1].longValue() - lens[0].longValue())
        + "\n");
  }
  dos.close();

  File indexFile = getIndexFile(currentTaskid, isCleanup);
  Path indexFilePath = new Path(indexFile.getAbsolutePath());
  Path tmpIndexFilePath = new Path(tmpIndexFile.getAbsolutePath());

  if (localFS == null) {// set localFS once
    localFS = FileSystem.getLocal(new Configuration());
  }
  localFS.rename (tmpIndexFilePath, indexFilePath);
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 37, Source: TaskLog.java

Example 2: writeToIndexFile

import org.apache.hadoop.io.SecureIOUtils; // import the class that provides the method
static synchronized 
void writeToIndexFile(String logLocation,
                      TaskAttemptID currentTaskid, 
                      boolean isCleanup,
                      Map<LogName, Long[]> lengths) throws IOException {
  // To ensure atomicity of updates to index file, write to temporary index
  // file first and then rename.
  File tmpIndexFile = getTmpIndexFile(currentTaskid, isCleanup);
  
  BufferedOutputStream bos = 
    new BufferedOutputStream(
      SecureIOUtils.createForWrite(tmpIndexFile, 0644));
  DataOutputStream dos = new DataOutputStream(bos);
  //the format of the index file is
  //LOG_DIR: <the dir where the task logs are really stored>
  //STDOUT: <start-offset in the stdout file> <length>
  //STDERR: <start-offset in the stderr file> <length>
  //SYSLOG: <start-offset in the syslog file> <length>    
  dos.writeBytes(LogFileDetail.LOCATION
      + logLocation
      + "\n");
  for (LogName logName : LOGS_TRACKED_BY_INDEX_FILES) {
    Long[] lens = lengths.get(logName);
    dos.writeBytes(logName.toString() + ":"
        + lens[0].toString() + " "
        + Long.toString(lens[1].longValue() - lens[0].longValue())
        + "\n");
  }
  dos.close();

  File indexFile = getIndexFile(currentTaskid, isCleanup);
  Path indexFilePath = new Path(indexFile.getAbsolutePath());
  Path tmpIndexFilePath = new Path(tmpIndexFile.getAbsolutePath());

  if (localFS == null) {// set localFS once
    localFS = FileSystem.getLocal(new Configuration());
  }
  localFS.rename (tmpIndexFilePath, indexFilePath);
}
 
Developer: Seagate, Project: hadoop-on-lustre, Lines: 39, Source: TaskLog.java

Example 3: writeJobACLs

import org.apache.hadoop.io.SecureIOUtils; // import the class that provides the method
/**
 *  Creates job-acls.xml under the given directory logDir and writes
 *  job-view-acl, queue-admins-acl, jobOwner name and queue name into this
 *  file.
 *  The queue name is the queue to which the job was submitted.
 *  queue-admins-acl is the queue admins ACL of the queue to which this
 *  job was submitted.
 * @param conf   job configuration
 * @param logDir job userlog dir
 * @throws IOException
 */
private static void writeJobACLs(JobConf conf, File logDir)
    throws IOException {
  File aclFile = new File(logDir, jobACLsFile);
  JobConf aclConf = new JobConf(false);

  // set the job view acl in aclConf
  String jobViewACL = conf.get(MRJobConfig.JOB_ACL_VIEW_JOB, " ");
  aclConf.set(MRJobConfig.JOB_ACL_VIEW_JOB, jobViewACL);

  // set the job queue name in aclConf
  String queue = conf.getQueueName();
  aclConf.setQueueName(queue);

  // set the queue admins acl in aclConf
  String qACLName = toFullPropertyName(queue,
      QueueACL.ADMINISTER_JOBS.getAclName());
  String queueAdminsACL = conf.get(qACLName, " ");
  aclConf.set(qACLName, queueAdminsACL);

  // set jobOwner as user.name in aclConf
  String jobOwner = conf.getUser();
  aclConf.set("user.name", jobOwner);

  FileOutputStream out;
  try {
    out = SecureIOUtils.createForWrite(aclFile, 0600);
  } catch (SecureIOUtils.AlreadyExistsException aee) {
    LOG.warn("Job ACL file already exists at " + aclFile, aee);
    return;
  }
  try {
    aclConf.writeXml(out);
  } finally {
    out.close();
  }
}
 
Developer: rekhajoshm, Project: mapreduce-fork, Lines: 48, Source: TaskTracker.java

Example 4: writeToIndexFile

import org.apache.hadoop.io.SecureIOUtils; // import the class that provides the method
private static synchronized 
void writeToIndexFile(String logLocation,
                      boolean isCleanup) throws IOException {
  // To ensure atomicity of updates to index file, write to temporary index
  // file first and then rename.
  File tmpIndexFile = getTmpIndexFile(currentTaskid, isCleanup);

  BufferedOutputStream bos = null;
  DataOutputStream dos = null;
  try{
    bos = new BufferedOutputStream(
        SecureIOUtils.createForWrite(tmpIndexFile, 0644));
    dos = new DataOutputStream(bos);
    //the format of the index file is
    //LOG_DIR: <the dir where the task logs are really stored>
    //STDOUT: <start-offset in the stdout file> <length>
    //STDERR: <start-offset in the stderr file> <length>
    //SYSLOG: <start-offset in the syslog file> <length>   

    dos.writeBytes(LogFileDetail.LOCATION + logLocation + "\n"
        + LogName.STDOUT.toString() + ":");
    dos.writeBytes(Long.toString(prevOutLength) + " ");
    dos.writeBytes(Long.toString(new File(logLocation, LogName.STDOUT
        .toString()).length() - prevOutLength)
        + "\n" + LogName.STDERR + ":");
    dos.writeBytes(Long.toString(prevErrLength) + " ");
    dos.writeBytes(Long.toString(new File(logLocation, LogName.STDERR
        .toString()).length() - prevErrLength)
        + "\n" + LogName.SYSLOG.toString() + ":");
    dos.writeBytes(Long.toString(prevLogLength) + " ");
    dos.writeBytes(Long.toString(new File(logLocation, LogName.SYSLOG
        .toString()).length() - prevLogLength)
        + "\n");
    dos.close();
    dos = null;
    bos.close();
    bos = null;
  } finally {
    IOUtils.cleanup(LOG, dos, bos);
  }

  File indexFile = getIndexFile(currentTaskid, isCleanup);
  Path indexFilePath = new Path(indexFile.getAbsolutePath());
  Path tmpIndexFilePath = new Path(tmpIndexFile.getAbsolutePath());

  if (localFS == null) {// set localFS once
    localFS = FileSystem.getLocal(new Configuration());
  }
  localFS.rename (tmpIndexFilePath, indexFilePath);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 51, Source: TaskLog.java

Example 5: writeToIndexFile

import org.apache.hadoop.io.SecureIOUtils; // import the class that provides the method
private static synchronized 
void writeToIndexFile(String logLocation,
                      boolean isCleanup) throws IOException {
  // To ensure atomicity of updates to index file, write to temporary index
  // file first and then rename.
  File tmpIndexFile = getTmpIndexFile(currentTaskid, isCleanup);

  BufferedOutputStream bos = 
    new BufferedOutputStream(
      SecureIOUtils.createForWrite(tmpIndexFile, 0644));
  DataOutputStream dos = new DataOutputStream(bos);
  //the format of the index file is
  //LOG_DIR: <the dir where the task logs are really stored>
  //STDOUT: <start-offset in the stdout file> <length>
  //STDERR: <start-offset in the stderr file> <length>
  //SYSLOG: <start-offset in the syslog file> <length>   
  try{
    dos.writeBytes(LogFileDetail.LOCATION + logLocation + "\n"
        + LogName.STDOUT.toString() + ":");
    dos.writeBytes(Long.toString(prevOutLength) + " ");
    dos.writeBytes(Long.toString(new File(logLocation, LogName.STDOUT
        .toString()).length() - prevOutLength)
        + "\n" + LogName.STDERR + ":");
    dos.writeBytes(Long.toString(prevErrLength) + " ");
    dos.writeBytes(Long.toString(new File(logLocation, LogName.STDERR
        .toString()).length() - prevErrLength)
        + "\n" + LogName.SYSLOG.toString() + ":");
    dos.writeBytes(Long.toString(prevLogLength) + " ");
    dos.writeBytes(Long.toString(new File(logLocation, LogName.SYSLOG
        .toString()).length() - prevLogLength)
        + "\n");
    dos.close();
    dos = null;
  } finally {
    IOUtils.cleanup(LOG, dos);
  }

  File indexFile = getIndexFile(currentTaskid, isCleanup);
  Path indexFilePath = new Path(indexFile.getAbsolutePath());
  Path tmpIndexFilePath = new Path(tmpIndexFile.getAbsolutePath());

  if (localFS == null) {// set localFS once
    localFS = FileSystem.getLocal(new Configuration());
  }
  localFS.rename (tmpIndexFilePath, indexFilePath);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 47, Source: TaskLog.java

Example 6: writeToIndexFile

import org.apache.hadoop.io.SecureIOUtils; // import the class that provides the method
private static void writeToIndexFile(String logLocation,
                                     boolean isCleanup) 
throws IOException {
  // To ensure atomicity of updates to index file, write to temporary index
  // file first and then rename.
  File tmpIndexFile = getTmpIndexFile(currentTaskid, isCleanup);

  BufferedOutputStream bos = 
    new BufferedOutputStream(
      SecureIOUtils.createForWrite(tmpIndexFile, 0644));
  DataOutputStream dos = new DataOutputStream(bos);
  //the format of the index file is
  //LOG_DIR: <the dir where the task logs are really stored>
  //STDOUT: <start-offset in the stdout file> <length>
  //STDERR: <start-offset in the stderr file> <length>
  //SYSLOG: <start-offset in the syslog file> <length>    
  dos.writeBytes(LogFileDetail.LOCATION + logLocation + "\n"
      + LogName.STDOUT.toString() + ":");
  dos.writeBytes(Long.toString(prevOutLength) + " ");
  dos.writeBytes(Long.toString(new File(logLocation, LogName.STDOUT
      .toString()).length() - prevOutLength)
      + "\n" + LogName.STDERR + ":");
  dos.writeBytes(Long.toString(prevErrLength) + " ");
  dos.writeBytes(Long.toString(new File(logLocation, LogName.STDERR
      .toString()).length() - prevErrLength)
      + "\n" + LogName.SYSLOG.toString() + ":");
  dos.writeBytes(Long.toString(prevLogLength) + " ");
  dos.writeBytes(Long.toString(new File(logLocation, LogName.SYSLOG
      .toString()).length() - prevLogLength)
      + "\n");
  dos.close();

  File indexFile = getIndexFile(currentTaskid, isCleanup);
  Path indexFilePath = new Path(indexFile.getAbsolutePath());
  Path tmpIndexFilePath = new Path(tmpIndexFile.getAbsolutePath());

  if (localFS == null) {// set localFS once
    localFS = FileSystem.getLocal(new Configuration());
  }
  localFS.rename (tmpIndexFilePath, indexFilePath);
}
 
Developer: rekhajoshm, Project: mapreduce-fork, Lines: 42, Source: TaskLog.java


Note: the org.apache.hadoop.io.SecureIOUtils.createForWrite examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by various developers; copyright remains with the original authors, and any use or redistribution should follow each project's license. Do not republish without permission.