

Java FileOutputCommitter.getWorkPath Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.getWorkPath. If you have been wondering how FileOutputCommitter.getWorkPath is used in practice, the curated examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.


Five code examples of FileOutputCommitter.getWorkPath are shown below, ordered by popularity.
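All of the examples retrieve the committer's work path, write task output there, and rely on the commit step to promote finished files into the final output directory. For readers without a Hadoop environment, here is a minimal plain-Java sketch of that write-to-work-dir-then-promote pattern using only java.nio.file; the class and method names (WorkPathDemo, workPath, commit) are illustrative stand-ins, not Hadoop APIs.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/**
 * Illustrates the work-path pattern behind FileOutputCommitter:
 * each task writes into a hidden temporary directory, and the
 * committer later promotes finished files into the real output dir.
 */
public class WorkPathDemo {

    /**
     * Returns the task's private work directory, creating it if needed
     * (analogous in spirit to FileOutputCommitter.getWorkPath()).
     */
    public static Path workPath(Path outputDir, String taskAttemptId) throws IOException {
        Path work = outputDir.resolve("_temporary").resolve(taskAttemptId);
        Files.createDirectories(work);
        return work;
    }

    /**
     * Promotes a finished work file into the final output directory
     * (analogous to the commit step).
     */
    public static Path commit(Path workFile, Path outputDir) throws IOException {
        Path dest = outputDir.resolve(workFile.getFileName());
        return Files.move(workFile, dest, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path outputDir = Files.createTempDirectory("output");
        Path work = workPath(outputDir, "attempt_0000_m_000000_0");
        Path part = work.resolve("part-m-00000");
        Files.write(part, "record\n".getBytes());   // task writes into the work dir
        Path committed = commit(part, outputDir);   // "committer" moves it to $output
        System.out.println("committed: " + committed);
    }
}
```

The real FileOutputCommitter does considerably more (attempt tracking, recovery, cleanup of losing task attempts), but the examples below all reduce to this idea: build output paths under getWorkPath() so that only committed files ever appear in the job's output directory.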

Example 1: createRecordWriter

import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter; // import the class the method depends on
/**
 * Creates a new {@link RecordWriter} to output temporary data.
 * @param <V> value type
 * @param context current context
 * @param name output name
 * @param dataType value type
 * @return the created writer
 * @throws IOException if failed to create a new {@link RecordWriter}
 * @throws InterruptedException if interrupted
 */
public <V> RecordWriter<NullWritable, V> createRecordWriter(
        TaskAttemptContext context,
        String name,
        Class<V> dataType) throws IOException, InterruptedException {
    Configuration conf = context.getConfiguration();
    FileOutputCommitter committer = (FileOutputCommitter) getOutputCommitter(context);
    Path file = new Path(
            committer.getWorkPath(),
            FileOutputFormat.getUniqueFile(context, name, "")); //$NON-NLS-1$
    ModelOutput<V> out = TemporaryStorage.openOutput(conf, dataType, file);
    return new RecordWriter<NullWritable, V>() {
        @Override
        public void write(NullWritable key, V value) throws IOException {
            out.write(value);
        }
        @Override
        public void close(TaskAttemptContext ignored) throws IOException {
            out.close();
        }
        @Override
        public String toString() {
            return String.format("TemporaryOutput(%s)", file); //$NON-NLS-1$
        }
    };
}
 
Developer: asakusafw, Project: asakusafw-compiler, Lines: 36, Source: TemporaryFileOutputFormat.java

Example 2: init

import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter; // import the class the method depends on
@Override
public void init() throws IOException {
  super.init();

  Configuration taskConf = new Configuration();
  Path stagingResultDir = new Path(stagingDir, TajoConstants.RESULT_DIR_NAME);
  taskConf.set(FileOutputFormat.OUTDIR, stagingResultDir.toString());

  ExecutionBlockId ebId = taskAttemptId.getTaskId().getExecutionBlockId();
  writerContext = new TaskAttemptContextImpl(taskConf,
      new TaskAttemptID(ebId.getQueryId().toString(), ebId.getId(), TaskType.MAP,
          taskAttemptId.getTaskId().getId(), taskAttemptId.getId()));

  HFileOutputFormat2 hFileOutputFormat2 = new HFileOutputFormat2();
  try {
    writer = hFileOutputFormat2.getRecordWriter(writerContext);

    committer = new FileOutputCommitter(FileOutputFormat.getOutputPath(writerContext), writerContext);
    workingFilePath = committer.getWorkPath();
  } catch (InterruptedException e) {
    throw new IOException(e.getMessage(), e);
  }

  LOG.info("Created hbase file writer: " + workingFilePath);
}
 
Developer: apache, Project: tajo, Lines: 26, Source: HFileAppender.java

Example 3: getDefaultWorkFile

import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter; // import the class the method depends on
@Override
public Path getDefaultWorkFile(TaskAttemptContext context,
        String extension) throws IOException {
    FileOutputCommitter committer =
            (FileOutputCommitter) super.getOutputCommitter(context);
    return new Path(committer.getWorkPath(), getUniqueFile(context,
            "part", extension));
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 9, Source: TestStore.java

Example 4: getDefaultWorkFile

import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter; // import the class the method depends on
public static <K, V> Path getDefaultWorkFile(FileOutputFormat<K, V> format,
    TaskAttemptContext context) throws IOException {
  FileOutputCommitter committer =
      (FileOutputCommitter) format.getOutputCommitter(context);
  return new Path(committer.getWorkPath(), getOutputFile(context));
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 7, Source: ShardNameTemplateHelper.java

Example 5: getDefaultWorkFile

import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter; // import the class the method depends on
/**
 * Get the default path and filename for the output format.
 * @param context the task context
 * @param extension an extension to add to the filename
 * @return a full path $output/_temporary/$task-id/part-[mr]-$id
 * @throws java.io.IOException if an I/O error occurs
 */
@Override
public Path getDefaultWorkFile(TaskAttemptContext context, String extension) throws IOException {
    FileOutputCommitter committer = (FileOutputCommitter) getOutputCommitter(context);
    return new Path(committer.getWorkPath(), getCustomFileName(context, getOutputName(context), extension));
}
 
Developer: Netflix, Project: aegisthus, Lines: 13, Source: CustomFileNameFileOutputFormat.java


Note: the org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.getWorkPath examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. The copyright of each snippet remains with its original author; consult the corresponding project's license before redistributing or reusing the code.