

Java CombineFileSplit.getStartOffsets Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.mapreduce.lib.input.CombineFileSplit.getStartOffsets. If you are wondering what CombineFileSplit.getStartOffsets does, how to call it, or what real-world uses of it look like, the curated code examples below should help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.mapreduce.lib.input.CombineFileSplit.


Below are 4 code examples of CombineFileSplit.getStartOffsets, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
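Before the examples, here is a minimal, self-contained sketch (not taken from any of the projects below) of how getStartOffsets() lines up with getPaths() and getLengths(): index i in each array describes one file chunk packed into the combined split. The file paths, offsets, and lengths used here are made-up illustration values.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;

public class StartOffsetsDemo {
  // Walk the parallel arrays of a CombineFileSplit: paths[i], starts[i] and
  // lengths[i] together describe the i-th file chunk packed into the split.
  static void describe(CombineFileSplit split) {
    Path[] paths = split.getPaths();
    long[] starts = split.getStartOffsets();
    long[] lengths = split.getLengths();
    for (int i = 0; i < paths.length; i++) {
      System.out.printf("chunk %d: %s offset=%d length=%d%n",
          i, paths[i], starts[i], lengths[i]);
    }
  }

  public static void main(String[] args) {
    // Hypothetical split built from two file chunks (illustration values only).
    Path[] paths = { new Path("/data/a.log"), new Path("/data/b.log") };
    long[] starts = { 0L, 128L };
    long[] lengths = { 1024L, 512L };
    describe(new CombineFileSplit(paths, starts, lengths, new String[0]));
  }
}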

Example 1: initialize

import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit; // import the package/class this method depends on
@Override
public void initialize(InputSplit split, TaskAttemptContext context)
    throws IOException {
  Configuration conf = context.getConfiguration();
  CombineFileSplit cSplit =  (CombineFileSplit) split;
  Path[] path = cSplit.getPaths();
  long[] start = cSplit.getStartOffsets();
  long[] len = cSplit.getLengths();
  
  FileSystem fs = cSplit.getPath(0).getFileSystem(conf);
  
  long startTS = conf.getLong(RowInputFormat.START_TIME_MILLIS, 0L);
  long endTS = conf.getLong(RowInputFormat.END_TIME_MILLIS, 0L);
  this.splitIterator = HDFSSplitIterator.newInstance(fs, path, start, len, startTS, endTS);

  instantiateGfxdLoner(conf);
}
 
Developer ID: gemxd, Project: gemfirexd-oss, Lines of code: 18, Source file: RowRecordReader.java

Example 2: FileQueue

import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit; // import the package/class this method depends on
/**
 * @param split Description of input sources.
 * @param conf Used to resolve FileSystem instances.
 */
public FileQueue(CombineFileSplit split, Configuration conf)
    throws IOException {
  this.conf = conf;
  paths = split.getPaths();
  startoffset = split.getStartOffsets();
  lengths = split.getLengths();
  nextSource();
}
 
Developer ID: naver, Project: hadoop, Lines of code: 13, Source file: FileQueue.java

Example 3: initialize

import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit; // import the package/class this method depends on
@Override
public void initialize(InputSplit split, TaskAttemptContext context)
throws IOException, InterruptedException {
  CombineFileSplit cSplit = (CombineFileSplit) split;
  Path[] path = cSplit.getPaths();
  long[] start = cSplit.getStartOffsets();
  long[] len = cSplit.getLengths();

  Configuration conf = context.getConfiguration();
  FileSystem fs = cSplit.getPath(0).getFileSystem(conf);
  
  this.splitIterator = HDFSSplitIterator.newInstance(fs, path, start, len, 0L, 0L);
}
 
Developer ID: gemxd, Project: gemfirexd-oss, Lines of code: 14, Source file: AbstractGFRecordReader.java

Example 4: initialize

import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit; // import the package/class this method depends on
@Override
public void initialize(InputSplit genericSplit, TaskAttemptContext context)
		throws IOException, InterruptedException {
	CombineFileSplit split = (CombineFileSplit) genericSplit;
	Configuration job = context.getConfiguration();
	this.maxLineLength = job.getInt("mapred.linerecordreader.maxlength",
			Integer.MAX_VALUE);

	this.start = split.getStartOffsets()[idx];
	this.end = start + split.getLength(idx); // length of this chunk, not of the whole combined split
	Path file = split.getPath(idx);
	this.compressionCodescs = new CompressionCodecFactory(job);
	final CompressionCodec codec = compressionCodescs.getCodec(file);

	FileSystem fs = file.getFileSystem(job);
	FSDataInputStream fileIn = fs.open(split.getPath(idx));
	boolean skipFirstLine = false;
	if (codec != null) {
		in = new LineReader(codec.createInputStream(fileIn), job);
		end = Long.MAX_VALUE;
	} else {
		if (start != 0) {
			skipFirstLine = true;
			--start;
			fileIn.seek(start);
		}
		in = new LineReader(fileIn, job);
	}
	if (skipFirstLine) {// skip first line and re-establish "start"
		start += in.readLine(new Text(), 0,
				(int) Math.min((long) Integer.MAX_VALUE, end - start));
	}
	this.pos = start;
}
 
Developer ID: makelove, Project: book-hadoop-hacks, Lines of code: 35, Source file: CombineFileLineRecordReader.java


Note: the org.apache.hadoop.mapreduce.lib.input.CombineFileSplit.getStartOffsets examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective authors, and copyright of the source code remains with the original authors; please refer to each project's license before distributing or reusing the code. Do not reproduce this article without permission.