

Java LineRecordReader.initialize Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize. If you have been wondering what LineRecordReader.initialize does and how to use it, the curated examples below should help. You can also explore the other usage examples of org.apache.hadoop.mapreduce.lib.input.LineRecordReader.


The following presents 8 code examples of the LineRecordReader.initialize method, sorted by popularity.
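All eight examples follow the same delegation pattern: construct a LineRecordReader, forward initialize(InputSplit, TaskAttemptContext) to it, and then pull keys and values from it. As orientation before the real-world snippets, here is a minimal self-contained sketch of that pattern; the class name DelegatingLineRecordReader is hypothetical and does not come from any of the projects cited below.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// Minimal sketch: a RecordReader that delegates everything to LineRecordReader.
public class DelegatingLineRecordReader extends RecordReader<LongWritable, Text> {

  private final LineRecordReader lineReader = new LineRecordReader();

  @Override
  public void initialize(InputSplit split, TaskAttemptContext context)
      throws IOException {
    // LineRecordReader handles seeking to the split start, codec detection,
    // and skipping the partial first line of non-initial splits.
    lineReader.initialize(split, context);
  }

  @Override
  public boolean nextKeyValue() throws IOException {
    return lineReader.nextKeyValue();
  }

  @Override
  public LongWritable getCurrentKey() {
    return lineReader.getCurrentKey();
  }

  @Override
  public Text getCurrentValue() {
    return lineReader.getCurrentValue();
  }

  @Override
  public float getProgress() throws IOException {
    return lineReader.getProgress();
  }

  @Override
  public void close() throws IOException {
    lineReader.close();
  }
}

The framework calls initialize exactly once, before the first nextKeyValue; every example below performs its own setup (parsers, schemas, configuration) in that same callback.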

Example 1: initialize

import org.apache.hadoop.mapreduce.lib.input.LineRecordReader; // import the package/class this method depends on
@Override
public void initialize(InputSplit inputSplit, TaskAttemptContext context) throws IOException
{
  key = new Text();
  value = new MapWritable();
  jsonParser = new JSONParser();

  lineReader = new LineRecordReader();
  lineReader.initialize(inputSplit, context);

  queryString = context.getConfiguration().get("query", "?q=*");

  // Load the data schemas
  FileSystem fs = FileSystem.get(context.getConfiguration());
  try
  {
    SystemConfiguration.setProperty("data.schemas", context.getConfiguration().get("data.schemas"));
    DataSchemaLoader.initialize(true, fs);
  } catch (Exception e)
  {
    e.printStackTrace();
  }
  String dataSchemaName = context.getConfiguration().get("dataSchemaName");
  dataSchema = DataSchemaRegistry.get(dataSchemaName);
}
 
Developer ID: apache, Project: incubator-pirk, Lines: 26, Source: JSONRecordReader.java

Example 2: initialize

import org.apache.hadoop.mapreduce.lib.input.LineRecordReader; // import the package/class this method depends on
/**
 * Called once at initialization to initialize the RecordReader.
 *
 * @param genericSplit the split that defines the range of records to read.
 * @param context the information about the task.
 * @throws IOException on IO Error.
 */
@Override
public void initialize(InputSplit genericSplit, TaskAttemptContext context)
    throws IOException {
  if (LOG.isDebugEnabled()) {
    try {
      LOG.debug("initialize('{}', '{}')",
          HadoopToStringUtil.toString(genericSplit), HadoopToStringUtil.toString(context));
    } catch (InterruptedException ie) {
      LOG.debug("InterruptedException during HadoopToStringUtil.toString", ie);
    }
  }
  Preconditions.checkArgument(genericSplit instanceof FileSplit,
      "InputSplit genericSplit should be an instance of FileSplit.");
  // Get FileSplit.
  FileSplit fileSplit = (FileSplit) genericSplit;
  // Create the JsonParser.
  jsonParser = new JsonParser();
  // Initialize the LineRecordReader.
  lineReader = new LineRecordReader();
  lineReader.initialize(fileSplit, context);
}
 
Developer ID: GoogleCloudPlatform, Project: bigdata-interop, Lines: 29, Source: GsonRecordReader.java

Example 3: initialize

import org.apache.hadoop.mapreduce.lib.input.LineRecordReader; // import the package/class this method depends on
@Override
public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext)
  throws IOException, InterruptedException {
  lineRecordReader = new LineRecordReader();
  lineRecordReader.initialize(inputSplit, taskAttemptContext);
  currentKey = new ImmutableBytesWritable();
  parser = new JSONParser();
  skipBadLines = taskAttemptContext.getConfiguration().getBoolean(
    SKIP_LINES_CONF_KEY, true);
}
 
Developer ID: lhfei, Project: hbase-in-action, Lines: 11, Source: BulkImportJobExample.java

Example 4: initialize

import org.apache.hadoop.mapreduce.lib.input.LineRecordReader; // import the package/class this method depends on
@Override
@SuppressWarnings("squid:S2095") // recordReader is closed explicitly in the close() method
public void initialize(InputSplit split, TaskAttemptContext context) throws IOException,
    InterruptedException
{
  if (split instanceof FileSplit)
  {
    FileSplit fsplit = (FileSplit) split;
    delimitedParser = getDelimitedParser(fsplit.getPath().toString(),
        context.getConfiguration());
    recordReader = new LineRecordReader();
    recordReader.initialize(fsplit, context);
    // Skip the first line when the parser is configured to do so
    if (delimitedParser.getSkipFirstLine())
    {
      // Only skip the first line of the first split. The other
      // splits are somewhere in the middle of the original file,
      // so their first lines should not be skipped.
      if (fsplit.getStart() != 0)
      {
        nextKeyValue();
      }
    }
  }
  else
  {
    throw new IOException("input split is not a FileSplit");
  }
}
 
Developer ID: ngageoint, Project: mrgeo, Lines: 30, Source: DelimitedVectorRecordReader.java

Example 5: initializeNextReader

import org.apache.hadoop.mapreduce.lib.input.LineRecordReader; // import the package/class this method depends on
private void initializeNextReader() throws IOException {
  rdr = new LineRecordReader();
  rdr.initialize(
      new FileSplit(split.getPath(currentSplit),
          split.getOffset(currentSplit),
          split.getLength(currentSplit), null),
      context);
  ++currentSplit;
}
 
Developer ID: Pivotal-Field-Engineering, Project: pmr-common, Lines: 12, Source: CombineTextInputFormat.java

Example 6: initialize

import org.apache.hadoop.mapreduce.lib.input.LineRecordReader; // import the package/class this method depends on
@Override
public void initialize(InputSplit split, TaskAttemptContext context)
		throws IOException, InterruptedException {

	rdr = new LineRecordReader();
	rdr.initialize(split, context);
}
 
Developer ID: Pivotal-Field-Engineering, Project: pmr-common, Lines: 8, Source: JsonInputFormat.java

Example 7: initialize

import org.apache.hadoop.mapreduce.lib.input.LineRecordReader; // import the package/class this method depends on
@Override
public void initialize(InputSplit inputSplit, TaskAttemptContext attempt)
		throws IOException, InterruptedException {
	lineReader = new LineRecordReader();
	lineReader.initialize(inputSplit, attempt);
}
 
Developer ID: willddy, Project: bigdata_pattern, Lines: 8, Source: LogFileRecordReader.java

Example 8: initialize

import org.apache.hadoop.mapreduce.lib.input.LineRecordReader; // import the package/class this method depends on
public void initialize(InputSplit genericSplit, TaskAttemptContext context) throws IOException
{
	lineReader = new LineRecordReader();
	lineReader.initialize(genericSplit, context);

	split = (FileSplit)genericSplit;
	value = null;
}
 
Developer ID: ilveroluca, Project: seal, Lines: 9, Source: SamInputFormat.java
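
All of the readers above are handed to the framework by an InputFormat. For completeness, here is a minimal, hypothetical wiring of the sketch from the top of this article into a FileInputFormat subclass; the Hadoop MapReduce runtime then calls initialize on the returned reader before the first record is read.

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// Hypothetical InputFormat that hands out the DelegatingLineRecordReader sketch.
public class DelegatingLineInputFormat extends FileInputFormat<LongWritable, Text> {

  @Override
  public RecordReader<LongWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context) {
    // The framework, not this method, calls initialize(split, context).
    return new DelegatingLineRecordReader();
  }
}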


Note: The org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by their respective authors, and copyright of the source code remains with those authors. Consult each project's license before distributing or using the code; do not republish without permission.