

Java Mapper Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.mapreduce.Mapper. If you are wondering what the Mapper class is for, how to use it, or what real-world examples of it look like, the curated class code examples below may help.


The Mapper class belongs to the org.apache.hadoop.mapreduce package. A total of 15 code examples of the Mapper class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
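Before turning to the examples, here is a minimal sketch of a custom Mapper subclass (the class name TokenCountMapper and its fields are hypothetical, not taken from any project below) illustrating the four generic type parameters and the map() signature that the examples in this article build on:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical word-count style mapper: Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.
public class TokenCountMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // Emit (token, 1) for every whitespace-separated token in the line.
    for (String token : value.toString().split("\\s+")) {
      if (!token.isEmpty()) {
        word.set(token);
        context.write(word, ONE);
      }
    }
  }
}

A subclass like this is registered on a Job with job.setMapperClass(TokenCountMapper.class), just as Example 2 below does with the base Mapper class itself.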

Example 1: getTmpFile

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
private Path getTmpFile(Path target, Mapper.Context context) {
  Path targetWorkPath = new Path(context.getConfiguration().
      get(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH));

  Path root = target.equals(targetWorkPath) ? targetWorkPath.getParent() : targetWorkPath;
  // Build the temp-file path once, then log and return it.
  Path tmpFile = new Path(root, ".distcp.tmp." + context.getTaskAttemptID().toString());
  LOG.info("Creating temp file: " + tmpFile);
  return tmpFile;
}
 
Developer: naver, Project: hadoop, Lines: 10, Source: RetriableFileCopyCommand.java

Example 2: runRandomInputGenerator

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
public int runRandomInputGenerator(int numMappers, long numNodes, Path tmpOutput,
    Integer width, Integer wrapMuplitplier) throws Exception {
  LOG.info("Running RandomInputGenerator with numMappers=" + numMappers
      + ", numNodes=" + numNodes);
  Job job = Job.getInstance(getConf());

  job.setJobName("Random Input Generator");
  job.setNumReduceTasks(0);
  job.setJarByClass(getClass());

  job.setInputFormatClass(GeneratorInputFormat.class);
  job.setOutputKeyClass(BytesWritable.class);
  job.setOutputValueClass(NullWritable.class);

  setJobConf(job, numMappers, numNodes, width, wrapMuplitplier);

  job.setMapperClass(Mapper.class); //identity mapper

  FileOutputFormat.setOutputPath(job, tmpOutput);
  job.setOutputFormatClass(SequenceFileOutputFormat.class);

  boolean success = jobCompletion(job);

  return success ? 0 : 1;
}
 
Developer: fengchen8086, Project: ditb, Lines: 26, Source: IntegrationTestBigLinkedList.java
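Note: passing the base Mapper.class itself to setMapperClass works as an identity mapper because the default Mapper.map() implementation simply writes each input key/value pair to the output unchanged; the ChainMapper/ChainReducer tests in Examples 8 through 11 rely on the same behavior.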

Example 3: getMapperClass

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
@Override
protected Class<? extends Mapper> getMapperClass() {
  if (options.getHCatTableName() != null) {
    return SqoopHCatUtilities.getImportMapperClass();
  }
  if (options.getFileLayout() == SqoopOptions.FileLayout.TextFile) {
    return TextImportMapper.class;
  } else if (options.getFileLayout()
      == SqoopOptions.FileLayout.SequenceFile) {
    return SequenceFileImportMapper.class;
  } else if (options.getFileLayout()
      == SqoopOptions.FileLayout.AvroDataFile) {
    return AvroImportMapper.class;
  } else if (options.getFileLayout()
      == SqoopOptions.FileLayout.ParquetFile) {
    return ParquetImportMapper.class;
  }

  return null;
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 21, Source: DataDrivenImportJob.java

Example 4: getMapperClass

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
@Override
protected Class<? extends Mapper> getMapperClass() {
  if (isHCatJob) {
    return SqoopHCatUtilities.getExportMapperClass();
  }
  if (options.getOdpsTable() != null) {
    return OdpsExportMapper.class;
  }
  switch (fileType) {
    case SEQUENCE_FILE:
      return SequenceFileExportMapper.class;
    case AVRO_DATA_FILE:
      return AvroExportMapper.class;
    case PARQUET_FILE:
      return ParquetExportMapper.class;
    case UNKNOWN:
    default:
      return TextExportMapper.class;
  }
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 21, Source: JdbcExportJob.java

Example 5: getMapperClass

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
@Override
protected Class<? extends Mapper> getMapperClass() {
  if (isHCatJob) {
    return SqoopHCatUtilities.getExportOdpsMapperClass();
  }
  switch (fileType) {
    case SEQUENCE_FILE:
      return SequenceFileExportMapper.class;
    case AVRO_DATA_FILE:
      return AvroExportMapper.class;
    case PARQUET_FILE:
      return ParquetExportMapper.class;
    case UNKNOWN:
    default:
      return TextExportMapper.class;
  }
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 18, Source: HdfsOdpsImportJob.java

Example 6: testGetMainframeDatasetImportMapperClass

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
@Test
public void testGetMainframeDatasetImportMapperClass()
    throws SecurityException, NoSuchMethodException,
    IllegalArgumentException, IllegalAccessException,
    InvocationTargetException {
  String jarFile = "dummyJarFile";
  String tableName = "dummyTableName";
  Path path = new Path("dummyPath");
  ImportJobContext context = new ImportJobContext(tableName, jarFile,
      options, path);
  mfImportJob = new MainframeImportJob(options, context);

  // To access protected method by means of reflection
  Class[] types = {};
  Method m_getMapperClass = MainframeImportJob.class.getDeclaredMethod(
      "getMapperClass", types);
  m_getMapperClass.setAccessible(true);
  Class<? extends Mapper> mapper = (Class<? extends Mapper>) m_getMapperClass
      .invoke(mfImportJob);
  assertEquals(
      org.apache.sqoop.mapreduce.mainframe.MainframeDatasetImportMapper.class,
      mapper);
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 23, Source: TestMainframeImportJob.java

Example 7: testSuperMapperClass

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
@Test
public void testSuperMapperClass() throws SecurityException,
    NoSuchMethodException, IllegalArgumentException, IllegalAccessException,
    InvocationTargetException {
  String jarFile = "dummyJarFile";
  String tableName = "dummyTableName";
  Path path = new Path("dummyPath");
  options.setFileLayout(SqoopOptions.FileLayout.AvroDataFile);
  ImportJobContext context = new ImportJobContext(tableName, jarFile,
      options, path);
  avroImportJob = new MainframeImportJob(options, context);

  // To access protected method by means of reflection
  Class[] types = {};
  Method m_getMapperClass = MainframeImportJob.class.getDeclaredMethod(
      "getMapperClass", types);
  m_getMapperClass.setAccessible(true);
  Class<? extends Mapper> mapper = (Class<? extends Mapper>) m_getMapperClass
      .invoke(avroImportJob);
  assertEquals(org.apache.sqoop.mapreduce.AvroImportMapper.class, mapper);
}
 
Developer: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 22, Source: TestMainframeImportJob.java

Example 8: testChainFail

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
/**
 * Tests one of the mappers throwing an exception.
 * 
 * @throws Exception
 */
public void testChainFail() throws Exception {

  Configuration conf = createJobConf();

  Job job = MapReduceTestUtil.createJob(conf, inDir, outDir, 1, 0, input);
  job.setJobName("chain");

  ChainMapper.addMapper(job, Mapper.class, LongWritable.class, Text.class,
      LongWritable.class, Text.class, null);

  ChainMapper.addMapper(job, FailMap.class, LongWritable.class, Text.class,
      IntWritable.class, Text.class, null);

  ChainMapper.addMapper(job, Mapper.class, IntWritable.class, Text.class,
      LongWritable.class, Text.class, null);

  job.waitForCompletion(true);
  assertTrue("Job Not failed", !job.isSuccessful());
}
 
Developer: naver, Project: hadoop, Lines: 25, Source: TestChainErrors.java
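Note: ChainMapper runs the added mappers in sequence inside a single map task, with the output of each mapper feeding the next; the key/value classes passed to addMapper declare each stage's input and output types. Here the FailMap stage throws, so the job as a whole is expected to fail.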

Example 9: testReducerFail

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
/**
 * Tests the Reducer throwing an exception.
 * 
 * @throws Exception
 */
public void testReducerFail() throws Exception {

  Configuration conf = createJobConf();

  Job job = MapReduceTestUtil.createJob(conf, inDir, outDir, 1, 1, input);
  job.setJobName("chain");

  ChainMapper.addMapper(job, Mapper.class, LongWritable.class, Text.class,
      LongWritable.class, Text.class, null);

  ChainReducer.setReducer(job, FailReduce.class, LongWritable.class,
      Text.class, LongWritable.class, Text.class, null);

  ChainReducer.addMapper(job, Mapper.class, LongWritable.class, Text.class,
      LongWritable.class, Text.class, null);

  job.waitForCompletion(true);
  assertTrue("Job Not failed", !job.isSuccessful());
}
 
Developer: naver, Project: hadoop, Lines: 25, Source: TestChainErrors.java

Example 10: testChainMapNoOuptut

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
/**
 * Tests one of the maps consuming output.
 * 
 * @throws Exception
 */
public void testChainMapNoOuptut() throws Exception {
  Configuration conf = createJobConf();
  String expectedOutput = "";

  Job job = MapReduceTestUtil.createJob(conf, inDir, outDir, 1, 0, input);
  job.setJobName("chain");

  ChainMapper.addMapper(job, ConsumeMap.class, IntWritable.class, Text.class,
      LongWritable.class, Text.class, null);

  ChainMapper.addMapper(job, Mapper.class, LongWritable.class, Text.class,
      LongWritable.class, Text.class, null);

  job.waitForCompletion(true);
  assertTrue("Job failed", job.isSuccessful());
  assertEquals("Outputs doesn't match", expectedOutput, MapReduceTestUtil
      .readOutput(outDir, conf));
}
 
Developer: naver, Project: hadoop, Lines: 24, Source: TestChainErrors.java

Example 11: testChainReduceNoOuptut

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
/**
 * Tests reducer consuming output.
 * 
 * @throws Exception
 */
public void testChainReduceNoOuptut() throws Exception {
  Configuration conf = createJobConf();
  String expectedOutput = "";

  Job job = MapReduceTestUtil.createJob(conf, inDir, outDir, 1, 1, input);
  job.setJobName("chain");

  ChainMapper.addMapper(job, Mapper.class, IntWritable.class, Text.class,
      LongWritable.class, Text.class, null);

  ChainReducer.setReducer(job, ConsumeReduce.class, LongWritable.class,
      Text.class, LongWritable.class, Text.class, null);

  ChainReducer.addMapper(job, Mapper.class, LongWritable.class, Text.class,
      LongWritable.class, Text.class, null);

  job.waitForCompletion(true);
  assertTrue("Job failed", job.isSuccessful());
  assertEquals("Outputs doesn't match", expectedOutput, MapReduceTestUtil
      .readOutput(outDir, conf));
}
 
Developer: naver, Project: hadoop, Lines: 27, Source: TestChainErrors.java

Example 12: testAddInputPathWithMapper

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
@SuppressWarnings("unchecked")
public void testAddInputPathWithMapper() throws IOException {
  final Job conf = Job.getInstance();
  MultipleInputs.addInputPath(conf, new Path("/foo"), TextInputFormat.class,
     MapClass.class);
  MultipleInputs.addInputPath(conf, new Path("/bar"),
      KeyValueTextInputFormat.class, KeyValueMapClass.class);
  final Map<Path, InputFormat> inputs = MultipleInputs
     .getInputFormatMap(conf);
  final Map<Path, Class<? extends Mapper>> maps = MultipleInputs
     .getMapperTypeMap(conf);

  assertEquals(TextInputFormat.class, inputs.get(new Path("/foo")).getClass());
  assertEquals(KeyValueTextInputFormat.class, inputs.get(new Path("/bar"))
     .getClass());
  assertEquals(MapClass.class, maps.get(new Path("/foo")));
  assertEquals(KeyValueMapClass.class, maps.get(new Path("/bar")));
}
 
Developer: naver, Project: hadoop, Lines: 19, Source: TestMultipleInputs.java
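Note: under the hood, MultipleInputs.addInputPath records each path-to-InputFormat and path-to-Mapper binding in the job configuration; the getInputFormatMap and getMapperTypeMap helpers used in this test parse those bindings back out of the configuration, which is why the assertions can check the registered classes per input path.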

Example 13: addMapper

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
/**
 * Add a mapper (the first mapper) that reads input from the input
 * context and writes to the queue.
 */
@SuppressWarnings("unchecked")
void addMapper(TaskInputOutputContext inputContext,
    ChainBlockingQueue<KeyValuePair<?, ?>> output, int index)
    throws IOException, InterruptedException {
  Configuration conf = getConf(index);
  Class<?> keyOutClass = conf.getClass(MAPPER_OUTPUT_KEY_CLASS, Object.class);
  Class<?> valueOutClass = conf.getClass(MAPPER_OUTPUT_VALUE_CLASS,
      Object.class);

  RecordReader rr = new ChainRecordReader(inputContext);
  RecordWriter rw = new ChainRecordWriter(keyOutClass, valueOutClass, output,
      conf);
  Mapper.Context mapperContext = createMapContext(rr, rw,
      (MapContext) inputContext, getConf(index));
  MapRunner runner = new MapRunner(mappers.get(index), mapperContext, rr, rw);
  threads.add(runner);
}
 
Developer: naver, Project: hadoop, Lines: 22, Source: Chain.java
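Note: this is the internal plumbing behind the ChainMapper examples above. The first mapper in the chain reads from the task's input context through a ChainRecordReader, writes into a ChainBlockingQueue through a ChainRecordWriter, and runs on its own MapRunner thread; later mappers in the chain are wired queue-to-queue in the same way.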

Example 14: testCopyingExistingFiles

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
private void testCopyingExistingFiles(FileSystem fs, CopyMapper copyMapper,
    Mapper<Text, CopyListingFileStatus, Text, Text>.Context context) {
  try {
    for (Path path : pathList) {
      copyMapper.map(new Text(DistCpUtils.getRelativePath(new Path(SOURCE_PATH), path)),
              new CopyListingFileStatus(fs.getFileStatus(path)), context);
    }

    Assert.assertEquals(nFiles,
            context.getCounter(CopyMapper.Counter.SKIP).getValue());
  }
  catch (Exception exception) {
    Assert.assertTrue("Caught unexpected exception:" + exception.getMessage(),
            false);
  }
}
 
Developer: naver, Project: hadoop, Lines: 17, Source: TestCopyMapper.java

Example 15: enumDirectories

import org.apache.hadoop.mapreduce.Mapper; // import the required package/class
private void enumDirectories(FileSystem fs, URI rootUri, Path directory, boolean recursive,
    Mapper.Context context) throws IOException, InterruptedException {
  try {
    for (FileStatus status : fs.listStatus(directory, hiddenFileFilter)) {
      if (status.isDirectory()) {
        if (recursive) {
          if (directoryBlackList == null
              || !status.getPath().getName().matches(directoryBlackList)) {
            enumDirectories(fs, rootUri, status.getPath(), recursive, context);
          }
        }
      } else {
        context.write(new Text(rootUri.relativize(directory.toUri()).getPath()),
                new FileStatus(status));
      }
    }
    context.progress();
  } catch (FileNotFoundException e) {
    return;
  }
}
 
Developer: airbnb, Project: reair, Lines: 22, Source: ReplicationJob.java


Note: The org.apache.hadoop.mapreduce.Mapper class examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets were selected from open-source projects contributed by many developers; copyright in the source code remains with the original authors. Please consult the corresponding project's License before distributing or using the code. Do not reproduce without permission.