

Java MapReduceDriver Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.mrunit.mapreduce.MapReduceDriver. If you are struggling with questions such as what exactly the MapReduceDriver class does, how to use it, or what working code looks like, the curated examples below should help.


The MapReduceDriver class belongs to the org.apache.hadoop.mrunit.mapreduce package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system surface better Java code examples.
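
Before working through the examples, here is a minimal, self-contained sketch of the fluent pattern most of them follow. The six type parameters describe the map input key/value, the map output (= reduce input) key/value, and the reduce output key/value. WordCountMapper and WordCountReducer are hypothetical stand-ins for your own classes; the driver methods themselves (withMapper, withReducer, withInput, withOutput, runTest) are the standard MRUnit 1.x API.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver;
import org.junit.Test;

public class WordCountMRTest {

    @Test
    public void countsWords() throws IOException {
        new MapReduceDriver<LongWritable, Text, Text, IntWritable, Text, IntWritable>()
                .withMapper(new WordCountMapper())   // hypothetical mapper
                .withReducer(new WordCountReducer()) // hypothetical reducer
                .withInput(new LongWritable(0), new Text("cat dog cat"))
                // Expected outputs must be listed in the order the reducer
                // emits them (keys arrive sorted after the shuffle).
                .withOutput(new Text("cat"), new IntWritable(2))
                .withOutput(new Text("dog"), new IntWritable(1))
                .runTest(); // fails the JUnit test on any mismatch
    }
}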

Example 1: testFullTableSize

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
@Test
public void testFullTableSize() throws IOException {

    Value value = new Value(new byte[0]);

    // Expected output: a single mutation recording a full-table cardinality
    // of 15, matching the 15 input entries fed to the driver below.
    Mutation m = new Mutation(new Text("subjectpredicateobject" + DELIM + "FullTableCardinality"));
    m.put(new Text("FullTableCardinality"), new Text("15"), new Value(new byte[0]));

    new MapReduceDriver<Key, Value, Text, IntWritable, Text, Mutation>()
            .withMapper(new FullTableSize.FullTableMapper()).withInput(new Key(new Text("entry1")), value)
            .withInput(new Key(new Text("entry2")), value).withInput(new Key(new Text("entry3")), value)
            .withInput(new Key(new Text("entry4")), value).withInput(new Key(new Text("entry5")), value)
            .withInput(new Key(new Text("entry6")), value).withInput(new Key(new Text("entry7")), value)
            .withInput(new Key(new Text("entry8")), value).withInput(new Key(new Text("entry9")), value)
            .withInput(new Key(new Text("entry10")), value).withInput(new Key(new Text("entry11")), value)
            .withInput(new Key(new Text("entry12")), value).withInput(new Key(new Text("entry13")), value)
            .withInput(new Key(new Text("entry14")), value).withInput(new Key(new Text("entry15")), value)
            .withCombiner(new FullTableSize.FullTableCombiner()).withReducer(new FullTableSize.FullTableReducer())
            .withOutput(new Text(""), m).runTest();

}
 
Developer: apache | Project: incubator-rya | Lines: 22 | Source: FullTableSizeTest.java

Example 2: runMyTest

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
@Test @SuppressWarnings({ "rawtypes", "unchecked" })
public void runMyTest() {

	List<Pair<LongWritable, Text>> inputs = new ArrayList<>();
	inputs.add(new Pair<>(
			new LongWritable(1), new Text("the quick brown fox jumped over the lazy dog.")));

	MapReduceDriver driver = getTestDriver();
	driver.addAll(inputs);

	try {
		driver.runTest();
	} catch (IOException e) {
		e.printStackTrace();
		fail(e.getMessage());
	}
}
 
Developer: conversant | Project: mara | Lines: 18 | Source: DistributedWordCountMapReduceTest.java

Example 3: setUp

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
@Before
public void setUp() {

  /*
   * Set up the mapper test harness.
   */
  WordMapper mapper = new WordMapper();
  mapDriver = new MapDriver<LongWritable, Text, Text, IntWritable>();
  mapDriver.setMapper(mapper);

  /*
   * Set up the reducer test harness.
   */
  SumReducer reducer = new SumReducer();
  reduceDriver = new ReduceDriver<Text, IntWritable, Text, IntWritable>();
  reduceDriver.setReducer(reducer);

  /*
   * Set up the mapper/reducer test harness.
   */
  mapReduceDriver = new MapReduceDriver<LongWritable, Text, Text, IntWritable, Text, IntWritable>();
  mapReduceDriver.setMapper(mapper);
  mapReduceDriver.setReducer(reducer);
}
 
Developer: mellowonpsx | Project: cloudera-homework | Lines: 25 | Source: TestWordCount.java

Example 4: testWithMRUnit

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
@Test
public void testWithMRUnit() throws IOException {
  MapReduceDriver<LongWritable, Text, Text, IntWritable, Text, IntWritable> driver =
      new MapReduceDriver<LongWritable, Text, Text, IntWritable, Text, IntWritable>();

  driver.setMapper(new StartsWithCountMapper());
  driver.setReducer(new StartsWithCountReducer());

  LongWritable k = new LongWritable();
  driver.withInput(k, new Text("This is a line number one"));
  driver.withInput(k, new Text("This is another line"));

  driver.withOutput(new Text("T"), new IntWritable(2));
  driver.withOutput(new Text("a"), new IntWritable(2));
  driver.withOutput(new Text("i"), new IntWritable(2));
  driver.withOutput(new Text("l"), new IntWritable(2));
  driver.withOutput(new Text("n"), new IntWritable(1));
  driver.withOutput(new Text("o"), new IntWritable(1));
  driver.runTest();
}
 
Developer: DemandCube | Project: NeverwinterDP-Commons | Lines: 21 | Source: StartsWithCountMapperReducerTest.java

Example 5: setUp

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
@Before
public void setUp() {
    AnalyzerBeansConfiguration analyzerBeansConfiguration = buildAnalyzerBeansConfigurationLocalFS(CSV_FILE_PATH);
    analysisJob = buildAnalysisJob(analyzerBeansConfiguration, CSV_FILE_PATH);
    String analyzerBeansConfigurationDatastores = ConfigurationSerializer
            .serializeAnalyzerBeansConfigurationDataStores(analyzerBeansConfiguration);
    String analysisJobXml = ConfigurationSerializer.serializeAnalysisJobToXml(analyzerBeansConfiguration,
            analysisJob);
    FlatFileMapper flatFileMapper = new FlatFileMapper();
    FlatFileReducer flatFileReducer = new FlatFileReducer();
    mapDriver = MapDriver.newMapDriver(flatFileMapper);
    mapDriver.getConfiguration().set(FlatFileTool.ANALYZER_BEANS_CONFIGURATION_DATASTORES_KEY,
            analyzerBeansConfigurationDatastores);
    mapDriver.getConfiguration().set(FlatFileTool.ANALYSIS_JOB_XML_KEY, analysisJobXml);
    reduceDriver = ReduceDriver.newReduceDriver(flatFileReducer);
    reduceDriver.getConfiguration().set(FlatFileTool.ANALYZER_BEANS_CONFIGURATION_DATASTORES_KEY,
            analyzerBeansConfigurationDatastores);
    reduceDriver.getConfiguration().set(FlatFileTool.ANALYSIS_JOB_XML_KEY, analysisJobXml);
    mapReduceDriver = MapReduceDriver.newMapReduceDriver(flatFileMapper, flatFileReducer);
}
 
Developer: tomaszguzialek | Project: hadoop-datacleaner | Lines: 21 | Source: FlatFileMapperReducerTest.java

Example 6: verifyMapReduce

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
public static void verifyMapReduce(SmartMapper mapper, SmartReducer reducer, Object key, Object input)
    throws Exception
{
  MapDriver mapDriver = new MapDriver();
  mapDriver.setMapper(mapper);
  MapReduceDriver mapReduceDriver = new MapReduceDriver();
  mapReduceDriver.setMapper(mapper);
  Object writableKey = WritableUtils.createWritable(key, mapper.getKeyInType());
  Object writableValue = WritableUtils.createWritable(input, mapper.getValueInType());
  mapDriver.withInput(writableKey, writableValue);
  List results = mapDriver.run();
  Collections.sort(results, PairComparer.INSTANCE);
  mapReduceDriver = new MapReduceDriver<LongWritable, Text, Text, LongWritable, Text, LongWritable>();
  writableKey = WritableUtils.createWritable(key, mapper.getKeyInType());
  writableValue = WritableUtils.createWritable(input, mapper.getValueInType());
  mapReduceDriver.withInput(writableKey, writableValue);
  mapReduceDriver.setMapper(mapper);
  mapReduceDriver.setReducer(reducer);
  List finalResults = mapReduceDriver.run();
  String text = String.format("[%s]\n\n -> maps via %s to -> \n\n%s\n\n -> reduces via %s to -> \n\n%s", input,
      mapper.getClass().getSimpleName(), ArrayUtils.toString(results, Echo.INSTANCE),
      reducer.getClass().getSimpleName(), ArrayUtils.toString(finalResults, Echo.INSTANCE));
  Approvals.verify(text);
}
 
Developer: approvals | Project: ApprovalTests.Java | Lines: 25 | Source: HadoopApprovals.java

Example 7: setUp

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
@Before
public void setUp() {
  CopyMapper mapper = new Migration.CopyMapper();
  IdentityReducer reducer = new Migration.IdentityReducer();
  mapDriver = MapDriver.newMapDriver(mapper);
  reduceDriver = ReduceDriver.newReduceDriver(reducer);
  mapReduceDriver = MapReduceDriver.newMapReduceDriver(mapper, reducer);
}
 
Developer: XiaoMi | Project: galaxy-fds-migration-tool | Lines: 9 | Source: MigrationTest.java

Example 8: setUp

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
@Before
public void setUp()
{
    SMSCDRMapper mapper = new SMSCDRMapper();
    SMSCDRReducer reducer = new SMSCDRReducer();
    mapDriver = MapDriver.newMapDriver(mapper);
    reduceDriver = ReduceDriver.newReduceDriver(reducer);
    mapReduceDriver = MapReduceDriver.newMapReduceDriver(mapper, reducer);
}
 
Developer: dkpro | Project: dkpro-c4corpus | Lines: 10 | Source: MRUnitTest.java

Example 9: testNamedOutputs

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
@Test @SuppressWarnings({ "unchecked", "rawtypes" })
public void testNamedOutputs() {

	String lineStr = "a b c d e f a";

	List<Pair<LongWritable, Text>> inputs = new ArrayList<>();
	LongWritable offset = new LongWritable(0);
	Text line = new Text(lineStr);
	inputs.add(new Pair<>(offset, line));

	LongWritable ONE = new LongWritable(1L);
	List<Pair<Text,LongWritable>> outputs = new ArrayList<>();
	outputs.add(new Pair<>(new Text("a"), new LongWritable(2L)));
	outputs.add(new Pair<>(new Text("b"), ONE));
	outputs.add(new Pair<>(new Text("c"), ONE));
	outputs.add(new Pair<>(new Text("d"), ONE));
	outputs.add(new Pair<>(new Text("e"), ONE));
	outputs.add(new Pair<>(new Text("f"), ONE));

	MapReduceDriver driver = getTestDriver();
	driver.addAll(inputs);
	driver.addAllOutput(outputs);

	try {
		driver.runTest();

		// Check that our DEBUG line was written to the multiout
		verifyNamedOutput("DEBUG", new Text(lineStr.length() + ":"), line);

		// Example of how we can grab the (mock) MultipleOutput directly if needed
		MultipleOutputs multiOut = this.getNamedOutput();
		assertNotNull(multiOut);

	} catch (IOException e) {
		e.printStackTrace();
		fail(e.getMessage());
	}

}
 
Developer: conversant | Project: mara | Lines: 40 | Source: NamedOutputsExampleTest.java

Example 10: setup

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
@Before
public void setup() {
	TestMapper mapper = new TestMapper();
	TestReducer reducer = new TestReducer();
	driver = MapReduceDriver.newMapReduceDriver(mapper, reducer);

	Configuration conf = driver.getConfiguration();
	Job job = mock(Job.class);
	when(job.getConfiguration()).thenReturn(conf);

	CompositeSortKeySerialization.configureMapOutputKey(job, Text.class, IntWritable.class);

	// MRUnit sets these differently than standard MapReduce:
	driver.setKeyGroupingComparator(new CompositeSortKey.GroupingComparator<Text, IntWritable>());
}
 
Developer: conversant | Project: mara | Lines: 16 | Source: CompositeSortKeyTest.java

Example 11: before

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
@Before
public void before() {
    PostcodeMapper mapper = new PostcodeMapper();
    PostcodeReducer combiner = new PostcodeReducer();
    PostcodeReducer reducer = new PostcodeReducer();

    mapReduceDriver = MapReduceDriver.newMapReduceDriver(mapper, reducer, combiner);
}
 
Developer: ch4mpy | Project: hadoop2 | Lines: 9 | Source: PostcodeMRTest.java

Example 12: before

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
@Before
public void before() throws URISyntaxException {
    CsvFieldCountMapper mapper = new CsvFieldCountMapper();
    LongSumReducer<Text> combiner = new LongSumReducer<Text>();
    LongSumReducer<Text> reducer = new LongSumReducer<Text>();
    mapReduceDriver = MapReduceDriver.newMapReduceDriver(mapper, reducer, combiner);
    Configuration conf = mapReduceDriver.getConfiguration();
    conf.setInt(CsvFieldCountMapper.CSV_FIELD_IDX, 2);
    conf.set(CsvFieldCountMapper.FILTER_CACHE_FILE_NAME, "fr_urban_postcodes.txt");
    mapReduceDriver.addCacheFile(new File("target/test-classes/referential/fr_urban_postcodes.txt").toURI());

}
 
Developer: ch4mpy | Project: hadoop2 | Lines: 13 | Source: PostcodeMRTest.java

Example 13: before

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
@Before
public void before() throws URISyntaxException {
    CsvFieldCountMapper mapper = new CsvFieldCountMapper();
    LongSumReducer<Text> combiner = new LongSumReducer<Text>();
    LongSumReducer<Text> reducer = new LongSumReducer<Text>();
    mapReduceDriver = MapReduceDriver.newMapReduceDriver(mapper, reducer, combiner);
    Configuration conf = mapReduceDriver.getConfiguration();
    conf.setInt(CsvFieldCountMapper.CSV_FIELD_IDX, 2);
}
 
Developer: ch4mpy | Project: hadoop2 | Lines: 10 | Source: PostcodeMRTest.java

Example 14: runWithGraph

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
public static Map<Long, FaunusVertex> runWithGraph(final Map<Long, FaunusVertex> graph, final MapReduceDriver driver) throws IOException {
    driver.resetOutput();
    driver.resetExpectedCounters();
    driver.getConfiguration().setBoolean(HadoopCompiler.TESTING, true);
    for (final FaunusVertex vertex : graph.values()) {
        driver.withInput(NullWritable.get(), vertex);
    }

    // Collect the reducer output into a map keyed by each vertex's long id.
    final Map<Long, FaunusVertex> map = new HashMap<Long, FaunusVertex>();
    for (final Object pair : driver.run()) {
        map.put(((Pair<NullWritable, FaunusVertex>) pair).getSecond().getLongId(), ((Pair<NullWritable, FaunusVertex>) pair).getSecond());
    }
    return map;
}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 15 | Source: BaseTest.java

Example 15: runWithGraphNoIndex

import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver; // import the required package/class
public static List runWithGraphNoIndex(final Map<Long, FaunusVertex> graph, final MapReduceDriver driver) throws IOException {
    driver.resetOutput();
    driver.resetExpectedCounters();
    driver.getConfiguration().setBoolean(HadoopCompiler.TESTING, true);
    for (final Vertex vertex : graph.values()) {
        driver.withInput(NullWritable.get(), vertex);
    }
    return driver.run();
}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 10 | Source: BaseTest.java


Note: The org.apache.hadoop.mrunit.mapreduce.MapReduceDriver class examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by the community; copyright of the source code remains with the original authors. Please consult the corresponding project's License before redistributing or using the code, and do not reproduce this article without permission.