

Java GenericOptionsParser Class Code Examples

This article collects typical usages of the Java class org.apache.hadoop.util.GenericOptionsParser. If you are struggling with questions such as: What exactly does GenericOptionsParser do? How is GenericOptionsParser used? Where can I find examples of GenericOptionsParser in practice? Then the curated class code examples below may help.


The GenericOptionsParser class belongs to the org.apache.hadoop.util package. In total, 15 code examples of the class are shown below, ordered by popularity by default.
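Before looking at the project examples, here is a minimal, self-contained sketch of the pattern they all share (the class name ParserSketch is invented for illustration): the parser consumes Hadoop's generic options (-conf, -D, -fs, -jt, -files, -libjars, -archives), applies them to the supplied Configuration as a side effect, and returns the application-specific arguments via getRemainingArgs().

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

public class ParserSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Constructing the parser applies any generic options in args to conf.
    GenericOptionsParser parser = new GenericOptionsParser(conf, args);
    // Only the non-generic, application-specific arguments are left here.
    for (String arg : parser.getRemainingArgs()) {
      System.out.println("application argument: " + arg);
    }
  }
}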

Example 1: main

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
public static void main(String argv[]) {
  Thread.setDefaultUncaughtExceptionHandler(new YarnUncaughtExceptionHandler());
  StringUtils.startupShutdownMessage(ResourceManager.class, argv, LOG);
  try {
    Configuration conf = new YarnConfiguration();
    GenericOptionsParser hParser = new GenericOptionsParser(conf, argv);
    argv = hParser.getRemainingArgs();
    // If -format-state-store, then delete RMStateStore; else startup normally
    if (argv.length == 1 && argv[0].equals("-format-state-store")) {
      deleteRMStateStore(conf);
    } else {
      ResourceManager resourceManager = new ResourceManager();
      ShutdownHookManager.get().addShutdownHook(
        new CompositeServiceShutdownHook(resourceManager),
        SHUTDOWN_HOOK_PRIORITY);
      resourceManager.init(conf);
      resourceManager.start();
    }
  } catch (Throwable t) {
    LOG.fatal("Error starting ResourceManager", t);
    System.exit(-1);
  }
}
 
Developer ID: naver, Project: hadoop, Lines of code: 24, Source: ResourceManager.java

Example 2: launchAppHistoryServer

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
static ApplicationHistoryServer launchAppHistoryServer(String[] args) {
  Thread
    .setDefaultUncaughtExceptionHandler(new YarnUncaughtExceptionHandler());
  StringUtils.startupShutdownMessage(ApplicationHistoryServer.class, args,
    LOG);
  ApplicationHistoryServer appHistoryServer = null;
  try {
    appHistoryServer = new ApplicationHistoryServer();
    ShutdownHookManager.get().addShutdownHook(
      new CompositeServiceShutdownHook(appHistoryServer),
      SHUTDOWN_HOOK_PRIORITY);
    YarnConfiguration conf = new YarnConfiguration();
    new GenericOptionsParser(conf, args);
    appHistoryServer.init(conf);
    appHistoryServer.start();
  } catch (Throwable t) {
    LOG.fatal("Error starting ApplicationHistoryServer", t);
    ExitUtil.terminate(-1, "Error starting ApplicationHistoryServer");
  }
  return appHistoryServer;
}
 
Developer ID: naver, Project: hadoop, Lines of code: 22, Source: ApplicationHistoryServer.java

Example 3: printUsage

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
void printUsage() {
  // The CLI package should do this for us, but I can't figure out how
  // to make it print something reasonable.
  System.out.println("bin/hadoop pipes");
  System.out.println("  [-input <path>] // Input directory");
  System.out.println("  [-output <path>] // Output directory");
  System.out.println("  [-jar <jar file> // jar filename");
  System.out.println("  [-inputformat <class>] // InputFormat class");
  System.out.println("  [-map <class>] // Java Map class");
  System.out.println("  [-partitioner <class>] // Java Partitioner");
  System.out.println("  [-reduce <class>] // Java Reduce class");
  System.out.println("  [-writer <class>] // Java RecordWriter");
  System.out.println("  [-program <executable>] // executable URI");
  System.out.println("  [-reduces <num>] // number of reduces");
  System.out.println("  [-lazyOutput <true/false>] // createOutputLazily");
  System.out.println();
  GenericOptionsParser.printGenericCommandUsage(System.out);
}
 
Developer ID: naver, Project: hadoop, Lines of code: 19, Source: Submitter.java

Example 4: launchJobHistoryServer

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
static JobHistoryServer launchJobHistoryServer(String[] args) {
  Thread.
      setDefaultUncaughtExceptionHandler(new YarnUncaughtExceptionHandler());
  StringUtils.startupShutdownMessage(JobHistoryServer.class, args, LOG);
  JobHistoryServer jobHistoryServer = null;
  try {
    jobHistoryServer = new JobHistoryServer();
    ShutdownHookManager.get().addShutdownHook(
        new CompositeServiceShutdownHook(jobHistoryServer),
        SHUTDOWN_HOOK_PRIORITY);
    YarnConfiguration conf = new YarnConfiguration(new JobConf());
    new GenericOptionsParser(conf, args);
    jobHistoryServer.init(conf);
    jobHistoryServer.start();
  } catch (Throwable t) {
    LOG.fatal("Error starting JobHistoryServer", t);
    ExitUtil.terminate(-1, "Error starting JobHistoryServer");
  }
  return jobHistoryServer;
}
 
Developer ID: naver, Project: hadoop, Lines of code: 21, Source: JobHistoryServer.java

Example 5: main

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
public static void main(String[] args) throws Exception {
  Configuration conf = new Configuration();
  String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
  if (otherArgs.length < 2) {
    System.err.println("Usage: wordcount <in> [<in>...] <out>");
    System.exit(2);
  }
  Job job = Job.getInstance(conf, "word count");
  job.setJarByClass(WordCount.class);
  job.setMapperClass(TokenizerMapper.class);
  job.setCombinerClass(IntSumReducer.class);
  job.setReducerClass(IntSumReducer.class);
  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(IntWritable.class);
  for (int i = 0; i < otherArgs.length - 1; ++i) {
    FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
  }
  FileOutputFormat.setOutputPath(job,
    new Path(otherArgs[otherArgs.length - 1]));
  System.exit(job.waitForCompletion(true) ? 0 : 1);
}
 
Developer ID: naver, Project: hadoop, Lines of code: 22, Source: WordCount.java
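With this pattern, generic options can be mixed with the word-count arguments as long as they come first: for instance, a hypothetical invocation like "hadoop jar wc.jar WordCount -D mapreduce.job.reduces=4 in1 in2 out" applies the -D override to conf, leaving only in1, in2, and out in otherArgs. Note that GenericOptionsParser expects the generic options to precede the application-specific arguments.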

Example 6: main

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
public static void main(String args[])
    throws Exception {
  if (DFSUtil.parseHelpArgument(args, 
      ZKFailoverController.USAGE, System.out, true)) {
    System.exit(0);
  }
  
  GenericOptionsParser parser = new GenericOptionsParser(
      new HdfsConfiguration(), args);
  DFSZKFailoverController zkfc = DFSZKFailoverController.create(
      parser.getConfiguration());
  int retCode = 0;
  try {
    retCode = zkfc.run(parser.getRemainingArgs());
  } catch (Throwable t) {
    LOG.fatal("Got a fatal error, exiting now", t);
  }
  System.exit(retCode);
}
 
Developer ID: naver, Project: hadoop, Lines of code: 20, Source: DFSZKFailoverController.java

Example 7: main

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
public static void main(String[] args) throws Exception {
  final Configuration conf = HBaseConfiguration.create();
  final ChoreService choreService = new ChoreService("CANARY_TOOL");
  final ScheduledChore authChore = AuthUtil.getAuthChore(conf);
  if (authChore != null) {
    choreService.scheduleChore(authChore);
  }

  // loading the generic options to conf
  new GenericOptionsParser(conf, args);

  int numThreads = conf.getInt("hbase.canary.threads.num", MAX_THREADS_NUM);
  LOG.info("Number of exection threads " + numThreads);

  ExecutorService executor = new ScheduledThreadPoolExecutor(numThreads);

  Class<? extends Sink> sinkClass =
      conf.getClass("hbase.canary.sink.class", RegionServerStdOutSink.class, Sink.class);
  Sink sink = ReflectionUtils.newInstance(sinkClass);

  int exitCode = ToolRunner.run(conf, new Canary(executor, sink), args);
  choreService.shutdown();
  executor.shutdown();
  System.exit(exitCode);
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 26, Source: Canary.java

Example 8: main

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
/**
 * Main entry point.
 *
 * @param args The command line parameters.
 * @throws Exception When running the job fails.
 */
public static void main(String[] args) throws Exception {
  Configuration conf = HBaseConfiguration.create();
  String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
  if (otherArgs.length < 2) {
    System.err.println("ERROR: Wrong number of parameters: " + args.length);
    System.err.println("Usage: CellCounter ");
    System.err.println("       <tablename> <outputDir> <reportSeparator> [^[regex pattern] or " +
      "[Prefix] for row filter]] --starttime=[starttime] --endtime=[endtime]");
    System.err.println("  Note: -D properties will be applied to the conf used. ");
    System.err.println("  Additionally, the following SCAN properties can be specified");
    System.err.println("  to get fine grained control on what is counted..");
    System.err.println("   -D " + TableInputFormat.SCAN_COLUMN_FAMILY + "=<familyName>");
    System.err.println(" <reportSeparator> parameter can be used to override the default report separator " +
        "string : used to separate the rowId/column family name and qualifier name.");
    System.err.println(" [^[regex pattern] or [Prefix] parameter can be used to limit the cell counter count " +
        "operation to a limited subset of rows from the table based on regex or prefix pattern.");
    System.exit(-1);
  }
  Job job = createSubmittableJob(conf, otherArgs);
  System.exit(job.waitForCompletion(true) ? 0 : 1);
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 28, Source: CellCounter.java

Example 9: run

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
@Override
public int run(String[] args) throws Exception {
  String[] otherArgs = new GenericOptionsParser(getConf(), args).getRemainingArgs();
  if (!doCommandLine(otherArgs)) {
    return 1;
  }

  Job job = createSubmittableJob(otherArgs);
  writeTempManifestFile();
  if (!job.waitForCompletion(true)) {
    LOG.info("Map-reduce job failed!");
    return 1;
  }
  completeManifest();
  return 0;
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 17, Source: HashTable.java
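Example 9 implements Tool.run(); when such a tool is launched through ToolRunner (as in example 7), ToolRunner.run() itself uses GenericOptionsParser, so the generic options have typically been applied to getConf() before run() is invoked, and the explicit re-parse above is a defensive convention. A minimal sketch of the Tool/ToolRunner idiom, assuming a hypothetical MyTool class:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyTool extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // ToolRunner has already applied generic options to getConf() and
    // stripped them from args before calling this method.
    System.out.println("remaining arguments: " + args.length);
    return 0;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner.run() uses GenericOptionsParser internally.
    System.exit(ToolRunner.run(new Configuration(), new MyTool(), args));
  }
}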

Example 10: parseArgs

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
private int parseArgs(String[] args) throws IOException {
  GenericOptionsParser parser =
    new GenericOptionsParser(getConf(), args);

  String[] remainingArgs = parser.getRemainingArgs();
  if (remainingArgs.length != 3) {
    usage();
    return -1;
  }
  tableName = TableName.valueOf(remainingArgs[0]);

  region1 = Bytes.toBytesBinary(remainingArgs[1]);
  region2 = Bytes.toBytesBinary(remainingArgs[2]);
  int status = 0;
  if (notInTable(tableName, region1) || notInTable(tableName, region2)) {
    status = -1;
  } else if (Bytes.equals(region1, region2)) {
    LOG.error("Can't merge a region with itself");
    status = -1;
  }
  return status;
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 23, Source: Merge.java

Example 11: testJobConfigurationsWithTsvImporterTextMapper

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
@Test
public void testJobConfigurationsWithTsvImporterTextMapper() throws Exception {
  String table = "test-" + UUID.randomUUID();
  Path bulkOutputPath = new Path(util.getDataTestDirOnTestFS(table),"hfiles");
  String INPUT_FILE = "InputFile1.csv";
  // Prepare the arguments required for the test.
  String[] args =
      new String[] {
          "-D" + ImportTsv.MAPPER_CONF_KEY
              + "=org.apache.hadoop.hbase.mapreduce.TsvImporterTextMapper",
          "-D" + ImportTsv.COLUMNS_CONF_KEY
              + "=HBASE_ROW_KEY,FAM:A,FAM:B",
          "-D" + ImportTsv.SEPARATOR_CONF_KEY + "=,",
          "-D" + ImportTsv.BULK_OUTPUT_CONF_KEY + "=" + bulkOutputPath.toString(), table,
          INPUT_FILE
          };
  GenericOptionsParser opts = new GenericOptionsParser(util.getConfiguration(), args);
  args = opts.getRemainingArgs();
  Job job = ImportTsv.createSubmittableJob(util.getConfiguration(), args);
  assertTrue(job.getMapperClass().equals(TsvImporterTextMapper.class));
  assertTrue(job.getReducerClass().equals(TextSortReducer.class));
  assertTrue(job.getMapOutputValueClass().equals(Text.class));
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 24, Source: TestImportTsv.java
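The test above leans on a side effect worth spelling out: constructing GenericOptionsParser applies -D options (in the single-token "-Dkey=value" form used here, or as "-D key=value") to the Configuration, while getRemainingArgs() returns only what is left over. A standalone sketch, with the key name my.test.key invented for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

public class SideEffectSketch {
  public static void main(String[] ignored) throws Exception {
    Configuration conf = new Configuration();
    String[] args = { "-Dmy.test.key=42", "someTable" };
    // The -D option is applied to conf as the parser is constructed.
    GenericOptionsParser parser = new GenericOptionsParser(conf, args);
    System.out.println(conf.get("my.test.key"));      // prints: 42
    System.out.println(parser.getRemainingArgs()[0]); // prints: someTable
  }
}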

Example 12: run

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
public int run(String[] args) throws Exception {
  if (args.length != 0) {
    System.err.println(format("%s [genericOptions]", NAME));
    System.err.println("  Runs ImportTsv integration tests against a distributed cluster.");
    System.err.println();
    GenericOptionsParser.printGenericCommandUsage(System.err);
    return 1;
  }

  // adding more test methods? Don't forget to add them here... or consider doing what
  // IntegrationTestsDriver does.
  provisionCluster();
  testGenerateAndLoad();
  releaseCluster();

  return 0;
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 18, Source: IntegrationTestImportTsv.java

Example 13: printUsage

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
protected static int printUsage() {
  System.err.println(
      "Usage: [-m <maps>] number of mappers (default: " + NUM_MAPS_DEFAULT +
          ")\n" +
      "     [-v] timeline service version\n" +
      "     [-mtype <mapper type in integer>]\n" +
      "          1. simple entity write mapper\n" +
      "          2. jobhistory files replay mapper\n" +
      "     [-s <(KBs)test>] number of KB per put (mtype=1, default: " +
           SimpleEntityWriterV1.KBS_SENT_DEFAULT + " KB)\n" +
      "     [-t] package sending iterations per mapper (mtype=1, default: " +
           SimpleEntityWriterV1.TEST_TIMES_DEFAULT + ")\n" +
      "     [-d <path>] root path of job history files (mtype=2)\n" +
      "     [-r <replay mode>] (mtype=2)\n" +
      "          1. write all entities for a job in one put (default)\n" +
      "          2. write one entity at a time\n");
  GenericOptionsParser.printGenericCommandUsage(System.err);
  return -1;
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 20, Source: TimelineServicePerformance.java

Example 14: main

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
public static void main(String[] args) throws Exception {
  final Configuration conf = new Configuration();
  final String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
  if (otherArgs.length != 2) {
    System.err.println("Usage: wordcount <in> <out>");
    System.exit(2);
  }
  final Job job = Job.getInstance(conf,
                                  conf.get(MRJobConfig.JOB_NAME, "word count"));
  job.setJarByClass(WordCount.class);
  job.setMapperClass(TokenizerMapper.class);
  job.setCombinerClass(IntSumReducer.class);
  job.setReducerClass(IntSumReducer.class);
  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(IntWritable.class);
  FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
  FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
  System.exit(job.waitForCompletion(true) ? 0 : 1);
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 20, Source: WordCount.java

Example 15: printUsage

import org.apache.hadoop.util.GenericOptionsParser; // import the required package/class
void printUsage() {
  // The CLI package should do this for us, but I can't figure out how
  // to make it print something reasonable.
  System.out.println("Usage: pipes ");
  System.out.println("  [-input <path>] // Input directory");
  System.out.println("  [-output <path>] // Output directory");
  System.out.println("  [-jar <jar file> // jar filename");
  System.out.println("  [-inputformat <class>] // InputFormat class");
  System.out.println("  [-map <class>] // Java Map class");
  System.out.println("  [-partitioner <class>] // Java Partitioner");
  System.out.println("  [-reduce <class>] // Java Reduce class");
  System.out.println("  [-writer <class>] // Java RecordWriter");
  System.out.println("  [-program <executable>] // executable URI");
  System.out.println("  [-reduces <num>] // number of reduces");
  System.out.println("  [-lazyOutput <true/false>] // createOutputLazily");
  System.out.println();
  GenericOptionsParser.printGenericCommandUsage(System.out);
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 19, Source: Submitter.java


Note: The org.apache.hadoop.util.GenericOptionsParser class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by many developers; copyright of the source code belongs to the original authors, and its distribution and use are subject to the corresponding project's license. Please do not reproduce without permission.