

Java Configuration.getResource Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.conf.Configuration.getResource. If you are wondering what Configuration.getResource does or how to use it, the selected code examples below should help. You can also browse further usage examples for the enclosing class, org.apache.hadoop.conf.Configuration.


Three code examples of the Configuration.getResource method are shown below, ordered by popularity.
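Before the examples, here is a minimal, self-contained sketch (not taken from any of the projects below) of what Configuration.getResource does: it asks the Configuration's class loader to locate the named resource on the classpath and returns its URL, or null if the resource cannot be found. The file name core-site.xml is only an illustration.

import java.net.URL;
import org.apache.hadoop.conf.Configuration;

public class GetResourceDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Returns the URL of the named resource if the class loader can see it,
    // or null if it is not on the classpath.
    URL coreSite = conf.getResource("core-site.xml");
    if (coreSite == null) {
      System.out.println("core-site.xml is not on the classpath");
    } else {
      System.out.println("Found core-site.xml at " + coreSite);
    }
  }
}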

Example 1: getConfiguration

import java.net.URL;
import org.apache.hadoop.conf.Configuration; // import the class that provides getResource
static Configuration getConfiguration(String jobTrackerSpec)
{
  Configuration conf = new Configuration();
  if (jobTrackerSpec != null) {
    if (jobTrackerSpec.indexOf(":") >= 0) {
      conf.set("mapred.job.tracker", jobTrackerSpec);
    } else {
      String classpathFile = "hadoop-" + jobTrackerSpec + ".xml";
      URL validate = conf.getResource(classpathFile);
      if (validate == null) {
        throw new RuntimeException(classpathFile + " not found on CLASSPATH");
      }
      conf.addResource(classpathFile);
    }
  }
  return conf;
}
 
Developer ID: naver, Project: hadoop, Lines of code: 18, Source: JobClient.java
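For context, a hypothetical call site for this helper might look like the sketch below; the profile name "local" and the jobtracker.example.com:8021 address are illustrative assumptions, not values from JobClient.

// A bare profile name requires a matching hadoop-local.xml on the classpath;
// getResource is what performs that check inside getConfiguration.
Configuration localConf = getConfiguration("local");

// A host:port spec is written straight into mapred.job.tracker instead.
Configuration remoteConf = getConfiguration("jobtracker.example.com:8021");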

Example 2: addNewConfigResource

import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.PrintWriter;
import java.io.UnsupportedEncodingException;
import java.net.URL;
import java.net.URLDecoder;
import org.apache.hadoop.conf.Configuration; // import the class that provides getResource
import org.apache.hadoop.fs.Path;
private void addNewConfigResource(String rsrcName, String keyGroup,
    String groups, String keyHosts, String hosts)
        throws FileNotFoundException, UnsupportedEncodingException {
  // location for temp resource should be in CLASSPATH
  Configuration conf = new Configuration();
  URL url = conf.getResource("hdfs-site.xml");

  String urlPath = URLDecoder.decode(url.getPath(), "UTF-8");
  Path p = new Path(urlPath);
  Path dir = p.getParent();
  tempResource = dir.toString() + "/" + rsrcName;

  String newResource =
      "<configuration>" +
      "<property><name>" + keyGroup + "</name><value>" + groups + "</value></property>" +
      "<property><name>" + keyHosts + "</name><value>" + hosts + "</value></property>" +
      "</configuration>";
  PrintWriter writer = new PrintWriter(new FileOutputStream(tempResource));
  writer.println(newResource);
  writer.close();

  Configuration.addDefaultResource(rsrcName);
}
 
Developer ID: naver, Project: hadoop, Lines of code: 24, Source: TestRefreshUserMappings.java
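The point of the helper is that the generated file sits in a directory that is already on the classpath (next to hdfs-site.xml) and is registered via Configuration.addDefaultResource, so every Configuration constructed afterwards picks it up. Below is a hedged sketch of how it might be exercised inside the same test class; the resource name and proxy-user keys are illustrative assumptions, not values from TestRefreshUserMappings.

// Write a throwaway resource next to hdfs-site.xml and register it (assumed names).
addNewConfigResource("my-test-mapping.xml",
                     "hadoop.proxyuser.alice.groups", "group1,group2",
                     "hadoop.proxyuser.alice.hosts", "host1.example.com");

// Any Configuration built after addDefaultResource reloads the new file.
Configuration conf = new Configuration();
String groups = conf.get("hadoop.proxyuser.alice.groups"); // "group1,group2"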

Example 3: call

import java.io.IOException;
import org.apache.hadoop.conf.Configuration; // import the class that provides getResource
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.streaming.Time;
// TopicProducer, TopicProducerImpl and WritableToValueFunction are classes from
// the surrounding Oryx project and are not shown here.
@Override
public void call(JavaPairRDD<K,M> newData, Time timestamp)
    throws IOException, InterruptedException {

  if (newData.isEmpty()) {
    log.info("No data in current generation's RDD; nothing to do");
    return;
  }

  log.info("Beginning update at {}", timestamp);

  Configuration hadoopConf = sparkContext.hadoopConfiguration();
  if (hadoopConf.getResource("core-site.xml") == null) {
    log.warn("Hadoop config like core-site.xml was not found; " +
             "is the Hadoop config directory on the classpath?");
  }

  JavaPairRDD<K,M> pastData;
  Path inputPathPattern = new Path(dataDirString + "/*/part-*");
  FileSystem fs = FileSystem.get(inputPathPattern.toUri(), hadoopConf);
  FileStatus[] inputPathStatuses = fs.globStatus(inputPathPattern);
  if (inputPathStatuses == null || inputPathStatuses.length == 0) {

    log.info("No past data at path(s) {}", inputPathPattern);
    pastData = null;

  } else {

    log.info("Found past data at path(s) like {}", inputPathStatuses[0].getPath());
    Configuration updatedConf = new Configuration(hadoopConf);
    updatedConf.set(FileInputFormat.INPUT_DIR, joinFSPaths(fs, inputPathStatuses));

    @SuppressWarnings("unchecked")
    JavaPairRDD<Writable,Writable> pastWritableData = (JavaPairRDD<Writable,Writable>)
        sparkContext.newAPIHadoopRDD(updatedConf,
                                     SequenceFileInputFormat.class,
                                     keyWritableClass,
                                     messageWritableClass);

    pastData = pastWritableData.mapToPair(
        new WritableToValueFunction<>(keyClass,
                                      messageClass,
                                      keyWritableClass,
                                      messageWritableClass));
  }

  if (updateTopic == null || updateBroker == null) {
    log.info("Not producing updates to update topic since none was configured");
    updateInstance.runUpdate(sparkContext,
                             timestamp.milliseconds(),
                             newData,
                             pastData,
                             modelDirString,
                             null);
  } else {
    // This TopicProducer should not be async; sends one big model generally and
    // needs to occur before other updates reliably rather than be buffered
    try (TopicProducer<String,U> producer =
             new TopicProducerImpl<>(updateBroker, updateTopic, false)) {
      updateInstance.runUpdate(sparkContext,
                               timestamp.milliseconds(),
                               newData,
                               pastData,
                               modelDirString,
                               producer);
    }
  }
}
 
Developer ID: oncewang, Project: oryx2, Lines of code: 69, Source: BatchUpdateFunction.java
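Beyond the getResource sanity check at the top, this example also shows a pattern worth isolating: copy the shared Hadoop Configuration before setting FileInputFormat.INPUT_DIR so the change stays local to a single read. The following is a minimal sketch under assumed key/value types (Text/Text) and an assumed class and method name, not code from the Oryx project.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class ReadSequenceFilesSketch {
  // Reads SequenceFiles at the given comma-separated paths without mutating the
  // shared Hadoop Configuration: the copy constructor isolates the INPUT_DIR change.
  static JavaPairRDD<Text, Text> readSequenceFiles(JavaSparkContext sparkContext,
                                                   String commaSeparatedPaths) {
    Configuration updated = new Configuration(sparkContext.hadoopConfiguration());
    updated.set(FileInputFormat.INPUT_DIR, commaSeparatedPaths);
    @SuppressWarnings("unchecked")
    JavaPairRDD<Text, Text> rdd = (JavaPairRDD<Text, Text>)
        sparkContext.newAPIHadoopRDD(updated,
                                     SequenceFileInputFormat.class,
                                     Text.class,
                                     Text.class);
    return rdd;
  }
}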


Note: The org.apache.hadoop.conf.Configuration.getResource examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their original authors, and copyright of the source code remains with those authors; consult each project's License before distributing or using the code. Please do not reproduce this article without permission.