

Java SecureResources Class Code Examples

This article collects typical usages and code examples of the Java class org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources. If you are unsure what SecureResources is for, how to use it, or want to see it in context, the curated examples below should help.


The SecureResources class belongs to the org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter package. The sections below present 15 code examples of the class, sorted by popularity.
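
Before the examples, a note on where SecureResources fits: under Kerberos, jsvc runs SecureDataNodeStarter as root, which binds the privileged ports and hands the resulting SecureResources to the DataNode; in a non-secure deployment the parameter is simply null. A minimal, hedged sketch of the consumer side (the wrapper class here is hypothetical; createDataNode appears in the examples below):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.server.datanode.DataNode;
import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources;

public class SecureResourcesUsageSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Non-secure mode: no privileged resources, so pass null. In secure mode
    // SecureDataNodeStarter would supply a real SecureResources instance.
    SecureResources resources = null;
    DataNode dn = DataNode.createDataNode(args, conf, resources);
    if (dn != null) {
      dn.join(); // block until the datanode shuts down
    }
  }
}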

Example 1: restartDataNode

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
/**
 * Restart a datanode, on the same port if requested
 * @param dnprop the datanode to restart
 * @param keepPort whether to use the same port 
 * @return true if restarting is successful
 * @throws IOException
 */
public synchronized boolean restartDataNode(DataNodeProperties dnprop,
    boolean keepPort) throws IOException {
  Configuration conf = dnprop.conf;
  String[] args = dnprop.dnArgs;
  SecureResources secureResources = dnprop.secureResources;
  Configuration newconf = new HdfsConfiguration(conf); // save cloned config
  if (keepPort) {
    InetSocketAddress addr = dnprop.datanode.getXferAddress();
    conf.set(DFS_DATANODE_ADDRESS_KEY, 
        addr.getAddress().getHostAddress() + ":" + addr.getPort());
    conf.set(DFS_DATANODE_IPC_ADDRESS_KEY,
        addr.getAddress().getHostAddress() + ":" + dnprop.ipcPort); 
  }
  DataNode newDn = DataNode.createDataNode(args, conf, secureResources);
  dataNodes.add(new DataNodeProperties(
      newDn, newconf, args, secureResources, newDn.getIpcPort()));
  numDataNodes++;
  return true;
}
 
Developer: naver, Project: hadoop, Lines: 27, Source: MiniDFSCluster.java
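
In a test this method is typically paired with stopDataNode: stop a node, then restart it on the same ports so clients that cached its address can reconnect. A hedged sketch (cluster setup elided; the helper method name is made up, and the code is assumed to live alongside MiniDFSCluster in org.apache.hadoop.hdfs):

void restartFirstDataNodeKeepingPorts(MiniDFSCluster cluster) throws IOException {
  // Stop datanode 0, capturing its conf, args and secure resources.
  MiniDFSCluster.DataNodeProperties dnprop = cluster.stopDataNode(0);
  if (!cluster.restartDataNode(dnprop, true /* keepPort */)) {
    throw new IOException("datanode restart failed");
  }
  cluster.waitActive(); // wait until the restarted datanode re-registers
}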

Example 2: checkSecureConfig

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
/**
 * Checks if the DataNode has a secure configuration if security is enabled.
 * There are 2 possible configurations that are considered secure:
 * 1. The server has bound to privileged ports for RPC and HTTP via
 *   SecureDataNodeStarter.
 * 2. The configuration enables SASL on DataTransferProtocol and HTTPS (no
 *   plain HTTP) for the HTTP server.  The SASL handshake guarantees
 *   authentication of the RPC server before a client transmits a secret, such
 *   as a block access token.  Similarly, SSL guarantees authentication of the
 *   HTTP server before a client transmits a secret, such as a delegation
 *   token.
 * It is not possible to run with both privileged ports and SASL on
 * DataTransferProtocol.  For backwards-compatibility, the connection logic
 * must check if the target port is a privileged port, and if so, skip the
 * SASL handshake.
 *
 * @param dnConf DNConf to check
 * @param conf Configuration to check
 * @param resources SecureResources obtained for the DataNode
 * @throws RuntimeException if security enabled, but configuration is insecure
 */
private static void checkSecureConfig(DNConf dnConf, Configuration conf,
    SecureResources resources) throws RuntimeException {
  if (!UserGroupInformation.isSecurityEnabled()) {
    return;
  }
  SaslPropertiesResolver saslPropsResolver = dnConf.getSaslPropsResolver();
  if (resources != null && saslPropsResolver == null) {
    return;
  }
  if (dnConf.getIgnoreSecurePortsForTesting()) {
    return;
  }
  if (saslPropsResolver != null &&
      DFSUtil.getHttpPolicy(conf) == HttpConfig.Policy.HTTPS_ONLY &&
      resources == null) {
    return;
  }
  throw new RuntimeException("Cannot start secure DataNode without " +
    "configuring either privileged resources or SASL RPC data transfer " +
    "protection and SSL for HTTP.  Using privileged resources in " +
    "combination with SASL RPC data transfer protection is not supported.");
}
 
Developer: naver, Project: hadoop, Lines: 44, Source: DataNode.java
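
The second accepted configuration can be expressed directly against the standard HDFS keys. A sketch (the key names are the stock Hadoop ones; the values shown are one valid choice) of a conf that passes this check with resources == null, i.e. without privileged ports:

Configuration conf = new HdfsConfiguration();
conf.set("hadoop.security.authentication", "kerberos");     // security on
conf.set("dfs.data.transfer.protection", "authentication"); // SASL on DataTransferProtocol
conf.set("dfs.http.policy", "HTTPS_ONLY");                  // no plain-HTTP listener
// With these three settings, checkSecureConfig returns normally even though
// no privileged resources were obtained via SecureDataNodeStarter.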

Example 3: makeInstance

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
/**
 * Make an instance of DataNode after ensuring that at least one of the
 * given data directories (and their parent directories, if necessary)
 * can be created.
 * @param dataDirs List of directories, where the new DataNode instance should
 * keep its files.
 * @param conf Configuration instance to use.
 * @param resources Secure resources needed to run under Kerberos
 * @return DataNode instance for given list of data dirs and conf, or null if
 * no directory from this directory list can be created.
 * @throws IOException
 */
static DataNode makeInstance(Collection<StorageLocation> dataDirs,
    Configuration conf, SecureResources resources) throws IOException {
  LocalFileSystem localFS = FileSystem.getLocal(conf);
  FsPermission permission = new FsPermission(
      conf.get(DFS_DATANODE_DATA_DIR_PERMISSION_KEY,
               DFS_DATANODE_DATA_DIR_PERMISSION_DEFAULT));
  DataNodeDiskChecker dataNodeDiskChecker =
      new DataNodeDiskChecker(permission);
  List<StorageLocation> locations =
      checkStorageLocations(dataDirs, localFS, dataNodeDiskChecker);
  DefaultMetricsSystem.initialize("DataNode");

  assert locations.size() > 0 : "number of data directories should be > 0";
  return new DataNode(conf, locations, resources);
}
 
Developer: naver, Project: hadoop, Lines: 28, Source: DataNode.java
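
The DataNodeDiskChecker above enforces the configured data-directory permission; the key behind DFS_DATANODE_DATA_DIR_PERMISSION_KEY is dfs.datanode.data.dir.perm, with a shipped default of "700" (owner-only). A one-line sketch of setting it explicitly:

Configuration conf = new HdfsConfiguration();
conf.set("dfs.datanode.data.dir.perm", "700"); // owner-only access to block files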

Example 4: secureMain

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
public static void secureMain(String args[], SecureResources resources) {
  int errorCode = 0;
  try {
    StringUtils.startupShutdownMessage(DataNode.class, args, LOG);
    DataNode datanode = createDataNode(args, null, resources);
    if (datanode != null) {
      datanode.join();
    } else {
      errorCode = 1;
    }
  } catch (Throwable e) {
    LOG.fatal("Exception in secureMain", e);
    terminate(1, e);
  } finally {
    // We need to terminate the process here because either shutdown was called
    // or some disk-related condition like the volumes-tolerated or
    // volumes-required threshold was not met. Also, in secure mode, control
    // will go to Jsvc and the Datanode process hangs if it does not exit.
    LOG.warn("Exiting Datanode");
    terminate(errorCode);
  }
}
 
Developer: naver, Project: hadoop, Lines: 23, Source: DataNode.java
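
For context, the non-secure entry point in the same class delegates here with null resources. A hedged reconstruction (DFSUtil.parseHelpArgument and DataNode.USAGE are assumed from the Hadoop sources):

public static void main(String args[]) {
  if (DFSUtil.parseHelpArgument(args, DataNode.USAGE, System.out, true)) {
    System.exit(0);
  }
  secureMain(args, null); // no privileged resources outside secure mode
}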

Example 5: instantiateDataNode

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
/** Instantiate a single datanode object, along with its secure resources. 
 * This must be followed by a call to {@link DataNode#runDatanodeDaemon()}.
 */
public static DataNode instantiateDataNode(String args [], Configuration conf,
    SecureResources resources) throws IOException {
  if (conf == null)
    conf = new HdfsConfiguration();
  
  if (args != null) {
    // parse generic hadoop options
    GenericOptionsParser hParser = new GenericOptionsParser(conf, args);
    args = hParser.getRemainingArgs();
  }
  
  if (!parseArguments(args, conf)) {
    printUsage(System.err);
    return null;
  }
  Collection<StorageLocation> dataLocations = getStorageLocations(conf);
  UserGroupInformation.setConfiguration(conf);
  SecurityUtil.login(conf, DFS_DATANODE_KEYTAB_FILE_KEY,
      DFS_DATANODE_KERBEROS_PRINCIPAL_KEY, getHostName(conf));
  return makeInstance(dataLocations, conf, resources);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 26, Source: DataNode.java
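
The javadoc's requirement is what the createDataNode factory used throughout these examples takes care of; a hedged reconstruction of its core:

public static DataNode createDataNode(String args[], Configuration conf,
    SecureResources resources) throws IOException {
  DataNode dn = instantiateDataNode(args, conf, resources);
  if (dn != null) {
    dn.runDatanodeDaemon(); // start the daemon threads, as the javadoc requires
  }
  return dn;
}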

Example 6: secureMain

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
public static void secureMain(String args[], SecureResources resources) {
  int errorCode = 0;
  try {
    StringUtils.startupShutdownMessage(DataNode.class, args, LOG);
    DataNode datanode = createDataNode(args, null, resources);
    if (datanode != null) {
      datanode.join();
    } else {
      errorCode = 1;
    }
  } catch (Throwable e) {
    LOG.error("Exception in secureMain", e);
    terminate(1, e);
  } finally {
    // We need to terminate the process here because either shutdown was called
    // or some disk-related condition like the volumes-tolerated or
    // volumes-required threshold was not met. Also, in secure mode, control
    // will go to Jsvc and the Datanode process hangs if it does not exit.
    LOG.warn("Exiting Datanode");
    terminate(errorCode);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 23, Source: DataNode.java

Example 7: instantiateDataNode

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
/** Instantiate a single datanode object, along with its secure resources. 
 * This must be followed by a call to {@link DataNode#runDatanodeDaemon()}.
 */
public static DataNode instantiateDataNode(String args [], Configuration conf,
    SecureResources resources) throws IOException {
  if (conf == null)
    conf = new HdfsConfiguration();
  
  if (args != null) {
    // parse generic hadoop options
    GenericOptionsParser hParser = new GenericOptionsParser(conf, args);
    args = hParser.getRemainingArgs();
  }
  
  if (!parseArguments(args, conf)) {
    printUsage(System.err);
    return null;
  }
  Collection<StorageLocation> dataLocations = getStorageLocations(conf);
  UserGroupInformation.setConfiguration(conf);
  SecurityUtil.login(conf, DFS_DATANODE_KEYTAB_FILE_KEY,
      DFS_DATANODE_KERBEROS_PRINCIPAL_KEY);
  return makeInstance(dataLocations, conf, resources);
}
 
Developer: yncxcw, Project: big-c, Lines: 26, Source: DataNode.java

Example 8: instantiateDataNode

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
/** Instantiate a single datanode object, along with its secure resources. 
 * This must be followed by a call to {@link DataNode#runDatanodeDaemon()}.
 */
public static DataNode instantiateDataNode(String args [], Configuration conf,
    SecureResources resources) throws IOException {
  if (conf == null)
    conf = new HdfsConfiguration();   // czhc: how does this locate the configuration files?
  
  if (args != null) {
    // parse generic hadoop options
    GenericOptionsParser hParser = new GenericOptionsParser(conf, args);
    args = hParser.getRemainingArgs();
  }
  
  if (!parseArguments(args, conf)) {
    printUsage(System.err);
    return null;
  }
  Collection<StorageLocation> dataLocations = getStorageLocations(conf);
  UserGroupInformation.setConfiguration(conf);
  SecurityUtil.login(conf, DFS_DATANODE_KEYTAB_FILE_KEY,
      DFS_DATANODE_KERBEROS_PRINCIPAL_KEY);
  return makeInstance(dataLocations, conf, resources);
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 26, Source: DataNode.java

Example 9: instantiateDataNode

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
/** Instantiate a single datanode object, along with its secure resources. 
 * This must be followed by a call to {@link DataNode#runDatanodeDaemon()}.
 */
public static DataNode instantiateDataNode(String args [], Configuration conf,
    SecureResources resources) throws IOException {
  if (conf == null)
    conf = new HdfsConfiguration();
  
  if (args != null) {
    // parse generic hadoop options
    GenericOptionsParser hParser = new GenericOptionsParser(conf, args);
    args = hParser.getRemainingArgs();
  }
  
  if (!parseArguments(args, conf)) {
    printUsage(System.err);
    return null;
  }
  Collection<URI> dataDirs = getStorageDirs(conf);
  UserGroupInformation.setConfiguration(conf);
  SecurityUtil.login(conf, DFS_DATANODE_KEYTAB_FILE_KEY,
      DFS_DATANODE_USER_NAME_KEY);
  return makeInstance(dataDirs, conf, resources);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 26, Source: DataNode.java

Example 10: makeInstance

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
/**
 * Make an instance of DataNode after ensuring that at least one of the
 * given data directories (and their parent directories, if necessary)
 * can be created.
 * @param dataDirs List of directories, where the new DataNode instance should
 * keep its files.
 * @param conf Configuration instance to use.
 * @param resources Secure resources needed to run under Kerberos
 * @return DataNode instance for given list of data dirs and conf, or null if
 * no directory from this directory list can be created.
 * @throws IOException
 */
static DataNode makeInstance(Collection<URI> dataDirs, Configuration conf,
    SecureResources resources) throws IOException {
  LocalFileSystem localFS = FileSystem.getLocal(conf);
  FsPermission permission = new FsPermission(
      conf.get(DFS_DATANODE_DATA_DIR_PERMISSION_KEY,
               DFS_DATANODE_DATA_DIR_PERMISSION_DEFAULT));
  DataNodeDiskChecker dataNodeDiskChecker =
      new DataNodeDiskChecker(permission);
  ArrayList<File> dirs =
      getDataDirsFromURIs(dataDirs, localFS, dataNodeDiskChecker);
  DefaultMetricsSystem.initialize("DataNode");

  assert dirs.size() > 0 : "number of data directories should be > 0";
  return new DataNode(conf, dirs, resources);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 28, Source: DataNode.java

Example 11: secureMain

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
public static void secureMain(String args[], SecureResources resources) {
  try {
    StringUtils.startupShutdownMessage(DataNode.class, args, LOG);
    DataNode datanode = createDataNode(args, null, resources);
    if (datanode != null)
      datanode.join();
  } catch (Throwable e) {
    LOG.fatal("Exception in secureMain", e);
    terminate(1, e);
  } finally {
    // We need to terminate the process here because either shutdown was called
    // or some disk-related condition like the volumes-tolerated or
    // volumes-required threshold was not met. Also, in secure mode, control
    // will go to Jsvc and the Datanode process hangs if it does not exit.
    LOG.warn("Exiting Datanode");
    terminate(0);
  }
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 19, Source: DataNode.java

Example 12: restartDataNode

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
/**
 * Restart a datanode, on the same port if requested
 * @param dnprop the datanode to restart
 * @param keepPort whether to use the same port 
 * @return true if restarting is successful
 * @throws IOException
 */
public synchronized boolean restartDataNode(DataNodeProperties dnprop,
    boolean keepPort) throws IOException {
  Configuration conf = dnprop.conf;
  String[] args = dnprop.dnArgs;
  SecureResources secureResources = dnprop.secureResources;
  Configuration newconf = new HdfsConfiguration(conf); // save cloned config
  if (keepPort) {
    InetSocketAddress addr = dnprop.datanode.getXferAddress();
    conf.set(DFS_DATANODE_ADDRESS_KEY, 
        addr.getAddress().getHostAddress() + ":" + addr.getPort());
  }
  dataNodes.add(new DataNodeProperties(
      DataNode.createDataNode(args, conf, secureResources),
      newconf, args, secureResources));
  numDataNodes++;
  return true;
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 25, Source: MiniDFSCluster.java

Example 13: DataNode

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
/**
 * Create the DataNode given a configuration, an array of dataDirs,
 * and a namenode proxy
 */
DataNode(final Configuration conf, final AbstractList<File> dataDirs,
    final SecureResources resources) throws IOException {
  super(conf);

  this.usersWithLocalPathAccess = Arrays.asList(conf.getTrimmedStrings(
      DFSConfigKeys.DFS_BLOCK_LOCAL_PATH_ACCESS_USER_KEY));
  this.connectToDnViaHostname =
      conf.getBoolean(DFSConfigKeys.DFS_DATANODE_USE_DN_HOSTNAME,
          DFSConfigKeys.DFS_DATANODE_USE_DN_HOSTNAME_DEFAULT);
  this.getHdfsBlockLocationsEnabled =
      conf.getBoolean(DFSConfigKeys.DFS_HDFS_BLOCKS_METADATA_ENABLED,
          DFSConfigKeys.DFS_HDFS_BLOCKS_METADATA_ENABLED_DEFAULT);
  try {
    hostName = getHostName(conf);
    LOG.info("Configured hostname is " + hostName);
    startDataNode(conf, dataDirs, resources);
  } catch (IOException ie) {
    shutdown();
    throw ie;
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 26, Source: DataNode.java

Example 14: instantiateDataNode

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
/**
 * Instantiate a single datanode object, along with its secure resources.
 * This must be followed by a call to {@link DataNode#runDatanodeDaemon()}.
 */
public static DataNode instantiateDataNode(String args[], Configuration conf,
    SecureResources resources) throws IOException {
  if (conf == null) {
    conf = new HdfsConfiguration();
  }
  
  if (args != null) {
    // parse generic hadoop options
    GenericOptionsParser hParser = new GenericOptionsParser(conf, args);
    args = hParser.getRemainingArgs();
  }
  
  if (!parseArguments(args, conf)) {
    printUsage(System.err);
    return null;
  }
  Collection<URI> dataDirs = getStorageDirs(conf);
  UserGroupInformation.setConfiguration(conf);
  SecurityUtil
      .login(conf, DFS_DATANODE_KEYTAB_FILE_KEY, DFS_DATANODE_USER_NAME_KEY);
  return makeInstance(dataDirs, conf, resources);
}
 
Developer: hopshadoop, Project: hops, Lines: 28, Source: DataNode.java

Example 15: secureMain

import org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources; // import the required package/class
public static void secureMain(String args[], SecureResources resources) {
  try {
    StringUtils.startupShutdownMessage(DataNode.class, args, LOG);
    DataNode datanode = createDataNode(args, null, resources);
    if (datanode != null) {
      datanode.join();
    }
  } catch (Throwable e) {
    LOG.fatal("Exception in secureMain", e);
    terminate(1, e);
  } finally {
    // We need to terminate the process here because either shutdown was called
    // or some disk-related condition like the volumes-tolerated or
    // volumes-required threshold was not met. Also, in secure mode, control
    // will go to Jsvc and the Datanode process hangs if it does not exit.
    LOG.warn("Exiting Datanode");
    terminate(0);
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 20, Source: DataNode.java


Note: The org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.SecureResources class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by the community; copyright in the source code remains with the original authors. Consult the corresponding project's license before distributing or reusing the code; do not reproduce this article without permission.