

Java UserGroupInformation.isSecurityEnabled Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled. If you are wondering what UserGroupInformation.isSecurityEnabled does, how to call it, or what real-world uses look like, the curated examples below should help. You can also browse further usage examples of the enclosing class, org.apache.hadoop.security.UserGroupInformation.


The sections below present 15 code examples of UserGroupInformation.isSecurityEnabled, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the site recommend better Java code examples.
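Before the project examples, here is a minimal, self-contained sketch of the most common pattern behind them: call UserGroupInformation.isSecurityEnabled() to decide whether an explicit Kerberos keytab login is needed. The configuration object, principal name, and keytab path below are placeholder assumptions, not values taken from any of the projects listed here.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureLoginExample {
  /**
   * Logs in from a keytab only when Hadoop security (Kerberos) is enabled.
   * The principal and keytab path are illustrative placeholders.
   */
  public static void loginIfSecure(Configuration conf) throws IOException {
    // Make sure UGI picks up hadoop.security.authentication from this Configuration
    UserGroupInformation.setConfiguration(conf);

    if (UserGroupInformation.isSecurityEnabled()) {
      // Kerberos is on: authenticate explicitly from a keytab
      UserGroupInformation.loginUserFromKeytab(
          "service/host@EXAMPLE.COM", "/etc/security/keytabs/service.keytab");
    }
    // With simple authentication, getCurrentUser()/getLoginUser() work without a keytab
  }
}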

Example 1: filters

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
/**
 * Add an internal servlet in the server, specifying whether or not to
 * protect with Kerberos authentication.
 * Note: This method is to be used for adding servlets that facilitate
 * internal communication and not for user facing functionality. For
 * servlets added using this method, filters (except internal Kerberos
 * filters) are not enabled.
 *
 * @param name The name of the servlet (can be passed as null)
 * @param pathSpec The path spec for the servlet
 * @param clazz The servlet class
 * @param requireAuth Require Kerberos authentication to access the servlet
 */
public void addInternalServlet(String name, String pathSpec,
    Class<? extends HttpServlet> clazz, boolean requireAuth) {
  ServletHolder holder = new ServletHolder(clazz);
  if (name != null) {
    holder.setName(name);
  }
  webAppContext.addServlet(holder, pathSpec);

  if(requireAuth && UserGroupInformation.isSecurityEnabled()) {
     LOG.info("Adding Kerberos (SPNEGO) filter to " + name);
     ServletHandler handler = webAppContext.getServletHandler();
     FilterMapping fmap = new FilterMapping();
     fmap.setPathSpec(pathSpec);
     fmap.setFilterName(SPNEGO_FILTER);
     fmap.setDispatches(Handler.ALL);
     handler.addFilterMapping(fmap);
  }
}
 
Developer: nucypher, Project: hadoop-oss, Lines of code: 32, Source file: HttpServer2.java

Example 2: testSimpleAuth

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
@Test
public void testSimpleAuth() throws Exception {

  rm.start();

  // ensure users can access web pages
  // this should work for secure and non-secure clusters
  URL url = new URL("http://localhost:8088/cluster");
  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
  try {
    conn.getInputStream();
    assertEquals(Status.OK.getStatusCode(), conn.getResponseCode());
  } catch (Exception e) {
    fail("Fetching url failed");
  }

  if (UserGroupInformation.isSecurityEnabled()) {
    testAnonymousKerberosUser();
  } else {
    testAnonymousSimpleUser();
  }

  rm.stop();
}
 
Developer: naver, Project: hadoop, Lines of code: 25, Source file: TestRMWebappAuthentication.java

Example 3: filters

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
/**
 * Add an internal servlet in the server, specifying whether or not to
 * protect with Kerberos authentication. 
 * Note: This method is to be used for adding servlets that facilitate
 * internal communication and not for user facing functionality. For
 * servlets added using this method, filters (except internal Kerberos
 * filters) are not enabled. 
 * 
 * @param name The name of the servlet (can be passed as null)
 * @param pathSpec The path spec for the servlet
 * @param clazz The servlet class
 * @param requireAuth Require Kerberos authentication to access the servlet
 */
public void addInternalServlet(String name, String pathSpec, 
    Class<? extends HttpServlet> clazz, boolean requireAuth) {
  ServletHolder holder = new ServletHolder(clazz);
  if (name != null) {
    holder.setName(name);
  }
  webAppContext.addServlet(holder, pathSpec);

  if(requireAuth && UserGroupInformation.isSecurityEnabled()) {
     LOG.info("Adding Kerberos (SPNEGO) filter to " + name);
     ServletHandler handler = webAppContext.getServletHandler();
     FilterMapping fmap = new FilterMapping();
     fmap.setPathSpec(pathSpec);
     fmap.setFilterName(SPNEGO_FILTER);
     fmap.setDispatches(Handler.ALL);
     handler.addFilterMapping(fmap);
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 32, Source file: HttpServer.java

Example 4: checkRequestorOrSendError

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
private boolean checkRequestorOrSendError(Configuration conf,
    HttpServletRequest request, HttpServletResponse response)
        throws IOException {
  if (UserGroupInformation.isSecurityEnabled()
      && !isValidRequestor(request, conf)) {
    response.sendError(HttpServletResponse.SC_FORBIDDEN,
        "Only Namenode and another JournalNode may access this servlet");
    LOG.warn("Received non-NN/JN request for edits from "
        + request.getRemoteHost());
    return false;
  }
  return true;
}
 
Developer: naver, Project: hadoop, Lines of code: 14, Source file: GetJournalEditServlet.java

Example 5: verifyUsernamePattern

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
void verifyUsernamePattern(String user) {
  if (!UserGroupInformation.isSecurityEnabled() &&
      !nonsecureLocalUserPattern.matcher(user).matches()) {
    throw new IllegalArgumentException("Invalid user name '" + user + "'," +
        " it must match '" + nonsecureLocalUserPattern.pattern() + "'");
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 8, Source file: LinuxContainerExecutor.java

Example 6: serviceStart

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
@Override
protected void serviceStart() throws Exception {
  if (UserGroupInformation.isSecurityEnabled()) {
    loginUGI = UserGroupInformation.getLoginUser();
  } else {
    loginUGI = UserGroupInformation.getCurrentUser();
  }
  clientRpcServer.start();
}
 
Developer: naver, Project: hadoop, Lines of code: 10, Source file: HSAdminServer.java

Example 7: testAuthorizedAccess

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
@Test
public void testAuthorizedAccess() throws Exception {
  MyContainerManager containerManager = new MyContainerManager();
  rm =
      new MockRMWithAMS(conf, containerManager);
  rm.start();

  MockNM nm1 = rm.registerNode("localhost:1234", 5120);

  Map<ApplicationAccessType, String> acls =
      new HashMap<ApplicationAccessType, String>(2);
  acls.put(ApplicationAccessType.VIEW_APP, "*");
  RMApp app = rm.submitApp(1024, "appname", "appuser", acls);

  nm1.nodeHeartbeat(true);

  int waitCount = 0;
  while (containerManager.containerTokens == null && waitCount++ < 20) {
    LOG.info("Waiting for AM Launch to happen..");
    Thread.sleep(1000);
  }
  Assert.assertNotNull(containerManager.containerTokens);

  RMAppAttempt attempt = app.getCurrentAppAttempt();
  ApplicationAttemptId applicationAttemptId = attempt.getAppAttemptId();
  waitForLaunchedState(attempt);

  // Create a client to the RM.
  final Configuration conf = rm.getConfig();
  final YarnRPC rpc = YarnRPC.create(conf);

  UserGroupInformation currentUser = UserGroupInformation
      .createRemoteUser(applicationAttemptId.toString());
  Credentials credentials = containerManager.getContainerCredentials();
  final InetSocketAddress rmBindAddress =
      rm.getApplicationMasterService().getBindAddress();
  Token<? extends TokenIdentifier> amRMToken =
      MockRMWithAMS.setupAndReturnAMRMToken(rmBindAddress,
        credentials.getAllTokens());
  currentUser.addToken(amRMToken);
  ApplicationMasterProtocol client = currentUser
      .doAs(new PrivilegedAction<ApplicationMasterProtocol>() {
        @Override
        public ApplicationMasterProtocol run() {
          return (ApplicationMasterProtocol) rpc.getProxy(ApplicationMasterProtocol.class, rm
            .getApplicationMasterService().getBindAddress(), conf);
        }
      });

  RegisterApplicationMasterRequest request = Records
      .newRecord(RegisterApplicationMasterRequest.class);
  RegisterApplicationMasterResponse response =
      client.registerApplicationMaster(request);
  Assert.assertNotNull(response.getClientToAMTokenMasterKey());
  if (UserGroupInformation.isSecurityEnabled()) {
    Assert
      .assertTrue(response.getClientToAMTokenMasterKey().array().length > 0);
  }
  Assert.assertEquals("Register response has bad ACLs", "*",
      response.getApplicationACLs().get(ApplicationAccessType.VIEW_APP));
}
 
Developer: naver, Project: hadoop, Lines of code: 62, Source file: TestAMAuthorization.java

Example 8: verifyTokenCount

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
private void verifyTokenCount(ApplicationAttemptId appAttemptId, int count) {
  verify(amRMTokenManager, times(count)).applicationMasterFinished(appAttemptId);
  if (UserGroupInformation.isSecurityEnabled()) {
    verify(clientToAMTokenManager, times(count)).unRegisterApplication(appAttemptId);
    if (count > 0) {
      assertNull(applicationAttempt.createClientToken("client"));
    }
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 10, Source file: TestRMAppAttemptTransitions.java

Example 9: isAllowedDelegationTokenOp

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
private boolean isAllowedDelegationTokenOp() throws IOException {
  if (UserGroupInformation.isSecurityEnabled()) {
    return EnumSet.of(AuthenticationMethod.KERBEROS,
                      AuthenticationMethod.KERBEROS_SSL,
                      AuthenticationMethod.CERTIFICATE)
        .contains(UserGroupInformation.getCurrentUser()
                .getRealAuthenticationMethod());
  } else {
    return true;
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 12, Source file: ClientRMService.java

Example 10: login

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
public static void login(Map conf, Configuration hdfsConfig)
		throws IOException {
	if (UserGroupInformation.isSecurityEnabled()) {
		String keytab = (String) conf.get(STORM_KEYTAB_FILE_KEY);
		if (keytab != null) {
			hdfsConfig.set(STORM_KEYTAB_FILE_KEY, keytab);
		}
		String userName = (String) conf.get(STORM_USER_NAME_KEY);
		if (userName != null) {
			hdfsConfig.set(STORM_USER_NAME_KEY, userName);
		}
		SecurityUtil.login(hdfsConfig, STORM_KEYTAB_FILE_KEY,
				STORM_USER_NAME_KEY);
	}
}
 
Developer: PacktPublishing, Project: Mastering-Apache-Storm, Lines of code: 16, Source file: HdfsSecurityUtil.java

Example 11: getSecureResources

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
/**
 * Acquire privileged resources (i.e., the privileged ports) for the data
 * node. The privileged resources consist of the port of the RPC server and
 * the port of HTTP (not HTTPS) server.
 */
@VisibleForTesting
public static SecureResources getSecureResources(Configuration conf)
    throws Exception {
  HttpConfig.Policy policy = DFSUtil.getHttpPolicy(conf);
  boolean isSecure = UserGroupInformation.isSecurityEnabled();

  // Obtain secure port for data streaming to datanode
  InetSocketAddress streamingAddr  = DataNode.getStreamingAddr(conf);
  int socketWriteTimeout = conf.getInt(
      DFSConfigKeys.DFS_DATANODE_SOCKET_WRITE_TIMEOUT_KEY,
      HdfsServerConstants.WRITE_TIMEOUT);

  ServerSocket ss = (socketWriteTimeout > 0) ? 
      ServerSocketChannel.open().socket() : new ServerSocket();
  ss.bind(streamingAddr, 0);

  // Check that we got the port we need
  if (ss.getLocalPort() != streamingAddr.getPort()) {
    throw new RuntimeException(
        "Unable to bind on specified streaming port in secure "
            + "context. Needed " + streamingAddr.getPort() + ", got "
            + ss.getLocalPort());
  }

  if (!SecurityUtil.isPrivilegedPort(ss.getLocalPort()) && isSecure) {
    throw new RuntimeException(
      "Cannot start secure datanode with unprivileged RPC ports");
  }

  System.err.println("Opened streaming server at " + streamingAddr);

  // Bind a port for the web server. The code intends to bind HTTP server to
  // privileged port only, as the client can authenticate the server using
  // certificates if they are communicating through SSL.
  final ServerSocketChannel httpChannel;
  if (policy.isHttpEnabled()) {
    httpChannel = ServerSocketChannel.open();
    InetSocketAddress infoSocAddr = DataNode.getInfoAddr(conf);
    httpChannel.socket().bind(infoSocAddr);
    InetSocketAddress localAddr = (InetSocketAddress) httpChannel.socket()
      .getLocalSocketAddress();

    if (localAddr.getPort() != infoSocAddr.getPort()) {
      throw new RuntimeException("Unable to bind on specified info port in secure " +
          "context. Needed " + streamingAddr.getPort() + ", got " + ss.getLocalPort());
    }
    System.err.println("Successfully obtained privileged resources (streaming port = "
        + ss + " ) (http listener port = " + localAddr.getPort() +")");

    if (localAddr.getPort() > 1023 && isSecure) {
      throw new RuntimeException(
          "Cannot start secure datanode with unprivileged HTTP ports");
    }
    System.err.println("Opened info server at " + infoSocAddr);
  } else {
    httpChannel = null;
  }

  return new SecureResources(ss, httpChannel);
}
 
Developer: naver, Project: hadoop, Lines of code: 66, Source file: SecureDataNodeStarter.java

Example 12: initialize

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
@Override
public synchronized void initialize(URI uri, Configuration conf
    ) throws IOException {
  super.initialize(uri, conf);
  setConf(conf);
  /** set user pattern based on configuration file */
  UserParam.setUserPattern(conf.get(
      DFSConfigKeys.DFS_WEBHDFS_USER_PATTERN_KEY,
      DFSConfigKeys.DFS_WEBHDFS_USER_PATTERN_DEFAULT));

  connectionFactory = URLConnectionFactory
      .newDefaultURLConnectionFactory(conf);

  ugi = UserGroupInformation.getCurrentUser();
  this.uri = URI.create(uri.getScheme() + "://" + uri.getAuthority());
  this.nnAddrs = resolveNNAddr();

  boolean isHA = HAUtil.isClientFailoverConfigured(conf, this.uri);
  boolean isLogicalUri = isHA && HAUtil.isLogicalUri(conf, this.uri);
  // In non-HA or non-logical URI case, the code needs to call
  // getCanonicalUri() in order to handle the case where no port is
  // specified in the URI
  this.tokenServiceName = isLogicalUri ?
      HAUtil.buildTokenServiceForLogicalUri(uri, getScheme())
      : SecurityUtil.buildTokenService(getCanonicalUri());

  if (!isHA) {
    this.retryPolicy =
        RetryUtils.getDefaultRetryPolicy(
            conf,
            DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_POLICY_ENABLED_KEY,
            DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_POLICY_ENABLED_DEFAULT,
            DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_POLICY_SPEC_KEY,
            DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_POLICY_SPEC_DEFAULT,
            SafeModeException.class);
  } else {

    int maxFailoverAttempts = conf.getInt(
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_MAX_ATTEMPTS_KEY,
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_MAX_ATTEMPTS_DEFAULT);
    int maxRetryAttempts = conf.getInt(
        DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_MAX_ATTEMPTS_KEY,
        DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_MAX_ATTEMPTS_DEFAULT);
    int failoverSleepBaseMillis = conf.getInt(
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_BASE_KEY,
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_BASE_DEFAULT);
    int failoverSleepMaxMillis = conf.getInt(
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_MAX_KEY,
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_MAX_DEFAULT);

    this.retryPolicy = RetryPolicies
        .failoverOnNetworkException(RetryPolicies.TRY_ONCE_THEN_FAIL,
            maxFailoverAttempts, maxRetryAttempts, failoverSleepBaseMillis,
            failoverSleepMaxMillis);
  }

  this.workingDir = getHomeDirectory();
  this.canRefreshDelegationToken = UserGroupInformation.isSecurityEnabled();
  this.disallowFallbackToInsecureCluster = !conf.getBoolean(
      CommonConfigurationKeys.IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_KEY,
      CommonConfigurationKeys.IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_DEFAULT);
  this.delegationToken = null;
}
 
Developer: naver, Project: hadoop, Lines of code: 64, Source file: WebHdfsFileSystem.java

Example 13: URLLog

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
public URLLog(URLConnectionFactory connectionFactory, URL url) {
  this.connectionFactory = connectionFactory;
  this.isSpnegoEnabled = UserGroupInformation.isSecurityEnabled();
  this.url = url;
}
 
Developer: naver, Project: hadoop, Lines of code: 6, Source file: EditLogFileInputStream.java

Example 14: receive

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
/**
 * Receives SASL negotiation from a peer on behalf of a server.
 *
 * @param peer connection peer
 * @param underlyingOut connection output stream
 * @param underlyingIn connection input stream
 * @param xferPort data transfer port of DataNode accepting connection
 * @param datanodeId ID of DataNode accepting connection
 * @return new pair of streams, wrapped after SASL negotiation
 * @throws IOException for any error
 */
public IOStreamPair receive(Peer peer, OutputStream underlyingOut,
    InputStream underlyingIn, int xferPort, DatanodeID datanodeId)
    throws IOException {
  if (dnConf.getEncryptDataTransfer()) {
    LOG.debug(
      "SASL server doing encrypted handshake for peer = {}, datanodeId = {}",
      peer, datanodeId);
    return getEncryptedStreams(peer, underlyingOut, underlyingIn);
  } else if (!UserGroupInformation.isSecurityEnabled()) {
    LOG.debug(
      "SASL server skipping handshake in unsecured configuration for "
      + "peer = {}, datanodeId = {}", peer, datanodeId);
    return new IOStreamPair(underlyingIn, underlyingOut);
  } else if (SecurityUtil.isPrivilegedPort(xferPort)) {
    LOG.debug(
      "SASL server skipping handshake in secured configuration for "
      + "peer = {}, datanodeId = {}", peer, datanodeId);
    return new IOStreamPair(underlyingIn, underlyingOut);
  } else if (dnConf.getSaslPropsResolver() != null) {
    LOG.debug(
      "SASL server doing general handshake for peer = {}, datanodeId = {}",
      peer, datanodeId);
    return getSaslStreams(peer, underlyingOut, underlyingIn);
  } else if (dnConf.getIgnoreSecurePortsForTesting()) {
    // It's a secured cluster using non-privileged ports, but no SASL.  The
    // only way this can happen is if the DataNode has
    // ignore.secure.ports.for.testing configured, so this is a rare edge case.
    LOG.debug(
      "SASL server skipping handshake in secured configuration with no SASL "
      + "protection configured for peer = {}, datanodeId = {}",
      peer, datanodeId);
    return new IOStreamPair(underlyingIn, underlyingOut);
  } else {
    // The error message here intentionally does not mention
    // ignore.secure.ports.for.testing.  That's intended for dev use only.
    // This code path is not expected to execute ever, because DataNode startup
    // checks for invalid configuration and aborts.
    throw new IOException(String.format("Cannot create a secured " +
      "connection if DataNode listens on unprivileged port (%d) and no " +
      "protection is defined in configuration property %s.",
      datanodeId.getXferPort(), DFS_DATA_TRANSFER_PROTECTION_KEY));
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 55, Source file: SaslDataTransferServer.java

Example 15: openForRead

import org.apache.hadoop.security.UserGroupInformation; // import the package/class this method depends on
/**
 * Open the given File for read access, verifying the expected user/group
 * constraints if security is enabled.
 *
 * Note that this function provides no additional checks if Hadoop
 * security is disabled, since doing the checks would be too expensive
 * when native libraries are not available.
 *
 * @param f the file that we are trying to open
 * @param expectedOwner the expected user owner for the file
 * @param expectedGroup the expected group owner for the file
 * @throws IOException if an IO Error occurred, or security is enabled and
 * the user/group does not match
 */
public static FileInputStream openForRead(File f, String expectedOwner, 
    String expectedGroup) throws IOException {
  if (!UserGroupInformation.isSecurityEnabled()) {
    return new FileInputStream(f);
  }
  return forceSecureOpenForRead(f, expectedOwner, expectedGroup);
}
 
Developer: naver, Project: hadoop, Lines of code: 22, Source file: SecureIOUtils.java
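For completeness, a hypothetical caller of the openForRead helper above might look like the sketch below. The file path, owner, and group names are invented for illustration and are not taken from the Hadoop sources.

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

import org.apache.hadoop.io.SecureIOUtils;

public class OpenForReadExample {
  public static void main(String[] args) throws IOException {
    // Hypothetical task log file and expected owner/group -- adjust for your cluster.
    File logFile = new File("/var/log/hadoop/userlogs/application_0001/stderr");
    try (FileInputStream in = SecureIOUtils.openForRead(logFile, "appuser", "hadoop")) {
      // With security enabled, openForRead has already verified the file's owner
      // and group; with simple authentication it behaves like new FileInputStream(f).
      System.out.println("Opened " + logFile + " (" + in.available() + " bytes available)");
    }
  }
}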


Note: The org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by many developers; copyright of the source code remains with the original authors. When distributing or using the code, please follow the License of the corresponding project. Do not reproduce this article without permission.