

Java UserGroupInformation.isSecurityEnabled Method Code Examples

This article collects typical usage examples of the org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled method in Java. If you are wondering what UserGroupInformation.isSecurityEnabled does, how to use it, or where to find examples, the curated method examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.security.UserGroupInformation.


The following presents 15 code examples of the UserGroupInformation.isSecurityEnabled method, sorted by popularity by default.
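
Before diving into the examples, here is a minimal, self-contained sketch of the pattern nearly all of them share: call UserGroupInformation.isSecurityEnabled() and perform Kerberos-specific work only when it returns true. The method reflects the effective Hadoop Configuration (hadoop.security.authentication set to "kerberos" rather than "simple"). The principal and keytab path below are illustrative placeholders, not values taken from any of the quoted projects.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class SecurityCheckSketch {
  public static void main(String[] args) throws Exception {
    // isSecurityEnabled() reflects hadoop.security.authentication in the
    // Configuration handed to UserGroupInformation: "kerberos" -> true.
    Configuration conf = new Configuration();
    UserGroupInformation.setConfiguration(conf);

    if (UserGroupInformation.isSecurityEnabled()) {
      // Secure cluster: log in from a keytab before touching HDFS/YARN.
      // The principal and keytab path are placeholders.
      UserGroupInformation.loginUserFromKeytab(
          "service/host.example.com@EXAMPLE.COM",
          "/etc/security/keytabs/service.keytab");
    }
    System.out.println("security enabled: "
        + UserGroupInformation.isSecurityEnabled());
  }
}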

Example 1: addInternalServlet

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
/**
 * Add an internal servlet in the server, specifying whether or not to
 * protect with Kerberos authentication.
 * Note: This method is to be used for adding servlets that facilitate
 * internal communication and not for user facing functionality. For
 * servlets added using this method, filters (except internal Kerberos
 * filters) are not enabled.
 *
 * @param name The name of the servlet (can be passed as null)
 * @param pathSpec The path spec for the servlet
 * @param clazz The servlet class
 * @param requireAuth Require Kerberos authentication to access the servlet
 */
public void addInternalServlet(String name, String pathSpec,
    Class<? extends HttpServlet> clazz, boolean requireAuth) {
  ServletHolder holder = new ServletHolder(clazz);
  if (name != null) {
    holder.setName(name);
  }
  webAppContext.addServlet(holder, pathSpec);

  if(requireAuth && UserGroupInformation.isSecurityEnabled()) {
     LOG.info("Adding Kerberos (SPNEGO) filter to " + name);
     ServletHandler handler = webAppContext.getServletHandler();
     FilterMapping fmap = new FilterMapping();
     fmap.setPathSpec(pathSpec);
     fmap.setFilterName(SPNEGO_FILTER);
     fmap.setDispatches(Handler.ALL);
     handler.addFilterMapping(fmap);
  }
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 32, Source: HttpServer2.java

Example 2: testSimpleAuth

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
@Test
public void testSimpleAuth() throws Exception {

  rm.start();

  // ensure users can access web pages
  // this should work for secure and non-secure clusters
  URL url = new URL("http://localhost:8088/cluster");
  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
  try {
    conn.getInputStream();
    assertEquals(Status.OK.getStatusCode(), conn.getResponseCode());
  } catch (Exception e) {
    fail("Fetching url failed");
  }

  if (UserGroupInformation.isSecurityEnabled()) {
    testAnonymousKerberosUser();
  } else {
    testAnonymousSimpleUser();
  }

  rm.stop();
}
 
Developer: naver, Project: hadoop, Lines: 25, Source: TestRMWebappAuthentication.java

Example 3: addInternalServlet

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
/**
 * Add an internal servlet in the server, specifying whether or not to
 * protect with Kerberos authentication. 
 * Note: This method is to be used for adding servlets that facilitate
 * internal communication and not for user facing functionality. For
 * servlets added using this method, filters (except internal Kerberos
 * filters) are not enabled. 
 * 
 * @param name The name of the servlet (can be passed as null)
 * @param pathSpec The path spec for the servlet
 * @param clazz The servlet class
 * @param requireAuth Require Kerberos authentication to access the servlet
 */
public void addInternalServlet(String name, String pathSpec, 
    Class<? extends HttpServlet> clazz, boolean requireAuth) {
  ServletHolder holder = new ServletHolder(clazz);
  if (name != null) {
    holder.setName(name);
  }
  webAppContext.addServlet(holder, pathSpec);

  if(requireAuth && UserGroupInformation.isSecurityEnabled()) {
     LOG.info("Adding Kerberos (SPNEGO) filter to " + name);
     ServletHandler handler = webAppContext.getServletHandler();
     FilterMapping fmap = new FilterMapping();
     fmap.setPathSpec(pathSpec);
     fmap.setFilterName(SPNEGO_FILTER);
     fmap.setDispatches(Handler.ALL);
     handler.addFilterMapping(fmap);
  }
}
 
Developer: naver, Project: hadoop, Lines: 32, Source: HttpServer.java

Example 4: checkRequestorOrSendError

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
private boolean checkRequestorOrSendError(Configuration conf,
    HttpServletRequest request, HttpServletResponse response)
        throws IOException {
  if (UserGroupInformation.isSecurityEnabled()
      && !isValidRequestor(request, conf)) {
    response.sendError(HttpServletResponse.SC_FORBIDDEN,
        "Only Namenode and another JournalNode may access this servlet");
    LOG.warn("Received non-NN/JN request for edits from "
        + request.getRemoteHost());
    return false;
  }
  return true;
}
 
Developer: naver, Project: hadoop, Lines: 14, Source: GetJournalEditServlet.java

Example 5: verifyUsernamePattern

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
void verifyUsernamePattern(String user) {
  if (!UserGroupInformation.isSecurityEnabled() &&
      !nonsecureLocalUserPattern.matcher(user).matches()) {
    throw new IllegalArgumentException("Invalid user name '" + user + "'," +
        " it must match '" + nonsecureLocalUserPattern.pattern() + "'");
  }
}
 
Developer: naver, Project: hadoop, Lines: 8, Source: LinuxContainerExecutor.java
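
The guard in Example 5 only fires on non-secure clusters, where container processes run as a locally mapped user. A standalone sketch of the same check follows; the regex is what is assumed to be the NodeManager's default non-secure user pattern, so treat it as illustrative rather than authoritative.

import java.util.regex.Pattern;

public class UsernamePatternSketch {
  // Assumed default of the NodeManager's nonsecure-mode user pattern;
  // the real value is read from configuration, so this is illustrative.
  private static final Pattern NONSECURE_LOCAL_USER =
      Pattern.compile("^[_.A-Za-z0-9][-@_.A-Za-z0-9]{0,255}?[$]?$");

  static void verifyUsernamePattern(String user, boolean securityEnabled) {
    // As in Example 5: the pattern is enforced only when security is off.
    if (!securityEnabled && !NONSECURE_LOCAL_USER.matcher(user).matches()) {
      throw new IllegalArgumentException("Invalid user name '" + user + "',"
          + " it must match '" + NONSECURE_LOCAL_USER.pattern() + "'");
    }
  }

  public static void main(String[] args) {
    verifyUsernamePattern("nobody", false);    // passes
    verifyUsernamePattern("bad user!", false); // throws IllegalArgumentException
  }
}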

Example 6: serviceStart

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
@Override
protected void serviceStart() throws Exception {
  if (UserGroupInformation.isSecurityEnabled()) {
    loginUGI = UserGroupInformation.getLoginUser();
  } else {
    loginUGI = UserGroupInformation.getCurrentUser();
  }
  clientRpcServer.start();
}
 
Developer: naver, Project: hadoop, Lines: 10, Source: HSAdminServer.java
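
Example 6's two branches pick different identities: on a secure cluster the service acts as the keytab-authenticated login user, while on an insecure one the process's current user suffices. A runnable distillation of that choice, assuming only hadoop-common on the classpath:

import org.apache.hadoop.security.UserGroupInformation;

public class LoginUserSketch {
  public static void main(String[] args) throws Exception {
    // getLoginUser(): the identity established at login time (e.g. from a
    // keytab); getCurrentUser(): the UGI of the current access-control
    // context, which may differ inside a doAs block.
    UserGroupInformation ugi = UserGroupInformation.isSecurityEnabled()
        ? UserGroupInformation.getLoginUser()
        : UserGroupInformation.getCurrentUser();
    System.out.println("serving as: " + ugi.getUserName());
  }
}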

Example 7: testAuthorizedAccess

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
@Test
public void testAuthorizedAccess() throws Exception {
  MyContainerManager containerManager = new MyContainerManager();
  rm =
      new MockRMWithAMS(conf, containerManager);
  rm.start();

  MockNM nm1 = rm.registerNode("localhost:1234", 5120);

  Map<ApplicationAccessType, String> acls =
      new HashMap<ApplicationAccessType, String>(2);
  acls.put(ApplicationAccessType.VIEW_APP, "*");
  RMApp app = rm.submitApp(1024, "appname", "appuser", acls);

  nm1.nodeHeartbeat(true);

  int waitCount = 0;
  while (containerManager.containerTokens == null && waitCount++ < 20) {
    LOG.info("Waiting for AM Launch to happen..");
    Thread.sleep(1000);
  }
  Assert.assertNotNull(containerManager.containerTokens);

  RMAppAttempt attempt = app.getCurrentAppAttempt();
  ApplicationAttemptId applicationAttemptId = attempt.getAppAttemptId();
  waitForLaunchedState(attempt);

  // Create a client to the RM.
  final Configuration conf = rm.getConfig();
  final YarnRPC rpc = YarnRPC.create(conf);

  UserGroupInformation currentUser = UserGroupInformation
      .createRemoteUser(applicationAttemptId.toString());
  Credentials credentials = containerManager.getContainerCredentials();
  final InetSocketAddress rmBindAddress =
      rm.getApplicationMasterService().getBindAddress();
  Token<? extends TokenIdentifier> amRMToken =
      MockRMWithAMS.setupAndReturnAMRMToken(rmBindAddress,
        credentials.getAllTokens());
  currentUser.addToken(amRMToken);
  ApplicationMasterProtocol client = currentUser
      .doAs(new PrivilegedAction<ApplicationMasterProtocol>() {
        @Override
        public ApplicationMasterProtocol run() {
          return (ApplicationMasterProtocol) rpc.getProxy(ApplicationMasterProtocol.class, rm
            .getApplicationMasterService().getBindAddress(), conf);
        }
      });

  RegisterApplicationMasterRequest request = Records
      .newRecord(RegisterApplicationMasterRequest.class);
  RegisterApplicationMasterResponse response =
      client.registerApplicationMaster(request);
  Assert.assertNotNull(response.getClientToAMTokenMasterKey());
  if (UserGroupInformation.isSecurityEnabled()) {
    Assert
      .assertTrue(response.getClientToAMTokenMasterKey().array().length > 0);
  }
  Assert.assertEquals("Register response has bad ACLs", "*",
      response.getApplicationACLs().get(ApplicationAccessType.VIEW_APP));
}
 
Developer: naver, Project: hadoop, Lines: 62, Source: TestAMAuthorization.java

Example 8: verifyTokenCount

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
private void verifyTokenCount(ApplicationAttemptId appAttemptId, int count) {
  verify(amRMTokenManager, times(count)).applicationMasterFinished(appAttemptId);
  if (UserGroupInformation.isSecurityEnabled()) {
    verify(clientToAMTokenManager, times(count)).unRegisterApplication(appAttemptId);
    if (count > 0) {
      assertNull(applicationAttempt.createClientToken("client"));
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 10, Source: TestRMAppAttemptTransitions.java

Example 9: isAllowedDelegationTokenOp

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
private boolean isAllowedDelegationTokenOp() throws IOException {
  if (UserGroupInformation.isSecurityEnabled()) {
    return EnumSet.of(AuthenticationMethod.KERBEROS,
                      AuthenticationMethod.KERBEROS_SSL,
                      AuthenticationMethod.CERTIFICATE)
        .contains(UserGroupInformation.getCurrentUser()
                .getRealAuthenticationMethod());
  } else {
    return true;
  }
}
 
Developer: naver, Project: hadoop, Lines: 12, Source: ClientRMService.java

Example 10: login

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
public static void login(Map conf, Configuration hdfsConfig)
		throws IOException {
	if (UserGroupInformation.isSecurityEnabled()) {
		String keytab = (String) conf.get(STORM_KEYTAB_FILE_KEY);
		if (keytab != null) {
			hdfsConfig.set(STORM_KEYTAB_FILE_KEY, keytab);
		}
		String userName = (String) conf.get(STORM_USER_NAME_KEY);
		if (userName != null) {
			hdfsConfig.set(STORM_USER_NAME_KEY, userName);
		}
		SecurityUtil.login(hdfsConfig, STORM_KEYTAB_FILE_KEY,
				STORM_USER_NAME_KEY);
	}
}
 
Developer: PacktPublishing, Project: Mastering-Apache-Storm, Lines: 16, Source: HdfsSecurityUtil.java

Example 11: getSecureResources

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
/**
 * Acquire privileged resources (i.e., the privileged ports) for the data
 * node. The privileged resources consist of the port of the RPC server and
 * the port of HTTP (not HTTPS) server.
 */
@VisibleForTesting
public static SecureResources getSecureResources(Configuration conf)
    throws Exception {
  HttpConfig.Policy policy = DFSUtil.getHttpPolicy(conf);
  boolean isSecure = UserGroupInformation.isSecurityEnabled();

  // Obtain secure port for data streaming to datanode
  InetSocketAddress streamingAddr  = DataNode.getStreamingAddr(conf);
  int socketWriteTimeout = conf.getInt(
      DFSConfigKeys.DFS_DATANODE_SOCKET_WRITE_TIMEOUT_KEY,
      HdfsServerConstants.WRITE_TIMEOUT);

  ServerSocket ss = (socketWriteTimeout > 0) ? 
      ServerSocketChannel.open().socket() : new ServerSocket();
  ss.bind(streamingAddr, 0);

  // Check that we got the port we need
  if (ss.getLocalPort() != streamingAddr.getPort()) {
    throw new RuntimeException(
        "Unable to bind on specified streaming port in secure "
            + "context. Needed " + streamingAddr.getPort() + ", got "
            + ss.getLocalPort());
  }

  if (!SecurityUtil.isPrivilegedPort(ss.getLocalPort()) && isSecure) {
    throw new RuntimeException(
      "Cannot start secure datanode with unprivileged RPC ports");
  }

  System.err.println("Opened streaming server at " + streamingAddr);

  // Bind a port for the web server. The code intends to bind HTTP server to
  // privileged port only, as the client can authenticate the server using
  // certificates if they are communicating through SSL.
  final ServerSocketChannel httpChannel;
  if (policy.isHttpEnabled()) {
    httpChannel = ServerSocketChannel.open();
    InetSocketAddress infoSocAddr = DataNode.getInfoAddr(conf);
    httpChannel.socket().bind(infoSocAddr);
    InetSocketAddress localAddr = (InetSocketAddress) httpChannel.socket()
      .getLocalSocketAddress();

    if (localAddr.getPort() != infoSocAddr.getPort()) {
      throw new RuntimeException("Unable to bind on specified info port in secure " +
          "context. Needed " + streamingAddr.getPort() + ", got " + ss.getLocalPort());
    }
    System.err.println("Successfully obtained privileged resources (streaming port = "
        + ss + " ) (http listener port = " + localAddr.getPort() +")");

    if (localAddr.getPort() > 1023 && isSecure) {
      throw new RuntimeException(
          "Cannot start secure datanode with unprivileged HTTP ports");
    }
    System.err.println("Opened info server at " + infoSocAddr);
  } else {
    httpChannel = null;
  }

  return new SecureResources(ss, httpChannel);
}
 
Developer: naver, Project: hadoop, Lines: 66, Source: SecureDataNodeStarter.java
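
The heart of Example 11 is a single rule: when security is enabled, the DataNode's listening ports must be privileged (below 1024), which is what SecurityUtil.isPrivilegedPort checks. A stripped-down sketch of that rule, using only java.net and a boolean stand-in for the security check:

import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class PrivilegedPortSketch {
  public static void main(String[] args) throws Exception {
    boolean isSecure = true; // stand-in for UserGroupInformation.isSecurityEnabled()
    try (ServerSocket ss = new ServerSocket()) {
      // Port 0 asks the OS for an ephemeral, always-unprivileged port.
      ss.bind(new InetSocketAddress("127.0.0.1", 0));
      boolean privileged = ss.getLocalPort() < 1024;
      if (isSecure && !privileged) {
        // Example 11 throws a RuntimeException at this point.
        System.out.println("would refuse to start: port " + ss.getLocalPort()
            + " is unprivileged on a secure cluster");
      }
    }
  }
}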

Example 12: initialize

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
@Override
public synchronized void initialize(URI uri, Configuration conf
    ) throws IOException {
  super.initialize(uri, conf);
  setConf(conf);
  /** set user pattern based on configuration file */
  UserParam.setUserPattern(conf.get(
      DFSConfigKeys.DFS_WEBHDFS_USER_PATTERN_KEY,
      DFSConfigKeys.DFS_WEBHDFS_USER_PATTERN_DEFAULT));

  connectionFactory = URLConnectionFactory
      .newDefaultURLConnectionFactory(conf);

  ugi = UserGroupInformation.getCurrentUser();
  this.uri = URI.create(uri.getScheme() + "://" + uri.getAuthority());
  this.nnAddrs = resolveNNAddr();

  boolean isHA = HAUtil.isClientFailoverConfigured(conf, this.uri);
  boolean isLogicalUri = isHA && HAUtil.isLogicalUri(conf, this.uri);
  // In non-HA or non-logical URI case, the code needs to call
  // getCanonicalUri() in order to handle the case where no port is
  // specified in the URI
  this.tokenServiceName = isLogicalUri ?
      HAUtil.buildTokenServiceForLogicalUri(uri, getScheme())
      : SecurityUtil.buildTokenService(getCanonicalUri());

  if (!isHA) {
    this.retryPolicy =
        RetryUtils.getDefaultRetryPolicy(
            conf,
            DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_POLICY_ENABLED_KEY,
            DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_POLICY_ENABLED_DEFAULT,
            DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_POLICY_SPEC_KEY,
            DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_POLICY_SPEC_DEFAULT,
            SafeModeException.class);
  } else {

    int maxFailoverAttempts = conf.getInt(
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_MAX_ATTEMPTS_KEY,
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_MAX_ATTEMPTS_DEFAULT);
    int maxRetryAttempts = conf.getInt(
        DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_MAX_ATTEMPTS_KEY,
        DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_MAX_ATTEMPTS_DEFAULT);
    int failoverSleepBaseMillis = conf.getInt(
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_BASE_KEY,
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_BASE_DEFAULT);
    int failoverSleepMaxMillis = conf.getInt(
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_MAX_KEY,
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_MAX_DEFAULT);

    this.retryPolicy = RetryPolicies
        .failoverOnNetworkException(RetryPolicies.TRY_ONCE_THEN_FAIL,
            maxFailoverAttempts, maxRetryAttempts, failoverSleepBaseMillis,
            failoverSleepMaxMillis);
  }

  this.workingDir = getHomeDirectory();
  this.canRefreshDelegationToken = UserGroupInformation.isSecurityEnabled();
  this.disallowFallbackToInsecureCluster = !conf.getBoolean(
      CommonConfigurationKeys.IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_KEY,
      CommonConfigurationKeys.IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_DEFAULT);
  this.delegationToken = null;
}
 
Developer: naver, Project: hadoop, Lines: 64, Source: WebHdfsFileSystem.java

Example 13: URLLog

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
public URLLog(URLConnectionFactory connectionFactory, URL url) {
  this.connectionFactory = connectionFactory;
  this.isSpnegoEnabled = UserGroupInformation.isSecurityEnabled();
  this.url = url;
}
 
Developer: naver, Project: hadoop, Lines: 6, Source: EditLogFileInputStream.java

Example 14: receive

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
/**
 * Receives SASL negotiation from a peer on behalf of a server.
 *
 * @param peer connection peer
 * @param underlyingOut connection output stream
 * @param underlyingIn connection input stream
 * @param xferPort data transfer port of the DataNode accepting the connection
 * @param datanodeId ID of DataNode accepting connection
 * @return new pair of streams, wrapped after SASL negotiation
 * @throws IOException for any error
 */
public IOStreamPair receive(Peer peer, OutputStream underlyingOut,
    InputStream underlyingIn, int xferPort, DatanodeID datanodeId)
    throws IOException {
  if (dnConf.getEncryptDataTransfer()) {
    LOG.debug(
      "SASL server doing encrypted handshake for peer = {}, datanodeId = {}",
      peer, datanodeId);
    return getEncryptedStreams(peer, underlyingOut, underlyingIn);
  } else if (!UserGroupInformation.isSecurityEnabled()) {
    LOG.debug(
      "SASL server skipping handshake in unsecured configuration for "
      + "peer = {}, datanodeId = {}", peer, datanodeId);
    return new IOStreamPair(underlyingIn, underlyingOut);
  } else if (SecurityUtil.isPrivilegedPort(xferPort)) {
    LOG.debug(
      "SASL server skipping handshake in secured configuration for "
      + "peer = {}, datanodeId = {}", peer, datanodeId);
    return new IOStreamPair(underlyingIn, underlyingOut);
  } else if (dnConf.getSaslPropsResolver() != null) {
    LOG.debug(
      "SASL server doing general handshake for peer = {}, datanodeId = {}",
      peer, datanodeId);
    return getSaslStreams(peer, underlyingOut, underlyingIn);
  } else if (dnConf.getIgnoreSecurePortsForTesting()) {
    // It's a secured cluster using non-privileged ports, but no SASL.  The
    // only way this can happen is if the DataNode has
    // ignore.secure.ports.for.testing configured, so this is a rare edge case.
    LOG.debug(
      "SASL server skipping handshake in secured configuration with no SASL "
      + "protection configured for peer = {}, datanodeId = {}",
      peer, datanodeId);
    return new IOStreamPair(underlyingIn, underlyingOut);
  } else {
    // The error message here intentionally does not mention
    // ignore.secure.ports.for.testing.  That's intended for dev use only.
    // This code path is not expected to execute ever, because DataNode startup
    // checks for invalid configuration and aborts.
    throw new IOException(String.format("Cannot create a secured " +
      "connection if DataNode listens on unprivileged port (%d) and no " +
      "protection is defined in configuration property %s.",
      datanodeId.getXferPort(), DFS_DATA_TRANSFER_PROTECTION_KEY));
  }
}
 
Developer: naver, Project: hadoop, Lines: 55, Source: SaslDataTransferServer.java
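
Because Example 14 is a chain of else-if branches, the order matters as much as the individual conditions. A compact restatement with the Hadoop types replaced by booleans, so the decision ladder can be read and run in isolation:

public class SaslDecisionSketch {
  enum Outcome { ENCRYPTED, PLAIN, SASL, REJECT }

  // Mirrors the branch order of Example 14: encryption wins, then the
  // no-security shortcut, then the privileged-port shortcut, then SASL,
  // then the testing-only escape hatch; anything else is rejected.
  static Outcome decide(boolean encryptDataTransfer, boolean securityEnabled,
      boolean privilegedPort, boolean saslConfigured, boolean ignoreForTesting) {
    if (encryptDataTransfer) return Outcome.ENCRYPTED;
    if (!securityEnabled) return Outcome.PLAIN;
    if (privilegedPort) return Outcome.PLAIN;
    if (saslConfigured) return Outcome.SASL;
    if (ignoreForTesting) return Outcome.PLAIN;
    return Outcome.REJECT;
  }

  public static void main(String[] args) {
    System.out.println(decide(false, true, false, true, false)); // SASL
    System.out.println(decide(false, true, true, false, false)); // PLAIN
  }
}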

Example 15: openForRead

import org.apache.hadoop.security.UserGroupInformation; //import the package/class the method depends on
/**
 * Open the given File for read access, verifying the expected user/group
 * constraints if security is enabled.
 *
 * Note that this function provides no additional checks if Hadoop
 * security is disabled, since doing the checks would be too expensive
 * when native libraries are not available.
 *
 * @param f the file that we are trying to open
 * @param expectedOwner the expected user owner for the file
 * @param expectedGroup the expected group owner for the file
 * @throws IOException if an IO Error occurred, or security is enabled and
 * the user/group does not match
 */
public static FileInputStream openForRead(File f, String expectedOwner, 
    String expectedGroup) throws IOException {
  if (!UserGroupInformation.isSecurityEnabled()) {
    return new FileInputStream(f);
  }
  return forceSecureOpenForRead(f, expectedOwner, expectedGroup);
}
 
Developer: naver, Project: hadoop, Lines: 22, Source: SecureIOUtils.java
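
Example 15 delegates the secure path to forceSecureOpenForRead, which, per the javadoc, verifies the file's owner and group using native code on the already-open descriptor. The sketch below is a hedged, pure-Java approximation: it checks by path rather than by descriptor, so unlike the native version it is racy and suitable for illustration only.

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFileAttributes;

public class OwnershipCheckedOpen {
  public static FileInputStream openForRead(Path p, String expectedOwner,
      String expectedGroup) throws IOException {
    PosixFileAttributes attrs =
        Files.readAttributes(p, PosixFileAttributes.class);
    if (!attrs.owner().getName().equals(expectedOwner)
        || !attrs.group().getName().equals(expectedGroup)) {
      throw new IOException("Owner/group of " + p + " ("
          + attrs.owner().getName() + "/" + attrs.group().getName()
          + ") do not match expected " + expectedOwner + "/" + expectedGroup);
    }
    return new FileInputStream(p.toFile());
  }
}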


Note: The org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers, and copyright in the source code remains with the original authors. For distribution and use, please follow the corresponding project's License; do not republish without permission.