Java JspHelper.getUGI Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hdfs.server.common.JspHelper.getUGI. If you are wondering how to call JspHelper.getUGI, what its concrete usage looks like, or simply want working examples, the curated code samples below should help. You can also browse further usage examples for the enclosing class, org.apache.hadoop.hdfs.server.common.JspHelper.


The sections below present 12 code examples of the JspHelper.getUGI method, ordered by popularity. A minimal usage sketch comes first, followed by the collected examples.
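The sketch below shows how JspHelper.getUGI is typically called from a servlet running inside the HDFS web server. The servlet class name ExampleDfsServlet and the helper method resolveCaller are illustrative assumptions made for this sketch; the JspHelper.CURRENT_CONF attribute lookup and the getUGI(ServletContext, HttpServletRequest, Configuration) signature follow the examples collected below.

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.common.JspHelper;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical servlet that resolves the calling user's UGI from an HTTP request.
public class ExampleDfsServlet extends HttpServlet {

  protected UserGroupInformation resolveCaller(HttpServletRequest request)
      throws IOException {
    // The HDFS web server stores its active Configuration in the servlet
    // context under JspHelper.CURRENT_CONF (see the examples below).
    final Configuration conf = (Configuration) getServletContext()
        .getAttribute(JspHelper.CURRENT_CONF);
    // Resolve the caller from the request (delegation token, Kerberos
    // principal, or query parameter, depending on the cluster's security setup).
    return JspHelper.getUGI(getServletContext(), request, conf);
  }
}

Examples 1 through 3 use a stricter overload that additionally passes AuthenticationMethod.KERBEROS and a boolean flag, as shown in the code that follows.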

Example 1: getValue

import org.apache.hadoop.hdfs.server.common.JspHelper; // import the package/class this method depends on
@Override
public UserGroupInformation getValue(final HttpContext context) {
  final Configuration conf = (Configuration) servletcontext
      .getAttribute(JspHelper.CURRENT_CONF);
  try {
    return JspHelper.getUGI(servletcontext, request, conf,
        AuthenticationMethod.KERBEROS, false);
  } catch (IOException e) {
    throw new SecurityException(
        SecurityUtil.FAILED_TO_GET_UGI_MSG_HEADER + " " + e, e);
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 13, Source file: UserProvider.java

Example 2: getValue

import org.apache.hadoop.hdfs.server.common.JspHelper; // import the package/class this method depends on
@Override
public UserGroupInformation getValue(final HttpContext context) {
  final Configuration conf = (Configuration) servletcontext
      .getAttribute(JspHelper.CURRENT_CONF);
  try {
    return JspHelper.getUGI(servletcontext, request, conf,
        AuthenticationMethod.KERBEROS, false);
  } catch (IOException e) {
    throw new SecurityException(
        "Failed to obtain user group information: " + e, e);
  }
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 13, Source file: UserProvider.java

Example 3: getValue

import org.apache.hadoop.hdfs.server.common.JspHelper; // import the package/class this method depends on
@Override
public UserGroupInformation getValue(final HttpContext context) {
  final Configuration conf =
      (Configuration) servletcontext.getAttribute(JspHelper.CURRENT_CONF);
  try {
    return JspHelper
        .getUGI(servletcontext, request, conf, AuthenticationMethod.KERBEROS,
            false);
  } catch (IOException e) {
    throw new SecurityException(
        "Failed to obtain user group information: " + e, e);
  }
}
 
Developer: hopshadoop, Project: hops, Lines of code: 14, Source file: UserProvider.java

Example 4: getDelegationToken

import org.apache.hadoop.hdfs.server.common.JspHelper; // import the package/class this method depends on
static String getDelegationToken(final NameNode nn,
    HttpServletRequest request, Configuration conf) throws IOException,
    InterruptedException {
  final UserGroupInformation ugi = JspHelper.getUGI(request, conf);

  Token<DelegationTokenIdentifier> token = ugi
      .doAs(new PrivilegedExceptionAction<Token<DelegationTokenIdentifier>>() {
        public Token<DelegationTokenIdentifier> run() throws IOException {
          return nn.getDelegationToken(new Text(ugi.getUserName()));
        }
      });

  return token == null ? null : token.encodeToUrlString();
}
 
Developer: cumulusyebl, Project: cumulus, Lines of code: 15, Source file: NamenodeJspHelper.java

Example 5: getUGI

import org.apache.hadoop.hdfs.server.common.JspHelper; // import the package/class this method depends on
protected UserGroupInformation getUGI(HttpServletRequest request,
                                      Configuration conf) throws IOException {
  return JspHelper.getUGI(getServletContext(), request, conf);
}
 
Developer: naver, Project: hadoop, Lines of code: 5, Source file: DfsServlet.java

Example 6: redirectToRandomDataNode

import org.apache.hadoop.hdfs.server.common.JspHelper; // import the package/class this method depends on
static void redirectToRandomDataNode(ServletContext context,
    HttpServletRequest request, HttpServletResponse resp) throws IOException,
    InterruptedException {
  final NameNode nn = NameNodeHttpServer.getNameNodeFromContext(context);
  final Configuration conf = (Configuration) context
      .getAttribute(JspHelper.CURRENT_CONF);
  // We can't redirect if there isn't a DN to redirect to.
  // Lets instead show a proper error message.
  FSNamesystem fsn = nn.getNamesystem();

  DatanodeID datanode = null;
  if (fsn != null && fsn.getNumLiveDataNodes() >= 1) {
    datanode = getRandomDatanode(nn);
  }

  if (datanode == null) {
    throw new IOException("Can't browse the DFS since there are no " +
        "live nodes available to redirect to.");
  }

  UserGroupInformation ugi = JspHelper.getUGI(context, request, conf);
  // if the user is defined, get a delegation token and stringify it
  String tokenString = getDelegationToken(
      nn.getRpcServer(), request, conf, ugi);

  InetSocketAddress rpcAddr = nn.getNameNodeAddress();
  String rpcHost = rpcAddr.getAddress().isAnyLocalAddress()
    ? URI.create(request.getRequestURL().toString()).getHost()
    : rpcAddr.getAddress().getHostAddress();
  String addr = rpcHost + ":" + rpcAddr.getPort();

  final String redirectLocation =
      JspHelper.Url.url(request.getScheme(), datanode)
      + "/browseDirectory.jsp?namenodeInfoPort="
      + request.getServerPort() + "&dir=/"
      + (tokenString == null ? "" :
         JspHelper.getDelegationTokenUrlParam(tokenString))
      + JspHelper.getUrlParam(JspHelper.NAMENODE_ADDRESS, addr);

  resp.sendRedirect(redirectLocation);
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 42, Source file: NamenodeJspHelper.java

Example 7: redirectToRandomDataNode

import org.apache.hadoop.hdfs.server.common.JspHelper; // import the package/class this method depends on
static void redirectToRandomDataNode(ServletContext context,
    HttpServletRequest request, HttpServletResponse resp) throws IOException,
    InterruptedException {
  final NameNode nn = NameNodeHttpServer.getNameNodeFromContext(context);
  final Configuration conf = (Configuration) context
      .getAttribute(JspHelper.CURRENT_CONF);
  // We can't redirect if there isn't a DN to redirect to.
  // Lets instead show a proper error message.
  if (nn.getNamesystem().getNumLiveDataNodes() < 1) {
    throw new IOException("Can't browse the DFS since there are no " +
        "live nodes available to redirect to.");
  }
  final DatanodeID datanode = getRandomDatanode(nn);
  UserGroupInformation ugi = JspHelper.getUGI(context, request, conf);
  String tokenString = getDelegationToken(
      nn.getRpcServer(), request, conf, ugi);
  // if the user is defined, get a delegation token and stringify it
  final String redirectLocation;
  final String nodeToRedirect;
  int redirectPort;
  if (datanode != null) {
    nodeToRedirect = datanode.getIpAddr();
    redirectPort = datanode.getInfoPort();
  } else {
    nodeToRedirect = nn.getHttpAddress().getHostName();
    redirectPort = nn.getHttpAddress().getPort();
  }

  InetSocketAddress rpcAddr = nn.getNameNodeAddress();
  String rpcHost = rpcAddr.getAddress().isAnyLocalAddress()
    ? URI.create(request.getRequestURL().toString()).getHost()
    : rpcAddr.getAddress().getHostAddress();
  String addr = rpcHost + ":" + rpcAddr.getPort();

  String fqdn = InetAddress.getByName(nodeToRedirect).getCanonicalHostName();
  redirectLocation = HttpConfig.getSchemePrefix() + fqdn + ":" + redirectPort
      + "/browseDirectory.jsp?namenodeInfoPort="
      + nn.getHttpAddress().getPort() + "&dir=/"
      + (tokenString == null ? "" :
         JspHelper.getDelegationTokenUrlParam(tokenString))
      + JspHelper.getUrlParam(JspHelper.NAMENODE_ADDRESS, addr);
  resp.sendRedirect(redirectLocation);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines of code: 44, Source file: NamenodeJspHelper.java

Example 8: getUGI

import org.apache.hadoop.hdfs.server.common.JspHelper; // import the package/class this method depends on
protected UserGroupInformation getUGI(HttpServletRequest request,
    Configuration conf) throws IOException {
  return JspHelper.getUGI(getServletContext(), request, conf);
}
 
Developer: hopshadoop, Project: hops, Lines of code: 5, Source file: DfsServlet.java

Example 9: redirectToRandomDataNode

import org.apache.hadoop.hdfs.server.common.JspHelper; // import the package/class this method depends on
static void redirectToRandomDataNode(ServletContext context,
    HttpServletRequest request, HttpServletResponse resp)
    throws IOException, InterruptedException {
  final NameNode nn = NameNodeHttpServer.getNameNodeFromContext(context);
  final Configuration conf =
      (Configuration) context.getAttribute(JspHelper.CURRENT_CONF);
  // We can't redirect if there isn't a DN to redirect to.
  // Lets instead show a proper error message.
  if (nn.getNamesystem().getNumLiveDataNodes() < 1) {
    throw new IOException("Can't browse the DFS since there are no " +
        "live nodes available to redirect to.");
  }
  final DatanodeID datanode = getRandomDatanode(nn);
  UserGroupInformation ugi = JspHelper.getUGI(context, request, conf);
  String tokenString =
      getDelegationToken(nn.getRpcServer(), request, conf, ugi);
  // if the user is defined, get a delegation token and stringify it
  final String redirectLocation;
  final String nodeToRedirect;
  int redirectPort;
  if (datanode != null) {
    nodeToRedirect = datanode.getIpAddr();
    redirectPort = datanode.getInfoPort();
  } else {
    nodeToRedirect = nn.getHttpAddress().getHostName();
    redirectPort = nn.getHttpAddress().getPort();
  }

  InetSocketAddress rpcAddr = nn.getNameNodeAddress();
  String rpcHost = rpcAddr.getAddress().isAnyLocalAddress() ?
      URI.create(request.getRequestURL().toString()).getHost() :
      rpcAddr.getAddress().getHostAddress();
  String addr = rpcHost + ":" + rpcAddr.getPort();

  String fqdn = InetAddress.getByName(nodeToRedirect).getCanonicalHostName();
  int httpPort = -1;
  if (nn.conf.getBoolean(DFSConfigKeys.DFS_HTTPS_ENABLE_KEY, DFSConfigKeys.DFS_HTTPS_ENABLE_DEFAULT)) {
    httpPort = nn.conf.getInt(DFSConfigKeys.DFS_HTTPS_PORT_KEY, DFSConfigKeys.DFS_DATANODE_HTTPS_DEFAULT_PORT);
  } else {
    httpPort = nn.getHttpAddress().getPort();
  }
  redirectLocation =
      HttpConfig2.getSchemePrefix() + fqdn + ":" + redirectPort +
          "/browseDirectory.jsp?namenodeInfoPort=" +
          httpPort + "&dir=/" +
          (tokenString == null ? "" :
              JspHelper.getDelegationTokenUrlParam(tokenString)) +
          JspHelper.getUrlParam(JspHelper.NAMENODE_ADDRESS, addr);
  resp.sendRedirect(redirectLocation);
}
 
Developer: hopshadoop, Project: hops, Lines of code: 52, Source file: NamenodeJspHelper.java

Example 10: redirectToRandomDataNode

import org.apache.hadoop.hdfs.server.common.JspHelper; // import the package/class this method depends on
static void redirectToRandomDataNode(ServletContext context,
    HttpServletRequest request, HttpServletResponse resp) throws IOException,
    InterruptedException {
  final NameNode nn = NameNodeHttpServer.getNameNodeFromContext(context);
  final Configuration conf = (Configuration) context
      .getAttribute(JspHelper.CURRENT_CONF);
  // We can't redirect if there isn't a DN to redirect to.
  // Lets instead show a proper error message.
  FSNamesystem fsn = nn.getNamesystem();
  if (fsn == null || fsn.getNumLiveDataNodes() < 1) {
    throw new IOException("Can't browse the DFS since there are no " +
        "live nodes available to redirect to.");
  }
  final DatanodeID datanode = getRandomDatanode(nn);
  UserGroupInformation ugi = JspHelper.getUGI(context, request, conf);
  String tokenString = getDelegationToken(
      nn.getRpcServer(), request, conf, ugi);
  // if the user is defined, get a delegation token and stringify it
  final String redirectLocation;
  final String nodeToRedirect;
  int redirectPort;
  if (datanode != null) {
    nodeToRedirect = datanode.getIpAddr();
    redirectPort = datanode.getInfoPort();
  } else {
    nodeToRedirect = nn.getHttpAddress().getHostName();
    redirectPort = nn.getHttpAddress().getPort();
  }

  InetSocketAddress rpcAddr = nn.getNameNodeAddress();
  String rpcHost = rpcAddr.getAddress().isAnyLocalAddress()
    ? URI.create(request.getRequestURL().toString()).getHost()
    : rpcAddr.getAddress().getHostAddress();
  String addr = rpcHost + ":" + rpcAddr.getPort();

  String fqdn = InetAddress.getByName(nodeToRedirect).getCanonicalHostName();
  redirectLocation = HttpConfig.getSchemePrefix() + fqdn + ":" + redirectPort
      + "/browseDirectory.jsp?namenodeInfoPort="
      + nn.getHttpAddress().getPort() + "&dir=/"
      + (tokenString == null ? "" :
         JspHelper.getDelegationTokenUrlParam(tokenString))
      + JspHelper.getUrlParam(JspHelper.NAMENODE_ADDRESS, addr);
  resp.sendRedirect(redirectLocation);
}
 
Developer: chendave, Project: hadoop-TCP, Lines of code: 45, Source file: NamenodeJspHelper.java

Example 11: getUGI

import org.apache.hadoop.hdfs.server.common.JspHelper; // import the package/class this method depends on
protected UserGroupInformation getUGI(HttpServletRequest request,
                                      Configuration conf) throws IOException {
  return JspHelper.getUGI(request, conf);
}
 
Developer: cumulusyebl, Project: cumulus, Lines of code: 5, Source file: DfsServlet.java

Example 12: generateFileChunksForTail

import org.apache.hadoop.hdfs.server.common.JspHelper; // import the package/class this method depends on
static void generateFileChunksForTail(JspWriter out, HttpServletRequest req,
                                      Configuration conf
                                      ) throws IOException,
                                               InterruptedException {
  final String referrer = JspHelper.validateURL(req.getParameter("referrer"));
  boolean noLink = false;
  if (referrer == null) {
    noLink = true;
  }

  final String filename = JspHelper
      .validatePath(StringEscapeUtils.unescapeHtml(req.getParameter("filename")));
  if (filename == null) {
    out.print("Invalid input (file name absent)");
    return;
  }
  String tokenString = req.getParameter(JspHelper.DELEGATION_PARAMETER_NAME);
  UserGroupInformation ugi = JspHelper.getUGI(req, conf);

  String namenodeInfoPortStr = req.getParameter("namenodeInfoPort");
  int namenodeInfoPort = -1;
  if (namenodeInfoPortStr != null)
    namenodeInfoPort = Integer.parseInt(namenodeInfoPortStr);

  final int chunkSizeToView = JspHelper.string2ChunkSizeToView(req
      .getParameter("chunkSizeToView"), getDefaultChunkSize(conf));

  if (!noLink) {
    out.print("<h3>Tail of File: ");
    JspHelper.printPathWithLinks(filename, out, namenodeInfoPort, 
                                 tokenString);
    out.print("</h3><hr>");
    out.print("<a href=\"" + referrer + "\">Go Back to File View</a><hr>");
  } else {
    out.print("<h3>" + filename + "</h3>");
  }
  out.print("<b>Chunk size to view (in bytes, up to file's DFS block size): </b>");
  out.print("<input type=\"text\" name=\"chunkSizeToView\" value="
      + chunkSizeToView + " size=10 maxlength=10>");
  out.print("&nbsp;&nbsp;<input type=\"submit\" name=\"submit\" value=\"Refresh\"><hr>");
  out.print("<input type=\"hidden\" name=\"filename\" value=\"" + filename
      + "\">");
  out.print("<input type=\"hidden\" name=\"namenodeInfoPort\" value=\""
      + namenodeInfoPort + "\">");
  if (!noLink)
    out.print("<input type=\"hidden\" name=\"referrer\" value=\"" + referrer
        + "\">");

  // fetch the block from the datanode that has the last block for this file
  final DFSClient dfs = getDFSClient(ugi, datanode.getNameNodeAddrForClient(), conf);
  List<LocatedBlock> blocks = dfs.getNamenode().getBlockLocations(filename, 0,
      Long.MAX_VALUE).getLocatedBlocks();
  if (blocks == null || blocks.size() == 0) {
    out.print("No datanodes contain blocks of file " + filename);
    dfs.close();
    return;
  }
  LocatedBlock lastBlk = blocks.get(blocks.size() - 1);
  long blockSize = lastBlk.getBlock().getNumBytes();
  long blockId = lastBlk.getBlock().getBlockId();
  Token<BlockTokenIdentifier> accessToken = lastBlk.getBlockToken();
  long genStamp = lastBlk.getBlock().getGenerationStamp();
  DatanodeInfo chosenNode;
  try {
    chosenNode = JspHelper.bestNode(lastBlk);
  } catch (IOException e) {
    out.print(e.toString());
    dfs.close();
    return;
  }
  InetSocketAddress addr = NetUtils.createSocketAddr(chosenNode.getName());
  // view the last chunkSizeToView bytes while Tailing
  final long startOffset = blockSize >= chunkSizeToView ? blockSize
      - chunkSizeToView : 0;

  out.print("<textarea cols=\"100\" rows=\"25\" wrap=\"virtual\" style=\"width:100%\" READONLY>");
  JspHelper.streamBlockInAscii(addr, blockId, accessToken, genStamp,
      blockSize, startOffset, chunkSizeToView, out, conf);
  out.print("</textarea>");
  dfs.close();
}
 
Developer: cumulusyebl, Project: cumulus, Lines of code: 82, Source file: DatanodeJspHelper.java


Note: The org.apache.hadoop.hdfs.server.common.JspHelper.getUGI examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by their respective authors, who retain copyright of the source code; consult the license of the corresponding project before redistributing or using it. Do not reproduce this article without permission.