

Java SaslDataTransferClient Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient. If you are unsure what SaslDataTransferClient does or how to use it, the curated examples below should help.


The SaslDataTransferClient class belongs to the org.apache.hadoop.hdfs.protocol.datatransfer.sasl package. The following sections present 13 code examples of the class, sorted by popularity by default.

Example 1: connectToDN

import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient; // import the required package/class
/**
 * Connect to the given datanode's data transfer port, and return
 * the resulting IOStreamPair. This includes encryption wrapping, etc.
 */
public static IOStreamPair connectToDN(DatanodeInfo dn, int timeout,
                                       Configuration conf,
                                       SaslDataTransferClient saslClient,
                                       SocketFactory socketFactory,
                                       boolean connectToDnViaHostname,
                                       DataEncryptionKeyFactory dekFactory,
                                       Token<BlockTokenIdentifier> blockToken)
    throws IOException {

  boolean success = false;
  Socket sock = null;
  try {
    sock = socketFactory.createSocket();
    String dnAddr = dn.getXferAddr(connectToDnViaHostname);
    LOG.debug("Connecting to datanode {}", dnAddr);
    NetUtils.connect(sock, NetUtils.createSocketAddr(dnAddr), timeout);
    sock.setSoTimeout(timeout);

    OutputStream unbufOut = NetUtils.getOutputStream(sock);
    InputStream unbufIn = NetUtils.getInputStream(sock);
    IOStreamPair pair = saslClient.newSocketSend(sock, unbufOut,
        unbufIn, dekFactory, blockToken, dn);

    IOStreamPair result = new IOStreamPair(
        new DataInputStream(pair.in),
        new DataOutputStream(new BufferedOutputStream(pair.out,
            NuCypherExtUtilClient.getSmallBufferSize(conf)))
    );

    success = true;
    return result;
  } finally {
    if (!success) {
      IOUtils.closeSocket(sock);
    }
  }
}
 
Author: nucypher | Project: hadoop-oss | Lines: 42 | Source: NuCypherExtUtilClient.java

Example 2: peerFromSocketAndKey

import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient; // import the required package/class
public static Peer peerFromSocketAndKey(
    SaslDataTransferClient saslClient, Socket s,
    DataEncryptionKeyFactory keyFactory,
    Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
    throws IOException {
  Peer peer = null;
  boolean success = false;
  try {
    peer = peerFromSocket(s);
    peer = saslClient.peerSend(peer, keyFactory, blockToken, datanodeId);
    success = true;
    return peer;
  } finally {
    if (!success) {
      IOUtilsClient.cleanup(null, peer);
    }
  }
}
 
Author: nucypher | Project: hadoop-oss | Lines: 19 | Source: NuCypherExtUtilClient.java
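Examples 1 and 2 both rely on the same success-flag cleanup idiom: the resource is closed in `finally` only when the method exits abnormally, so that on success, ownership transfers to the caller. A minimal, self-contained sketch of the idiom (the `Resource` class and method names here are illustrative, not from the Hadoop source):

```java
import java.io.Closeable;
import java.io.IOException;

public class SuccessFlagDemo {
  // Stand-in for a Socket or Peer; records whether close() was called.
  static class Resource implements Closeable {
    boolean closed = false;
    @Override public void close() { closed = true; }
  }

  // Mirrors the shape of connectToDN/peerFromSocketAndKey: clean up the
  // partially constructed resource only when an exception escapes.
  static Resource open(boolean failHandshake) throws IOException {
    Resource r = new Resource();
    boolean success = false;
    try {
      if (failHandshake) {
        throw new IOException("handshake failed"); // e.g. SASL negotiation error
      }
      success = true;
      return r; // success: the caller now owns (and must close) r
    } finally {
      if (!success) {
        r.close(); // error path: release the resource before propagating
      }
    }
  }

  public static void main(String[] args) throws IOException {
    Resource ok = open(false);
    System.out.println("closed on success path? " + ok.closed); // false

    try {
      open(true);
    } catch (IOException expected) {
      System.out.println("error path: " + expected.getMessage());
    }
  }
}
```

The idiom avoids a double-close: a plain `finally { close(); }` would close the stream pair even when it is being returned to the caller.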

Example 3: Dispatcher

import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient; // import the required package/class
public Dispatcher(NameNodeConnector nnc, Set<String> includedNodes,
    Set<String> excludedNodes, long movedWinWidth, int moverThreads,
    int dispatcherThreads, int maxConcurrentMovesPerNode, Configuration conf) {
  this.nnc = nnc;
  this.excludedNodes = excludedNodes;
  this.includedNodes = includedNodes;
  this.movedBlocks = new MovedBlocks<StorageGroup>(movedWinWidth);

  this.cluster = NetworkTopology.getInstance(conf);

  this.moveExecutor = Executors.newFixedThreadPool(moverThreads);
  this.dispatchExecutor = dispatcherThreads == 0? null
      : Executors.newFixedThreadPool(dispatcherThreads);
  this.maxConcurrentMovesPerNode = maxConcurrentMovesPerNode;

  this.saslClient = new SaslDataTransferClient(conf,
      DataTransferSaslUtil.getSaslPropertiesResolver(conf),
      TrustedChannelResolver.getInstance(conf), nnc.fallbackToSimpleAuth);
}
 
Author: naver | Project: hadoop | Lines: 20 | Source: Dispatcher.java

Example 4: peerFromSocketAndKey

import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient; // import the required package/class
public static Peer peerFromSocketAndKey(
      SaslDataTransferClient saslClient, Socket s,
      DataEncryptionKeyFactory keyFactory,
      Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
      throws IOException {
  Peer peer = null;
  boolean success = false;
  try {
    peer = peerFromSocket(s);
    peer = saslClient.peerSend(peer, keyFactory, blockToken, datanodeId);
    success = true;
    return peer;
  } finally {
    if (!success) {
      IOUtils.cleanup(null, peer);
    }
  }
}
 
Author: naver | Project: hadoop | Lines: 19 | Source: TcpPeerServer.java

Example 5: peerFromSocketAndKey

import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient; // import the required package/class
public static Peer peerFromSocketAndKey(
      SaslDataTransferClient saslClient, Socket s,
      DataEncryptionKeyFactory keyFactory,
      Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
      throws IOException {
  Peer peer = null;
  boolean success = false;
  try {
    peer = peerFromSocket(s);
    peer = saslClient.peerSend(peer, keyFactory, blockToken, datanodeId);
    success = true;
    return peer;
  } finally {
    if (!success) {
      IOUtilsClient.cleanup(null, peer);
    }
  }
}
 
Author: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 19 | Source: DFSUtilClient.java

Example 6: Dispatcher

import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient; // import the required package/class
Dispatcher(NameNodeConnector nnc, Set<String> includedNodes,
    Set<String> excludedNodes, long movedWinWidth, int moverThreads,
    int dispatcherThreads, int maxConcurrentMovesPerNode,
    long getBlocksSize, long getBlocksMinBlockSize, Configuration conf) {
  this.nnc = nnc;
  this.excludedNodes = excludedNodes;
  this.includedNodes = includedNodes;
  this.movedBlocks = new MovedBlocks<StorageGroup>(movedWinWidth);

  this.cluster = NetworkTopology.getInstance(conf);

  this.dispatchExecutor = dispatcherThreads == 0? null
      : Executors.newFixedThreadPool(dispatcherThreads);
  this.moverThreadAllocator = new Allocator(moverThreads);
  this.maxConcurrentMovesPerNode = maxConcurrentMovesPerNode;

  this.getBlocksSize = getBlocksSize;
  this.getBlocksMinBlockSize = getBlocksMinBlockSize;

  this.saslClient = new SaslDataTransferClient(conf,
      DataTransferSaslUtil.getSaslPropertiesResolver(conf),
      TrustedChannelResolver.getInstance(conf), nnc.fallbackToSimpleAuth);
  this.ioFileBufferSize = DFSUtilClient.getIoFileBufferSize(conf);
  this.connectToDnViaHostname = conf.getBoolean(
      HdfsClientConfigKeys.DFS_CLIENT_USE_DN_HOSTNAME,
      HdfsClientConfigKeys.DFS_CLIENT_USE_DN_HOSTNAME_DEFAULT);
  placementPolicies = new BlockPlacementPolicies(conf, null, cluster, null);
}
 
Author: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 29 | Source: Dispatcher.java

Example 7: getSaslDataTransferClient

import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient; // import the required package/class
/**
 * Gets the {@link SaslDataTransferClient} from the {@link DataNode} attached
 * to the servlet context.
 *
 * @return SaslDataTransferClient from DataNode
 */
private static SaslDataTransferClient getSaslDataTransferClient(
    HttpServletRequest req) {
  DataNode dataNode = (DataNode)req.getSession().getServletContext()
    .getAttribute("datanode");
  return dataNode.saslClient;
}
 
Author: Nextzero | Project: hadoop-2.6.0-cdh5.4.3 | Lines: 13 | Source: DatanodeJspHelper.java

Example 8: getSaslClient

import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient; // import the required package/class
public SaslDataTransferClient getSaslClient() {
  return saslClient;
}
 
Author: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 4 | Source: DataNode.java

Example 9: trySaslNegotiate

import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient; // import the required package/class
static void trySaslNegotiate(Configuration conf, Channel channel, DatanodeInfo dnInfo,
    int timeoutMs, DFSClient client, Token<BlockTokenIdentifier> accessToken,
    Promise<Void> saslPromise) throws IOException {
  SaslDataTransferClient saslClient = client.getSaslDataTransferClient();
  SaslPropertiesResolver saslPropsResolver = SASL_ADAPTOR.getSaslPropsResolver(saslClient);
  TrustedChannelResolver trustedChannelResolver =
      SASL_ADAPTOR.getTrustedChannelResolver(saslClient);
  AtomicBoolean fallbackToSimpleAuth = SASL_ADAPTOR.getFallbackToSimpleAuth(saslClient);
  InetAddress addr = ((InetSocketAddress) channel.remoteAddress()).getAddress();
  if (trustedChannelResolver.isTrusted() || trustedChannelResolver.isTrusted(addr)) {
    saslPromise.trySuccess(null);
    return;
  }
  DataEncryptionKey encryptionKey = client.newDataEncryptionKey();
  if (encryptionKey != null) {
    if (LOG.isDebugEnabled()) {
      LOG.debug(
        "SASL client doing encrypted handshake for addr = " + addr + ", datanodeId = " + dnInfo);
    }
    doSaslNegotiation(conf, channel, timeoutMs, getUserNameFromEncryptionKey(encryptionKey),
      encryptionKeyToPassword(encryptionKey.encryptionKey),
      createSaslPropertiesForEncryption(encryptionKey.encryptionAlgorithm), saslPromise);
  } else if (!UserGroupInformation.isSecurityEnabled()) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("SASL client skipping handshake in unsecured configuration for addr = " + addr
          + ", datanodeId = " + dnInfo);
    }
    saslPromise.trySuccess(null);
  } else if (dnInfo.getXferPort() < 1024) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("SASL client skipping handshake in secured configuration with "
          + "privileged port for addr = " + addr + ", datanodeId = " + dnInfo);
    }
    saslPromise.trySuccess(null);
  } else if (fallbackToSimpleAuth != null && fallbackToSimpleAuth.get()) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("SASL client skipping handshake in secured configuration with "
          + "unsecured cluster for addr = " + addr + ", datanodeId = " + dnInfo);
    }
    saslPromise.trySuccess(null);
  } else if (saslPropsResolver != null) {
    if (LOG.isDebugEnabled()) {
      LOG.debug(
        "SASL client doing general handshake for addr = " + addr + ", datanodeId = " + dnInfo);
    }
    doSaslNegotiation(conf, channel, timeoutMs, buildUsername(accessToken),
      buildClientPassword(accessToken), saslPropsResolver.getClientProperties(addr), saslPromise);
  } else {
    // It's a secured cluster using non-privileged ports, but no SASL. The only way this can
    // happen is if the DataNode has ignore.secure.ports.for.testing configured, so this is a rare
    // edge case.
    if (LOG.isDebugEnabled()) {
      LOG.debug("SASL client skipping handshake in secured configuration with no SASL "
          + "protection configured for addr = " + addr + ", datanodeId = " + dnInfo);
    }
    saslPromise.trySuccess(null);
  }
}
 
Author: apache | Project: hbase | Lines: 59 | Source: FanOutOneBlockAsyncDFSOutputSaslHelper.java
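The long if/else cascade in trySaslNegotiate evaluates its checks in a fixed priority order: trusted channel, then encryption key, then security mode, port, simple-auth fallback, and SASL properties. That order can be captured in a small self-contained decision function (the method name and returned labels are illustrative, not part of the HBase source):

```java
public class SaslDecisionSketch {
  // Returns which path trySaslNegotiate would take, given the outcome of
  // each check, evaluated in the same order as the real method.
  static String decide(boolean trustedChannel, boolean hasEncryptionKey,
      boolean securityEnabled, boolean privilegedPort,
      boolean fallbackToSimpleAuth, boolean hasSaslPropsResolver) {
    if (trustedChannel)       return "skip: trusted channel";
    if (hasEncryptionKey)     return "encrypted handshake";
    if (!securityEnabled)     return "skip: unsecured configuration";
    if (privilegedPort)       return "skip: privileged port";
    if (fallbackToSimpleAuth) return "skip: unsecured cluster";
    if (hasSaslPropsResolver) return "general handshake";
    // Secured cluster, non-privileged port, no SASL configured: the rare
    // ignore.secure.ports.for.testing edge case noted in the source.
    return "skip: no SASL protection configured";
  }

  public static void main(String[] args) {
    // Secure cluster, no shared encryption key, SASL properties configured:
    System.out.println(decide(false, false, true, false, false, true));
  }
}
```

Note that the trusted-channel and encryption-key checks win over everything else, so a cluster with data transfer encryption enabled never falls through to the token-based "general handshake" branch.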

Example 10: getSaslDataTransferClient

import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient; // import the required package/class
/**
 * Returns the SaslDataTransferClient configured for this DFSClient.
 *
 * @return SaslDataTransferClient configured for this DFSClient
 */
public SaslDataTransferClient getSaslDataTransferClient() {
  return saslClient;
}
 
Author: naver | Project: hadoop | Lines: 9 | Source: DFSClient.java

Example 11: getTrustedChannelResolver

import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient; // import the required package/class
TrustedChannelResolver getTrustedChannelResolver(SaslDataTransferClient saslClient); 
Author: apache | Project: hbase | Lines: 2 | Source: FanOutOneBlockAsyncDFSOutputSaslHelper.java

Example 12: getSaslPropsResolver

import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient; // import the required package/class
SaslPropertiesResolver getSaslPropsResolver(SaslDataTransferClient saslClient); 
Author: apache | Project: hbase | Lines: 2 | Source: FanOutOneBlockAsyncDFSOutputSaslHelper.java

Example 13: getFallbackToSimpleAuth

import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient; // import the required package/class
AtomicBoolean getFallbackToSimpleAuth(SaslDataTransferClient saslClient); 
Author: apache | Project: hbase | Lines: 2 | Source: FanOutOneBlockAsyncDFSOutputSaslHelper.java


Note: the org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient examples above were collected from open-source projects hosted on GitHub and similar platforms. Copyright of each snippet remains with its original authors; consult the corresponding project's license before reusing or redistributing the code.