

Java DNS Class Code Examples

This article compiles typical usage examples of the Java class org.apache.hadoop.net.DNS. If you are wondering what the DNS class does, how to use it, or where to find usage examples, the curated class examples below may help.


The DNS class belongs to the org.apache.hadoop.net package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.

Example 1: getLocalHostName

import org.apache.hadoop.net.DNS; // import the required package/class
/**
 * Retrieve the name of the current host. Multihomed hosts may restrict the
 * hostname lookup to a specific interface and nameserver with {@link
 * org.apache.hadoop.fs.CommonConfigurationKeysPublic#HADOOP_SECURITY_DNS_INTERFACE_KEY}
 * and {@link org.apache.hadoop.fs.CommonConfigurationKeysPublic#HADOOP_SECURITY_DNS_NAMESERVER_KEY}
 *
 * @param conf Configuration object. May be null.
 * @return the default hostname of the current host
 * @throws UnknownHostException if the hostname cannot be determined
 */
static String getLocalHostName(@Nullable Configuration conf)
    throws UnknownHostException {
  if (conf != null) {
    String dnsInterface = conf.get(HADOOP_SECURITY_DNS_INTERFACE_KEY);
    String nameServer = conf.get(HADOOP_SECURITY_DNS_NAMESERVER_KEY);

    if (dnsInterface != null) {
      return DNS.getDefaultHost(dnsInterface, nameServer, true);
    } else if (nameServer != null) {
      throw new IllegalArgumentException(HADOOP_SECURITY_DNS_NAMESERVER_KEY +
          " requires " + HADOOP_SECURITY_DNS_INTERFACE_KEY + ". Check your " +
          "configuration.");
    }
  }

  // Fallback to querying the default hostname as we did before.
  return InetAddress.getLocalHost().getCanonicalHostName();
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 29, Source: SecurityUtil.java
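When neither DNS key is configured, the method above falls through to the JDK's own resolution. The fallback branch can be sketched in a self-contained form without any Hadoop classes; the class and method names here (`LocalHostExample`, `defaultHostName`) are ours, chosen for illustration:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LocalHostExample {
    // Mirrors the fallback branch of getLocalHostName above:
    // ask the JDK for the canonical name of the local host.
    static String defaultHostName() throws UnknownHostException {
        return InetAddress.getLocalHost().getCanonicalHostName();
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println("local host: " + defaultHostName());
    }
}
```

DNS.getDefaultHost adds interface- and nameserver-aware lookup on top of this; the JDK call is only the last resort.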

Example 2: init

import org.apache.hadoop.net.DNS; // import the required package/class
@Override
public void init(String contextName, ContextFactory factory) {
  super.init(contextName, factory);

  LOG.debug("Initializing the GangliaContext31 for Ganglia 3.1 metrics.");

  // Take the hostname from the DNS class.

  Configuration conf = new Configuration();

  if (conf.get("slave.host.name") != null) {
    hostName = conf.get("slave.host.name");
  } else {
    try {
      hostName = DNS.getDefaultHost(
        conf.get("dfs.datanode.dns.interface","default"),
        conf.get("dfs.datanode.dns.nameserver","default"));
    } catch (UnknownHostException uhe) {
      LOG.error(uhe);
      hostName = "UNKNOWN.example.com";
    }
  }
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 24, Source: GangliaContext31.java

Example 3: getLocalInterfaceAddrs

import org.apache.hadoop.net.DNS; // import the required package/class
/**
 * Return the socket addresses to use with each configured
 * local interface. Local interfaces may be specified by IP
 * address, IP address range using CIDR notation, interface
 * name (e.g. eth0) or sub-interface name (e.g. eth0:0).
 * The socket addresses consist of the IPs for the interfaces
 * and the ephemeral port (port 0). If an IP, IP range, or
 * interface name matches an interface with sub-interfaces
 * only the IP of the interface is used. Sub-interfaces can
 * be used by specifying them explicitly (by IP or name).
 *
 * @return SocketAddresses for the configured local interfaces,
 *    or an empty array if none are configured
 * @throws UnknownHostException if a given interface name is invalid
 */
private static SocketAddress[] getLocalInterfaceAddrs(
    String interfaceNames[]) throws UnknownHostException {
  List<SocketAddress> localAddrs = new ArrayList<>();
  for (String interfaceName : interfaceNames) {
    if (InetAddresses.isInetAddress(interfaceName)) {
      localAddrs.add(new InetSocketAddress(interfaceName, 0));
    } else if (NetUtils.isValidSubnet(interfaceName)) {
      for (InetAddress addr : NetUtils.getIPs(interfaceName, false)) {
        localAddrs.add(new InetSocketAddress(addr, 0));
      }
    } else {
      for (String ip : DNS.getIPs(interfaceName, false)) {
        localAddrs.add(new InetSocketAddress(ip, 0));
      }
    }
  }
  return localAddrs.toArray(new SocketAddress[localAddrs.size()]);
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 34, Source: NuCypherExtClient.java
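The interface-name branch of the method above relies on DNS.getIPs to enumerate an interface's addresses. A rough, self-contained approximation of that step using only java.net (the helper name `addrsFor` is ours, and unlike DNS.getIPs this sketch silently returns an empty list for an unknown interface rather than throwing):

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.NetworkInterface;
import java.net.SocketAddress;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class InterfaceAddrsExample {
    // Roughly what the DNS.getIPs branch does for an interface name:
    // enumerate the interface's addresses, each paired with the
    // ephemeral port (port 0).
    static List<SocketAddress> addrsFor(String interfaceName) throws SocketException {
        List<SocketAddress> result = new ArrayList<>();
        NetworkInterface nic = NetworkInterface.getByName(interfaceName);
        if (nic == null) {
            return result; // unknown interface; DNS.getIPs would fail instead
        }
        for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
            result.add(new InetSocketAddress(addr, 0));
        }
        return result;
    }

    public static void main(String[] args) throws SocketException {
        System.out.println(addrsFor("eth0"));
    }
}
```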

Example 4: getLocalInterfaceAddrs

import org.apache.hadoop.net.DNS; // import the required package/class
/**
 * Return the socket addresses to use with each configured
 * local interface. Local interfaces may be specified by IP
 * address, IP address range using CIDR notation, interface
 * name (e.g. eth0) or sub-interface name (e.g. eth0:0).
 * The socket addresses consist of the IPs for the interfaces
 * and the ephemeral port (port 0). If an IP, IP range, or
 * interface name matches an interface with sub-interfaces
 * only the IP of the interface is used. Sub-interfaces can
 * be used by specifying them explicitly (by IP or name).
 * 
 * @return SocketAddresses for the configured local interfaces,
 *    or an empty array if none are configured
 * @throws UnknownHostException if a given interface name is invalid
 */
private static SocketAddress[] getLocalInterfaceAddrs(
    String interfaceNames[]) throws UnknownHostException {
  List<SocketAddress> localAddrs = new ArrayList<SocketAddress>();
  for (String interfaceName : interfaceNames) {
    if (InetAddresses.isInetAddress(interfaceName)) {
      localAddrs.add(new InetSocketAddress(interfaceName, 0));
    } else if (NetUtils.isValidSubnet(interfaceName)) {
      for (InetAddress addr : NetUtils.getIPs(interfaceName, false)) {
        localAddrs.add(new InetSocketAddress(addr, 0));
      }
    } else {
      for (String ip : DNS.getIPs(interfaceName, false)) {
        localAddrs.add(new InetSocketAddress(ip, 0));
      }
    }
  }
  return localAddrs.toArray(new SocketAddress[localAddrs.size()]);
}
 
Developer: naver, Project: hadoop, Lines: 34, Source: DFSClient.java

Example 5: register

import org.apache.hadoop.net.DNS; // import the required package/class
void register() throws IOException {
  // get versions from the namenode
  nsInfo = nameNodeProto.versionRequest();
  dnRegistration = new DatanodeRegistration(
      new DatanodeID(DNS.getDefaultIP("default"),
          DNS.getDefaultHost("default", "default"),
          DataNode.generateUuid(), getNodePort(dnIdx),
          DFSConfigKeys.DFS_DATANODE_HTTP_DEFAULT_PORT,
          DFSConfigKeys.DFS_DATANODE_HTTPS_DEFAULT_PORT,
          DFSConfigKeys.DFS_DATANODE_IPC_DEFAULT_PORT),
      new DataStorage(nsInfo),
      new ExportedBlockKeys(), VersionInfo.getVersion());
  // register datanode
  dnRegistration = nameNodeProto.registerDatanode(dnRegistration);
  //first block reports
  storage = new DatanodeStorage(DatanodeStorage.generateUuid());
  final StorageBlockReport[] reports = {
      new StorageBlockReport(storage, BlockListAsLongs.EMPTY)
  };
  nameNodeProto.blockReport(dnRegistration, 
      nameNode.getNamesystem().getBlockPoolId(), reports,
          new BlockReportContext(1, 0, System.nanoTime()));
}
 
Developer: naver, Project: hadoop, Lines: 24, Source: NNThroughputBenchmark.java

Example 6: init

import org.apache.hadoop.net.DNS; // import the required package/class
public void init(String contextName, ContextFactory factory) {
  super.init(contextName, factory);

  LOG.debug("Initializing the GangliaContext31 for Ganglia 3.1 metrics.");

  // Take the hostname from the DNS class.

  Configuration conf = new Configuration();

  if (conf.get("slave.host.name") != null) {
    hostName = conf.get("slave.host.name");
  } else {
    try {
      hostName = DNS.getDefaultHost(
        conf.get("dfs.datanode.dns.interface","default"),
        conf.get("dfs.datanode.dns.nameserver","default"));
    } catch (UnknownHostException uhe) {
      LOG.error(uhe);
      hostName = "UNKNOWN.example.com";
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 23, Source: GangliaContext31.java

Example 7: reverseDNS

import org.apache.hadoop.net.DNS; // import the required package/class
/**
 * @deprecated mistakenly made public in 0.98.7. scope will change to package-private
 */
@Deprecated
public String reverseDNS(InetAddress ipAddress) throws NamingException, UnknownHostException {
  String hostName = this.reverseDNSCacheMap.get(ipAddress);
  if (hostName == null) {
    String ipAddressString = null;
    try {
      ipAddressString = DNS.reverseDns(ipAddress, null);
    } catch (Exception e) {
      // We can use InetAddress in case the jndi failed to pull up the reverse DNS entry from the
      // name service. Also, in case of ipv6, we need to use the InetAddress since resolving
      // reverse DNS using jndi doesn't work well with ipv6 addresses.
      ipAddressString = InetAddress.getByName(ipAddress.getHostAddress()).getHostName();
    }
    if (ipAddressString == null) throw new UnknownHostException("No host found for " + ipAddress);
    hostName = Strings.domainNamePointerToHostName(ipAddressString);
    this.reverseDNSCacheMap.put(ipAddress, hostName);
  }
  return hostName;
}
 
Developer: fengchen8086, Project: ditb, Lines: 23, Source: TableInputFormatBase.java
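The Strings.domainNamePointerToHostName call above normalizes the answer of a reverse (PTR) lookup, which conventionally carries a trailing dot (e.g. "node1.example.com."). A minimal re-implementation of just that normalization step (the class and method names here are ours, not Hadoop's):

```java
public class PtrExample {
    // A reverse-DNS (PTR) answer like "node1.example.com." ends with a
    // dot; strip it to obtain a plain hostname, as Hadoop's
    // Strings.domainNamePointerToHostName does.
    static String ptrToHostName(String dnPtr) {
        if (dnPtr == null) {
            return null;
        }
        return dnPtr.endsWith(".") ? dnPtr.substring(0, dnPtr.length() - 1) : dnPtr;
    }

    public static void main(String[] args) {
        System.out.println(ptrToHostName("node1.example.com."));
    }
}
```

Caching the normalized name per InetAddress, as the example does with reverseDNSCacheMap, avoids repeating the comparatively slow JNDI lookup for every split.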

Example 8: getHostName

import org.apache.hadoop.net.DNS; // import the required package/class
/**
 * Returns the hostname for this datanode. If the hostname is not
 * explicitly configured in the given config, then it is determined
 * via the DNS class.
 *
 * @param config configuration
 * @return the hostname (NB: may not be a FQDN)
 * @throws UnknownHostException if the dfs.datanode.dns.interface
 *    option is used and the hostname can not be determined
 */
private static String getHostName(Configuration config)
    throws UnknownHostException {
  String name = config.get(DFS_DATANODE_HOST_NAME_KEY);
  if (name == null) {
    String dnsInterface = config.get(
        CommonConfigurationKeys.HADOOP_SECURITY_DNS_INTERFACE_KEY);
    String nameServer = config.get(
        CommonConfigurationKeys.HADOOP_SECURITY_DNS_NAMESERVER_KEY);
    boolean fallbackToHosts = false;

    if (dnsInterface == null) {
      // Try the legacy configuration keys.
      dnsInterface = config.get(DFS_DATANODE_DNS_INTERFACE_KEY);
      nameServer = config.get(DFS_DATANODE_DNS_NAMESERVER_KEY);
    } else {
      // If HADOOP_SECURITY_DNS_* is set then also attempt hosts file
      // resolution if DNS fails. We will not use hosts file resolution
      // by default to avoid breaking existing clusters.
      fallbackToHosts = true;
    }

    name = DNS.getDefaultHost(dnsInterface, nameServer, fallbackToHosts);
  }
  return name;
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 36, Source: DataNode.java

Example 9: register

import org.apache.hadoop.net.DNS; // import the required package/class
void register() throws IOException {
  // get versions from the namenode
  nsInfo = nameNodeProto.versionRequest();
  dnRegistration = new DatanodeRegistration(
      new DatanodeID(DNS.getDefaultIP("default"),
          DNS.getDefaultHost("default", "default"),
          DataNode.generateUuid(), getNodePort(dnIdx),
          DFSConfigKeys.DFS_DATANODE_HTTP_DEFAULT_PORT,
          DFSConfigKeys.DFS_DATANODE_HTTPS_DEFAULT_PORT,
          DFSConfigKeys.DFS_DATANODE_IPC_DEFAULT_PORT),
      new DataStorage(nsInfo),
      new ExportedBlockKeys(), VersionInfo.getVersion());
  // register datanode
  dnRegistration = dataNodeProto.registerDatanode(dnRegistration);
  dnRegistration.setNamespaceInfo(nsInfo);
  //first block reports
  storage = new DatanodeStorage(DatanodeStorage.generateUuid());
  final StorageBlockReport[] reports = {
      new StorageBlockReport(storage, BlockListAsLongs.EMPTY)
  };
  dataNodeProto.blockReport(dnRegistration, bpid, reports,
          new BlockReportContext(1, 0, System.nanoTime(), 0L));
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 24, Source: NNThroughputBenchmark.java

Example 10: reverseDNS

import org.apache.hadoop.net.DNS; // import the required package/class
private static String reverseDNS(InetAddress ipAddress) throws NamingException, UnknownHostException {
  String hostName = reverseDNSCacheMap.get(ipAddress);

  if (hostName == null) {
    String ipAddressString = null;
    try {
      ipAddressString = DNS.reverseDns(ipAddress, null);
    } catch (Exception e) {
      // We can use InetAddress in case the jndi failed to pull up the reverse DNS entry from the
      // name service. Also, in case of ipv6, we need to use the InetAddress since resolving
      // reverse DNS using jndi doesn't work well with ipv6 addresses.
      ipAddressString = InetAddress.getByName(ipAddress.getHostAddress()).getHostName();
    }

    if (ipAddressString == null) {
      throw new UnknownHostException("No host found for " + ipAddress);
    }

    hostName = Strings.domainNamePointerToHostName(ipAddressString);
    reverseDNSCacheMap.put(ipAddress, hostName);
  }

  return hostName;
}
 
Developer: mini666, Project: hive-phoenix-handler, Lines: 25, Source: PhoenixStorageHandlerUtil.java

Example 11: reverseDNS

import org.apache.hadoop.net.DNS; // import the required package/class
/**
 * This method might seem alien, but we do this in order to resolve the hostnames the same way
 * Hadoop does. This ensures we get locality if Kudu is running along MR/YARN.
 * @param host hostname we got from the master
 * @param port port we got from the master
 * @return reverse DNS'd address
 */
private String reverseDNS(String host, Integer port) {
    LOG.warn("I was called : reverseDNS");
    String location = this.reverseDNSCacheMap.get(host);
    if (location != null) {
        return location;
    }
    // The below InetSocketAddress creation does a name resolution.
    InetSocketAddress isa = new InetSocketAddress(host, port);
    if (isa.isUnresolved()) {
        LOG.warn("Failed address resolve for: " + isa);
    }
    InetAddress tabletInetAddress = isa.getAddress();
    try {
        location = domainNamePointerToHostName(
                DNS.reverseDns(tabletInetAddress, this.nameServer));
        this.reverseDNSCacheMap.put(host, location);
    } catch (NamingException e) {
        LOG.warn("Cannot resolve the host name for " + tabletInetAddress + " because of " + e);
        location = host;
    }
    return location;
}
 
Developer: BimalTandel, Project: HiveKudu-Handler, Lines: 30, Source: HiveKuduTableInputFormat.java

Example 12: register

import org.apache.hadoop.net.DNS; // import the required package/class
void register() throws IOException {
  // get versions from the namenode
  nsInfo = nameNodeProto.versionRequest();
  dnRegistration = new DatanodeRegistration(
      new DatanodeID(DNS.getDefaultIP("default"),
          DNS.getDefaultHost("default", "default"),
          DataNode.generateUuid(), getNodePort(dnIdx),
          DFSConfigKeys.DFS_DATANODE_HTTP_DEFAULT_PORT,
          DFSConfigKeys.DFS_DATANODE_HTTPS_DEFAULT_PORT,
          DFSConfigKeys.DFS_DATANODE_IPC_DEFAULT_PORT),
      new DataStorage(nsInfo),
      new ExportedBlockKeys(), VersionInfo.getVersion());
  // register datanode
  dnRegistration = nameNodeProto.registerDatanode(dnRegistration);
  //first block reports
  storage = new DatanodeStorage(DatanodeStorage.generateUuid());
  final StorageBlockReport[] reports = {
      new StorageBlockReport(storage,
          new BlockListAsLongs(null, null).getBlockListAsLongs())
  };
  nameNodeProto.blockReport(dnRegistration, 
      nameNode.getNamesystem().getBlockPoolId(), reports,
          new BlockReportContext(1, 0, System.nanoTime()));
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 25, Source: NNThroughputBenchmark.java

Example 13: getUniqueRackPrefix

import org.apache.hadoop.net.DNS; // import the required package/class
static private String getUniqueRackPrefix() {

  String ip = "unknownIP";
  try {
    ip = DNS.getDefaultIP("default");
  } catch (UnknownHostException ignored) {
    System.out.println("Could not find ip address of \"default\" interface.");
  }
  
  int rand = 0;
  try {
    rand = SecureRandom.getInstance("SHA1PRNG").nextInt(Integer.MAX_VALUE);
  } catch (NoSuchAlgorithmException e) {
    rand = (new Random()).nextInt(Integer.MAX_VALUE);
  }
  return "/Rack-" + rand + "-"+ ip  + "-" + 
                    System.currentTimeMillis();
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 19, Source: DataNodeCluster.java

Example 14: createNewStorageId

import org.apache.hadoop.net.DNS; // import the required package/class
public static String createNewStorageId(int port) {
  /* Return 
   * "DS-randInt-ipaddr-currentTimeMillis"
   * It is considered extremely rare for all these numbers to match
   * on a different machine accidentally, for the following reasons:
   * a) SecureRandom(INT_MAX) is pretty much random (1 in 2 billion), and
   * b) Good chance ip address would be different, and
   * c) Even on the same machine, Datanode is designed to use different ports.
   * d) Good chance that these are started at different times.
   * For a conflict to occur, all four of the above have to match!
   * The format of this string can be changed anytime in future without
   * affecting its functionality.
   */
  String ip = "unknownIP";
  try {
    ip = DNS.getDefaultIP("default");
  } catch (UnknownHostException ignored) {
    LOG.warn("Could not find ip address of \"default\" interface.");
  }

  int rand = getSecureRandom().nextInt(Integer.MAX_VALUE);
  return "DS-" + rand + "-"+ ip + "-" + port + "-" + 
                    System.currentTimeMillis();
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 25, Source: DataNode.java

Example 15: createNewStorageId

import org.apache.hadoop.net.DNS; // import the required package/class
/**
 * @return a unique storage ID of form "DS-randInt-ipaddr-port-timestamp"
 */
static String createNewStorageId(int port) {
  // It is unlikely that we will create a non-unique storage ID
  // for the following reasons:
  // a) SecureRandom is a cryptographically strong random number generator
  // b) IP addresses will likely differ on different hosts
  // c) DataNode xfer ports will differ on the same host
  // d) StorageIDs will likely be generated at different times (in ms)
  // A conflict requires that all four conditions are violated.
  // NB: The format of this string can be changed in the future without
  // requiring that old SotrageIDs be updated.
  String ip = "unknownIP";
  try {
    ip = DNS.getDefaultIP("default");
  } catch (UnknownHostException ignored) {
    LOG.warn("Could not find an IP address for the \"default\" interface.");
  }
  int rand = DFSUtil.getSecureRandom().nextInt(Integer.MAX_VALUE);
  return "DS-" + rand + "-" + ip + "-" + port + "-" + Time.now();
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 23, Source: DataNode.java
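Examples 14 and 15 both build a storage ID of the form "DS-randInt-ipaddr-port-timestamp". The string assembly itself can be sketched in a self-contained way; in this sketch (class and method names are ours) the IP is passed in as a parameter rather than resolved through DNS.getDefaultIP:

```java
import java.security.SecureRandom;

public class StorageIdExample {
    // Builds the "DS-randInt-ipaddr-port-timestamp" format described in
    // the examples above. Uniqueness rests on the random int, the IP,
    // the port, and the millisecond timestamp all colliding at once.
    static String newStorageId(String ip, int port) {
        int rand = new SecureRandom().nextInt(Integer.MAX_VALUE);
        return "DS-" + rand + "-" + ip + "-" + port + "-" + System.currentTimeMillis();
    }

    public static void main(String[] args) {
        System.out.println(newStorageId("127.0.0.1", 50010));
    }
}
```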


Note: The org.apache.hadoop.net.DNS class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by the community; copyright belongs to the original authors. Refer to each project's License before distributing or using the code. Please do not repost without permission.