Java HATestUtil.getLogicalHostname Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil.getLogicalHostname. If you are unsure what HATestUtil.getLogicalHostname does, how to call it, or what real-world usage looks like, the curated examples below should help. You can also browse further usage examples of the enclosing class, org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil.


The following shows 5 code examples of the HATestUtil.getLogicalHostname method, sorted by popularity by default.
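
As a quick orientation before the examples: HATestUtil.getLogicalHostname(cluster) returns the logical nameservice name of a MiniDFSCluster, and HATestUtil.setFailoverConfigurations binds that name to the cluster's NameNode addresses so that clients can fail over between them. The sketch below distills the recurring pattern shared by the examples; the class and method names here are hypothetical, and the HA MiniDFSCluster is assumed to be created elsewhere.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil;

// Hypothetical helper illustrating the pattern shared by the examples below.
class LogicalHostnameSketch {
  /**
   * Returns a FileSystem that addresses an HA MiniDFSCluster through its
   * logical (nameservice) URI rather than a host:port address.
   */
  static FileSystem fsForLogicalUri(MiniDFSCluster cluster, Configuration conf)
      throws IOException {
    // The logical hostname is the nameservice ID that clients fail over on.
    String logicalName = HATestUtil.getLogicalHostname(cluster);
    // Map the logical name to the cluster's NameNode RPC addresses.
    HATestUtil.setFailoverConfigurations(cluster, conf, logicalName);
    // A logical URI carries no port; the failover proxy provider resolves
    // the actual NameNodes (see Example 5 / HDFS-2683).
    return new Path("hdfs://" + logicalName + "/").getFileSystem(conf);
  }
}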

Example 1: testDfsClientFailover

import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil; // import the class that provides the method
/**
 * Make sure that client failover works when an active NN dies and the standby
 * takes over.
 */
@Test
public void testDfsClientFailover() throws IOException, URISyntaxException {
  FileSystem fs = HATestUtil.configureFailoverFs(cluster, conf);
  
  DFSTestUtil.createFile(fs, TEST_FILE,
      FILE_LENGTH_TO_VERIFY, (short)1, 1L);
  
  assertEquals(fs.getFileStatus(TEST_FILE).getLen(), FILE_LENGTH_TO_VERIFY);
  cluster.shutdownNameNode(0);
  cluster.transitionToActive(1);
  assertEquals(fs.getFileStatus(TEST_FILE).getLen(), FILE_LENGTH_TO_VERIFY);
  
  // Check that it functions even if the URL becomes canonicalized
  // to include a port number.
  Path withPort = new Path("hdfs://" +
      HATestUtil.getLogicalHostname(cluster) + ":" +
      NameNode.DEFAULT_PORT + "/" + TEST_FILE.toUri().getPath());
  FileSystem fs2 = withPort.getFileSystem(fs.getConf());
  assertTrue(fs2.exists(withPort));

  fs.close();
}
 
Developer: naver, Project: hadoop, Lines: 27, Source: TestDFSClientFailover.java

Example 2: testWrappedFailoverProxyProvider

import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil; // import the class that provides the method
/**
 * Test to verify legacy proxy providers are correctly wrapped.
 */
@Test
public void testWrappedFailoverProxyProvider() throws Exception {
  // setup the config with the dummy provider class
  Configuration config = new HdfsConfiguration(conf);
  String logicalName = HATestUtil.getLogicalHostname(cluster);
  HATestUtil.setFailoverConfigurations(cluster, config, logicalName);
  config.set(DFS_CLIENT_FAILOVER_PROXY_PROVIDER_KEY_PREFIX + "." + logicalName,
      DummyLegacyFailoverProxyProvider.class.getName());
  Path p = new Path("hdfs://" + logicalName + "/");

  // not to use IP address for token service
  SecurityUtil.setTokenServiceUseIp(false);

  // Logical URI should be used.
  assertTrue("Legacy proxy providers should use logical URI.",
      HAUtil.useLogicalUri(config, p.toUri()));
}
 
Developer: naver, Project: hadoop, Lines: 21, Source: TestDFSClientFailover.java

Example 3: testDfsClientFailover

import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil; // import the class that provides the method
/**
 * Make sure that client failover works when an active NN dies and the standby
 * takes over.
 */
@Test
public void testDfsClientFailover() throws IOException, URISyntaxException {
  FileSystem fs = HATestUtil.configureFailoverFs(cluster, conf);
  
  DFSTestUtil.createFile(fs, TEST_FILE,
      FILE_LENGTH_TO_VERIFY, (short)1, 1L);
  
  assertEquals(fs.getFileStatus(TEST_FILE).getLen(), FILE_LENGTH_TO_VERIFY);
  cluster.shutdownNameNode(0);
  cluster.transitionToActive(1);
  assertEquals(fs.getFileStatus(TEST_FILE).getLen(), FILE_LENGTH_TO_VERIFY);
  
  // Check that it functions even if the URL becomes canonicalized
  // to include a port number.
  Path withPort = new Path("hdfs://" +
      HATestUtil.getLogicalHostname(cluster) + ":" +
      HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT + "/" +
      TEST_FILE.toUri().getPath());
  FileSystem fs2 = withPort.getFileSystem(fs.getConf());
  assertTrue(fs2.exists(withPort));

  fs.close();
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 28, Source: TestDFSClientFailover.java
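
Note that this is the same test as Example 1; the only difference is the constant that supplies the default NameNode RPC port. The newer codebase replaced the deprecated NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT, both of which denote port 8020.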

Example 4: testWrappedFailoverProxyProvider

import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil; // import the class that provides the method
/**
 * Test to verify legacy proxy providers are correctly wrapped.
 */
@Test
public void testWrappedFailoverProxyProvider() throws Exception {
  // setup the config with the dummy provider class
  Configuration config = new HdfsConfiguration(conf);
  String logicalName = HATestUtil.getLogicalHostname(cluster);
  HATestUtil.setFailoverConfigurations(cluster, config, logicalName);
  config.set(HdfsClientConfigKeys.Failover.PROXY_PROVIDER_KEY_PREFIX + "." + logicalName,
      DummyLegacyFailoverProxyProvider.class.getName());
  Path p = new Path("hdfs://" + logicalName + "/");

  // not to use IP address for token service
  SecurityUtil.setTokenServiceUseIp(false);

  // Logical URI should be used.
  assertTrue("Legacy proxy providers should use logical URI.",
      HAUtil.useLogicalUri(config, p.toUri()));
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 21, Source: TestDFSClientFailover.java
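
Likewise, this matches Example 2 except for the constant naming the proxy-provider configuration prefix: the newer client API exposes it as HdfsClientConfigKeys.Failover.PROXY_PROVIDER_KEY_PREFIX, which resolves to the same dfs.client.failover.proxy.provider key prefix.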

Example 5: testLogicalUriShouldNotHavePorts

import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil; // import the class that provides the method
/**
 * Regression test for HDFS-2683.
 */
@Test
public void testLogicalUriShouldNotHavePorts() {
  Configuration config = new HdfsConfiguration(conf);
  String logicalName = HATestUtil.getLogicalHostname(cluster);
  HATestUtil.setFailoverConfigurations(cluster, config, logicalName);
  Path p = new Path("hdfs://" + logicalName + ":12345/");
  try {
    p.getFileSystem(config).exists(p);
    fail("Did not fail with fake FS");
  } catch (IOException ioe) {
    GenericTestUtils.assertExceptionContains(
        "does not use port information", ioe);
  }
}
 
Developer: naver, Project: hadoop, Lines: 18, Source: TestDFSClientFailover.java


Note: The org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil.getLogicalHostname method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their original authors; copyright remains with those authors, and distribution and use are subject to each project's License. Do not reproduce without permission.