

Java FsVolumeSpi.getBasePath Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi.getBasePath, gathered from open-source projects. If you are wondering what exactly FsVolumeSpi.getBasePath does, how to call it, or what real-world usage looks like, the curated examples below should help. You can also explore the other usage examples for org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi, the class this method belongs to.


Four code examples of the FsVolumeSpi.getBasePath method are shown below, ordered by popularity.
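Before the full test cases, here is a minimal sketch of the basic call pattern: iterate a DataNode's volumes and read each volume's local base directory. This is an illustration only, assuming a FsDatasetSpi handle obtained from a running DataNode (for example via dn.getFSDataset()) and the pre-2.8 getVolumes() accessor used in examples 1 and 3; the helper name collectBasePaths is made up.

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;

/**
 * Collect the local base directory of every volume in the dataset.
 * Sketch only: relies on the older FsDatasetSpi#getVolumes() accessor
 * shown in examples 1 and 3 below.
 */
static List<String> collectBasePaths(
    FsDatasetSpi<? extends FsVolumeSpi> dataset) {
  List<String> paths = new ArrayList<>();
  for (FsVolumeSpi volume : dataset.getVolumes()) {
    paths.add(volume.getBasePath());
  }
  return paths;
}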

Example 1: testLocalDirs

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import required by this method
/**
 * Check that the permissions of the local DN directories are as expected.
 */
@Test
public void testLocalDirs() throws Exception {
  Configuration conf = new Configuration();
  final String permStr = conf.get(
    DFSConfigKeys.DFS_DATANODE_DATA_DIR_PERMISSION_KEY);
  FsPermission expected = new FsPermission(permStr);

  // Check permissions on directories in 'dfs.datanode.data.dir'
  FileSystem localFS = FileSystem.getLocal(conf);
  for (DataNode dn : cluster.getDataNodes()) {
    for (FsVolumeSpi v : dn.getFSDataset().getVolumes()) {
      String dir = v.getBasePath();
      Path dataDir = new Path(dir);
      FsPermission actual = localFS.getFileStatus(dataDir).getPermission();
      assertEquals("Permission for dir: " + dataDir + ", is " + actual +
          ", while expected is " + expected, expected, actual);
    }
  }
}
 
Developer ID: naver, Project: hadoop, Lines: 23, Source: TestDiskError.java

Example 2: testLocalDirs

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import required by this method
/**
 * Check that the permissions of the local DN directories are as expected.
 */
@Test
public void testLocalDirs() throws Exception {
  Configuration conf = new Configuration();
  final String permStr = conf.get(
    DFSConfigKeys.DFS_DATANODE_DATA_DIR_PERMISSION_KEY);
  FsPermission expected = new FsPermission(permStr);

  // Check permissions on directories in 'dfs.datanode.data.dir'
  FileSystem localFS = FileSystem.getLocal(conf);
  for (DataNode dn : cluster.getDataNodes()) {
    try (FsDatasetSpi.FsVolumeReferences volumes =
        dn.getFSDataset().getFsVolumeReferences()) {
      for (FsVolumeSpi vol : volumes) {
        String dir = vol.getBasePath();
        Path dataDir = new Path(dir);
        FsPermission actual = localFS.getFileStatus(dataDir).getPermission();
        assertEquals("Permission for dir: " + dataDir + ", is " + actual +
            ", while expected is " + expected, expected, actual);
      }
    }
  }
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 26, Source: TestDiskError.java
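Examples 1 and 2 are the same test taken from two Hadoop lineages; the only difference is how the volume list is obtained. The first calls getVolumes() directly, while the second goes through FsDatasetSpi.FsVolumeReferences in a try-with-resources block. The reference-holding variant appears to exist so that a volume cannot be removed out from under a caller that is still iterating it, with the references released when the block closes. Which accessor your code needs depends on the Hadoop version you build against.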

Example 3: duplicateBlock

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import required by this method
/**
 * Duplicate the given block on all volumes.
 * @param blockId id of the block to duplicate
 * @throws IOException if a block or meta file cannot be copied
 */
private void duplicateBlock(long blockId) throws IOException {
  synchronized (fds) {
    ReplicaInfo b = FsDatasetTestUtil.fetchReplicaInfo(fds, bpid, blockId);
    for (FsVolumeSpi v : fds.getVolumes()) {
      if (v.getStorageID().equals(b.getVolume().getStorageID())) {
        continue;
      }

      // Volume without a copy of the block. Make a copy now.
      File sourceBlock = b.getBlockFile();
      File sourceMeta = b.getMetaFile();
      String sourceRoot = b.getVolume().getBasePath();
      String destRoot = v.getBasePath();

      String relativeBlockPath =
          new File(sourceRoot).toURI().relativize(sourceBlock.toURI())
              .getPath();
      String relativeMetaPath =
          new File(sourceRoot).toURI().relativize(sourceMeta.toURI())
              .getPath();

      File destBlock = new File(destRoot, relativeBlockPath);
      File destMeta = new File(destRoot, relativeMetaPath);

      destBlock.getParentFile().mkdirs();
      FileUtils.copyFile(sourceBlock, destBlock);
      FileUtils.copyFile(sourceMeta, destMeta);

      if (destBlock.exists() && destMeta.exists()) {
        LOG.info("Copied " + sourceBlock + " ==> " + destBlock);
        LOG.info("Copied " + sourceMeta + " ==> " + destMeta);
      }
    }
  }
}
 
Developer ID: naver, Project: hadoop, Lines: 37, Source: TestDirectoryScanner.java
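The core trick in duplicateBlock is path rebasing: relativize the block file's path against its source volume's base path, then resolve that relative part against the destination volume's base path, so the copy lands at the same block-pool-relative location. Below is a standalone sketch of just that step, using only the JDK and made-up placeholder paths rather than real DataNode directories.

import java.io.File;

public class RebasePathDemo {
  public static void main(String[] args) {
    // Placeholder paths for illustration only.
    File sourceRoot = new File("/data/dn1");
    File sourceBlock =
        new File("/data/dn1/current/BP-1/finalized/blk_1001");
    File destRoot = new File("/data/dn2");

    // URI.relativize strips the source root, leaving
    // "current/BP-1/finalized/blk_1001".
    String relative = sourceRoot.toURI()
        .relativize(sourceBlock.toURI()).getPath();

    // Rebase the same relative location under the other volume.
    File destBlock = new File(destRoot, relative);
    // Prints: /data/dn2/current/BP-1/finalized/blk_1001
    System.out.println(destBlock);
  }
}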

Example 4: duplicateBlock

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import required by this method
/**
 * Duplicate the given block on all volumes.
 * @param blockId id of the block to duplicate
 * @throws IOException if a block or meta file cannot be copied
 */
private void duplicateBlock(long blockId) throws IOException {
  synchronized (fds) {
    ReplicaInfo b = FsDatasetTestUtil.fetchReplicaInfo(fds, bpid, blockId);
    try (FsDatasetSpi.FsVolumeReferences volumes =
        fds.getFsVolumeReferences()) {
      for (FsVolumeSpi v : volumes) {
        if (v.getStorageID().equals(b.getVolume().getStorageID())) {
          continue;
        }

        // Volume without a copy of the block. Make a copy now.
        File sourceBlock = b.getBlockFile();
        File sourceMeta = b.getMetaFile();
        String sourceRoot = b.getVolume().getBasePath();
        String destRoot = v.getBasePath();

        String relativeBlockPath =
            new File(sourceRoot).toURI().relativize(sourceBlock.toURI())
                .getPath();
        String relativeMetaPath =
            new File(sourceRoot).toURI().relativize(sourceMeta.toURI())
                .getPath();

        File destBlock = new File(destRoot, relativeBlockPath);
        File destMeta = new File(destRoot, relativeMetaPath);

        destBlock.getParentFile().mkdirs();
        FileUtils.copyFile(sourceBlock, destBlock);
        FileUtils.copyFile(sourceMeta, destMeta);

        if (destBlock.exists() && destMeta.exists()) {
          LOG.info("Copied " + sourceBlock + " ==> " + destBlock);
          LOG.info("Copied " + sourceMeta + " ==> " + destMeta);
        }
      }
    }
  }
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 44, Source: TestDirectoryScanner.java
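As with the testLocalDirs pair, examples 3 and 4 differ only in how the volumes are enumerated: getVolumes() versus a FsVolumeReferences try-with-resources block. One caveat visible in both versions: the return value of mkdirs() is ignored, and a failed copy is only detectable by the absence of the "Copied" log lines. That is acceptable in test utility code but would need real error handling elsewhere.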


Note: The org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi.getBasePath method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their authors; copyright remains with the original authors, and distribution and use must follow each project's license. Do not republish without permission.