

Java BucketUtils.copyFile Method Code Examples

This article collects typical usage examples of the Java method org.broadinstitute.hellbender.utils.gcs.BucketUtils.copyFile. If you are wondering what BucketUtils.copyFile does, how to call it, or where to find working examples, the curated code snippets below may help. You can also explore further usage of the enclosing class, org.broadinstitute.hellbender.utils.gcs.BucketUtils.


Five code examples of the BucketUtils.copyFile method are shown below, ordered by popularity by default.
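Before the full examples, here is a minimal, self-contained sketch of the call itself. It assumes the two-argument overload copyFile(source, destination) used in Examples 2 through 5 below (Example 1 uses an older three-argument gatk-dataflow overload whose middle parameter is an options object); the bucket and file paths are hypothetical placeholders.

import java.io.IOException;

import org.broadinstitute.hellbender.utils.gcs.BucketUtils;

public class CopyFileSketch {
    public static void main(String[] args) throws IOException {
        // hypothetical paths; the examples below use local paths as well as gs:// and hdfs:// URIs
        final String source = "gs://my-bucket/input.vcf";        // placeholder GCS object
        final String destination = "/tmp/input-local-copy.vcf";  // placeholder local path

        // two-argument overload, as seen in Examples 2-5
        BucketUtils.copyFile(source, destination);
    }
}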

Example 1: testReferenceSourceQuery

import org.broadinstitute.hellbender.utils.gcs.BucketUtils; // import the package/class the method depends on
@Test
public void testReferenceSourceQuery() throws IOException {
    MiniDFSCluster cluster = null;
    try {
        cluster = new MiniDFSCluster.Builder(new Configuration()).build();
        String staging = cluster.getFileSystem().getWorkingDirectory().toString();

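        // Stage the FASTA, its .fai index, and its .dict sequence dictionary into the minicluster's
        // working directory; this older gatk-dataflow overload takes an options object (null here) as its middle argument.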
        String fasta = new Path(staging, hg19MiniReference).toString();
        String fai = new Path(staging, hg19MiniReference + ".fai").toString();
        String dict = new Path(staging, hg19MiniReference.replaceFirst("\\.fasta$", ".dict")).toString();
        BucketUtils.copyFile(hg19MiniReference, null, fasta);
        BucketUtils.copyFile(hg19MiniReference + ".fai", null, fai);
        BucketUtils.copyFile(hg19MiniReference.replaceFirst("\\.fasta$", ".dict"), null, dict);

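        // Query a 10-base interval from the staged reference and sanity-check the bases and dictionary.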
        final ReferenceHadoopSource refSource = new ReferenceHadoopSource(fasta);
        final ReferenceBases bases = refSource.getReferenceBases(PipelineOptionsFactory.create(),
                new SimpleInterval("2", 10001, 10010));

        Assert.assertNotNull(bases);
        Assert.assertEquals(bases.getBases().length, 10, "Wrong number of bases returned");
        Assert.assertEquals(new String(bases.getBases()), "CGTATCCCAC", "Wrong bases returned");
        Assert.assertNotNull(refSource.getReferenceSequenceDictionary(null));
    } finally {
        if (cluster != null) {
            cluster.shutdown();
        }
    }
}
 
Developer: broadinstitute | Project: gatk-dataflow | Lines: 29 | Source: ReferenceHadoopSourceUnitTest.java

Example 2: hackilyCopyFromGCSIfNecessary

import org.broadinstitute.hellbender.utils.gcs.BucketUtils; // import the package/class the method depends on
private ArrayList<String> hackilyCopyFromGCSIfNecessary(List<String> localVariants) {
    int i = 0;
    Stopwatch hacking = Stopwatch.createStarted();
    boolean copied = false;
    ArrayList<String> ret = new ArrayList<>();
    for (String v : localVariants) {
        if (BucketUtils.isCloudStorageUrl(v)) {
            if (!copied) {
                logger.info("(HACK): copying the GCS variant file to local just so we can read it back.");
                copied = true;
            }
            // this only works with the API_KEY, but then again it's a hack so there's no point in polishing it. Please don't make me.
            String d = IOUtils.createTempFile("knownVariants-" + i, ".vcf").getAbsolutePath();
            try {
                BucketUtils.copyFile(v, d);
            } catch (IOException x) {
                throw new UserException.CouldNotReadInputFile(v, x);
            }
            ret.add(d);
        } else {
            ret.add(v);
        }
    }
    hacking.stop();
    if (copied) {
        logger.info("Copying the vcf took " + hacking.elapsed(TimeUnit.MILLISECONDS) + " ms.");
    }
    return ret;
}
 
Developer: broadinstitute | Project: gatk | Lines: 30 | Source: BaseRecalibratorSparkSharded.java

Example 3: copyFileToLocalTmpFile

import org.broadinstitute.hellbender.utils.gcs.BucketUtils; // import the package/class the method depends on
private File copyFileToLocalTmpFile(String outputPath) throws IOException {
    File localCopy = createTempFile("local_metrics_copy",".txt");
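    // pull the metrics file (which may live in a bucket or HDFS) down to a local temp file so it can be read with ordinary file I/O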
    BucketUtils.copyFile(outputPath, localCopy.getAbsolutePath());
    return localCopy;
}
 
Developer: broadinstitute | Project: gatk | Lines: 6 | Source: MetricsUtilsTest.java

Example 4: testCopyLargeFile

import org.broadinstitute.hellbender.utils.gcs.BucketUtils; // import the package/class the method depends on
@Test(groups = {"spark", "bucket"})
public void testCopyLargeFile() throws Exception {
    MiniDFSCluster cluster = null;
    try {
        final Configuration conf = new Configuration();
        // set the minicluster to have a very low block size so that we can test transferring a file in chunks without actually needing to move a big file
        conf.set("dfs.blocksize", "1048576");
        cluster = MiniClusterUtils.getMiniCluster(conf);

        // copy a multi-block file
        final Path tempPath = MiniClusterUtils.getTempPath(cluster, "test", "dir");
        final String gcpInputPath = getGCPTestInputPath() + "huge/CEUTrio.HiSeq.WGS.b37.NA12878.chr1_4.bam.bai";
        String args =
                "--" + ParallelCopyGCSDirectoryIntoHDFSSpark.INPUT_GCS_PATH_LONG_NAME + " " + gcpInputPath +
                        " --" + ParallelCopyGCSDirectoryIntoHDFSSpark.OUTPUT_HDFS_DIRECTORY_LONG_NAME + " " + tempPath;
        ArgumentsBuilder ab = new ArgumentsBuilder().add(args);
        IntegrationTestSpec spec = new IntegrationTestSpec(
                ab.getString(),
                Collections.emptyList());
        spec.executeTest("testCopyLargeFile-" + args, this);

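        // IOUtils.getPath resolves the gs:// URI to a java.nio.file.Path, so Files.size can query the remote object's size directly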
        final long fileSizeOnGCS = Files.size(IOUtils.getPath(gcpInputPath));

        final String hdfsPath = tempPath + "/" + "CEUTrio.HiSeq.WGS.b37.NA12878.chr1_4.bam.bai";

        org.apache.hadoop.fs.Path outputHdfsDirectoryPath = new org.apache.hadoop.fs.Path(tempPath.toUri());

        try(FileSystem fs = outputHdfsDirectoryPath.getFileSystem(conf)) {
            long chunkSize = ParallelCopyGCSDirectoryIntoHDFSSpark.getChunkSize(fs);
            Assert.assertTrue(fileSizeOnGCS > chunkSize);
        }

        Assert.assertEquals(BucketUtils.fileSize(hdfsPath),
                fileSizeOnGCS);

        final File tempDir = createTempDir("ParallelCopy");

        final File localCopy = new File(tempDir, "fileFromHDFS.bam.bai");
        BucketUtils.copyFile(hdfsPath, localCopy.getAbsolutePath());
        Assert.assertEquals(Utils.calculateFileMD5(localCopy), "1a6baa5332e98ef1358ac0fb36f46aaf");
    } finally {
        MiniClusterUtils.stopCluster(cluster);
    }
}
 
Developer: broadinstitute | Project: gatk | Lines: 45 | Source: ParallelCopyGCSDirectoryIntoHDFSSparkIntegrationTest.java

Example 5: testCopyDirectory

import org.broadinstitute.hellbender.utils.gcs.BucketUtils; // import the package/class the method depends on
@Test(groups = {"spark", "bucket"})
public void testCopyDirectory() throws Exception {
    MiniDFSCluster cluster = null;
    try {
        final Configuration conf = new Configuration();
        // set the minicluster to have a very low block size so that we can test transferring a file in chunks without actually needing to move a big file
        conf.set("dfs.blocksize", "1048576");
        cluster = MiniClusterUtils.getMiniCluster(conf);

        // copy a directory
        final Path tempPath = MiniClusterUtils.getTempPath(cluster, "test", "dir");

        // directory contains two small files named foo.txt and bar.txt
        final String gcpInputPath = getGCPTestInputPath() + "parallel_copy/";
        String args =
                "--" + ParallelCopyGCSDirectoryIntoHDFSSpark.INPUT_GCS_PATH_LONG_NAME + " " + gcpInputPath +
                        " --" + ParallelCopyGCSDirectoryIntoHDFSSpark.OUTPUT_HDFS_DIRECTORY_LONG_NAME + " " + tempPath;
        ArgumentsBuilder ab = new ArgumentsBuilder().add(args);
        IntegrationTestSpec spec = new IntegrationTestSpec(
                ab.getString(),
                Collections.emptyList());
        spec.executeTest("testCopyDirectory-" + args, this);

        org.apache.hadoop.fs.Path outputHdfsDirectoryPath = new org.apache.hadoop.fs.Path(tempPath.toUri());

        final File tempDir = createTempDir("ParallelCopyDir");

        int filesFound = 0;
        try(FileSystem fs = outputHdfsDirectoryPath.getFileSystem(conf)) {
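            // the second argument (false) makes listFiles non-recursive: only the directory's immediate files are enumerated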
            final RemoteIterator<LocatedFileStatus> hdfsCopies = fs.listFiles(outputHdfsDirectoryPath, false);
            while (hdfsCopies.hasNext()) {
                final FileStatus next = hdfsCopies.next();
                final Path path = next.getPath();
                BucketUtils.copyFile(path.toString(), tempDir + "/" + path.getName());
                filesFound++;
            }
        }

        Assert.assertEquals(filesFound, 2);

        Assert.assertEquals(Utils.calculateFileMD5(new File(tempDir + "/foo.txt")), "d3b07384d113edec49eaa6238ad5ff00");
        Assert.assertEquals(Utils.calculateFileMD5(new File(tempDir + "/bar.txt")), "c157a79031e1c40f85931829bc5fc552");
    } finally {
        MiniClusterUtils.stopCluster(cluster);
    }
}
 
Developer: broadinstitute | Project: gatk | Lines: 48 | Source: ParallelCopyGCSDirectoryIntoHDFSSparkIntegrationTest.java


Note: The org.broadinstitute.hellbender.utils.gcs.BucketUtils.copyFile examples in this article were collected from open-source projects hosted on GitHub and similar platforms. Copyright in the source code remains with the original authors; consult each project's License before redistributing or reusing these snippets.