

Java ValidationStringency.DEFAULT_STRINGENCY Field Code Examples

This article collects typical usage examples of the Java field htsjdk.samtools.ValidationStringency.DEFAULT_STRINGENCY. If you have been wondering what ValidationStringency.DEFAULT_STRINGENCY does or how to use it, the hand-picked examples below should help. You can also browse further usage examples for its enclosing class, htsjdk.samtools.ValidationStringency.


Five code examples of the ValidationStringency.DEFAULT_STRINGENCY field are shown below, ordered by popularity by default. In htsjdk, DEFAULT_STRINGENCY is a mutable static field on the ValidationStringency enum holding the stringency applied when none is set explicitly (STRICT out of the box).
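Before the project examples, here is a minimal sketch (not taken from the projects below; the BAM path is a placeholder) of how DEFAULT_STRINGENCY is typically consumed when opening a reader with htsjdk:

import htsjdk.samtools.SamReader;
import htsjdk.samtools.SamReaderFactory;
import htsjdk.samtools.ValidationStringency;
import java.io.File;
import java.io.IOException;

public class DefaultStringencyDemo {
    public static void main(String[] args) throws IOException {
        // DEFAULT_STRINGENCY is the fallback used when no stringency is configured
        // explicitly; passing it here documents the intent without changing behavior.
        try (SamReader reader = SamReaderFactory.makeDefault()
                .validationStringency(ValidationStringency.DEFAULT_STRINGENCY)
                .open(new File("example.bam"))) {   // placeholder path
            System.out.println(reader.getFileHeader().getSortOrder());
        }
    }
}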

Example 1: initialize

@Override
public void initialize(InputSplit split, TaskAttemptContext context)
		throws IOException {
	if (isInitialized) {
		close();
	}
	isInitialized = true;

	final Configuration conf = context.getConfiguration();
	final FileSplit fileSplit = (FileSplit) split;
	final Path file = fileSplit.getPath();

	String refSourcePath = conf.get(INPUTFORMAT_REFERENCE);

	ReferenceSource refSource = new ReferenceSource(new File(refSourcePath));
	
	seekableStream = WrapSeekable.openPath(conf, file);
	start = getStart(fileSplit, conf);
	if (start == 0) {
		samFileHeader = CramIO.readCramHeader(seekableStream).getSamFileHeader();
		start = seekableStream.position();
		seekableStream.seek(0);
	}

	length = getLength(fileSplit, conf, seekableStream.length());
	long end = start + length;
	if (end > seekableStream.length())
		end = seekableStream.length();

	// Container boundaries are packed virtual-file-pointer style: the byte
	// offset occupies the upper 48 bits (offset << 16).
	long[] boundaries = new long[] { start << 16, (end - 1) << 16 };
	cramIterator = new CRAMIterator(seekableStream, refSource, boundaries,
			ValidationStringency.DEFAULT_STRINGENCY);
	ValidationStringency stringency = SAMHeaderReader
			.getValidationStringency(conf);
	if (stringency != null) {
		cramIterator.setValidationStringency(stringency);
	}
}
 
Developer: BGI-flexlab, Project: SOAPgaea, Lines: 37, Source: GaeaCramRecordReader.java
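The reader above starts its iterator with DEFAULT_STRINGENCY and only overrides it when the Hadoop configuration carries an explicit setting. A hedged sketch of how a job could supply that setting, assuming Hadoop-BAM's SAMHeaderReader and its VALIDATION_STRINGENCY_PROPERTY key:

import org.apache.hadoop.conf.Configuration;
import org.seqdoop.hadoop_bam.util.SAMHeaderReader;

public class StringencyConfigDemo {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // With this set, SAMHeaderReader.getValidationStringency(conf) returns
        // SILENT instead of null, so the record reader above replaces
        // DEFAULT_STRINGENCY on its CRAMIterator.
        conf.set(SAMHeaderReader.VALIDATION_STRINGENCY_PROPERTY, "SILENT");
    }
}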

Example 2: getReadsFromBAMFile

private static PCollection<Read> getReadsFromBAMFile() throws IOException, URISyntaxException {
  /**
   * Policy used to shard Reads.
   * By default we are using the default sharding supplied by the policy class.
   * If you want custom sharding, use the following pattern:
   * <pre>
   *    BAM_FILE_READ_SHARDING_POLICY = new ShardingPolicy() {
   *     @Override
   *     public boolean shardBigEnough(BAMShard shard) {
   *       return shard.sizeInLoci() > 50000000;
   *     }
   *   };
   * </pre>
   */
  final ShardingPolicy BAM_FILE_READ_SHARDING_POLICY = ShardingPolicy.BYTE_SIZE_POLICY_10MB;

  LOG.info("Sharded reading of " + pipelineOptions.getBAMFilePath());

  final ReaderOptions readerOptions = new ReaderOptions(
      ValidationStringency.DEFAULT_STRINGENCY,
      true);

  // TODO: change this to ReadBAMTransform.getReadsFromBAMFilesSharded when
  // https://github.com/googlegenomics/dataflow-java/issues/214 is fixed.
  return ReadBAMTransform.getReadsFromBAMFileSharded(pipeline,
      pipelineOptions,
      auth,
      contigs,
      readerOptions,
      pipelineOptions.getBAMFilePath(),
      BAM_FILE_READ_SHARDING_POLICY);
}
 
Developer: googlegenomics, Project: dataflow-java, Lines: 32, Source: ShardedBAMWriting.java
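ReaderOptions here hard-codes DEFAULT_STRINGENCY as its first argument. Since ValidationStringency is a plain enum, a pipeline could just as well derive the value from a string option; a small sketch (the system property name is hypothetical):

import htsjdk.samtools.ValidationStringency;

public class StringencyFromOptionDemo {
    public static void main(String[] args) {
        // Hypothetical option; falls back to the library default when unset.
        String requested = System.getProperty("stringency");   // e.g. "LENIENT"
        ValidationStringency stringency = (requested == null)
                ? ValidationStringency.DEFAULT_STRINGENCY
                : ValidationStringency.valueOf(requested);
        System.out.println("Using stringency: " + stringency);
    }
}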

Example 3: IntegrationTestSpec

public IntegrationTestSpec(String args, List<String> expectedFileNames) {
    this.args = args;
    this.nOutputFiles = expectedFileNames.size();
    this.expectedException = null;
    this.expectedFileNames = expectedFileNames;
    this.compareBamFilesSorted = false;
    this.validationStringency = ValidationStringency.DEFAULT_STRINGENCY;
}
 
Developer: broadinstitute, Project: gatk, Lines: 8, Source: IntegrationTestSpec.java
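For context, a hedged usage sketch of this constructor; the argument string and expected file name are placeholders:

import java.util.Collections;

// Hypothetical invocation: one expected output file, BAM comparison unsorted,
// and validationStringency left at htsjdk's default (STRICT).
IntegrationTestSpec spec = new IntegrationTestSpec(
        "--input input.bam --output %s",                 // placeholder arguments
        Collections.singletonList("expected/out.txt"));  // placeholder expectation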

Example 4: Basic2SAMRecordTransfer

public Basic2SAMRecordTransfer(SAMFileHeader header) {
    this.mFileHeader = header;
    this.samRecordFactory = new DefaultSAMRecordFactory();
    this.validationStringency = ValidationStringency.DEFAULT_STRINGENCY;
}
 
Developer: PAA-NCIC, Project: SparkSeq, Lines: 5, Source: Basic2SAMRecordTransfer.java
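A transfer class like this typically applies its stored stringency per record. A minimal sketch using only htsjdk APIs (the helper method itself is an assumption, not part of the project above):

import htsjdk.samtools.SAMRecord;
import htsjdk.samtools.SAMUtils;
import htsjdk.samtools.SAMValidationError;
import htsjdk.samtools.ValidationStringency;
import java.util.List;

public class RecordValidationDemo {
    // Hypothetical helper: STRICT throws on validation errors, LENIENT logs
    // them to stderr, SILENT skips validation entirely.
    static void validate(SAMRecord record, long recordIndex, ValidationStringency stringency) {
        if (stringency == ValidationStringency.SILENT) {
            return;
        }
        List<SAMValidationError> errors = record.isValid();  // null when valid
        if (errors != null) {
            SAMUtils.processValidationErrors(errors, recordIndex, stringency);
        }
    }
}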

Example 5: test

@Test(dataProvider="metricsfiles", groups="spark")
public void test(
        final String fileName,
        final String referenceName,
        final boolean allLevels,
        final String expectedResultsFile) throws IOException {

    final String inputPath = new File(TEST_DATA_DIR, fileName).getAbsolutePath();
    final String referencePath = referenceName != null ? new File(referenceName).getAbsolutePath() : null;

    final File outfile = GATKBaseTest.createTempFile("test", ".insert_size_metrics");

    JavaSparkContext ctx = SparkContextFactory.getTestSparkContext();
    ReadsSparkSource readSource = new ReadsSparkSource(ctx, ValidationStringency.DEFAULT_STRINGENCY);

    SAMFileHeader samHeader = readSource.getHeader(inputPath, referencePath);
    JavaRDD<GATKRead> rddParallelReads = readSource.getParallelReads(inputPath, referencePath);

    InsertSizeMetricsArgumentCollection isArgs = new InsertSizeMetricsArgumentCollection();
    isArgs.output = outfile.getAbsolutePath();
    if (allLevels) {
        isArgs.metricAccumulationLevel.accumulationLevels = new HashSet<>();
        isArgs.metricAccumulationLevel.accumulationLevels.add(MetricAccumulationLevel.ALL_READS);
        isArgs.metricAccumulationLevel.accumulationLevels.add(MetricAccumulationLevel.SAMPLE);
        isArgs.metricAccumulationLevel.accumulationLevels.add(MetricAccumulationLevel.LIBRARY);
        isArgs.metricAccumulationLevel.accumulationLevels.add(MetricAccumulationLevel.READ_GROUP);
    }

    InsertSizeMetricsCollectorSpark isSpark = new InsertSizeMetricsCollectorSpark();
    isSpark.initialize(isArgs, samHeader, null);

    // Since we're bypassing the framework in order to force this test to run on
    // multiple partitions, we have to build the read filter manually (there is no
    // plugin descriptor to do it for us): drop the default FirstOfPairReadFilter so
    // that second-of-pair reads survive, which is required for these tests to pass.
    // Note the stream must be collected; a bare filter(...) call would be a no-op.
    List<ReadFilter> readFilters = isSpark.getDefaultReadFilters().stream()
            .filter(f -> !f.getClass().getSimpleName().equals(
                    ReadFilterLibrary.FirstOfPairReadFilter.class.getSimpleName()))
            .collect(java.util.stream.Collectors.toList());
    ReadFilter rf = ReadFilter.fromList(readFilters, samHeader);

    // Force the input RDD to be split into two partitions to ensure that the
    // reduce/combiners run
    rddParallelReads = rddParallelReads.repartition(2);
    isSpark.collectMetrics(rddParallelReads.filter(r -> rf.test(r)), samHeader);

    isSpark.saveMetrics(fileName);

    IntegrationTestSpec.assertEqualTextFiles(
            outfile,
            new File(TEST_DATA_DIR, expectedResultsFile),
            "#"
    );
}
 
Developer: broadinstitute, Project: gatk, Lines: 53, Source: InsertSizeMetricsCollectorSparkUnitTest.java


Note: the htsjdk.samtools.ValidationStringency.DEFAULT_STRINGENCY examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective authors; copyright remains with the original authors, and distribution or use should follow each project's License. Do not reproduce without permission.