

Java GenotypesContext.size Method Code Examples

This article collects typical usage examples of the Java method htsjdk.variant.variantcontext.GenotypesContext.size. If you are wondering what GenotypesContext.size does, how to call it, or where to find usage examples, the curated code samples below should help. You can also explore further usage examples of the enclosing class, htsjdk.variant.variantcontext.GenotypesContext.


The 15 code examples of the GenotypesContext.size method shown below are ordered by popularity by default.
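Before the project-specific examples, here is a minimal, self-contained sketch of the basic pattern: build a GenotypesContext and call size() to get the number of per-sample genotypes it holds. The class name GenotypesContextSizeDemo, the sample names ("sample1", "sample2"), and the alleles are illustrative assumptions, not taken from any of the projects below.

import htsjdk.variant.variantcontext.Allele;
import htsjdk.variant.variantcontext.Genotype;
import htsjdk.variant.variantcontext.GenotypeBuilder;
import htsjdk.variant.variantcontext.GenotypesContext;

import java.util.Arrays;

public class GenotypesContextSizeDemo {
    public static void main(String[] args) {
        final Allele ref = Allele.create("A", true);   // reference allele
        final Allele alt = Allele.create("T", false);  // alternate allele

        // build two per-sample genotypes and collect them in a GenotypesContext
        final Genotype het = new GenotypeBuilder("sample1", Arrays.asList(ref, alt)).make();
        final Genotype hom = new GenotypeBuilder("sample2", Arrays.asList(ref, ref)).make();
        final GenotypesContext genotypes = GenotypesContext.create(het, hom);

        // size() reports how many genotypes (one per sample) the context contains
        System.out.println("genotypes.size() = " + genotypes.size());  // prints 2
    }
}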

Example 1: fixADFromSubsettedAlleles

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
/**
 * Fix the AD for the GenotypesContext of a VariantContext that has been subset
 *
 * @param originalGs       the original GenotypesContext
 * @param originalVC       the original VariantContext
 * @param allelesToUse     the new (sub)set of alleles to use
 * @return a new non-null GenotypesContext
 */
static private GenotypesContext fixADFromSubsettedAlleles(final GenotypesContext originalGs, final VariantContext originalVC, final List<Allele> allelesToUse) {

    // the bitset representing the allele indexes we want to keep
    final boolean[] alleleIndexesToUse = getAlleleIndexBitset(originalVC, allelesToUse);

    // the new genotypes to create
    final GenotypesContext newGTs = GenotypesContext.create(originalGs.size());

    // the samples
    final List<String> sampleIndices = originalGs.getSampleNamesOrderedByName();

    // create the new genotypes
    for ( int k = 0; k < originalGs.size(); k++ ) {
        final Genotype g = originalGs.get(sampleIndices.get(k));
        newGTs.add(fixAD(g, alleleIndexesToUse, allelesToUse.size()));
    }

    return newGTs;
}
 
Developer ID: PAA-NCIC, Project: SparkSeq, Lines: 28, Source file: GATKVariantContextUtils.java

Example 2: getGLs

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
/**
 * Unpack GenotypesContext into an ArrayList of double values
 * @param GLs            Input genotype context
 * @return               ArrayList of doubles corresponding to GL vectors
 */
protected static ArrayList<double[]> getGLs(final GenotypesContext GLs, final boolean includeDummy) {
    final ArrayList<double[]> genotypeLikelihoods = new ArrayList<>(GLs.size() + 1);

    if ( includeDummy ) genotypeLikelihoods.add(new double[]{0.0,0.0,0.0}); // dummy
    for ( Genotype sample : GLs.iterateInSampleNameOrder() ) {
        if ( sample.hasLikelihoods() ) {
            final double[] gls = sample.getLikelihoods().getAsVector();

            if ( MathUtils.sum(gls) < GATKVariantContextUtils.SUM_GL_THRESH_NOCALL )
                genotypeLikelihoods.add(gls);
        }
    }

    return genotypeLikelihoods;
}
 
Developer ID: PAA-NCIC, Project: SparkSeq, Lines: 21, Source file: ExactAFCalculator.java

Example 3: getGLs

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
/**
 * Unpack GenotypesContext into an ArrayList of double values
 * @param GLs            Input genotype context
 * @param keepUninformative Don't filter out uninformative genotype likelihoods (i.e. all log likelihoods near 0)
 *                          This is useful for VariantContexts with a NON_REF allele
 * @return               ArrayList of doubles corresponding to GL vectors
 */
protected static ArrayList<double[]> getGLs(final GenotypesContext GLs, final boolean includeDummy, final boolean keepUninformative) {
    final ArrayList<double[]> genotypeLikelihoods = new ArrayList<>(GLs.size() + 1);

    if ( includeDummy ) genotypeLikelihoods.add(new double[]{0.0,0.0,0.0}); // dummy
    for ( Genotype sample : GLs.iterateInSampleNameOrder() ) {
        if ( sample.hasLikelihoods() ) {
            final double[] gls = sample.getLikelihoods().getAsVector();
            
            if ( MathUtils.sum(gls) < GaeaGvcfVariantContextUtils.SUM_GL_THRESH_NOCALL || keepUninformative )
                genotypeLikelihoods.add(gls);
        }
    }

    return genotypeLikelihoods;
}
 
Developer ID: BGI-flexlab, Project: SOAPgaea, Lines: 23, Source file: ExactAFCalculator.java

Example 4: getADcounts

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
private Map<Allele, Integer> getADcounts(final VariantContext vc) {
    final GenotypesContext genotypes = vc.getGenotypes();
    if ( genotypes == null || genotypes.size() == 0 ) {
        logger.warn("VC does not have genotypes -- annotations were calculated in wrong order");
        return null;
    }

    final Map<Allele, Integer> variantADs = new HashMap<>();
    for(final Allele a : vc.getAlleles())
        variantADs.put(a,0);

    for (final Genotype gt : vc.getGenotypes()) {
        if(gt.hasAD()) {
            final int[] ADs = gt.getAD();
            for (int i = 1; i < vc.getNAlleles(); i++) {
                variantADs.put(vc.getAlternateAllele(i - 1), variantADs.get(vc.getAlternateAllele(i - 1)) + ADs[i]); // the -1 maps the overall allele index to the corresponding alt-allele index
            }
        }
    }
    return variantADs;
}
 
Developer ID: broadinstitute, Project: gatk, Lines: 22, Source file: AS_RMSMappingQuality.java

Example 5: annotate

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
@Override
public Map<String, Object> annotate(final ReferenceContext ref,
                                    final VariantContext vc,
                                    final ReadLikelihoods<Allele> likelihoods) {
    Utils.nonNull(vc);
    final GenotypesContext genotypes = getFounderGenotypes(vc);
    if (genotypes == null || genotypes.size() < MIN_SAMPLES || !vc.isVariant()) {
        return Collections.emptyMap();
    }
    final Pair<Integer, Double> sampleCountCoeff = calculateIC(vc, genotypes);
    final int sampleCount = sampleCountCoeff.getLeft();
    final double F = sampleCountCoeff.getRight();
    if (sampleCount < MIN_SAMPLES) {
        logger.warn("Annotation will not be calculated, must provide at least " + MIN_SAMPLES + " samples");
        return Collections.emptyMap();
    }
    return Collections.singletonMap(getKeyNames().get(0), String.format("%.4f", F));
}
 
Developer ID: broadinstitute, Project: gatk, Lines: 19, Source file: InbreedingCoeff.java

Example 6: makeCoeffAnnotation

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
protected Map<String, Object> makeCoeffAnnotation(final VariantContext vc) {
    final GenotypesContext genotypes = (founderIds == null || founderIds.isEmpty()) ? vc.getGenotypes() : vc.getGenotypes(founderIds);
    if (genotypes == null || genotypes.size() < MIN_SAMPLES || !vc.isVariant())
        return null;
    double F = calculateIC(vc, genotypes);
    if (sampleCount < MIN_SAMPLES)
        return null;
    return Collections.singletonMap(getKeyNames().get(0), (Object) String.format("%.4f", F));
}
 
Developer ID: PAA-NCIC, Project: SparkSeq, Lines: 10, Source file: InbreedingCoeff.java

Example 7: getGLs

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
/**
 * Unpack GenotypesContext into an ArrayList of double values
 * @param GLs            Input genotype context
 * @return               ArrayList of doubles corresponding to GL vectors
 */
protected static ArrayList<double[]> getGLs(final GenotypesContext GLs, final boolean includeDummy) {
    ArrayList<double[]> genotypeLikelihoods = new ArrayList<double[]>(GLs.size() + 1);
    if ( includeDummy ) genotypeLikelihoods.add(new double[]{0.0,0.0,0.0}); // dummy
    for ( Genotype sample : GLs.iterateInSampleNameOrder() ) {
        if ( sample.hasLikelihoods() ) {
            double[] gls = sample.getLikelihoods().getAsVector();
            if ( MathUtils.sum(gls) < GaeaVariantContextUtils.SUM_GL_THRESH_NOCALL )
                genotypeLikelihoods.add(gls);
        }
    }

    return genotypeLikelihoods;
}
 
Developer ID: BGI-flexlab, Project: SOAPgaea, Lines: 19, Source file: ExactAFCalc.java

Example 8: annotate

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
@Override
public Map<String, Object> annotate(RefMetaDataTracker tracker, ChromosomeInformationShare ref, VariantContext vc) {
    if ( !vc.hasLog10PError() )
        return null;

    final GenotypesContext genotypes = vc.getGenotypes();
    if ( genotypes == null || genotypes.size() == 0 )
        return null;

    final int standardDepth = getDepth(genotypes);

    if ( standardDepth == 0 )
        return null;

    final double altAlleleLength = GaeaGvcfVariantContextUtils.getMeanAltAlleleLength(vc);

    // Hack: UnifiedGenotyper (but not HaplotypeCaller or GenotypeGVCFs) over-estimates the quality of long indels
    //       Penalize the QD calculation for UG indels to compensate for this
    double QD = -10.0 * vc.getLog10PError() / ((double)standardDepth * indelNormalizationFactor(altAlleleLength, false));

    // Hack: see note in the fixTooHighQD method below
    QD = fixTooHighQD(QD);

    final Map<String, Object> map = new HashMap<>();
    map.put(getKeyNames().get(0), String.format("%.2f", QD));
    return map;
}
 
Developer ID: BGI-flexlab, Project: SOAPgaea, Lines: 28, Source file: QualByDepth.java

Example 9: makeCoeffAnnotation

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
protected Map<String, Object> makeCoeffAnnotation(final VariantContext vc) {
    final GenotypesContext genotypes = (founderIds == null || founderIds.isEmpty()) ? vc.getGenotypes() : vc.getGenotypes(founderIds);
    if (genotypes == null || genotypes.size() < MIN_SAMPLES || !vc.isVariant())
        return null;
    final double F = calculateIC(vc, genotypes);
    if (heterozygosityUtils.getSampleCount() < MIN_SAMPLES)
        return null;
    return Collections.singletonMap(getKeyNames().get(0), (Object)String.format("%.4f", F));
}
 
Developer ID: BGI-flexlab, Project: SOAPgaea, Lines: 10, Source file: InbreedingCoeff.java

Example 10: fixADFromSubsettedAlleles

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
/**
 * Fix the AD for the GenotypesContext of a VariantContext that has been
 * subset
 *
 * @param originalGs
 *            the original GenotypesContext
 * @param originalVC
 *            the original VariantContext
 * @param allelesToUse
 *            the new (sub)set of alleles to use
 * @return a new non-null GenotypesContext
 */
public static GenotypesContext fixADFromSubsettedAlleles(final GenotypesContext originalGs,
		final VariantContext originalVC, final List<Allele> allelesToUse) {
	if (originalGs == null)
		throw new IllegalArgumentException("the original Gs cannot be null");
	if (originalVC == null)
		throw new IllegalArgumentException("the original VC cannot be null");
	if (allelesToUse == null)
		throw new IllegalArgumentException("the alleles to use list cannot be null");

	// the bitset representing the allele indexes we want to keep
	final BitSet alleleIndexesToUse = getAlleleIndexBitset(originalVC, allelesToUse);

	// the new genotypes to create
	final GenotypesContext newGTs = GenotypesContext.create(originalGs.size());

	// the samples
	final List<String> sampleIndices = originalGs.getSampleNamesOrderedByName();

	// create the new genotypes
	for (int k = 0; k < originalGs.size(); k++) {
		final Genotype g = originalGs.get(sampleIndices.get(k));
		newGTs.add(fixAD(g, alleleIndexesToUse));
	}

	return newGTs;
}
 
Developer ID: BGI-flexlab, Project: SOAPgaea, Lines: 39, Source file: GaeaGvcfVariantContextUtils.java

Example 11: createGenotypesWithSubsettedLikelihoods

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
/**
 * Create the new GenotypesContext with the subsetted PLs and ADs
 *
 * @param originalGs               the original GenotypesContext
 * @param vc                       the original VariantContext
 * @param allelesToUse             the actual alleles to use with the new Genotypes
 * @param likelihoodIndexesToUse   the indexes in the PL to use given the allelesToUse (@see #determineLikelihoodIndexesToUse())
 * @param assignGenotypes          assignment strategy for the (subsetted) PLs
 * @return a new non-null GenotypesContext
 */
private static GenotypesContext createGenotypesWithSubsettedLikelihoods(final GenotypesContext originalGs,
                                                                        final VariantContext vc,
                                                                        final List<Allele> allelesToUse,
                                                                        final List<Integer> likelihoodIndexesToUse,
                                                                        final GenotypeAssignmentMethod assignGenotypes) {
    // the new genotypes to create
    final GenotypesContext newGTs = GenotypesContext.create(originalGs.size());

    // make sure we are seeing the expected number of likelihoods per sample
    final int expectedNumLikelihoods = GenotypeLikelihoods.numLikelihoods(vc.getNAlleles(), 2);

    // the samples
    final List<String> sampleIndices = originalGs.getSampleNamesOrderedByName();

    // create the new genotypes
    for ( int k = 0; k < originalGs.size(); k++ ) {
        final Genotype g = originalGs.get(sampleIndices.get(k));
        final GenotypeBuilder gb = new GenotypeBuilder(g);

        // create the new likelihoods array from the alleles we are allowed to use
        double[] newLikelihoods;
        if ( !g.hasLikelihoods() ) {
            // we don't have any likelihoods, so we null out PLs and make G ./.
            newLikelihoods = null;
            gb.noPL();
        } else {
            final double[] originalLikelihoods = g.getLikelihoods().getAsVector();
            if ( likelihoodIndexesToUse == null ) {
                newLikelihoods = originalLikelihoods;
            } else if ( originalLikelihoods.length != expectedNumLikelihoods ) {
                newLikelihoods = null;
            } else {
                newLikelihoods = new double[likelihoodIndexesToUse.size()];
                int newIndex = 0;
                for ( final int oldIndex : likelihoodIndexesToUse )
                    newLikelihoods[newIndex++] = originalLikelihoods[oldIndex];

                // might need to re-normalize
                newLikelihoods = MathUtils.normalizeFromLog10(newLikelihoods, false, true);
            }

            if ( newLikelihoods == null || likelihoodsAreUninformative(newLikelihoods) )
                gb.noPL();
            else
                gb.PL(newLikelihoods);
        }

        updateGenotypeAfterSubsetting(g.getAlleles(), gb, assignGenotypes, newLikelihoods, allelesToUse);
        newGTs.add(gb.make());
    }

    return fixADFromSubsettedAlleles(newGTs, vc, allelesToUse);
}
 
Developer ID: PAA-NCIC, Project: SparkSeq, Lines: 64, Source file: GATKVariantContextUtils.java

Example 12: cleanupGenotypeAnnotations

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
/**
 * Cleans up genotype-level annotations that need to be updated.
 * 1. move MIN_DP to DP if present
 * 2. propagate DP to AD if not present
 * 3. remove SB if present
 * 4. change the PGT value from "0|1" to "1|1" for homozygous variant genotypes
 *
 * @param VC            the VariantContext with the Genotypes to fix
 * @param createRefGTs  if true we will also create proper hom ref genotypes since we assume the site is monomorphic
 * @return a new set of Genotypes
 */
private List<Genotype> cleanupGenotypeAnnotations(final VariantContext VC, final boolean createRefGTs) {
    final GenotypesContext oldGTs = VC.getGenotypes();
    final List<Genotype> recoveredGs = new ArrayList<>(oldGTs.size());
    for ( final Genotype oldGT : oldGTs ) {
        final Map<String, Object> attrs = new HashMap<>(oldGT.getExtendedAttributes());

        final GenotypeBuilder builder = new GenotypeBuilder(oldGT);
        int depth = oldGT.hasDP() ? oldGT.getDP() : 0;

        // move the MIN_DP to DP
        if ( oldGT.hasExtendedAttribute("MIN_DP") ) {
            depth = Integer.parseInt((String)oldGT.getAnyAttribute("MIN_DP"));
            builder.DP(depth);
            attrs.remove("MIN_DP");
        }

        // remove SB
        attrs.remove("SB");

        // update PGT for hom vars
        if ( oldGT.isHomVar() && oldGT.hasExtendedAttribute(HaplotypeCaller.HAPLOTYPE_CALLER_PHASING_GT_KEY) ) {
            attrs.put(HaplotypeCaller.HAPLOTYPE_CALLER_PHASING_GT_KEY, "1|1");
        }

        // create AD if it's not there
        if ( !oldGT.hasAD() && VC.isVariant() ) {
            final int[] AD = new int[VC.getNAlleles()];
            AD[0] = depth;
            builder.AD(AD);
        }

        if ( createRefGTs ) {
            final int ploidy = oldGT.getPloidy();
            final List<Allele> refAlleles = Collections.nCopies(ploidy,VC.getReference());

            //keep 0 depth samples as no-call
            if (depth > 0) {
                builder.alleles(refAlleles);
            }

            // also, the PLs are technically no longer usable
            builder.noPL();
        }

        recoveredGs.add(builder.noAttributes().attributes(attrs).make());
    }
    return recoveredGs;
}
 
Developer ID: PAA-NCIC, Project: SparkSeq, Lines: 60, Source file: GenotypeGVCFs.java

Example 13: subsetAlleles

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
/**
 * From a given variant context, extract a given subset of alleles, and update genotype context accordingly,
 * including updating the PL's, and assign genotypes accordingly
 *
 * @param vc              variant context with alleles and genotype likelihoods
 * @param defaultPloidy   ploidy to assume in case that {@code vc} does not contain that information
 *                        for a sample.
 * @param allelesToUse    alleles to subset
 * @param assignGenotypes true: assign hard genotypes, false: leave as no-call
 * @return GenotypesContext with new PLs
 */
public GenotypesContext subsetAlleles(final VariantContext vc, final int defaultPloidy,
                                      final List<Allele> allelesToUse,
                                      final boolean assignGenotypes) {
    // the genotypes with PLs
    final GenotypesContext oldGTs = vc.getGenotypes();

    // samples
    final List<String> sampleIndices = oldGTs.getSampleNamesOrderedByName();

    // the new genotypes to create
    final GenotypesContext newGTs = GenotypesContext.create();

    // we need to determine which of the alternate alleles (and hence the likelihoods) to use and carry forward
    final int numOriginalAltAlleles = vc.getAlternateAlleles().size();
    final int numNewAltAlleles = allelesToUse.size() - 1;


    // create the new genotypes
    for (int k = 0; k < oldGTs.size(); k++) {
        final Genotype g = oldGTs.get(sampleIndices.get(k));
        final int declaredPloidy = g.getPloidy();
        final int ploidy = declaredPloidy <= 0 ? defaultPloidy : declaredPloidy;
        if (!g.hasLikelihoods()) {
            newGTs.add(GenotypeBuilder.create(g.getSampleName(), GATKVariantContextUtils.noCallAlleles(ploidy)));
            continue;
        }

        // create the new likelihoods array from the alleles we are allowed to use
        final double[] originalLikelihoods = g.getLikelihoods().getAsVector();
        double[] newLikelihoods;

        // Optimization: if # of new alt alleles = 0 (pure ref call), keep original likelihoods so we skip normalization
        // and subsetting
        if (numOriginalAltAlleles == numNewAltAlleles || numNewAltAlleles == 0) {
            newLikelihoods = originalLikelihoods;
        } else {
            newLikelihoods = GeneralPloidyGenotypeLikelihoods.subsetToAlleles(originalLikelihoods, ploidy, vc.getAlleles(), allelesToUse);

            // might need to re-normalize
            newLikelihoods = MathUtils.normalizeFromLog10(newLikelihoods, false, true);
        }

        // if there is no mass on the (new) likelihoods, then just no-call the sample
        if (MathUtils.sum(newLikelihoods) > GATKVariantContextUtils.SUM_GL_THRESH_NOCALL) {
            newGTs.add(GenotypeBuilder.create(g.getSampleName(), GATKVariantContextUtils.noCallAlleles(ploidy)));
        } else {
            final GenotypeBuilder gb = new GenotypeBuilder(g);

            if (numNewAltAlleles == 0)
                gb.noPL();
            else
                gb.PL(newLikelihoods);

            // if we weren't asked to assign a genotype, then just no-call the sample
            if (!assignGenotypes || MathUtils.sum(newLikelihoods) > GATKVariantContextUtils.SUM_GL_THRESH_NOCALL)
                gb.alleles(GATKVariantContextUtils.noCallAlleles(ploidy));
            else
                assignGenotype(gb, newLikelihoods, allelesToUse, ploidy);
            newGTs.add(gb.make());
        }
    }

    return newGTs;

}
 
Developer ID: PAA-NCIC, Project: SparkSeq, Lines: 77, Source file: GeneralPloidyExactAFCalculator.java

Example 14: annotate

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
public Map<String, Object> annotate(final RefMetaDataTracker tracker,
                                    final AnnotatorCompatible walker,
                                    final ReferenceContext ref,
                                    final Map<String, AlignmentContext> stratifiedContexts,
                                    final VariantContext vc,
                                    final Map<String, PerReadAlleleLikelihoodMap> perReadAlleleLikelihoodMap ) {
    if ( !vc.hasLog10PError() )
        return null;

    final GenotypesContext genotypes = vc.getGenotypes();
    if ( genotypes == null || genotypes.size() == 0 )
        return null;

    int standardDepth = 0;
    int ADrestrictedDepth = 0;

    for ( final Genotype genotype : genotypes ) {

        // we care only about variant calls with likelihoods
        if ( !genotype.isHet() && !genotype.isHomVar() )
            continue;

        // if we have the AD values for this sample, let's make sure that the variant depth is greater than 1!
        // TODO -- If we like how this is working and want to apply it to a situation other than the single sample HC pipeline,
        // TODO --  then we will need to modify the annotateContext() - and related - routines in the VariantAnnotatorEngine
        // TODO --  so that genotype-level annotations are run first (to generate AD on the samples) and then the site-level
        // TODO --  annotations must come afterwards (so that QD can use the AD).
        if ( genotype.hasAD() ) {
            final int[] AD = genotype.getAD();
            final int totalADdepth = (int) MathUtils.sum(AD);
            if ( totalADdepth - AD[0] > 1 )
                ADrestrictedDepth += totalADdepth;
            standardDepth += totalADdepth;
            continue;
        }

        if (stratifiedContexts != null && !stratifiedContexts.isEmpty()) {
            final AlignmentContext context = stratifiedContexts.get(genotype.getSampleName());
            if ( context == null )
                continue;
            standardDepth += context.getBasePileup().depthOfCoverage();

        } else if (perReadAlleleLikelihoodMap != null) {
            final PerReadAlleleLikelihoodMap perReadAlleleLikelihoods = perReadAlleleLikelihoodMap.get(genotype.getSampleName());
            if (perReadAlleleLikelihoods == null || perReadAlleleLikelihoods.isEmpty())
                continue;

            standardDepth += perReadAlleleLikelihoods.getNumberOfStoredElements();
        } else if ( genotype.hasDP() ) {
            standardDepth += genotype.getDP();
        }
    }

    // if the AD-restricted depth is a usable value (i.e. not zero), then we should use that one going forward
    if ( ADrestrictedDepth > 0 )
        standardDepth = ADrestrictedDepth;

    if ( standardDepth == 0 )
        return null;

    final double altAlleleLength = GATKVariantContextUtils.getMeanAltAlleleLength(vc);
    // Hack: when refContext == null then we know we are coming from the HaplotypeCaller and do not want to do a
    //  full length-based normalization (because the indel length problem is present only in the UnifiedGenotyper)
    double QD = -10.0 * vc.getLog10PError() / ((double)standardDepth * indelNormalizationFactor(altAlleleLength, ref != null));

    // Hack: see note in the fixTooHighQD method below
    QD = fixTooHighQD(QD);

    final Map<String, Object> map = new HashMap<>();
    map.put(getKeyNames().get(0), String.format("%.2f", QD));
    return map;
}
 
Developer ID: PAA-NCIC, Project: SparkSeq, Lines: 73, Source file: QualByDepth.java

Example 15: annotate

import htsjdk.variant.variantcontext.GenotypesContext; // import the package/class this method depends on
public Map<String, Object> annotate(final VariantDataTracker tracker,
                                    final ChromosomeInformationShare ref,
                                    final Mpileup mpileup,
                                    final VariantContext vc,
                                    final Map<String, PerReadAlleleLikelihoodMap> perReadAlleleLikelihoodMap ) {
    if ( !vc.hasLog10PError() )
        return null;

    final GenotypesContext genotypes = vc.getGenotypes();
    if ( genotypes == null || genotypes.size() == 0 )
        return null;

    int depth = 0;

    for ( final Genotype genotype : genotypes ) {

        // we care only about variant calls with likelihoods
        if ( !genotype.isHet() && !genotype.isHomVar() )
            continue;

        if (mpileup != null) {
            Pileup pileup = mpileup.getCurrentPosPileup().get(genotype.getSampleName());
            if ( pileup == null )
                continue;
            depth += pileup.depthOfCoverage(false);

        }
        else if (perReadAlleleLikelihoodMap != null) {
            PerReadAlleleLikelihoodMap perReadAlleleLikelihoods = perReadAlleleLikelihoodMap.get(genotype.getSampleName());
            if (perReadAlleleLikelihoods == null || perReadAlleleLikelihoods.isEmpty())
                continue;

            depth += perReadAlleleLikelihoods.getNumberOfStoredElements();
        }
    }

    if ( depth == 0 )
        return null;

    double QD = -10.0 * vc.getLog10PError() / (double)depth;

    Map<String, Object> map = new HashMap<String, Object>();
    map.put(getKeyNames().get(0), String.format("%.2f", QD));
    return map;
}
 
Developer ID: BGI-flexlab, Project: SOAPgaea, Lines: 46, Source file: QualByDepth.java


Note: The htsjdk.variant.variantcontext.GenotypesContext.size method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective developers, and copyright of the source code remains with the original authors. Please consult each project's license before using or redistributing the code; do not republish without permission.