

Java Transforms Class Code Examples

This article collects typical usage examples of the Java class org.nd4j.linalg.ops.transforms.Transforms. If you are wondering what the Transforms class is for, how to use it, or what real-world code that uses it looks like, the curated examples below may help.


The Transforms class belongs to the org.nd4j.linalg.ops.transforms package. A total of 15 code examples of the Transforms class are shown below, sorted by popularity by default.
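
Before the examples, here is a minimal, self-contained sketch of a few Transforms helpers that also appear in the snippets below: cosine similarity, Euclidean distance, element-wise abs/exp, and unit-vector normalization. The class name TransformsQuickStart and the sample values are illustrative assumptions, not code from any of the listed projects.

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.ops.transforms.Transforms;

public class TransformsQuickStart {
  public static void main(String[] args) {
    INDArray a = Nd4j.create(new double[]{1.0, 2.0, 3.0});
    INDArray b = Nd4j.create(new double[]{3.0, 2.0, 1.0});

    // Scalar helpers: cosine similarity and Euclidean (L2) distance
    double cos = Transforms.cosineSim(a, b);
    double dist = Transforms.euclideanDistance(a, b);

    // Element-wise transforms; the boolean flag of exp() selects copy (true) vs. in-place (false)
    INDArray absDiff = Transforms.abs(a.sub(b));
    INDArray expA = Transforms.exp(a, true);

    // Normalize a vector to unit length
    INDArray unitA = Transforms.unitVec(a);

    System.out.println("cosineSim(a, b)         = " + cos);
    System.out.println("euclideanDistance(a, b) = " + dist);
    System.out.println("abs(a - b)              = " + absDiff);
    System.out.println("exp(a)                  = " + expA);
    System.out.println("unitVec(a)              = " + unitA);
  }
}

For these two vectors the cosine similarity is 10/14 ≈ 0.714 and the Euclidean distance is sqrt(8) ≈ 2.83.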

Example 1: apply

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
@Override
public Publisher<PositionFormation> apply(final Publisher<G> inputPublisher) {
  return Flux.from(inputPublisher)
      .map(game ->
          new PositionFormation(
              game.getPlayers().stream()
                  .filter(player -> player.getTeamColor().equals(this.getTeamColor()))
                  .collect(Collectors.toMap(
                      Player::getIdentity,
                      player ->
                          Nd4j.vstack(
                              game.getBall().getXY()
                                  .add(Transforms
                                      .unitVec(game.getBall().getXY().sub(player.getXY()))
                                      .mul(this.getDistanceFromBall())),
                              Nd4j.create(new double[]{
                                  Math.acos(Transforms.cosineSim(
                                      player.getXY(),
                                      game.getBall().getXY()))}))))));
}
 
Developer: delta-leonis, Project: subra, Lines: 21, Source: BallTrackerFormationDeducer.java

Example 2: apply

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
@Override
public Publisher<Set<MovingBall>> apply(final Publisher<I> iPublisher) {
  return Flux.from(iPublisher)
      .scan(Collections.emptySet(), (previousGame, currentGame) ->
          currentGame.getBalls().stream()
              .map(currentBall ->
                  previousGame.stream()
                      .reduce((closerBall, newBall) ->
                          Transforms.euclideanDistance(newBall.getXY(), currentBall.getXY())
                              > Transforms.euclideanDistance(
                              closerBall.getXY(), currentBall.getXY())
                              ? newBall
                              : closerBall)
                      .map(closestBall -> this.calculateVelocity(currentBall, closestBall))
                      .orElse(
                          new MovingBall.State(
                              currentBall.getTimestamp(),
                              currentBall.getX(),
                              currentBall.getY(),
                              currentBall.getZ(),
                              0d,
                              0d,
                              0d)))
              .collect(Collectors.toSet()));
}
 
Developer: delta-leonis, Project: subra, Lines: 26, Source: BallsVelocityDeducer.java

Example 3: computeSimilarity

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
@Override
public double computeSimilarity(Concept c1, Concept c2) {
	if (c1.name.toLowerCase().equals(c2.name.toLowerCase()))
		return 1;

	if (wordVectors == null) {
		this.loadWordVectors(type, dimension);
		int[] shape = wordVectors.lookupTable().getWeights().shape();
		System.out.println("word embeddings loaded, " + shape[0] + " " + shape[1]);
	}

	INDArray cVector1 = this.getConceptVector(c1);
	INDArray cVector2 = this.getConceptVector(c2);
	if (cVector1 == null || cVector2 == null)
		return Double.NaN;

	double dist = Transforms.cosineSim(cVector1, cVector2);

	if (Double.isNaN(dist))
		System.err.println("Embedding NaN");

	return dist;
}
 
Developer: UKPLab, Project: ijcnlp2017-cmaps, Lines: 24, Source: WordEmbeddingDistance.java

Example 4: getSimilarity

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
public double getSimilarity(String sentence1, String sentence2){
    double predictedScore = 0;
    if (PARAGRAPHVECS != null) {
        try {
            INDArray inferredVectorA = produceParagraphVectorOfGivenSentence(sentence1);
            INDArray inferredVectorB = produceParagraphVectorOfGivenSentence(sentence2);
            predictedScore = Transforms.cosineSim(inferredVectorA, inferredVectorB);
        } catch (Exception e) {
            logger.error("No word is matched with the given sentence and any sentence in training set - model file. " + sentence1
                    + ";" + sentence2);
            System.out.println("No word is matched with the given sentence and any sentence in training set - model file. " + sentence1
                    + ";" + sentence2);
            StringMetric metric = StringMetrics.qGramsDistance();
            predictedScore = metric.compare(sentence1, sentence2);
        }
    }

    return predictedScore;
}
 
Developer: gizemsogancioglu, Project: biosses, Lines: 20, Source: SentenceVectorsBasedSimilarity.java

Example 5: lexicalSubstituteMult

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
/**
 * Lexical substitution task using the multiplicative (Mult) method
 * @param word target word
 * @param contexts list of given contexts
 * @param average if true, take the geometric mean of the per-context scores instead of their raw product
 * @param top number of results to return
 * @return a list of {@link Pair}
 */
public List<Pair<String, Double>> lexicalSubstituteMult (String word, List<String> contexts, boolean average, Integer top) {
	top = MoreObjects.firstNonNull(top, 10);
	INDArray targetVec = getWordVector(word);
	INDArray scores = wordPosSimilarity(targetVec);
	for (String context : contexts) {
		if (hasContext(context)) {
			INDArray multScores = wordPosSimilarity(getContextVector(context));
			if (average) multScores = Transforms.pow(multScores, 1.0 / contexts.size());
			scores.muli(multScores);
		}
	}
	List<Pair<String, Double>> list = new ArrayList<>(wordVocab.size());
	for (int i = 0; i < wordVocab.size(); i++) { list.add(new Pair<>(wordVocab.get(i), scores.getDouble(i))); }
	return list.stream().sorted((e1, e2) -> Double.valueOf(e2.getValue()).compareTo(Double.valueOf(e1.getValue()))).limit(top).collect(Collectors.toCollection(LinkedList::new));
}
 
Developer: IsaacChanghau, Project: Word2VecfJava, Lines: 24, Source: Word2Vecf.java

Example 6: parseLanguage

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
public HashMap<String, INDArray> parseLanguage(Language language) {
    HashMap<String, INDArray> tweet2vecs = new HashMap<>();
    File path = new File(pathName +
            language.getName() + "-tweet2vec.txt");
    try {
        List<String> lines = Files.readAllLines(path.toPath());
        for(String l: lines) {
            String[] vec = l.split(",");
            if (vec.length == VEC_LENGTH+1) {
                String tweet = vec[0];
                double[] data = new double[vec.length-1];
                for(int i = 1; i < vec.length; i++) {
                    data[i-1] = Double.parseDouble(vec[i]);
                }
                INDArray normalized = Transforms.normalizeZeroMeanAndUnitVariance(Nd4j.create(data));
                tweet2vecs.put(tweet, normalized);
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return tweet2vecs;
}
 
Developer: madeleine789, Project: dl4j-apr, Lines: 24, Source: Pan15Tweet2Vec.java

Example 7: fetchCopyRatioMaxLikelihoodEstimateData

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
/**
 * Fetches the maximum likelihood estimate of copy ratios and their precisions from compute blocks.
 * The result is output as a pair of target-by-sample matrices.
 *
 * @param logScale if true, the max likelihood estimate is reported in natural log scale
 * @return a pair of {@link INDArray}
 */
private ImmutablePair<INDArray, INDArray> fetchCopyRatioMaxLikelihoodEstimateData(final boolean logScale) {
    final INDArray M_Psi_inv_st = fetchFromWorkers(CoverageModelEMComputeBlock.CoverageModelICGCacheNode.M_Psi_inv_st, 1);
    final INDArray log_n_st = fetchFromWorkers(CoverageModelEMComputeBlock.CoverageModelICGCacheNode.log_n_st, 1);
    final INDArray m_t = fetchFromWorkers(CoverageModelEMComputeBlock.CoverageModelICGCacheNode.m_t, 1);

    /* calculate the required quantities */
    final INDArray copyRatioMaxLikelihoodEstimate;
    if (biasCovariatesEnabled) {
        final INDArray Wz_st = fetchFromWorkers(CoverageModelEMComputeBlock.CoverageModelICGCacheNode.Wz_st, 1);
        copyRatioMaxLikelihoodEstimate = log_n_st.sub(Wz_st).subiRowVector(m_t).subiColumnVector(sampleMeanLogReadDepths);
    } else {
        copyRatioMaxLikelihoodEstimate = log_n_st.subRowVector(m_t).subiColumnVector(sampleMeanLogReadDepths);
    }

    if (!logScale) {
        Transforms.exp(copyRatioMaxLikelihoodEstimate, false);
    }

    return ImmutablePair.of(copyRatioMaxLikelihoodEstimate.transpose(), M_Psi_inv_st.transpose());
}
 
Developer: broadinstitute, Project: gatk-protected, Lines: 28, Source: CoverageModelEMWorkspace.java

Example 8: calculateBernoulli

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
private INDArray calculateBernoulli(INDArray minorityLabels, INDArray labelMask, double targetMinorityDist) {

        INDArray minorityClass = minorityLabels.dup().muli(labelMask);
        INDArray majorityClass = Transforms.not(minorityLabels).muli(labelMask);

        //all-minority-class case: keep the mask as is
        //if the minority class is present and donotMaskMinorityWindows is set, return the label mask as is
        if (majorityClass.sumNumber().intValue() == 0
                        || (minorityClass.sumNumber().intValue() > 0 && donotMaskMinorityWindows))
            return labelMask;
        //all-majority-class case: unless maskAllMajorityWindows is set, sample majority windows with probability 1-targetMinorityDist
        if (minorityClass.sumNumber().intValue() == 0 && !maskAllMajorityWindows)
            return labelMask.muli(1 - targetMinorityDist);

        //Probabilities to be used for bernoulli sampling
        INDArray minoritymajorityRatio = minorityClass.sum(1).div(majorityClass.sum(1));
        INDArray majorityBernoulliP = minoritymajorityRatio.muli(1 - targetMinorityDist).divi(targetMinorityDist);
        BooleanIndexing.replaceWhere(majorityBernoulliP, 1.0, Conditions.greaterThan(1.0)); //if minority ratio is already met round down to 1.0
        return majorityClass.muliColumnVector(majorityBernoulliP).addi(minorityClass);
    }
 
Developer: deeplearning4j, Project: nd4j, Lines: 21, Source: BaseUnderSamplingPreProcessor.java

Example 9: testJaccardDistance

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
@Test
public void testJaccardDistance(){
    Nd4j.getRandom().setSeed(12345);

    INDArray a = Nd4j.rand(new int[]{3,4}).addi(0.1);
    INDArray b = Nd4j.rand(new int[]{3,4}).addi(0.1);

    SameDiff sd = SameDiff.create();
    SDVariable in1 = sd.var("in1", a);
    SDVariable in2 = sd.var("in2", b);

    SDVariable jaccard = sd.jaccardDistance("out", in1, in2);

    INDArray min = Transforms.min(a,b);
    INDArray max = Transforms.max(a,b);

    double minSum = min.sumNumber().doubleValue();
    double maxSum = max.sumNumber().doubleValue();
    double jd = 1.0 - minSum / maxSum;

    INDArray out = sd.execAndEndResult();
    assertEquals(1, out.length());

    assertEquals(jd, out.getDouble(0), 1e-6);
}
 
Developer: deeplearning4j, Project: nd4j, Lines: 26, Source: SameDiffTests.java

Example 10: testAtanh

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
@Test
public void testAtanh(){
    //http://www.wolframalpha.com/input/?i=atanh(x)

    INDArray in = Nd4j.linspace(-0.9, 0.9, 10);
    INDArray out = Transforms.atanh(in, true);

    INDArray exp = Nd4j.create(in.shape());
    for( int i=0; i<10; i++ ){
        double x = in.getDouble(i);
        //Using "alternative form" from: http://www.wolframalpha.com/input/?i=atanh(x)
        double y = 0.5 * Math.log(x+1.0) - 0.5 * Math.log(1.0-x);
        exp.putScalar(i, y);
    }

    assertEquals(exp, out);
}
 
Developer: deeplearning4j, Project: nd4j, Lines: 18, Source: Nd4jTestsC.java

Example 11: testBruteForce4d

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
@Test
public void testBruteForce4d() {
    Construct4dDataSet imageDataSet = new Construct4dDataSet(10, 5, 10, 15);

    NormalizerStandardize myNormalizer = new NormalizerStandardize();
    myNormalizer.fit(imageDataSet.sampleDataSet);
    assertEquals(imageDataSet.expectedMean, myNormalizer.getMean());

    float aat = Transforms.abs(myNormalizer.getStd().div(imageDataSet.expectedStd).sub(1)).maxNumber().floatValue();
    float abt = myNormalizer.getStd().maxNumber().floatValue();
    float act = imageDataSet.expectedStd.maxNumber().floatValue();
    System.out.println("ValA: " + aat);
    System.out.println("ValB: " + abt);
    System.out.println("ValC: " + act);
    assertTrue(aat < 0.05);

    NormalizerMinMaxScaler myMinMaxScaler = new NormalizerMinMaxScaler();
    myMinMaxScaler.fit(imageDataSet.sampleDataSet);
    assertEquals(imageDataSet.expectedMin, myMinMaxScaler.getMin());
    assertEquals(imageDataSet.expectedMax, myMinMaxScaler.getMax());

    DataSet copyDataSet = imageDataSet.sampleDataSet.copy();
    myNormalizer.transform(copyDataSet);
}
 
Developer: deeplearning4j, Project: nd4j, Lines: 25, Source: PreProcessor3D4DTest.java

Example 12: testGivenMaxMinConstant

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
@Test
public void testGivenMaxMinConstant() {
    double tolerancePerc = 1; // 1% of correct value
    int nSamples = 500;
    int nFeatures = 3;

    INDArray featureSet = Nd4j.rand(nSamples, nFeatures).mul(0.1).add(10);
    INDArray labelSet = Nd4j.zeros(nSamples, 1);
    DataSet sampleDataSet = new DataSet(featureSet, labelSet);

    double givenMin = -1000;
    double givenMax = 1000;
    DataNormalization myNormalizer = new NormalizerMinMaxScaler(givenMin, givenMax);
    DataSet transformed = sampleDataSet.copy();

    myNormalizer.fit(sampleDataSet);
    myNormalizer.transform(transformed);

    //feature set is basically all 10s -> should transform to the min
    INDArray expected = Nd4j.ones(nSamples, nFeatures).mul(givenMin);
    INDArray delta = Transforms.abs(transformed.getFeatures().sub(expected)).div(expected);
    double maxdeltaPerc = delta.max(0, 1).mul(100).getDouble(0, 0);
    assertTrue(maxdeltaPerc < tolerancePerc);
}
 
Developer: deeplearning4j, Project: nd4j, Lines: 25, Source: NormalizerMinMaxScalerTest.java

Example 13: testRevert

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
@Test
public void testRevert() {
    double tolerancePerc = 0.01; // 0.01% of correct value
    int nSamples = 500;
    int nFeatures = 3;

    INDArray featureSet = Nd4j.randn(nSamples, nFeatures);
    INDArray labelSet = Nd4j.zeros(nSamples, 1);
    DataSet sampleDataSet = new DataSet(featureSet, labelSet);

    NormalizerStandardize myNormalizer = new NormalizerStandardize();
    myNormalizer.fit(sampleDataSet);
    DataSet transformed = sampleDataSet.copy();
    myNormalizer.transform(transformed);
    //System.out.println(transformed.getFeatures());
    myNormalizer.revert(transformed);
    //System.out.println(transformed.getFeatures());
    INDArray delta = Transforms.abs(transformed.getFeatures().sub(sampleDataSet.getFeatures()))
                    .div(sampleDataSet.getFeatures());
    double maxdeltaPerc = delta.max(0, 1).mul(100).getDouble(0, 0);
    assertTrue(maxdeltaPerc < tolerancePerc);
}
 
Developer: deeplearning4j, Project: nd4j, Lines: 23, Source: NormalizerStandardizeTest.java

Example 14: testItervsDataset

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
public float testItervsDataset(DataNormalization preProcessor) {
    DataSet dataCopy = data.copy();
    DataSetIterator dataIter = new TestDataSetIterator(dataCopy, batchSize);
    preProcessor.fit(dataCopy);
    preProcessor.transform(dataCopy);
    INDArray transformA = dataCopy.getFeatures();

    preProcessor.fit(dataIter);
    dataIter.setPreProcessor(preProcessor);
    DataSet next = dataIter.next();
    INDArray transformB = next.getFeatures();

    while (dataIter.hasNext()) {
        next = dataIter.next();
        INDArray transformb = next.getFeatures();
        transformB = Nd4j.vstack(transformB, transformb);
    }

    return Transforms.abs(transformB.div(transformA).rsub(1)).maxNumber().floatValue();
}
 
Developer: deeplearning4j, Project: nd4j, Lines: 21, Source: NormalizerTests.java

Example 15: TestReadWriteSepPrec

import org.nd4j.linalg.ops.transforms.Transforms; // import the required package/class
@Test
public void TestReadWriteSepPrec() {
    INDArray origArr = Nd4j.rand('c', 3, 3).muli(1000); //since we write with only a few decimal places of precision
    Nd4j.writeTxt(origArr, "someArrNew.txt", ":", 3);
    INDArray readBack = Nd4j.readTxt("someArrNew.txt", ":");
    System.out.println("=========================================================================");
    System.out.println(origArr);
    System.out.println("=========================================================================");
    System.out.println(readBack);
    Assert.isTrue(Transforms.abs(origArr.subi(readBack)).maxNumber().doubleValue() < 0.001);
    try {
        Files.delete(Paths.get("someArrNew.txt"));
    } catch (IOException e) {
        e.printStackTrace();
    }
}
 
Developer: deeplearning4j, Project: nd4j, Lines: 17, Source: TestNdArrReadWriteTxtOptC.java


Note: The org.nd4j.linalg.ops.transforms.Transforms examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are selected from open-source projects contributed by their original authors; copyright remains with those authors, and distribution and use are subject to each project's license. Please do not reproduce without permission.