

Java MatrixOps.substitute Method Code Examples

This article collects typical usage examples of the Java method cc.mallet.types.MatrixOps.substitute. If you are wondering how MatrixOps.substitute is used in practice, or looking for concrete examples of calling it, the curated code samples below should help. You can also explore further usage examples of the enclosing class, cc.mallet.types.MatrixOps.


Four code examples of the MatrixOps.substitute method are shown below, sorted by popularity by default.
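Based on how the method is called in the examples below, MatrixOps.substitute(m, find, replace) scans the double array m and replaces every element equal to find with replace, in place. The following minimal sketch (the class name SubstituteDemo and the sample values are illustrative, not taken from the original examples) shows the typical call used to clean -infinity out of a gradient vector:

import cc.mallet.types.MatrixOps;

public class SubstituteDemo {
	public static void main (String[] args) {
		// A gradient-like vector in which one entry has been forced to -infinity.
		double[] gradient = { 0.5, Double.NEGATIVE_INFINITY, -1.2 };

		// Replace every element equal to -infinity with 0.0, in place,
		// so that later operations such as norm() stay finite.
		MatrixOps.substitute (gradient, Double.NEGATIVE_INFINITY, 0.0);

		// Expected output, assuming in-place substitution: [0.5, 0.0, -1.2]
		System.out.println (java.util.Arrays.toString (gradient));
	}
}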

Example 1: getValueGradient

import cc.mallet.types.MatrixOps; // import the class that provides the method
public void getValueGradient (double [] buffer)
{
	// Gradient is (constraint - expectation - parameters/gaussianPriorVariance)
	if (cachedGradientStale) {
		if (cachedValueStale)
			// This will fill in the cachedGradient with the "-expectation"
			getValue ();
		MatrixOps.plusEquals (cachedGradient, constraints);
		// Incorporate prior on parameters
		MatrixOps.plusEquals (cachedGradient, parameters,	-1.0 / gaussianPriorVariance);
		
		// A parameter may be set to -infinity by an external user.
		// We set gradient to 0 because the parameter's value can
		// never change anyway and it will mess up future calculations
		// on the matrix, such as norm().
		MatrixOps.substitute (cachedGradient, Double.NEGATIVE_INFINITY, 0.0);
		// Set to zero all the gradient dimensions that are not among the selected features
		if (perLabelFeatureSelection == null) {
			for (int labelIndex = 0; labelIndex < numLabels; labelIndex++)
				MatrixOps.rowSetAll (cachedGradient, numFeatures,
														 labelIndex, 0.0, featureSelection, false);
		} else {
			for (int labelIndex = 0; labelIndex < numLabels; labelIndex++)
				MatrixOps.rowSetAll (cachedGradient, numFeatures,
														 labelIndex, 0.0,
														 perLabelFeatureSelection[labelIndex], false);
		}
		cachedGradientStale = false;
	}
	assert (buffer != null && buffer.length == parameters.length);
	System.arraycopy (cachedGradient, 0, buffer, 0, cachedGradient.length);
}
 
Developer ID: kostagiolasn, Project: NucleosomePatternClassifier, Lines: 33, Source file: RankMaxEntTrainer.java
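All four examples implement the same gradient of a maximum-entropy objective with a Gaussian prior on the parameters. Reading the comments in the code above (this summary is an interpretation of the snippets, not part of the original article), the quantity assembled in cachedGradient is

    gradient = constraints - expectations - parameters / gaussianPriorVariance

that is, empirical feature counts minus model-expected feature counts, minus the Gaussian-prior penalty term. MatrixOps.substitute is then used to zero out any gradient component whose parameter was pinned to -infinity, and the rowSetAll loops zero the dimensions that are not among the selected features.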

Example 2: getValueGradient

import cc.mallet.types.MatrixOps; // import the class that provides the method
public void getValueGradient (double [] buffer) {

		// Gradient is (constraint - expectation - parameters/gaussianPriorVariance)
		if (cachedGradientStale) {
			numGetValueGradientCalls++;
			if (cachedValueStale)
				// This will fill in the cachedGradient with the "-expectation"
				getValue ();
			MatrixOps.plusEquals (cachedGradient, constraints);
			// Incorporate prior on parameters
			if (usingHyperbolicPrior) {
				throw new UnsupportedOperationException ("Hyperbolic prior not yet implemented.");
			}
			else if (usingGaussianPrior) {
				MatrixOps.plusEquals (cachedGradient, parameters,
									  -1.0 / gaussianPriorVariance);
			}

			// A parameter may be set to -infinity by an external user.
			// We set gradient to 0 because the parameter's value can
			// never change anyway and it will mess up future calculations
			// on the matrix, such as norm().
			MatrixOps.substitute (cachedGradient, Double.NEGATIVE_INFINITY, 0.0);
			// Set to zero all the gradient dimensions that are not among the selected features
			if (perLabelFeatureSelection == null) {
				for (int labelIndex = 0; labelIndex < numLabels; labelIndex++)
					MatrixOps.rowSetAll (cachedGradient, numFeatures,
							labelIndex, 0.0, featureSelection, false);
			} else {
				for (int labelIndex = 0; labelIndex < numLabels; labelIndex++)
					MatrixOps.rowSetAll (cachedGradient, numFeatures,
							labelIndex, 0.0,
							perLabelFeatureSelection[labelIndex], false);
			}
			cachedGradientStale = false;
		}
		assert (buffer != null && buffer.length == parameters.length);
		System.arraycopy (cachedGradient, 0, buffer, 0, cachedGradient.length);
		//System.out.println ("MaxEntTrainer gradient infinity norm = "+MatrixOps.infinityNorm(cachedGradient));
	}
 
Developer ID: kostagiolasn, Project: NucleosomePatternClassifier, Lines: 41, Source file: MaxEntOptimizableByLabelLikelihood.java

Example 3: getValueGradient

import cc.mallet.types.MatrixOps; // import the class that provides the method
public void getValueGradient (double [] buffer)
{
	// Gradient is (constraint - expectation - parameters/gaussianPriorVariance)
	if (cachedGradientStale) {
		numGetValueGradientCalls++;
		if (cachedValueStale)
			// This will fill in the cachedGradient with the "-expectation"
			getValue ();
		MatrixOps.plusEquals (cachedGradient, constraints);
		// Incorporate prior on parameters
		MatrixOps.plusEquals (cachedGradient, parameters,
							  -1.0 / gaussianPriorVariance);

		// A parameter may be set to -infinity by an external user.
		// We set gradient to 0 because the parameter's value can
		// never change anyway and it will mess up future calculations
		// on the matrix, such as norm().
		MatrixOps.substitute (cachedGradient, Double.NEGATIVE_INFINITY, 0.0);

		// Set to zero all the gradient dimensions that are not among the selected features
		if (perLabelFeatureSelection == null) {
			for (int labelIndex = 0; labelIndex < numLabels; labelIndex++)
				MatrixOps.rowSetAll (cachedGradient, numFeatures,
						labelIndex, 0.0, featureSelection, false);
		} else {
			for (int labelIndex = 0; labelIndex < numLabels; labelIndex++)
				MatrixOps.rowSetAll (cachedGradient, numFeatures,
						labelIndex, 0.0,
						perLabelFeatureSelection[labelIndex], false);
		}
		cachedGradientStale = false;
	}
	assert (buffer != null && buffer.length == parameters.length);
	System.arraycopy (cachedGradient, 0, buffer, 0, cachedGradient.length);
	//System.out.println ("MaxEntTrainer gradient infinity norm = "+MatrixOps.infinityNorm(cachedGradient));
}
 
Developer ID: kostagiolasn, Project: NucleosomePatternClassifier, Lines: 37, Source file: MaxEntOptimizableByLabelDistribution.java

Example 4: getValueGradient

import cc.mallet.types.MatrixOps; // import the class that provides the method
public void getValueGradient (double [] buffer)
{
	// Gradient is (constraint - expectation - parameters/gaussianPriorVariance)
	if (cachedGradientStale) {
		numGetValueGradientCalls++;
		if (cachedValueStale)
		// This will fill in the cachedGradient with the "-expectation"
			getValue ();
		// cachedGradient contains the negative expectations
		// expectations are model expectations and constraints are
		// empirical expectations
		MatrixOps.plusEquals (cachedGradient, constraints);
		// CPAL - we need a second copy of the constraints
		//      - actually, we only want this for the feature values
		//      - I've moved this up into getValue
		//if (usingMultiConditionalTraining){
		//    MatrixOps.plusEquals(cachedGradient, constraints);
		//}
		// Incorporate prior on parameters
		if (usingHyperbolicPrior) {
			throw new UnsupportedOperationException ("Hyperbolic prior not yet implemented.");
		}
		else {
			MatrixOps.plusEquals (cachedGradient, parameters,
			                      -1.0 / gaussianPriorVariance);
		}

		// A parameter may be set to -infinity by an external user.
		// We set gradient to 0 because the parameter's value can
		// never change anyway and it will mess up future calculations
		// on the matrix, such as norm().
		MatrixOps.substitute (cachedGradient, Double.NEGATIVE_INFINITY, 0.0);
		// Set to zero all the gradient dimensions that are not among the selected features
		if (perLabelFeatureSelection == null) {
			for (int labelIndex = 0; labelIndex < numLabels; labelIndex++)
				MatrixOps.rowSetAll (cachedGradient, numFeatures,
				                     labelIndex, 0.0, featureSelection, false);
		} else {
			for (int labelIndex = 0; labelIndex < numLabels; labelIndex++)
				MatrixOps.rowSetAll (cachedGradient, numFeatures,
				                     labelIndex, 0.0,
				                     perLabelFeatureSelection[labelIndex], false);
		}
		cachedGradientStale = false;
	}
	assert (buffer != null && buffer.length == parameters.length);
	System.arraycopy (cachedGradient, 0, buffer, 0, cachedGradient.length);
}
 
Developer ID: kostagiolasn, Project: NucleosomePatternClassifier, Lines: 49, Source file: MCMaxEntTrainer.java


Note: The cc.mallet.types.MatrixOps.substitute examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective authors; copyright of the source code remains with the original authors, and any distribution or use should follow the license of the corresponding project. Please do not reproduce without permission.