This article collects typical usage examples of the Java method edu.stanford.nlp.math.ArrayMath.norm_inf. If you are wondering how to use ArrayMath.norm_inf in Java, or what it looks like in practice, the curated code example below may help. You can also explore the containing class, edu.stanford.nlp.math.ArrayMath, for further usage.
One code example of the ArrayMath.norm_inf method is shown below.
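Before the full example, it may help to see what norm_inf computes: the infinity norm of a vector, i.e. the largest absolute value of any element. Below is a minimal plain-Java sketch of that semantics (the class name and helper methods here are illustrative stand-ins, not the Stanford NLP implementation):

```java
public class NormInfSketch {
    // Infinity norm: the largest absolute value of any element.
    static double normInf(double[] a) {
        double max = 0.0;
        for (double x : a) {
            max = Math.max(max, Math.abs(x));
        }
        return max;
    }

    // Element-wise difference, mirroring what ArrayMath.pairwiseSubtract returns.
    static double[] pairwiseSubtract(double[] a, double[] b) {
        double[] out = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = a[i] - b[i];
        }
        return out;
    }

    public static void main(String[] args) {
        double[] hv   = {1.0, -3.5, 2.0};
        double[] hvFd = {1.1, -3.0, 2.0};
        // Largest element-wise disagreement between the two vectors:
        System.out.println(normInf(pairwiseSubtract(hv, hvFd))); // prints 0.5
    }
}
```

This `norm_inf(pairwiseSubtract(a, b))` pattern is exactly how the example below measures the worst-case disagreement between two gradient estimates.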
Example 1: testDerivatives
import edu.stanford.nlp.math.ArrayMath; // import the package/class this method depends on
/**
 * Tests that the sum of the stochastically calculated gradients equals the
 * full gradient. This requires ordered sampling, so if the ObjectiveFunction
 * itself randomizes the inputs, this test will likely fail.
 *
 * @param x the point at which to evaluate the function
 * @param functionTolerance the tolerance placed on the infinity norm of the gradient and value
 * @return boolean indicating success or failure
 */
public boolean testDerivatives(double[] x, double functionTolerance) {
  boolean ret = false;
  boolean compareHess = true;
  System.err.println("Making sure that the stochastic derivatives are ok.");
  AbstractStochasticCachingDiffFunction.SamplingMethod tmpSampleMethod = thisFunc.sampleMethod;
  StochasticCalculateMethods tmpMethod = thisFunc.method;
  // Make sure that our function is using ordered sampling. Otherwise we have no guarantees.
  thisFunc.sampleMethod = AbstractStochasticCachingDiffFunction.SamplingMethod.Ordered;
  if (thisFunc.method == StochasticCalculateMethods.NoneSpecified) {
    System.err.println("No calculate method has been specified");
  } else if (!thisFunc.method.calculatesHessianVectorProduct()) {
    compareHess = false;
  }
  approxValue = 0;
  approxGrad = new double[x.length];
  curGrad = new double[x.length];
  Hv = new double[x.length];
  double percent = 0.0;
  // This loop runs through all the batches and sums the calculations to compare against the full gradient
  for (int i = 0; i < numBatches; i++) {
    percent = 100 * ((double) i) / numBatches;
    System.err.printf("%5.1f percent complete%n", percent);
    // Update the "hopefully" correct Hessian-vector product
    thisFunc.method = tmpMethod;
    System.arraycopy(thisFunc.HdotVAt(x, v, testBatchSize), 0, Hv, 0, Hv.length);
    // Now get the Hessian-vector product through finite differences
    thisFunc.method = StochasticCalculateMethods.ExternalFiniteDifference;
    System.arraycopy(thisFunc.derivativeAt(x, v, testBatchSize), 0, gradFD, 0, gradFD.length);
    thisFunc.recalculatePrevBatch = true;
    System.arraycopy(thisFunc.HdotVAt(x, v, gradFD, testBatchSize), 0, HvFD, 0, HvFD.length);
    // Compare the difference using the infinity norm
    double diffHv = ArrayMath.norm_inf(ArrayMath.pairwiseSubtract(Hv, HvFD));
    // Keep track of the biggest H.v error
    if (diffHv > maxHvDiff) { maxHvDiff = diffHv; }
  }
  if (maxHvDiff < functionTolerance) {
    sayln("");
    sayln("Success: Hessian approximations lined up");
    ret = true;
  } else {
    sayln("");
    sayln("Failure: Hessian approximation at some point was off by " + maxHvDiff);
    ret = false;
  }
  thisFunc.sampleMethod = tmpSampleMethod;
  thisFunc.method = tmpMethod;
  return ret;
}
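The loop above compares an analytically computed Hessian-vector product H·v against one obtained by finite differences on the gradient: H·v ≈ (∇f(x + ε·v) − ∇f(x)) / ε. Below is a minimal, self-contained sketch of that idea on a toy quadratic objective (the class, objective, and step size here are illustrative assumptions, not part of Stanford NLP):

```java
public class HvFiniteDifferenceSketch {
    // Gradient of the toy objective f(x) = x0^2 + 2*x1^2, i.e. ∇f = (2*x0, 4*x1).
    static double[] grad(double[] x) {
        return new double[]{2 * x[0], 4 * x[1]};
    }

    // Finite-difference Hessian-vector product: (∇f(x + eps*v) - ∇f(x)) / eps.
    static double[] hDotVFiniteDiff(double[] x, double[] v, double eps) {
        double[] xPlus = {x[0] + eps * v[0], x[1] + eps * v[1]};
        double[] g0 = grad(x);
        double[] g1 = grad(xPlus);
        return new double[]{(g1[0] - g0[0]) / eps, (g1[1] - g0[1]) / eps};
    }

    public static void main(String[] args) {
        double[] x = {1.0, -2.0};
        double[] v = {0.5, 1.0};
        // Exact H.v for this objective: H = diag(2, 4), so H.v = (1.0, 4.0).
        double[] exact = {2 * v[0], 4 * v[1]};
        double[] fd = hDotVFiniteDiff(x, v, 1e-6);
        // Infinity norm of the difference, as in the tester above.
        double diff = Math.max(Math.abs(fd[0] - exact[0]), Math.abs(fd[1] - exact[1]));
        System.out.println(diff < 1e-4 ? "Success: Hessian approximations lined up"
                                       : "Failure: off by " + diff);
    }
}
```

For a quadratic objective the finite-difference product matches the exact H·v up to rounding error, which is why the tester can hold the two against a small `functionTolerance`.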