

Java Optimizer.optimize Method Code Examples

This article collects typical usage examples of the Java method cc.mallet.optimize.Optimizer.optimize. If you are wondering what Optimizer.optimize does, how to call it, or where to find usage examples, the curated code samples below may help. You can also explore further usage examples of cc.mallet.optimize.Optimizer.


Three code examples of the Optimizer.optimize method are shown below, drawn from open-source projects and sorted by popularity by default.
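Before the project examples, here is a minimal sketch of the call pattern they all share, assuming MALLET is on the classpath: implement `Optimizable.ByGradientValue` for the objective, wrap it in `LimitedMemoryBFGS`, and call `optimize()`. Note that MALLET optimizers *maximize* the value returned by `getValue()`. The `Quadratic` objective here (maximizing f(x) = -(x - 3)^2, optimum at x = 3) is an illustrative toy, not part of any of the projects below.

```java
import cc.mallet.optimize.LimitedMemoryBFGS;
import cc.mallet.optimize.Optimizable;
import cc.mallet.optimize.Optimizer;

public class OptimizeSketch {

    // Toy objective: maximize f(x) = -(x - 3)^2, whose maximum is at x = 3.
    static class Quadratic implements Optimizable.ByGradientValue {
        private double[] params = new double[] { 0.0 };
        public int getNumParameters() { return 1; }
        public void getParameters(double[] buf) { buf[0] = params[0]; }
        public double getParameter(int i) { return params[i]; }
        public void setParameters(double[] p) { params[0] = p[0]; }
        public void setParameter(int i, double v) { params[i] = v; }
        public double getValue() { return -(params[0] - 3.0) * (params[0] - 3.0); }
        public void getValueGradient(double[] g) { g[0] = -2.0 * (params[0] - 3.0); }
    }

    public static void main(String[] args) {
        Quadratic objective = new Quadratic();
        Optimizer optimizer = new LimitedMemoryBFGS(objective);
        boolean converged = false;
        try {
            converged = optimizer.optimize();
        } catch (Exception ex) {
            // L-BFGS may throw when it cannot step further;
            // this does not necessarily mean the result is unusable.
        }
        System.out.println("converged=" + converged + " x=" + objective.getParameter(0));
    }
}
```

The try/catch around `optimize()` mirrors what the real examples below do: MALLET's L-BFGS throws when the line search cannot make progress, which is often a benign stopping condition rather than a failure.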

Example 1: train

import cc.mallet.optimize.Optimizer; // import the package/class this method depends on
public MCMaxEnt train (InstanceList trainingSet)
	{
		logger.fine ("trainingSet.size() = "+trainingSet.size());
		mt = new MaximizableTrainer (trainingSet, (MCMaxEnt)initialClassifier);
		Optimizer maximizer = new LimitedMemoryBFGS(mt);
		// CPAL - change the tolerance for large vocab experiments
		((LimitedMemoryBFGS)maximizer).setTolerance(.00001);    // std is .0001;
		maximizer.optimize (); // XXX given the loop below, this seems wrong.

		logger.info("MCMaxEnt ngetValueCalls:"+getValueCalls()+"\nMCMaxEnt ngetValueGradientCalls:"+getValueGradientCalls());
//		boolean converged;
//
//	 	for (int i = 0; i < numIterations; i++) {
//			converged = maximizer.maximize (mt, 1);
//			if (converged)
//			 	break;
//			else if (evaluator != null)
//			 	if (!evaluator.evaluate (mt.getClassifier(), converged, i, mt.getValue(),
//				 												 trainingSet, validationSet, testSet))
//				 	break;
//		}
//		TestMaximizable.testValueAndGradient (mt);
		progressLogger.info("\n"); // progress messages are on one line; move on.
		return mt.getClassifier ();
	}
 
Author: kostagiolasn | Project: NucleosomePatternClassifier | Lines: 26 | Source file: MCMaxEntTrainer.java

Example 2: forward

import cc.mallet.optimize.Optimizer; // import the package/class this method depends on
public void forward() {
    // initialize first state
    double[] mean0 = new double[dimension];
    Arrays.fill(mean0, 0.0);
    double[] var0 = new double[dimension];
    Arrays.fill(var0, sigma);

    double[] preMean = mean0;
    double[] preVar = var0;

    int numConverged = 0;
    for (int t = 0; t < this.T; t++) {
        StateObjective objective = new StateObjective(preMean, preVar, states[t].getObservations());
        Optimizer optimizer = new LimitedMemoryBFGS(objective);
        boolean converged = false;
        try {
            converged = optimizer.optimize();
        } catch (Exception ex) {
            // This exception may be thrown if L-BFGS
            //  cannot step in the current direction.
            // This condition does not necessarily mean that
            //  the optimizer has failed, but it doesn't want
            //  to claim to have succeeded... 
            // do nothing
        }

        if (converged) {
            numConverged++;
        }

        for (int i = 0; i < dimension; i++) {
            states[t].setMean(i, objective.getParameter(i));
        }

        // compute diagonal approximation of the Hessian
        double[] exps = new double[dimension];
        double sumExp = 0.0;
        for (int i = 0; i < dimension; i++) {
            exps[i] = Math.exp(states[t].getMean(i));
            sumExp += exps[i];
        }

        for (int i = 0; i < dimension; i++) {
            double prob = exps[i] / sumExp;
            double negHess =
                    1.0 / preVar[i]
                    + states[t].getCountSum() * prob * (1 - prob);
            states[t].setVariance(i, 1.0 / negHess);
        }

        // update 
        for (int i = 0; i < dimension; i++) {
            preMean[i] = states[t].getMean(i);
            preVar[i] = states[t].getVariance(i) + this.sigmaSquare;
        }

        System.out.println("State " + t + ". " + converged);
        System.out.println("Mean:\t" + MiscUtils.arrayToString(states[t].getMean()));
        System.out.println("Var:\t" + MiscUtils.arrayToString(states[t].getVariance()));
        System.out.println("Dist:\t" + MiscUtils.arrayToString(states[t].getLogisticNormalDistribution()));
        System.out.println("True:\t" + MiscUtils.arrayToString(trueDist[t]));

        System.out.println("Obs:\t" + MiscUtils.arrayToString(observations[t]));
        System.out.println();
    }
    System.out.println("# converged = " + numConverged);
}
 
Author: vietansegan | Project: segan | Lines: 68 | Source file: StateSpaceModel.java
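The variance step in Example 2 is a diagonal Laplace approximation: softmax probabilities are computed from the optimized mean, and the per-dimension posterior variance is the reciprocal of the negative Hessian diagonal, 1/preVar[i] + N·p(1−p). The following standalone sketch isolates just that computation (the method and class names are illustrative, not from the project):

```java
public class LaplaceStep {

    // Diagonal Laplace approximation: given the optimized mean vector, the
    // prior variances, and the observation count N, compute softmax
    // probabilities and return the per-dimension variance 1 / negHess.
    static double[] diagonalVariance(double[] mean, double[] preVar, int countSum) {
        int d = mean.length;
        double[] exps = new double[d];
        double sumExp = 0.0;
        for (int i = 0; i < d; i++) {
            exps[i] = Math.exp(mean[i]);
            sumExp += exps[i];
        }
        double[] var = new double[d];
        for (int i = 0; i < d; i++) {
            double prob = exps[i] / sumExp;
            double negHess = 1.0 / preVar[i] + countSum * prob * (1.0 - prob);
            var[i] = 1.0 / negHess;
        }
        return var;
    }

    public static void main(String[] args) {
        // With mean = [0, 0], preVar = [1, 1], N = 4: each prob = 0.5,
        // negHess = 1/1 + 4 * 0.25 = 2, so each variance is 0.5.
        double[] var = diagonalVariance(new double[] {0.0, 0.0},
                                        new double[] {1.0, 1.0}, 4);
        System.out.println(var[0] + " " + var[1]); // 0.5 0.5
    }
}
```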

Example 3: forwardSingleChain

import cc.mallet.optimize.Optimizer; // import the package/class this method depends on
    /**
     * Perform forward filtering for a single chain of nodes.
     *
     * @param nodes The chain
     */
    private void forwardSingleChain(ArrayList<DNCRPNode> nodes,
            double[] priorMean, double[] priorVar) {
        if (debug) {
            logln("--- forward filtering chain: " + nodes.toString());
        }

//        double[] preMean = zeros.clone();
//        double[] preVar = sigmaSquares.clone();
        double[] preMean = priorMean.clone();
        double[] preVar = priorVar.clone();

        int numConverged = 0;
        for (int t = 0; t < nodes.size(); t++) {
            StateObjective objective = new StateObjective(preMean, preVar, nodes.get(t).getContent().getSparseCounts());
            Optimizer optimizer = new LimitedMemoryBFGS(objective);
            boolean converged = false;
            try {
                converged = optimizer.optimize();
            } catch (Exception ex) {
                // This exception may be thrown if L-BFGS
                //  cannot step in the current direction.
                // This condition does not necessarily mean that
                //  the optimizer has failed, but it doesn't want
                //  to claim to have succeeded... 
                // do nothing
            }

            if (converged) {
                numConverged++;
            }

            for (int i = 0; i < V; i++) {
                nodes.get(t).getContent().setMean(i, objective.getParameter(i));
            }

            // compute diagonal approximation of the Hessian
            double[] exps = new double[V];
            double sumExp = 0.0;
            for (int i = 0; i < V; i++) {
                exps[i] = Math.exp(nodes.get(t).getContent().getMean(i));
                sumExp += exps[i];
            }

            for (int i = 0; i < V; i++) {
                double prob = exps[i] / sumExp;
                double negHess =
                        1.0 / preVar[i]
                        + nodes.get(t).getContent().getCountSum() * prob * (1 - prob);
                nodes.get(t).getContent().setVariance(i, 1.0 / negHess);

//                logln("i = " + i 
//                        + ". exps = " + MiscUtils.formatDouble(exps[i])
//                        + ". preVar = " + MiscUtils.formatDouble(preVar[i])
//                        + ". prob = " + MiscUtils.formatDouble(prob)
//                        + ". negH = " + MiscUtils.formatDouble(negHess)
//                        + ". ---> " + MiscUtils.formatDouble(1.0 / negHess));
            }

            // debug
//            logln("---> node: " + nodes.get(t).toString());
//            logln(MiscUtils.arrayToString(nodes.get(t).getContent().getVariance()) + "\n");

            // update 
            nodes.get(t).getContent().updateDistribution();
            for (int i = 0; i < V; i++) {
                preMean[i] = nodes.get(t).getContent().getMean(i);
                preVar[i] = nodes.get(t).getContent().getVariance(i) + sigmaSquares[0];
            }
        }
    }
 
Author: vietansegan | Project: segan | Lines: 76 | Source file: DHLDASampler.java
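Examples 2 and 3 chain successive states the same way: after filtering one state, the next state's prior mean is the current filtered mean, and its prior variance is the filtered variance plus the transition noise σ². A small sketch of just that propagation step (names are illustrative, not from either project):

```java
public class PriorPropagation {

    // Carry the filtered posterior of one state forward as the prior of the
    // next: prior mean = filtered mean; prior variance = filtered variance
    // plus the transition noise sigma^2.
    static void propagate(double[] preMean, double[] preVar,
                          double[] filteredMean, double[] filteredVar,
                          double sigmaSquare) {
        for (int i = 0; i < preMean.length; i++) {
            preMean[i] = filteredMean[i];
            preVar[i] = filteredVar[i] + sigmaSquare;
        }
    }

    public static void main(String[] args) {
        double[] preMean = new double[2];
        double[] preVar = new double[2];
        propagate(preMean, preVar,
                  new double[] {1.0, -1.0}, new double[] {0.5, 0.25}, 0.1);
        System.out.println(preMean[0] + " " + preVar[1]); // 1.0 0.35
    }
}
```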


Note: The cc.mallet.optimize.Optimizer.optimize examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright in the source code remains with the original authors. Please refer to each project's License before redistributing or using the code; do not reproduce without permission.