This article collects typical usage examples of the `maxiter` attribute of the Python class `openopt.NLP`. If you have been wondering what `NLP.maxiter` does or how to use it, the annotated example below may help; you can also consult the `openopt.NLP` class itself for further details.
One code example using `NLP.maxiter` is shown below.
Example 1: zeros
# Required import: from openopt import NLP
# Attribute under discussion: openopt.NLP.maxiter
# Commented-out user-supplied derivative of the nonlinear equality constraints h
# (the enclosing definition of DH was cut off in the original snippet; it is
# reconstructed here from the p.dh = DH assignment below):
# def DH(x):
#     r = zeros((2, p.n))
#     r[0, -1] = 1e4 * 4 * (x[-1] - 1)**3
#     r[1, -2] = 4 * (x[-2] - 1.5)**3
#     return r
# p.dh = DH
# p.dh = [chisq_grad, pos_sum_grad]
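The commented-out `DH` above supplies an analytic Jacobian for the constraints. Since OpenOpt is unmaintained (Python 2 era), here is a hedged sketch of the same pattern with SciPy's `NonlinearConstraint`; the constraint functions below are simplified stand-ins of my own, not the original example's.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def h(x):
    # two equality constraints: x[-1] == 1.0 and x[-2] == 1.5
    return np.array([x[-1] - 1.0, x[-2] - 1.5])

def dh(x):
    # analytic Jacobian of h, same shape convention as DH above:
    # (number of constraints, number of variables)
    r = np.zeros((2, x.size))
    r[0, -1] = 1.0
    r[1, -2] = 1.0
    return r

con = NonlinearConstraint(h, 0.0, 0.0, jac=dh)  # lb == ub == 0 -> equality
res = minimize(lambda x: np.sum(x**2), x0=np.ones(4),
               method='trust-constr', constraints=[con])
print(res.x)  # the last two components approach 1.5 and 1.0
```

Supplying `jac=` here corresponds to setting `p.dh` in OpenOpt: without it the solver falls back to finite differences.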
p.contol = 1e-2  # required constraints tolerance; default for NLP is 1e-6
# For the ALGENCAN solver, gradtol is the only OpenOpt-connected stopping criterion
# (besides maxfun and maxiter).
# Note that in ALGENCAN gradtol means the norm of the projected gradient of the
# Augmented Lagrangian, so a value around 1e-3...1e-5 is appropriate.
p.gradtol = 1e-3  # gradient stopping criterion; default for NLP is 1e-6
# print('maxiter', p.maxiter)
# print('maxfun', p.maxfun)
p.maxiter = 10
# p.maxfun = 100
# see also: help(NLP) -> maxTime, maxCPUTime, ftol and xtol,
# which are used by the lincher solver and some others
# optional: numerical check of the user-supplied derivatives
#p.checkdf()
#p.checkdc()
#p.checkdh()
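OpenOpt's `p.checkdf()` / `p.checkdc()` / `p.checkdh()` compare user-supplied derivatives against finite differences. A minimal sketch of the same check with SciPy's `check_grad`; the functions `f` and `df` here are illustrative assumptions, not from the original example.

```python
import numpy as np
from scipy.optimize import check_grad

def f(x):
    # illustrative objective: sum of quartics
    return float(np.sum((x - 1.0)**4))

def df(x):
    # its analytic gradient
    return 4.0 * (x - 1.0)**3

x0 = np.array([0.5, 1.5, 2.0])
err = check_grad(f, df, x0)  # 2-norm of (analytic - finite-difference) gradient
print(err)  # near zero when df is correct
```

A large `err` here plays the same warning role as a failed `checkdf()` report in OpenOpt.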
# last but not least:
# please don't forget,
# Python indexing starts from ZERO!!
p.plot = 0  # set to 1 to enable OpenOpt's convergence plotting
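For context, the same stopping-criterion knobs shown above (`maxiter`, a gradient tolerance) survive in SciPy's optimizer options. A hedged sketch with an objective of my own, not the one from this example:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # illustrative smooth objective
    return (x[0] - 1.0)**2 + (x[1] - 1.5)**4

# options['maxiter'] plays the role of p.maxiter,
# options['gtol'] roughly that of p.gradtol
res = minimize(objective, x0=np.zeros(2), method='BFGS',
               options={'maxiter': 10, 'gtol': 1e-3})
print(res.nit, res.fun)  # iteration count is capped at 10
```

As with `p.maxiter = 10` above, the run stops after at most 10 iterations even if neither tolerance has been met, which is useful for quick exploratory runs.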