This article collects typical usage examples of the Python method openopt.NLP.dh. If you are wondering how to use NLP.dh, or looking for concrete examples of it in practice, the curated code samples below may help. You can also explore further usage of its containing class, openopt.NLP.
Three code examples of NLP.dh are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Python code samples.
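For context before the examples: in openopt, p.h holds the nonlinear equality constraints h(x) = 0 and p.dh their derivatives. When dh is a single function (as in Examples 2 and 3), it returns the m×n Jacobian whose row i is the gradient of constraint i. A pure-NumPy illustration of that shape convention, independent of openopt (the dimension n = 5 is an arbitrary choice for the sketch):

```python
import numpy as np

n = 5  # assumed problem dimension (arbitrary for this sketch)

def h(x):
    # two nonlinear equality constraints, as in Example 3 below
    return np.array([1e4 * (x[-1] - 1)**4, (x[-2] - 1.5)**4])

def dh(x):
    # Jacobian of h: row i is the gradient of constraint i, shape (m, n)
    r = np.zeros((2, n))
    r[0, -1] = 1e4 * 4 * (x[-1] - 1)**3
    r[1, -2] = 4 * (x[-2] - 1.5)**3
    return r

x = np.full(n, 0.5)
print(dh(x).shape)  # (2, 5): only the constrained variables get nonzero entries
```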
Example 1:
# Required import: from openopt import NLP [as alias]
# Or: from openopt.NLP import dh [as alias]
# print('maxfun', p.maxfun)
p.maxIter = 50
# p.maxfun = 100
# p.df_iter = 50
p.maxTime = 4000
h_args = (h, k, l, fq, fqerr, x, z, cosmat_list, coslist, flist)
if 0:
    p.h = [pos_sum, neg_sum]
    p.c = [chisq]
    p.args.h = h_args
    p.args.c = h_args
    p.dh = [pos_sum_grad, neg_sum_grad]
    p.df = chisq_grad
if 1:
    # p.h=[pos_sum,neg_sum,chisq]
    p.c = [chisq]
    p.h = [pos_sum, neg_sum]
    p.args.h = h_args
    p.args.c = h_args
    p.dh = [pos_sum_grad, neg_sum_grad]
    p.dc = chisq_grad
    # p.dh=[pos_sum_grad,neg_sum_grad,neg_sum_grad]
    p.df = S_grad
if 0:
    print('checking')
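Example 1 assigns p.h a list of scalar constraint functions and p.dh the matching list of their gradient callables; pos_sum, neg_sum, chisq and their gradients are defined elsewhere in that script. A minimal self-contained sketch of that list-based pairing, using hypothetical stand-in constraints (not the originals):

```python
import numpy as np

# hypothetical stand-ins for pos_sum / neg_sum from Example 1
def pos_sum(x):
    return np.sum(x) - 1.0      # constraint: sum(x) - 1 == 0

def pos_sum_grad(x):
    return np.ones_like(x)      # gradient of sum(x) - 1

def neg_sum(x):
    return np.sum(x**2) - 2.0   # constraint: ||x||^2 - 2 == 0

def neg_sum_grad(x):
    return 2 * x                # gradient of sum(x**2) - 2

# the list-based pairing used with p.h / p.dh:
# entry i of dh must be the gradient of entry i of h
h = [pos_sum, neg_sum]
dh = [pos_sum_grad, neg_sum_grad]

x = np.array([0.5, 0.5, 1.0])
print([f(x) for f in h])  # [1.0, -0.5]
```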
Example 2: dc
# Required import: from openopt import NLP [as alias]
# Or: from openopt.NLP import dh [as alias]
def dc(x):
    r = zeros((2, p.n))
    r[0, 0] = 2 * 4 * x[0]**3
    r[1, 1] = 2 * x[1]
    r[1, 2] = 2 * x[2] + 15  # incorrect derivative
    return r
p.dc = dc

p.h = lambda x: (1e1*(x[-1]-1)**4, (x[-2]-1.5)**4)

def dh(x):
    r = zeros((2, p.n))
    r[0, -1] = 1e1 * 4 * (x[-1]-1)**3
    r[1, -2] = 4 * (x[-2]-1.5)**3 + 15  # incorrect derivative
    return r
p.dh = dh

p.checkdf()
p.checkdc()
p.checkdh()
"""
you can use p.checkdF(x) for other point than x0 (F is f, c or h)
p.checkdc(myX)
or
p.checkdc(x=myX)
values with difference greater than
maxViolation (default 1e-5)
will be shown
p.checkdh(maxViolation=1e-4)
p.checkdh(myX, maxViolation=1e-4)
p.checkdh(x=myX, maxViolation=1e-4)
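The derivative checkers work by comparing the user-supplied Jacobian against a finite-difference estimate and reporting entries whose discrepancy exceeds maxViolation. A pure-NumPy sketch of that underlying idea (not openopt's actual implementation), applied to the deliberately broken dh from Example 2:

```python
import numpy as np

def h(x):
    # equality constraints from Example 2
    return np.array([1e1 * (x[-1] - 1)**4, (x[-2] - 1.5)**4])

def dh_wrong(x):
    # analytic Jacobian with a deliberate error in one entry
    r = np.zeros((2, len(x)))
    r[0, -1] = 1e1 * 4 * (x[-1] - 1)**3
    r[1, -2] = 4 * (x[-2] - 1.5)**3 + 15  # incorrect derivative
    return r

def check_dh(h, dh, x, max_violation=1e-5, eps=1e-7):
    """Return indices of Jacobian entries that disagree with central differences."""
    analytic = dh(x)
    numeric = np.zeros_like(analytic)
    for j in range(len(x)):
        xp = x.copy(); xp[j] += eps
        xm = x.copy(); xm[j] -= eps
        numeric[:, j] = (h(xp) - h(xm)) / (2 * eps)
    bad = np.argwhere(np.abs(analytic - numeric) > max_violation)
    return [(int(i), int(j)) for i, j in bad]

x0 = np.array([0.5, 0.5, 0.5, 0.8, 1.2])
violations = check_dh(h, dh_wrong, x0, max_violation=1e-3)
print(violations)  # [(1, 3)] -- only the entry that is off by 15 is flagged
```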
Example 3: h
# Required import: from openopt import NLP [as alias]
# Or: from openopt.NLP import dh [as alias]
    r[2, 35] = 2*x[35] + x[25]
    return r
p.dc = DC

# non-linear equality constraints h(x) = 0:
# 1e4*(x[last]-1)**4 = 0
# (x[last-1]-1.5)**4 = 0
p.h = lambda x: (1e4*(x[-1]-1)**4, (x[-2]-1.5)**4)

# dh(x)/dx: non-linear equality constraint gradients (optional):
def DH(x):
    r = zeros((2, p.n))
    r[0, -1] = 1e4 * 4 * (x[-1]-1)**3
    r[1, -2] = 4 * (x[-2]-1.5)**3
    return r
p.dh = DH

p.contol = 1e-3  # required constraints tolerance; default for NLP is 1e-6

# For the ALGENCAN solver, gtol is the only stop criterion connected to openopt
# (except maxfun, maxiter). Note that in ALGENCAN gtol means the norm of the
# projected gradient of the Augmented Lagrangian, so it should be something
# like 1e-3...1e-5.
p.gtol = 1e-5  # gradient stop criterion (default for NLP is 1e-6)

# See also: help(NLP) -> maxTime, maxCPUTime, ftol and xtol,
# which are connected to / used in lincher and some other solvers.

# optional: check of user-supplied derivatives
p.checkdf()