This article collects typical usage examples of the dc method of openopt.NLP in Python. If you are wondering what NLP.dc does, how to use it, and what calling it looks like in practice, the curated examples below may help. You can also read further about the enclosing class, openopt.NLP.
Three code examples of NLP.dc are shown below.
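For orientation before the examples (which are fragments of larger scripts), here is a minimal, self-contained sketch of where dc fits in: it supplies the gradients of the non-linear inequality constraints c(x) <= 0, one row per constraint and one column per variable. The objective, constraint, starting point, and solver choice below are illustrative assumptions, not taken from the examples.

from numpy import zeros
from openopt import NLP

f = lambda x: (x**2).sum()            # objective to minimize (assumed for illustration)
c = lambda x: [x[0]**2 + x[1] - 1]    # one inequality constraint, c(x) <= 0

def dc(x):
    r = zeros((1, x.size))            # one row per constraint, one column per variable
    r[0, 0] = 2*x[0]
    r[0, 1] = 1.0
    return r

p = NLP(f, x0=[0.5, 0.5], c=c, dc=dc)
r = p.solve('ralg')                   # 'ralg' is one of OpenOpt's NLP solvers
print(r.xf, r.ff)                     # solution vector and objective value at it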
Example 1:
# Required import: from openopt import NLP [as alias]
# or: from openopt.NLP import dc [as alias]
p.h = [pos_sum, neg_sum]               # non-linear equality constraints h(x) = 0
p.c = [chisq]                          # non-linear inequality constraint c(x) <= 0
# p.h = [pos_sum, neg_sum]
p.args.h = h_args                      # extra arguments passed to the h functions
p.args.c = h_args
p.dh = [pos_sum_grad, neg_sum_grad]    # user-supplied gradients of h
p.df = chisq_grad
if 1:
    # p.h = [pos_sum, neg_sum, chisq]
    p.c = [chisq]
    p.h = [pos_sum, neg_sum]
    p.args.h = h_args
    p.args.c = h_args
    p.dh = [pos_sum_grad, neg_sum_grad]
    p.dc = chisq_grad                  # gradient of the inequality constraint
    # p.dh = [pos_sum_grad, neg_sum_grad, neg_sum_grad]
    p.df = S_grad                      # gradient of the objective
if 0:
    print('checking')
    p.checkdf()                        # compare df against finite differences
    # p.checkdc()
    print('check equality constraints')
    p.checkdh()
    print('checking inequality')
    p.checkdc()
    sys.exit()
print('solving')
if 1:
    # r = p.solve('scipy_cobyla')
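Example 1 is cut off at the solve call; for completeness, the usual OpenOpt pattern (a sketch, with the solver name as an assumption) is:

r = p.solve('ralg')     # any installed NLP solver name can be passed here
print(r.xf, r.ff)       # optimal point and objective value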
Example 2: df
# Required import: from openopt import NLP [as alias]
# or: from openopt.NLP import dc [as alias]
from numpy import zeros   # assumed import; the fragment uses numpy's zeros
                          # (M and p are defined earlier in the original script)
def df(x):
    r = 2*(x - M)
    r[0] += 15            # deliberately incorrect derivative
    r[8] += 80            # deliberately incorrect derivative
    return r
p.df = df

p.c = lambda x: [2*x[0]**4 - 32, x[1]**2 + x[2]**2 - 8]   # inequality constraints c(x) <= 0

def dc(x):
    r = zeros((2, p.n))
    r[0, 0] = 2 * 4 * x[0]**3
    r[1, 1] = 2 * x[1]
    r[1, 2] = 2 * x[2] + 15           # deliberately incorrect derivative
    return r
p.dc = dc

p.h = lambda x: (1e1*(x[-1] - 1)**4, (x[-2] - 1.5)**4)    # equality constraints h(x) = 0

def dh(x):
    r = zeros((2, p.n))
    r[0, -1] = 1e1 * 4 * (x[-1] - 1)**3
    r[1, -2] = 4 * (x[-2] - 1.5)**3 + 15   # deliberately incorrect derivative
    return r
p.dh = dh

# the checks below compare the supplied derivatives against finite-difference
# estimates, exposing the errors planted above
p.checkdf()
p.checkdc()
p.checkdh()
"""
you can use p.checkdF(x) for other point than x0 (F is f, c or h)
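As that closing comment notes, each check accepts an optional point; a brief sketch, where x_test is any array of length p.n (chosen here arbitrarily):

from numpy import ones
x_test = 0.5 * ones(p.n)    # arbitrary test point of the right dimension
p.checkdf(x_test)           # compares df with finite differences at x_test instead of x0
p.checkdc(x_test)
p.checkdh(x_test)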
Example 3: c2
# Required import: from openopt import NLP [as alias]
# or: from openopt.NLP import dc [as alias]
# p.c = lambda x: numpy.array(c1(x), c2(x), c3(x))
# def c(x):
#     return c1(x), c2(x), c3(x)
# p.c = c

# dc(x)/dx: non-linear inequality constraint gradients (optional), one row per
# constraint (N, the problem dimension, is defined earlier in the original script):
def DC(x):
    r = zeros((3, N))
    r[0, 0] = 2 * 4 * x[0]**3
    r[1, 1] = 2 * x[1]
    r[1, 2] = 2 * x[2]
    r[2, 25] = 2*x[25] + x[35]    # the third constraint couples x[25] and x[35]
    r[2, 35] = 2*x[35] + x[25]
    return r
p.dc = DC

# non-linear equality constraints h(x) = 0:
#   1e4*(x[last]-1)**4 = 0
#   (x[last-1]-1.5)**4 = 0
p.h = lambda x: (1e4*(x[-1] - 1)**4, (x[-2] - 1.5)**4)

# dh(x)/dx: non-linear equality constraint gradients (optional):
def DH(x):
    r = zeros((2, p.n))
    r[0, -1] = 1e4 * 4 * (x[-1] - 1)**3
    r[1, -2] = 4 * (x[-2] - 1.5)**3
    return r
p.dh = DH

p.contol = 1e-3   # required constraint tolerance; the NLP default is 1e-6
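After solving, the reported point should satisfy the constraints to within p.contol; a sketch of verifying that for the equality constraints (the solver name is an assumption):

r = p.solve('ralg')
residuals = p.h(r.xf)                               # tuple of equality-constraint values
print(max(abs(v) for v in residuals) <= p.contol)   # True if within tolerance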