

Python MLP.get_gradient method: code examples

This article collects typical usage examples of the Python method mlp.MLP.get_gradient. If you are wondering how MLP.get_gradient is used in practice, or what it looks like in real code, the curated examples below may help. You can also explore other usage examples of the containing class, mlp.MLP.


Below are 2 code examples of MLP.get_gradient, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Python code examples.

Example 1: open

# Required import: from mlp import MLP [as alias]
# Or: from mlp.MLP import get_gradient [as alias]
# (this snippet also assumes pickle and numpy are imported)
 with open('debug_nnet.pickle', 'rb') as f:  # pickle files must be opened in binary mode
   init_param = pickle.load(f)
 init_param = numpy.concatenate([i.flatten() for i in init_param])
 mlp.packParam(init_param)
 
 with open('debug_data.pickle', 'rb') as f:
   data = pickle.load(f)
 X = data[0]
 Y = data[1]
 
 with open('HJv.pickle', 'rb') as f:
   HJv_theano = pickle.load(f)
 num_param = numpy.sum(mlp.sizes)
 batch_size = 100
 
 grad, train_nll, train_error = mlp.get_gradient(X, Y, batch_size)
 
 
 d = 1.0 * numpy.ones((num_param,))
 col = mlp.get_Gv(X, Y, batch_size, d)
 # print('Some col:')
 # print(col)
 
 """  
 grad,train_nll,train_error=mlp.get_gradient(X,Y,2)
 
 v=numpy.zeros(num_param)
 mlp.forward(X)
 O = mlp.layers[-1].output
 S = mlp.layers[-1].linear_output
 #nll.append(mlp.Cost(Y))
Developer: lelouchmatlab, Project: convex-hf, Lines of code: 33, Source file: check_Gv.py
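The snippet above compares curvature-vector products against values precomputed with Theano; a common companion check is a finite-difference test of the gradient itself. The sketch below is self-contained and uses a toy quadratic cost in place of the MLP's negative log-likelihood (the `mlp` API above is project-specific), so it illustrates the checking technique rather than the project's exact calls.

```python
import numpy as np

def numeric_grad(f, theta, eps=1e-6):
    """Central-difference gradient of a scalar function f at theta."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return g

# Toy quadratic cost standing in for the network's NLL.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
cost = lambda th: 0.5 * th @ A @ th + b @ th
theta = np.array([0.5, -0.25])

analytic = A @ theta + b              # closed-form gradient of the quadratic
numeric = numeric_grad(cost, theta)   # finite-difference estimate
rel_err = np.linalg.norm(analytic - numeric) / np.linalg.norm(analytic + numeric)
```

A relative error on the order of `eps` or below indicates the analytic gradient (here, what `get_gradient` would return) matches the cost function.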

Example 2:

# Required import: from mlp import MLP [as alias]
# Or: from mlp.MLP import get_gradient [as alias]
# (this snippet also assumes numpy is imported)
   # Re-seeding before each shuffle permutes X and Y identically, keeping pairs aligned
   numpy.random.seed(18877)
   numpy.random.shuffle(train_cg_X)
   numpy.random.seed(18877)
   numpy.random.shuffle(train_cg_Y)
   
 train_cg_X_cur = train_cg_X[cg_chunk_index*cg_chunk_size:(cg_chunk_index+1)*cg_chunk_size, :]
 train_cg_Y_cur = train_cg_Y[cg_chunk_index*cg_chunk_size:(cg_chunk_index+1)*cg_chunk_size]
 
 cg_chunk_index = cg_chunk_index + 1
 
 nll = []
 error = []
 
 print("Iter: %d ..." % i, "Lambda: %f" % mlp._lambda)
 
 grad, train_nll, train_error = mlp.get_gradient(train_gradient_X, train_gradient_Y, batch_size)
 
 delta, next_init, after_cost = mlp.cg(-grad, train_cg_X_cur, train_cg_Y_cur, batch_size, next_init, 1)
 
 Gv = mlp.get_Gv(train_cg_X_cur, train_cg_Y_cur, batch_size, delta)
 
 # Predicted reduction of the local quadratic model: delta . (grad + 0.5 * G * delta)
 delta_cost = numpy.dot(delta, grad + 0.5*Gv)
 
 before_cost = mlp.quick_cost(numpy.zeros((num_param,)), train_cg_X_cur, train_cg_Y_cur, batch_size)
 
 # Residual of the damped system (G + lambda*I) * delta = -grad
 l2norm = numpy.linalg.norm(Gv + mlp._lambda*delta + grad)
 
 print("Residual Norm: ", l2norm)
 print('Before cost: %f, After cost: %f' % (before_cost, after_cost))
 param = mlp.flatParam() + delta
 
 
Developer: lelouchmatlab, Project: convex-hf, Lines of code: 32, Source file: test.py
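The loop above solves the damped system (G + λI)δ = −grad with conjugate gradient and then reports the residual norm and the quadratic-model reduction δ·(grad + 0.5·Gδ). A minimal self-contained sketch of that arithmetic, with an explicit 2×2 matrix as a stand-in for the project's Gauss-Newton matvec `mlp.get_Gv` (the matrix, `lam`, and `grad` here are illustrative, not from the project):

```python
import numpy as np

def cg_solve(Gv, grad, lam, n, iters=50):
    """Plain conjugate gradient for (G + lam*I) delta = -grad,
    where Gv is a callable computing the matrix-vector product G @ v."""
    delta = np.zeros(n)
    r = -grad - (Gv(delta) + lam * delta)  # initial residual
    p = r.copy()
    for _ in range(iters):
        Ap = Gv(p) + lam * p
        alpha = (r @ r) / (p @ Ap)
        delta += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < 1e-12:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return delta

G = np.array([[4.0, 1.0], [1.0, 3.0]])  # stand-in SPD curvature matrix
grad = np.array([1.0, 2.0])
lam = 0.1

delta = cg_solve(lambda v: G @ v, grad, lam, 2)

# Same two quantities the loop above prints and uses:
residual = np.linalg.norm(G @ delta + lam * delta + grad)
model_reduction = delta @ (grad + 0.5 * (G @ delta))
```

For a positive-definite G the residual goes to zero (CG converges in at most n steps on an n×n system) and the model reduction is negative, i.e. the quadratic model predicts the step decreases the cost.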


Note: The mlp.MLP.get_gradient examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub/MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors, and any distribution or use should follow the corresponding project's license. Do not reproduce without permission.