

Python MLP.flatParam Method Code Examples

This article collects typical usage examples of the Python method mlp.MLP.flatParam. If you are wondering what MLP.flatParam does, how to use it, or what a working example looks like, the code examples selected below may help. You can also explore further usage examples of its containing class, mlp.MLP.


The following shows 1 code example of the MLP.flatParam method, sorted by popularity by default. You can upvote examples you like or find useful; your ratings help the system recommend better Python code examples.

Example 1: xrange

# Required import: from mlp import MLP [as alias]
# Or: from mlp.MLP import flatParam [as alias]
# The snippet below also relies on: import numpy
    
    # Compute the mini-batch gradient along with the training loss and error.
    grad, train_nll, train_error = mlp.get_gradient(train_gradient_X, train_gradient_Y, batch_size)

    # Solve the damped Gauss-Newton system with conjugate gradient to get the update delta.
    delta, next_init, after_cost = mlp.cg(-grad, train_cg_X_cur, train_cg_Y_cur, batch_size, next_init, 1)

    # Gauss-Newton matrix-vector product G*delta for the proposed update.
    Gv = mlp.get_Gv(train_cg_X_cur, train_cg_Y_cur, batch_size, delta)

    # Predicted reduction under the local quadratic model: delta^T (grad + 0.5*G*delta).
    delta_cost = numpy.dot(delta, grad + 0.5*Gv)

    # Cost at the current parameters (zero update), used as the baseline.
    before_cost = mlp.quick_cost(numpy.zeros((num_param,)), train_cg_X_cur, train_cg_Y_cur, batch_size)

    # Residual norm of the damped linear system (G + lambda*I)*delta + grad.
    l2norm = numpy.linalg.norm(Gv + mlp._lambda*delta + grad)

    print "Residual Norm: ", l2norm
    print 'Before cost: %f, After cost: %f' % (before_cost, after_cost)

    # Flatten the current parameters, apply the CG update, and write the result back.
    param = mlp.flatParam() + delta
    mlp.packParam(param)

    # Levenberg-Marquardt style damping adjustment based on the reduction ratio.
    tune_lambda = (after_cost - before_cost) / delta_cost
    if tune_lambda < 0.25:
      mlp._lambda = mlp._lambda*1.5
    elif tune_lambda > 0.75:
      mlp._lambda = mlp._lambda/1.5

    print "Training   NNL: %f, Error: %f" % (train_nll, train_error)

    # Evaluate on the validation set in mini-batches (snippet truncated at the source).
    nll = []
    error = []
    for batch_index in xrange(n_valid_batches):
      X = valid_X[batch_index*batch_size:(batch_index+1)*batch_size, :]
Developer ID: lelouchmatlab, Project: convex-hf, Lines of code: 33, Source file: test.py
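
The mlp module itself is not reproduced on this page, so the exact behaviour of flatParam and packParam is not confirmed here. As a rough illustration of the flatten / update / pack pattern the example relies on, here is a minimal sketch, assuming flatParam() returns all parameters concatenated into a single 1-D numpy vector and packParam() writes such a vector back into the individual weight and bias arrays. The class SimpleMLP and all of its attribute names below are hypothetical and not part of the convex-hf project.

import numpy

class SimpleMLP(object):
    """Hypothetical two-layer MLP, used only to illustrate the flatParam/packParam pattern."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = numpy.zeros((n_in, n_hidden))
        self.b1 = numpy.zeros(n_hidden)
        self.W2 = numpy.zeros((n_hidden, n_out))
        self.b2 = numpy.zeros(n_out)

    def flatParam(self):
        # Concatenate every parameter array into one flat vector.
        return numpy.concatenate([p.ravel() for p in (self.W1, self.b1, self.W2, self.b2)])

    def packParam(self, flat):
        # Write a flat vector back into the parameter arrays, preserving their shapes.
        offset = 0
        for p in (self.W1, self.b1, self.W2, self.b2):
            p[...] = flat[offset:offset + p.size].reshape(p.shape)
            offset += p.size

# Usage mirroring the example above: flatten, add an update vector, pack back.
net = SimpleMLP(4, 8, 2)
delta = 0.01 * numpy.ones(net.flatParam().size)
net.packParam(net.flatParam() + delta)

In the Hessian-free training loop above, the same pattern appears: flatParam() exposes the current parameters as one vector so the CG update delta can be added with ordinary vector arithmetic, and packParam() pushes the updated vector back into the network.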


Note: The mlp.MLP.flatParam method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Please consult the corresponding project's license before redistributing or using the code, and do not repost without permission.