

Python MLP.kl_divergence Method Code Examples

This article collects typical usage examples of the Python method mlp.MLP.kl_divergence. If you are wondering what MLP.kl_divergence does, or how and where to use it, the curated examples below may help. You can also explore other usage examples of the containing class mlp.MLP.


The text below shows 1 code example of MLP.kl_divergence, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code examples.
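For orientation before the example: a kl_divergence method on an MLP classifier typically measures the Kullback-Leibler divergence between the network's softmax output and a target distribution. The sketch below shows one plausible implementation; it is an assumption for illustration, not the project's verified code (only the names MLP, kl_divergence, and output appear in the example below):

import theano.tensor as T

class MLP(object):
    # ... layer construction omitted; self.output is assumed to be
    # the softmax output, an n * d matrix of class probabilities.

    def kl_divergence(self, y):
        # Row-wise KL(y || output): how far each predicted
        # distribution is from its target distribution y.
        # The epsilon guards against log(0).
        eps = 1e-8
        return T.sum(y * T.log((y + eps) / (self.output + eps)), axis=1)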

Example 1: build_network

# Required import: from mlp import MLP [as alias]
# Or: from mlp.MLP import kl_divergence [as alias]
import os
from collections import defaultdict

import numpy as np
import theano
import theano.tensor as T

from mlp import MLP
# Project-local modules assumed to live in the same repository:
# depTreeLSTMModel, RNN_Optimization, and the sgd_updates_* helpers.
def build_network(args, wordEmbeddings, L1_reg=0.00, L2_reg=1e-4):

    print("Building model ...")

    rng = np.random.RandomState(1234)

    base_dir = os.path.dirname(os.path.realpath(__file__))
    data_dir = os.path.join(base_dir, 'data')
    sick_dir = os.path.join(data_dir, 'sick')

    rel_vocab_path = os.path.join(sick_dir, 'rel_vocab.txt')

    rels = defaultdict(int)

    with open(rel_vocab_path, 'r') as f:
        for tok in f:
            rels[tok.rstrip('\n')] += 1

    rep_model = depTreeLSTMModel(args.lstmDim)

    rep_model.initialParams(wordEmbeddings, rng=rng)

    rnn_optimizer = RNN_Optimization(rep_model, alpha=args.step, optimizer=args.optimizer)


    x = T.fmatrix('x')  # n * d, the data is presented as one sentence output
    y = T.fmatrix('y')  # n * d, the target distribution

    classifier = MLP(rng=rng, input=x, n_in=2 * args.lstmDim, n_hidden=args.hiddenDim, n_out=5)

    # Mean KL divergence between predicted and target distributions,
    # plus an L2 weight-decay penalty.
    cost = T.mean(classifier.kl_divergence(y)) + 0.5 * L2_reg * classifier.L2_sqr

    gparams = [T.grad(cost, param) for param in classifier.params]

    # Gradient signal fed back to the tree-LSTM representation model:
    # hidden weights dotted with the cost gradient w.r.t. the hidden bias.
    hw = classifier.params[0]  # hidden-layer weight matrix
    hb = classifier.params[1]  # hidden-layer bias vector
    delta_x = theano.function([x, y], T.dot(hw, T.grad(cost, hb)), allow_input_downcast=True)
    
    if args.optimizer == "sgd":

        update_sgd = [
            (param, param - args.step * gparam)
            for param, gparam in zip(classifier.params, gparams)
        ]

        update_params_theano = theano.function(inputs=[x, y], outputs=cost,
                                updates=update_sgd, allow_input_downcast=True)
    elif args.optimizer == "adagrad":

        grad_updates_adagrad = sgd_updates_adagrad(classifier.params, cost)

        update_params_theano = theano.function(inputs=[x,y], outputs=cost,
                                updates=grad_updates_adagrad, allow_input_downcast=True)

    elif args.optimizer == "adadelta":

        grad_updates_adadelta = sgd_updates_adadelta(classifier.params, cost)

        update_params_theano = theano.function(inputs=[x,y], outputs=cost,
                                updates=grad_updates_adadelta, allow_input_downcast=True)
    elif args.optimizer == "adam":
        grad_updates_adam = sgd_updates_adam(classifier.params, cost)

        update_params_theano = theano.function(inputs=[x,y], outputs=cost,
                                updates=grad_updates_adam, allow_input_downcast=True)

    else:
        raise "Set optimizer"


    cost_and_prob = theano.function([x, y], [cost, classifier.output], allow_input_downcast=True)

    return rep_model, rnn_optimizer, update_params_theano, delta_x, cost_and_prob
Developer ID: jerryli1981, Project: Semantic-Textual-Similarity, Lines: 75, Source: main_theano.py
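To show how the five returned objects fit together, here is a hypothetical driver loop. Everything except build_network and its return values is a placeholder for illustration: args, wordEmbeddings, num_epochs, and training_batches are assumed to be prepared elsewhere, and the batch shapes follow the n * d comments in the example above.

rep_model, rnn_optimizer, update_params, delta_x, cost_and_prob = \
    build_network(args, wordEmbeddings)

for epoch in range(num_epochs):
    epoch_cost = 0.0
    for feats, target_dist in training_batches:  # n * d float matrices
        # One optimizer step on the MLP; returns the mean KL cost.
        epoch_cost += update_params(feats, target_dist)
    print("epoch %d, cost %.4f" % (epoch, epoch_cost))

# At evaluation time, cost_and_prob returns both the KL cost and the
# predicted 5-way probability distribution for each input row.
cost_val, probs = cost_and_prob(feats, target_dist)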


Note: The mlp.MLP.kl_divergence examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by various developers, and the copyright of the source code remains with the original authors. Please consult each project's License before redistributing or using the code; do not repost without permission.