

Python MLP.kl_divergence Method Code Examples

This article collects typical usage examples of the Python method mlp.MLP.kl_divergence. If you are unsure exactly what mlp.MLP.kl_divergence does or how to use it, the curated examples below may help. You can also explore further usage examples of the containing class, mlp.MLP.


The following shows 1 code example of MLP.kl_divergence, sorted by popularity by default. You can upvote examples you like or find useful; your votes help the site recommend better Python code examples.
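The mlp.MLP class in the example below is project-specific, but in this setting (a SICK relatedness task with 5-way target distributions) a kl_divergence cost typically computes the KL divergence of the predicted class distribution from the target distribution, averaged over the batch. A minimal numpy sketch of that computation (hypothetical function name, not the project's API):

```python
import numpy as np

def kl_divergence(y, p, eps=1e-8):
    """Per-row KL(y || p) = sum_j y_j * log(y_j / p_j).

    Each row of y and p is a probability distribution over classes;
    eps guards against log(0). (Illustrative sketch, not the project's
    actual mlp.MLP.kl_divergence implementation.)
    """
    y = np.clip(y, eps, 1.0)
    p = np.clip(p, eps, 1.0)
    return np.sum(y * np.log(y / p), axis=1)

# Target distribution y vs. predicted distribution p, one row per example
y = np.array([[0.0, 0.5, 0.5]])
p = np.array([[0.1, 0.4, 0.5]])
cost = kl_divergence(y, p).mean()  # mean over the batch, as in the example's cost
```

The divergence is zero when the prediction matches the target exactly and grows as the distributions diverge, which is why it serves as the training cost below.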

Example 1: build_network

# Required import: from mlp import MLP [as alias]
# Or: from mlp.MLP import kl_divergence [as alias]
# Standard-library and third-party imports used below:
import os
from collections import defaultdict

import numpy as np
import theano
import theano.tensor as T
# (depTreeLSTMModel, RNN_Optimization and the sgd_updates_* helpers
# are project-specific modules from the same repository.)
def build_network(args, wordEmbeddings, L1_reg=0.00, L2_reg=1e-4):

    print("Building model ...")

    rng = np.random.RandomState(1234)

    base_dir = os.path.dirname(os.path.realpath(__file__))
    data_dir = os.path.join(base_dir, 'data')
    sick_dir = os.path.join(data_dir, 'sick')

    rel_vocab_path = os.path.join(sick_dir, 'rel_vocab.txt')

    rels = defaultdict(int)

    with open(rel_vocab_path, 'r') as f:
        for tok in f:
            rels[tok.rstrip('\n')] += 1

    rep_model = depTreeLSTMModel(args.lstmDim)

    rep_model.initialParams(wordEmbeddings, rng=rng)

    rnn_optimizer = RNN_Optimization(rep_model, alpha=args.step, optimizer=args.optimizer)


    x = T.fmatrix('x')  # n * d, the data is presented as one sentence output
    y = T.fmatrix('y')  # n * d, the target distribution

    classifier = MLP(rng=rng, input=x, n_in=2 * args.lstmDim, n_hidden=args.hiddenDim, n_out=5)

    cost = T.mean(classifier.kl_divergence(y)) + 0.5*L2_reg * classifier.L2_sqr

    gparams = [ T.grad(cost, param) for param in classifier.params]

    # Gradient signal propagated back into the tree-LSTM representation:
    # hidden-layer weights times the gradient of the cost w.r.t. the hidden bias.
    hw = classifier.params[0]  # hidden-layer weight matrix
    hb = classifier.params[1]  # hidden-layer bias
    delta_x = theano.function([x, y], T.dot(hw, T.grad(cost, hb)), allow_input_downcast=True)

    if args.optimizer == "sgd":

        update_sgd = [
            (param, param - args.step * gparam)
            for param, gparam in zip(classifier.params, gparams)
        ]

        update_params_theano = theano.function(inputs=[x, y], outputs=cost,
                                updates=update_sgd, allow_input_downcast=True)
    elif args.optimizer == "adagrad":

        grad_updates_adagrad = sgd_updates_adagrad(classifier.params, cost)

        update_params_theano = theano.function(inputs=[x,y], outputs=cost,
                                updates=grad_updates_adagrad, allow_input_downcast=True)

    elif args.optimizer == "adadelta":

        grad_updates_adadelta = sgd_updates_adadelta(classifier.params, cost)

        update_params_theano = theano.function(inputs=[x,y], outputs=cost,
                                updates=grad_updates_adadelta, allow_input_downcast=True)
    elif args.optimizer == "adam":
        grad_updates_adam = sgd_updates_adam(classifier.params, cost)

        update_params_theano = theano.function(inputs=[x,y], outputs=cost,
                                updates=grad_updates_adam, allow_input_downcast=True)

    else:
        # Raising a bare string is invalid Python; raise a proper exception.
        raise ValueError("Unknown optimizer: %s" % args.optimizer)


    cost_and_prob = theano.function([x, y], [cost, classifier.output], allow_input_downcast=True)

    return rep_model, rnn_optimizer, update_params_theano, delta_x, cost_and_prob
Author: jerryli1981 · Project: Semantic-Textual-Similarity · Lines of code: 75 · Source: main_theano.py
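In the "sgd" branch above, each parameter is updated by the plain gradient-descent rule param ← param − step · grad. A minimal numpy sketch of that update (hypothetical helper name, standing in for the Theano update list):

```python
import numpy as np

def sgd_step(params, grads, step):
    """One plain SGD update: param <- param - step * grad,
    mirroring the (param, param - args.step * gparam) pairs
    in the update_sgd list of the example above."""
    return [p - step * g for p, g in zip(params, grads)]

params = [np.array([1.0, 2.0])]
grads = [np.array([0.5, -0.5])]
new_params = sgd_step(params, grads, step=0.1)
# new_params[0] is [0.95, 2.05]: moved opposite the gradient by step * grad
```

The adagrad, adadelta, and adam branches differ only in how they rescale the raw gradient before applying it; the overall update structure is the same.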


Note: the mlp.MLP.kl_divergence examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by their authors, and copyright remains with the original authors. Consult the corresponding project's license before using or redistributing the code; do not reproduce without permission.