

Python logistic_sgd.load_data Usage Example

This article collects a representative usage example of the Python method logistic_sgd.load_data. If you are wondering how logistic_sgd.load_data is called in practice, or what its arguments and return value look like, the selected example below should help. You can also explore other usages of the logistic_sgd module for more context.


The following shows 1 code example of the logistic_sgd.load_data method, taken from an open-source project.

Example 1: test_stacked_autoencoder

# Required module: import logistic_sgd
# Or: from logistic_sgd import load_data
import numpy as np
from logistic_sgd import load_data
# StackedDenoisingAutoencoder is defined elsewhere in the same project.

def test_stacked_autoencoder(finetune_lr=0.1, pretraining_epochs=15,
                             pretrain_lr=0.001, training_epochs=100,
                             dataset='mnist.pkl.gz', batch_size=1,
                             hidden_layers_sizes=[1000, 1000, 1000],
                             corruption_levels=[0.1, 0.2, 0.3],
                             pretrain_flag=True,
                             testerr_file='test_error.txt'):
    # Note: the `dataset` argument is ignored here; the original code
    # hard-codes the path to the pickled MNIST data.
    datasets = load_data("../data/mnist.pkl.gz")
    train_set_x = datasets[0][0]
    # Integer division so n_train_batches can be used as a range bound
    n_train_batches = train_set_x.get_value(borrow=True).shape[0] // batch_size
    numpy_rng = np.random.RandomState(89677)

    print("building the model ...")

    sda = StackedDenoisingAutoencoder(
        numpy_rng,
        28 * 28,              # input size: flattened 28x28 MNIST images
        hidden_layers_sizes,
        10,                   # number of output classes
        corruption_levels)

    # Greedy layer-wise pre-training
    if pretrain_flag:
        print("getting the pre-training functions ...")
        pretraining_functions = sda.pretraining_functions(
            train_set_x=train_set_x,
            batch_size=batch_size)

        print("pre-training the model ...")
        for i in range(sda.n_layers):
            for epoch in range(pretraining_epochs):
                c = []
                for batch_index in range(n_train_batches):
                    c.append(pretraining_functions[i](index=batch_index,
                                                      corruption=corruption_levels[i],
                                                      lr=pretrain_lr))
                print("Pre-training layer %i, epoch %d, cost %f" % (i, epoch, np.mean(c)))

    # Fine-tuning
    print("getting the fine-tuning functions ...")
    train_model, _, test_model = sda.build_finetune_functions(
        datasets=datasets,
        batch_size=batch_size,
        learning_rate=finetune_lr)

    print("fine-tuning the model ...")

    with open(testerr_file, "w") as fp:
        for epoch in range(1, training_epochs + 1):
            for minibatch_index in range(n_train_batches):
                train_model(minibatch_index)
            test_losses = test_model()
            test_score = np.mean(test_losses)
            print("Fine-tuning, epoch %d, test error %f" % (epoch, test_score * 100))
            fp.write("%d\t%f\n" % (epoch, test_score * 100))
Author: aidiary | Project: deep-learning-theano | Lines: 60 | Source: eval_sda.py
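For readers unfamiliar with what `load_data` returns, here is a minimal sketch of the data structure the example indexes with `datasets[0][0]`. This is a hypothetical stand-in using plain NumPy arrays: the real `logistic_sgd.load_data` loads `mnist.pkl.gz` and wraps each array in a Theano shared variable (hence the `get_value(borrow=True)` call above), but the nesting of the returned list is the same.

```python
import numpy as np

def load_data_sketch(n_train=100, n_valid=20, n_test=20, n_features=784):
    # Hypothetical stand-in for logistic_sgd.load_data: returns a list of
    # three (inputs, labels) pairs -- train, validation, test -- mirroring
    # the structure the example above indexes with datasets[0][0].
    rng = np.random.RandomState(0)

    def make_split(n):
        x = rng.rand(n, n_features).astype("float32")      # image rows
        y = rng.randint(0, 10, size=n).astype("int32")     # digit labels 0-9
        return (x, y)

    return [make_split(n_train), make_split(n_valid), make_split(n_test)]

datasets = load_data_sketch()
train_set_x, train_set_y = datasets[0]   # datasets[0][0] = training inputs
print(train_set_x.shape)                 # (100, 784)
```

With Theano, `train_set_x` would be a shared variable, so the example reads its row count via `train_set_x.get_value(borrow=True).shape[0]` rather than `train_set_x.shape[0]`.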


Note: the logistic_sgd.load_data example in this article was curated by 纯净天空 from open-source code hosted on platforms such as GitHub/MSDocs. Copyright in the code snippets remains with the original authors; consult the corresponding project's license before distributing or reusing them. Do not reproduce without permission.