

Python SdA.build_finetune_functions Method Code Examples

This article collects typical usage examples of the Python method SdA.build_finetune_functions. If you are wondering what exactly SdA.build_finetune_functions does, how to call it, or how it is used in practice, the curated code examples below may help. You can also explore further usage examples of the SdA class it belongs to.


The following presents 2 code examples of the SdA.build_finetune_functions method, sorted by popularity by default.
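Before the examples, a minimal call sketch may help orient readers. It follows the SdA class from the Theano Deep Learning Tutorials, which both projects below appear to build on; the filenames, layer sizes, and hyperparameters here are illustrative assumptions, and individual projects may change the signature (Example 2, for instance, passes extra W and b arguments to the constructor and omits learning_rate).

# A minimal sketch (Python 2, to match the examples below), assuming the
# Deep Learning Tutorials' SdA.py and logistic_sgd.py are on the path.
import numpy
from SdA import SdA
from logistic_sgd import load_data

batch_size = 100
datasets = load_data('mnist.pkl.gz')  # [(train), (valid), (test)] as Theano shared variables
train_set_x, train_set_y = datasets[0]
n_train_batches = train_set_x.get_value(borrow=True).shape[0] / batch_size

sda = SdA(numpy_rng=numpy.random.RandomState(89677),
          n_ins=28 * 28,
          hidden_layers_sizes=[1000, 1000, 1000],
          n_outs=10)

# build_finetune_functions compiles three Theano functions:
#   train_fn(minibatch_index) -> finetuning cost on that minibatch (updates the weights)
#   validate_model()          -> list of per-minibatch validation errors
#   test_model()              -> list of per-minibatch test errors
train_fn, validate_model, test_model = sda.build_finetune_functions(
    datasets=datasets, batch_size=batch_size, learning_rate=0.1)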

Example 1: test_SdA

# Required import: import SdA [as alias]
# Alternatively: from SdA import build_finetune_functions [as alias]

#......... part of the code is omitted here .........
        
        
        pretrain_log_file = open(prefix + 'log_pretrain_cost.txt', "a")
        for l in log_pretrain_cost:
            pretrain_log_file.write("%f\n"%l)
        pretrain_log_file.close()



        #print sda.params[0]
        end_time = time.clock()

        print >> sys.stderr, ('The pretraining code for file ' +
                          os.path.split(__file__)[1] +
                          ' ran for %.2fm' % ((end_time - start_time) / 60.))
                          
                          
                          
    # end-snippet-4
    ########################
    # FINETUNING THE MODEL #
    ########################

    # get the training, validation and testing function for the model
    
    
    datasets = load_data(u_patch_filename,u_groundtruth_filename,u_valid_filename,u_validtruth_filename)
    train_set_x, train_set_y = datasets[0]
    valid_set_x, valid_set_y = datasets[1]
    test_set_x, test_set_y = datasets[2]
    n_train_batches = train_set_x.get_value(borrow=True).shape[0]
    n_train_batches /= batch_size
    print '... getting the finetuning functions'
    train_fn, validate_model, test_model = sda.build_finetune_functions(
        datasets=datasets, batch_size=100, learning_rate=0.1)

    print '... finetuning the model'
    # early-stopping parameters
    patience = 10 * n_train_batches  # look as this many examples regardless
    patience_increase = 2.  # wait this much longer when a new best is
                            # found
    improvement_threshold = 0.995  # a relative improvement of this much is
                                   # considered significant
    validation_frequency = min(n_train_batches, patience / 2)
                                  # go through this many
                                  # minibatche before checking the network
                                  # on the validation set; in this case we
                                  # check every epoch

    best_validation_loss = numpy.inf
    test_score = 0.
    start_time = time.clock()

    done_looping = False
    epoch = 0
    flag = open(prefix+'flag.pkl','wb')
    cPickle.dump(2,flag, protocol = cPickle.HIGHEST_PROTOCOL)
    flag.close()
    
    log_valid_cost=[]

    while (epoch < training_epochs) and (not done_looping):
        
        if epochFlag_fineTuning == 1 and epoch < epochs_done_fineTuning:  # '==' not 'is': compare value, not identity
            epoch = epochs_done_fineTuning
            epochFlag_fineTuning = 0
            
Developer: subru1603 | Project: DDP_SdA_Brain | Lines of code: 69 | Source file: 10_test_SdA.py
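The snippet above is truncated just inside the finetuning loop. For orientation only, the loop body that the Deep Learning Tutorials use at this point looks roughly like the following; this is the tutorials' standard early-stopping pattern, not the project's omitted code, and the log_valid_cost line is an assumption based on the list defined above.

        epoch = epoch + 1
        for minibatch_index in xrange(n_train_batches):
            minibatch_avg_cost = train_fn(minibatch_index)
            iter = (epoch - 1) * n_train_batches + minibatch_index

            if (iter + 1) % validation_frequency == 0:
                validation_losses = validate_model()
                this_validation_loss = numpy.mean(validation_losses)
                log_valid_cost.append(this_validation_loss)  # assumed, given the list defined above

                if this_validation_loss < best_validation_loss:
                    # raise patience only if the improvement is significant
                    if this_validation_loss < best_validation_loss * improvement_threshold:
                        patience = max(patience, iter * patience_increase)
                    best_validation_loss = this_validation_loss
                    # evaluate on the test set with the new best model
                    test_losses = test_model()
                    test_score = numpy.mean(test_losses)

            if patience <= iter:
                done_looping = True
                break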

Example 2: test_SdA

# Required import: import SdA [as alias]
# Alternatively: from SdA import build_finetune_functions [as alias]

#......... part of the code is omitted here .........

    # get the training, validation and testing function for the model   


    if flag == 1:
    
        datasets = load_data(u_patch_filename,u_groundtruth_filename,u_valid_filename,u_validtruth_filename)
        train_set_x, train_set_y = datasets[0]
        valid_set_x, valid_set_y = datasets[1]
        test_set_x, test_set_y = datasets[2]
        n_train_batches = train_set_x.get_value(borrow=True).shape[0]
        
        n_train_batches /= batch_size
        
        numpy_rng = numpy.random.RandomState(89677)
        print '... building the model'
        
    #    print 'W: ', W
    #    print 'b: ', b
        
        ################################################################
        ################CONSTRUCTION OF SdA CLASS#######################
        sda = SdA(
            numpy_rng=numpy_rng,
            n_ins=n_ins,
            hidden_layers_sizes=hidden_layers_sizes,
            n_outs=n_outs, W = W, b = b)
        
        print 'SdA constructed'
        
    if not StopAtPretraining:
        
        print '... getting the finetuning functions'
        train_fn, validate_model, test_model = sda.build_finetune_functions(
            datasets=datasets, batch_size=batch_size)
        print batch_size

        print '... finetuning the model'
        ########################confusion matrix Block 1##########################    
        prediction = sda.get_prediction(train_set_x,batch_size)
        y_truth = np.load(u_groundtruth_filename)
        y_truth = y_truth[0:(len(y_truth)-(len(y_truth)%batch_size))]
        cnf_freq = 1
        ##################################################################  
        # early-stopping parameters
        patience = 40 * n_train_batches  # look as this many examples regardless
        patience_increase = 10.  # wait this much longer when a new best is
                                # found
        improvement_threshold = 0.995  # a relative improvement of this much is
                                       # considered significant
        validation_frequency = min(n_train_batches, patience / 2)
                                      # go through this many
                                      # minibatche before checking the network
                                      # on the validation set; in this case we
                                      # check every epoch

        best_validation_loss = numpy.inf
        test_score = 0.
        start_time = time.clock()

        finetune_lr_initial = finetune_lr

        done_looping = False
        epoch = 0
        flag = open(prefix+'flag.pkl','wb')
        cPickle.dump(2,flag, protocol = cPickle.HIGHEST_PROTOCOL)
        flag.close()
Developer: kvrd18 | Project: DDP_SdA_Brain | Lines of code: 70 | Source file: test_SdA.py
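Example 2 fetches predictions with sda.get_prediction and trims y_truth to a multiple of batch_size for a confusion matrix, but the matrix computation itself falls inside the omitted portion. One plain-NumPy way to build it from those two arrays could look like the sketch below; the function name and the class count are illustrative assumptions.

import numpy as np

def build_confusion_matrix(y_truth, prediction, n_classes):
    # rows = true class, columns = predicted class
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_truth, prediction):
        cm[int(t), int(p)] += 1
    return cm

# e.g., with the n_outs output classes used in the SdA constructor above:
# cm = build_confusion_matrix(y_truth, prediction, n_classes=n_outs)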


Note: the SdA.build_finetune_functions examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective developers; copyright remains with the original authors, and distribution or use should follow each project's license. Do not republish without permission.