

Python MLP.cost Method Code Examples

This article collects typical usage examples of the Python mlp.MLP.cost method. If you are wondering how MLP.cost is called in practice, or what real code that uses it looks like, the selected examples below may help. You can also explore further usage examples for the enclosing mlp.MLP class.


One code example of the MLP.cost method is shown below.
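The example that follows relies on an MLP class from mlp.py exposing cost, l2, params, y_pred, and input. That class is not shown in the excerpt; the following is a minimal sketch, assuming a single hidden layer and a mean-squared-error cost, of what such an interface might look like. The attribute names mirror those used in the example, but the implementation itself is an assumption, not the original mlp.py.

import numpy
import theano
import theano.tensor as T

class MLP(object):
    """Minimal sketch of the interface the example relies on (not the original mlp.py)."""
    def __init__(self, rng, input, n_in, n_hidden, n_out, activation=T.tanh):
        self.input = input
        # hidden-layer parameters
        W_h = theano.shared(numpy.asarray(
            rng.uniform(low=-0.1, high=0.1, size=(n_in, n_hidden)),
            dtype=theano.config.floatX), name='W_h')
        b_h = theano.shared(numpy.zeros(n_hidden, dtype=theano.config.floatX), name='b_h')
        # output-layer parameters (linear output, suitable for regression)
        W_o = theano.shared(numpy.asarray(
            rng.uniform(low=-0.1, high=0.1, size=(n_hidden, n_out)),
            dtype=theano.config.floatX), name='W_o')
        b_o = theano.shared(numpy.zeros(n_out, dtype=theano.config.floatX), name='b_o')

        hidden = activation(T.dot(input, W_h) + b_h)
        self.y_pred = T.dot(hidden, W_o) + b_o
        self.params = [W_h, b_h, W_o, b_o]
        self.l2 = (W_h ** 2).sum() + (W_o ** 2).sum()  # L2 penalty used for regularization

    def cost(self, y):
        # mean squared error between prediction and target
        return T.mean((self.y_pred - y) ** 2)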

Example 1: train

# Required import: from mlp import MLP [as alias]
# Or: from mlp.MLP import cost [as alias]
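    # Module-level imports assumed by this snippet (not shown in the excerpt):
    # import numpy; import theano; import theano.tensor as T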
    def train(self, X, Y, learning_rate=0.1, n_epochs=100, report_frequency=10, lambda_l2=0.0):

        self.report_frequency = report_frequency 

        # allocate symbolic variables for the data
        x = T.matrix('x')  
        y = T.matrix('y')  

        # put the data in shared memory
        self.shared_x = theano.shared(numpy.asarray(X, dtype=theano.config.floatX))
        self.shared_y = theano.shared(numpy.asarray(Y, dtype=theano.config.floatX))
        rng = numpy.random.RandomState(1234)

        # initialize the mlp
        mlp = MLP(rng=rng, input=x, n_in=self.n_in, n_out=self.n_out,
                  n_hidden=self.n_hidden, activation=self.activation)

        # define the cost function, possibly with regularizing term
        if lambda_l2 > 0.0:
            cost = mlp.cost(y) + lambda_l2 * mlp.l2
        else:
            cost = mlp.cost(y)

        # compute the gradient of cost with respect to theta (stored in params)
        # the resulting gradients will be stored in a list gparams
        gparams = [T.grad(cost, param) for param in mlp.params]

        updates = [(param, param - learning_rate * gparam)
            for param, gparam in zip(mlp.params, gparams) ]

        # compile a Theano function `train_model` that returns the cost and,
        # at the same time, updates the parameters of the model based on the
        # rules defined in `updates`
        train_model = theano.function(
            inputs=[],
            outputs=cost,
            updates=updates,
            givens={
                x: self.shared_x,
                y: self.shared_y
            }
        )

        # define a function that returns the model's prediction
        self.predict_model = theano.function(
            inputs=[mlp.input], outputs=mlp.y_pred)

        ###############
        # TRAIN MODEL #
        ###############

        epoch = 0

        while (epoch < n_epochs):
            epoch = epoch + 1
            epoch_cost = train_model()
            if epoch % self.report_frequency == 0:
                print("epoch: %d  cost: %f" % (epoch, epoch_cost))
Developer: TianqiJiang, Project: Machine-Learning-Class, Lines of code: 60, Source file: function_approximator.py
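As a rough illustration of how this train method might be invoked: the sketch below assumes it belongs to a wrapper class, here called FunctionApproximator after the source file function_approximator.py, whose constructor stores n_in, n_hidden, n_out, and activation. Both the class name and the constructor signature are assumptions for illustration; they are not shown in the excerpt.

import numpy
import theano
import theano.tensor as T

# "FunctionApproximator" is a hypothetical name for the class containing train();
# its constructor signature is assumed, not taken from the excerpt.
approximator = FunctionApproximator(n_in=1, n_hidden=20, n_out=1, activation=T.tanh)

# toy regression data: approximate y = sin(x) on [0, 2*pi]
X = numpy.linspace(0.0, 2.0 * numpy.pi, 200).reshape(-1, 1)
Y = numpy.sin(X)

approximator.train(X, Y, learning_rate=0.1, n_epochs=1000,
                   report_frequency=100, lambda_l2=1e-4)

# predict_model is compiled inside train(); it maps an input matrix to predictions
Y_pred = approximator.predict_model(numpy.asarray(X, dtype=theano.config.floatX))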


Note: The mlp.MLP.cost examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. Please follow each project's license for distribution and use, and do not republish without permission.