This article collects typical usage examples of the MLP.cost method from the Python mlp module. If you have been wondering what exactly Python's MLP.cost does, how to use it, or what calling it looks like in practice, the curated code example below may help. You can also explore further usage examples of the containing class, mlp.MLP.
The following shows 1 code example of the MLP.cost method, sorted by popularity by default. You can upvote examples you like or find useful; your feedback helps the system recommend better Python code examples.
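For context before the example: this page does not show the mlp module itself, so below is a minimal hypothetical sketch of what an mlp.MLP class with a cost method could look like. The constructor signature, the l2 attribute, and the mean-squared-error cost are all assumptions inferred from how Example 1 uses them, not the real implementation.

import numpy
import theano
import theano.tensor as T

class MLP(object):
    # Hypothetical reconstruction: one hidden layer, linear output, MSE cost.
    def __init__(self, rng, input, n_in, n_hidden, n_out, activation=T.tanh):
        self.input = input
        # hidden-layer parameters, weights drawn from a small uniform range
        W_h = theano.shared(numpy.asarray(
            rng.uniform(low=-0.1, high=0.1, size=(n_in, n_hidden)),
            dtype=theano.config.floatX))
        b_h = theano.shared(numpy.zeros(n_hidden, dtype=theano.config.floatX))
        # output-layer parameters
        W_o = theano.shared(numpy.asarray(
            rng.uniform(low=-0.1, high=0.1, size=(n_hidden, n_out)),
            dtype=theano.config.floatX))
        b_o = theano.shared(numpy.zeros(n_out, dtype=theano.config.floatX))
        hidden = activation(T.dot(input, W_h) + b_h)
        self.y_pred = T.dot(hidden, W_o) + b_o  # linear output for regression
        self.params = [W_h, b_h, W_o, b_o]
        # L2 penalty over the weight matrices, used when lambda_l2 > 0
        self.l2 = (W_h ** 2).sum() + (W_o ** 2).sum()

    def cost(self, y):
        # mean squared error between prediction and target
        return T.mean((self.y_pred - y) ** 2)

The only contract Example 1 actually relies on is that MLP exposes input, y_pred, params, l2, and a cost(y) method returning a scalar Theano expression.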
Example 1: train
# Required import: from mlp import MLP [as alias]
# Alternatively: from mlp.MLP import cost [as alias]
# (the snippet also assumes: import numpy, import theano, import theano.tensor as T)
def train(self, X, Y, learning_rate=0.1, n_epochs=100, report_frequency=10, lambda_l2=0.0):
    self.report_frequency = report_frequency

    # allocate symbolic variables for the data
    x = T.matrix('x')
    y = T.matrix('y')

    # put the data in shared memory so the compiled function can read it directly
    self.shared_x = theano.shared(numpy.asarray(X, dtype=theano.config.floatX))
    self.shared_y = theano.shared(numpy.asarray(Y, dtype=theano.config.floatX))

    rng = numpy.random.RandomState(1234)

    # initialize the MLP
    mlp = MLP(rng=rng, input=x, n_in=self.n_in, n_out=self.n_out,
              n_hidden=self.n_hidden, activation=self.activation)

    # define the cost function, optionally adding an L2 regularization term
    if lambda_l2 > 0.0:
        cost = mlp.cost(y) + lambda_l2 * mlp.l2
    else:
        cost = mlp.cost(y)

    # compute the gradient of the cost with respect to each parameter in
    # mlp.params; the resulting gradients are collected in the list gparams
    gparams = [T.grad(cost, param) for param in mlp.params]

    # one gradient-descent step per parameter: param <- param - learning_rate * gradient
    updates = [(param, param - learning_rate * gparam)
               for param, gparam in zip(mlp.params, gparams)]

    # compile a Theano function `train_model` that returns the cost and, at the
    # same time, updates the model parameters according to the rules in `updates`
    train_model = theano.function(
        inputs=[],
        outputs=cost,
        updates=updates,
        givens={
            x: self.shared_x,
            y: self.shared_y
        }
    )

    # define a function that returns the model prediction
    self.predict_model = theano.function(
        inputs=[mlp.input], outputs=mlp.y_pred)

    ###############
    # TRAIN MODEL #
    ###############
    epoch = 0
    while epoch < n_epochs:
        epoch += 1
        epoch_cost = train_model()
        if epoch % self.report_frequency == 0:
            print("epoch: %d cost: %f" % (epoch, epoch_cost))