This article collects typical usage examples of the LogisticRegression.LogisticRegression.loss_nll method in Python. If you have been wondering what exactly Python's LogisticRegression.loss_nll does, how to call it, or where to find usage examples, the curated code samples here may help. You can also explore further usage examples of the enclosing class, LogisticRegression.LogisticRegression.
One code example of the LogisticRegression.loss_nll method is shown below. Examples are sorted by popularity by default; upvoting the ones you like or find useful helps the system recommend better Python code examples.
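Before the example, here is a minimal sketch of what such a loss_nll method typically looks like in a Theano-based LogisticRegression class. It is an assumption modeled on the common tutorial-style softmax implementation, not this project's actual source; the attribute names p_y_given_x, W and b, and the treatment of params as a (W, b) pair, are illustrative. Only the constructor signature (input, input_dimensions, output_dimensions, params) is taken from how the class is instantiated in the example below.
import numpy
import theano
import theano.tensor as T

class LogisticRegression(object):
    """Softmax classification layer -- illustrative sketch, not the project's actual source."""

    def __init__(self, input, input_dimensions, output_dimensions, params=None):
        if params is not None:
            # reuse pre-trained parameters, assumed to be passed as a (W, b) pair
            self.W, self.b = params
        else:
            self.W = theano.shared(
                numpy.zeros((input_dimensions, output_dimensions), dtype=theano.config.floatX),
                name="W",
            )
            self.b = theano.shared(
                numpy.zeros((output_dimensions,), dtype=theano.config.floatX),
                name="b",
            )
        # class-membership probabilities P(Y = k | x) for every example in the minibatch
        self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)
        # predicted class = the most probable one
        self.y_pred = T.argmax(self.p_y_given_x, axis=1)
        self.params = [self.W, self.b]

    def loss_nll(self, y):
        # mean negative log-likelihood of the true labels y over the minibatch:
        # -1/N * sum_i log P(Y = y_i | x_i)
        return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
Because loss_nll(y) under this formulation is an ordinary Theano scalar expression, it can be differentiated with T.grad and minimized by gradient descent, which is how it is used as the training cost in the example below.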
Example 1: evaluate_lenet5
# Required module: from LogisticRegression import LogisticRegression [as alias]
# or: from LogisticRegression.LogisticRegression import loss_nll [as alias]
# ......... part of the code is omitted here .........
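# The omitted portion is assumed to define rng, the symbolic variables x, y and index,
# batch_size, nkerns, class_count, the shared dataset variables
# (train/valid/test_set_x and _y), layer0 and layer0_out_size, and the parameters
# layer1_W, layer1_b, layer2_W, layer2_b and layer3_p, since all of these are
# referenced by the code that follows.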
layer1_out_size = (layer0_out_size - 5 + 1) // 2  # valid 5x5 convolution followed by 2x2 pooling; // keeps the size an integer
layer1 = LeNetConvPoolLayer(
rng,
input=layer0.output,
image_shape=(batch_size, nkerns[0], layer0_out_size, layer0_out_size),
filter_shape=(nkerns[1], nkerns[0], 5, 5),
poolsize=(2, 2),
W=layer1_W,
b=layer1_b,
)
# the tanh HiddenLayer being fully connected, it operates on 2D matrices of
# shape (batch_size, num_pixels), i.e. a matrix of rasterized images.
# Flattening therefore produces a matrix of shape
# (batch_size, nkerns[1] * layer1_out_size * layer1_out_size),
# e.g. (20, 32 * 4 * 4) = (20, 512) when batch_size = 20 and nkerns[1] = 32
layer2_input = layer1.output.flatten(2)
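# Theano's flatten(2) keeps the first (batch) dimension and collapses all
# remaining dimensions into the second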
# construct a fully-connected sigmoidal layer
layer2 = HiddenLayer(
rng,
input=layer2_input,
input_dimensions=nkerns[1] * layer1_out_size * layer1_out_size,
output_dimensions=500,
activation_function=T.tanh,
Weight=layer2_W,
bias=layer2_b,
)
# classify the values of the fully-connected sigmoidal layer
layer3 = LogisticRegression(
input=layer2.output, input_dimensions=500, output_dimensions=class_count, params=layer3_p
)
# the cost we minimize during training is the NLL of the model
cost = layer3.loss_nll(y)
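# i.e. the mean negative log-likelihood -mean(log P(Y = y | x)) over the minibatch
# (see the loss_nll sketch above); minimizing it amounts to maximum-likelihood
# training of the softmax classifier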
# create a function to compute the mistakes that are made by the model
train_errors = theano.function(
inputs=[index],
outputs=layer3.prediction_accuracy(y),
givens={
x: train_set_x[index * batch_size : (index + 1) * batch_size],
y: train_set_y[index * batch_size : (index + 1) * batch_size],
},
)
test_model = theano.function(
[index],
layer3.prediction_accuracy(y),
givens={
x: test_set_x[index * batch_size : (index + 1) * batch_size],
y: test_set_y[index * batch_size : (index + 1) * batch_size],
},
)
validate_model = theano.function(
[index],
layer3.prediction_accuracy(y),
givens={
x: valid_set_x[index * batch_size : (index + 1) * batch_size],
y: valid_set_y[index * batch_size : (index + 1) * batch_size],
},
)
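# train_errors, test_model and validate_model all evaluate
# layer3.prediction_accuracy(y) on a single minibatch, selected by `index` through
# the `givens` substitutions; looping `index` over all minibatches and averaging
# the results gives the score for the whole training, test or validation set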
####################### Confusion matrix code #######################
confusion_model_train = theano.function(
[index], layer3.getPrediction(), givens={x: train_set_x[index * batch_size : (index + 1) * batch_size]}