

Python LogisticRegression.loss_nll Method: Code Examples

This article collects typical usage examples of the Python method LogisticRegression.LogisticRegression.loss_nll. If you are unsure how to use LogisticRegression.loss_nll in practice, the curated example below may help. You can also explore further usage examples of the containing class, LogisticRegression.LogisticRegression.


Below is 1 code example of the LogisticRegression.loss_nll method.
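Before the full example, it helps to see what a loss_nll method conventionally computes: the mean negative log-likelihood of the correct labels under the model's predicted class distribution. The sketch below is a minimal NumPy version, assuming the standard definition; the actual class in the example implements it symbolically in Theano, so this is an illustration, not the project's code.

```python
import numpy as np

def loss_nll(p_y_given_x, y):
    """Mean negative log-likelihood of targets y under the predicted
    class distribution p_y_given_x (shape: batch_size x n_classes)."""
    batch_indices = np.arange(y.shape[0])
    # For each example, pick the predicted probability of its true class,
    # take the log, and average the negated values over the minibatch.
    return -np.mean(np.log(p_y_given_x[batch_indices, y]))

# Two examples, three classes; true classes are 0 and 1.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
targets = np.array([0, 1])
nll = loss_nll(probs, targets)  # -(ln 0.7 + ln 0.8) / 2 ≈ 0.2899
```

The advanced-indexing trick `p_y_given_x[batch_indices, y]` mirrors the `T.arange`-based indexing commonly used in Theano logistic-regression implementations.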

Example 1: evaluate_lenet5

# Required import: from LogisticRegression import LogisticRegression [as alias]
# Or: from LogisticRegression.LogisticRegression import loss_nll [as alias]

# ......... (earlier part of the code omitted) .........
    # valid 5x5 convolution then 2x2 pooling; use integer division (Python 3)
    layer1_out_size = (layer0_out_size - 5 + 1) // 2
    layer1 = LeNetConvPoolLayer(
        rng,
        input=layer0.output,
        image_shape=(batch_size, nkerns[0], layer0_out_size, layer0_out_size),
        filter_shape=(nkerns[1], nkerns[0], 5, 5),
        poolsize=(2, 2),
        W=layer1_W,
        b=layer1_b,
    )

    # The fully-connected tanh layer operates on 2D matrices of shape
    # (batch_size, num_pixels), i.e. a matrix of rasterized images.
    # This generates a matrix of shape (20, 32*4*4) = (20, 512).
    layer2_input = layer1.output.flatten(2)

    # construct a fully-connected sigmoidal layer
    layer2 = HiddenLayer(
        rng,
        input=layer2_input,
        input_dimensions=nkerns[1] * layer1_out_size * layer1_out_size,
        output_dimensions=500,
        activation_function=T.tanh,
        Weight=layer2_W,
        bias=layer2_b,
    )

    # classify the values of the fully-connected sigmoidal layer
    layer3 = LogisticRegression(
        input=layer2.output, input_dimensions=500, output_dimensions=class_count, params=layer3_p
    )

    # the cost we minimize during training is the NLL of the model
    cost = layer3.loss_nll(y)

    # create a function to compute the mistakes that are made by the model
    train_errors = theano.function(
        inputs=[index],
        outputs=layer3.prediction_accuracy(y),
        givens={
            x: train_set_x[index * batch_size : (index + 1) * batch_size],
            y: train_set_y[index * batch_size : (index + 1) * batch_size],
        },
    )

    test_model = theano.function(
        [index],
        layer3.prediction_accuracy(y),
        givens={
            x: test_set_x[index * batch_size : (index + 1) * batch_size],
            y: test_set_y[index * batch_size : (index + 1) * batch_size],
        },
    )

    validate_model = theano.function(
        [index],
        layer3.prediction_accuracy(y),
        givens={
            x: valid_set_x[index * batch_size : (index + 1) * batch_size],
            y: valid_set_y[index * batch_size : (index + 1) * batch_size],
        },
    )

    ####################### Confusion matrix code #######################
    confusion_model_train = theano.function(
        [index], layer3.getPrediction(), givens={x: train_set_x[index * batch_size : (index + 1) * batch_size]}
    )
    # ......... (remainder of the function omitted) .........
Author: sandipmukherjee · Project: Annuili-detection-using-Deep-learning · Lines of code: 70 · Source file: ConvolutionalNN.py
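The layer-size comments in the example can be verified with a few lines of arithmetic. The sketch below assumes 28x28 inputs (e.g. MNIST) and nkerns = (20, 32), which the "(20, 32*4*4) = (20, 512)" comment implies; neither value appears explicitly in the excerpt, so treat them as assumptions.

```python
def conv_pool_out(size, filter_size=5, pool=2):
    # Output side length after a "valid" convolution (size - filter + 1)
    # followed by non-overlapping pool x pool max pooling.
    return (size - filter_size + 1) // pool

input_size = 28                          # assumed MNIST-style input
layer0_out = conv_pool_out(input_size)   # (28 - 5 + 1) // 2 = 12
layer1_out = conv_pool_out(layer0_out)   # (12 - 5 + 1) // 2 = 4
flat = 32 * layer1_out * layer1_out      # nkerns[1] * 4 * 4 = 512
```

This reproduces the flattened feature size of 512 that feeds the 500-unit hidden layer in the example.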


Note: the LogisticRegression.LogisticRegression.loss_nll examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various programmers; copyright remains with the original authors, and distribution or use should follow each project's License. Do not reproduce without permission.