

Python MLPClassifier.layer_num Method Code Example

This article collects typical usage examples of the Python method sklearn.neural_network.MLPClassifier.layer_num. If you are wondering what MLPClassifier.layer_num does or how to use it, the curated code example below may help. You can also explore further usage examples of sklearn.neural_network.MLPClassifier, the class this method belongs to.


Shown below is 1 code example of the MLPClassifier.layer_num method, ordered by popularity by default. You can upvote examples you find useful; your ratings help the system recommend better Python code examples.

Example 1: main

# Required import: from sklearn.neural_network import MLPClassifier [as alias]
# Or: from sklearn.neural_network.MLPClassifier import layer_num [as alias]
import os
import random
from optparse import OptionParser

from sklearn.datasets import fetch_mldata
from sklearn.neural_network import MLPClassifier


def main():
    ########################################
    ## CHANGE FILE PATH HERE TO YOUR REPO ##
    ########################################
    path = '/Users/Kevin/Desktop/'


    ########################################
    ##### THIS CODE COMPILES THE BUILD #####
    ########################################
    os.system(''.join(['cd ' + path + 'scikit-learn/;', 
                       'python setup.py build;',
                       'sudo python setup.py install;',
                       'cd ' + path + 'scikit-learn/sklearn/neural_network/']))


    #######################################
    ############ OPTION PARSER ############
    #######################################
    parser = OptionParser(usage="usage: %prog [options] arg1 arg2",
                          version="%prog 1.0")
    parser.add_option("-n", "--hidden",
                      dest="layers",
                      default="[10,10]",
                      help="specifies the number of layers e.g. do -n [10,10]")
    parser.add_option("-f", "--filename",
                      dest="update_file",
                      default="param_updates.txt",
                      help="file name to write parameter updates to",)
    parser.add_option("-l", "--layernum",
                      dest="layer_num",
                      default="0",
                      help="specifies the layer that the update weight is randomly sampled between, for the input layer, use 0")
    parser.add_option("-t", "--trainsize",
                      dest="training_size",
                      default="30000",
                      help="specifies the training size")
    parser.add_option("-s", "--testsize",
                      dest="test_size",
                      default="5000",
                      help="specifies the training size")
    parser.add_option("-d", "--threshold",
                      dest="threshold",
                      default="0",
                      help="specifies the dropout threshold for updates to weights, e.g. -d 1e-5")
    parser.add_option("-p", "--percentdropout",
                      dest="dropout_percentage",
                      default="15",
                      help="specifies the dropout chance for updates to weights, e.g. -p 15")
    (options, args) = parser.parse_args()

    print(options)
    print(args)


    #######################################
    ########### HYPERPARAMETERS ###########
    #######################################
    training_size = int(options.training_size)
    test_size = int(options.test_size)
    hidden_layers = tuple(eval(options.layers))  # eval on user input; ast.literal_eval would be safer
    max_iteration = 100
    tolerance = 1e-4
    batch_size = 1


    #######################################
    ########## FETCH MNIST DATA ###########
    #######################################
    # fetch_mldata was removed in scikit-learn 0.22; fetch_openml("mnist_784") is the modern replacement
    mnist = fetch_mldata("MNIST original")
    # rescale the data, use the traditional train/test split
    X, y = mnist.data / 255., mnist.target
    X_train, X_test = X[:training_size], X[training_size:training_size + test_size]
    y_train, y_test = y[:training_size], y[training_size:training_size + test_size]

    # Validate shape of training and test matrices
    print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)


    #######################################
    ####### RANDOM WEIGHT SELECTION #######
    #######################################
    layer_num = int(options.layer_num)
    if layer_num > len(hidden_layers):
        layer_num = 0
    if layer_num == -1:
        print "Last layer used (layer between last hidden layer and output layer)"
        layer_num = len(hidden_layers)

    layer_sizes_array = [X_train.shape[1]] + list(hidden_layers) + [len(set(y_train))]

    print(layer_sizes_array)

    # [layer index, row index into that layer, column index into the next layer]
    layer_num_array = [layer_num,
                       random.randint(0, layer_sizes_array[layer_num] - 1),
                       random.randint(0, layer_sizes_array[layer_num + 1] - 1)]


    #######################################
    ######### TRAIN AND FIT MODEL #########
    #######################################
    mlp = MLPClassifier(hidden_layer_sizes=hidden_layers, alpha=1e-4, max_iter=max_iteration,
#......... remainder of the code omitted .........
Developer: kmalta, Project: scikit-learn, Lines of code: 103, Source file: mnist_plot.py
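The random-weight-selection step above builds the full list of layer sizes (input, hidden layers, output) and then draws a random (row, column) index into the weight matrix between the chosen layer and the next one. The logic can be sketched as a standalone function; the function name `pick_random_weight` and the MNIST defaults of 784 input features and 10 output classes are assumptions for illustration:

```python
import random

def pick_random_weight(hidden_layers, layer_num, n_features=784, n_classes=10):
    # Full layer sizes: input layer, hidden layers, output layer.
    layer_sizes = [n_features] + list(hidden_layers) + [n_classes]
    # An out-of-range layer index falls back to the input layer, as in the example.
    if layer_num > len(hidden_layers):
        layer_num = 0
    # -1 selects the last weight matrix (last hidden layer -> output layer).
    if layer_num == -1:
        layer_num = len(hidden_layers)
    # [layer index, row index into that layer, column index into the next layer]
    return [layer_num,
            random.randint(0, layer_sizes[layer_num] - 1),
            random.randint(0, layer_sizes[layer_num + 1] - 1)]
```

For `hidden_layers=(10, 10)` the size list is `[784, 10, 10, 10]`, so `layer_num=0` samples an index into the 784×10 input weight matrix.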


Note: The sklearn.neural_network.MLPClassifier.layer_num method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are selected from open-source projects contributed by various developers, and copyright remains with the original authors. For distribution and use, please refer to each project's license; do not reproduce without permission.
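The example's command-line handling relies on `optparse`, which has been deprecated since Python 2.7 in favor of `argparse`. A minimal `argparse` equivalent keeping the same flags and defaults might look like this (a sketch; the helper name `build_parser` and the description string are assumptions, not part of the original code):

```python
import argparse

def build_parser():
    # Mirrors the optparse options from the example above.
    parser = argparse.ArgumentParser(
        description="Randomly perturb one MLP weight during MNIST training")
    parser.add_argument("-n", "--hidden", dest="layers", default="[10,10]",
                        help="hidden layer sizes, e.g. -n [10,10]")
    parser.add_argument("-f", "--filename", dest="update_file",
                        default="param_updates.txt",
                        help="file name to write parameter updates to")
    parser.add_argument("-l", "--layernum", dest="layer_num", type=int, default=0,
                        help="layer between which the update weight is sampled; "
                             "use 0 for the input layer")
    parser.add_argument("-t", "--trainsize", dest="training_size", type=int,
                        default=30000, help="training set size")
    parser.add_argument("-s", "--testsize", dest="test_size", type=int,
                        default=5000, help="test set size")
    parser.add_argument("-d", "--threshold", dest="threshold", type=float, default=0,
                        help="dropout threshold for weight updates, e.g. -d 1e-5")
    parser.add_argument("-p", "--percentdropout", dest="dropout_percentage",
                        type=float, default=15,
                        help="dropout chance for weight updates, e.g. -p 15")
    return parser
```

Unlike `optparse`, `argparse` can convert values with `type=int`/`type=float` at parse time, so the manual `int(options.training_size)` casts in the example become unnecessary.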