

Python NeuralNetwork.gradient_descent Method Code Examples

This article collects typical usage examples of the Python method NeuralNetwork.NeuralNetwork.gradient_descent. If you have been wondering what NeuralNetwork.gradient_descent does and how to use it, the curated code examples below may help. You can also explore further usage examples of the containing class, NeuralNetwork.NeuralNetwork.


Below is 1 code example of NeuralNetwork.gradient_descent, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Python code examples.
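As background for the example below: mini-batch gradient descent shuffles the training data each epoch, slices it into small batches, and updates the parameters along the negative gradient of the loss averaged over each batch. Here is a toy, self-contained sketch of that loop on a one-parameter model (hypothetical code, not the project's actual NeuralNetwork.gradient_descent implementation):

```python
import random

# Toy illustration of mini-batch gradient descent (hypothetical sketch;
# the project's NeuralNetwork.gradient_descent will differ in detail).
# We fit y = w * x on synthetic data generated with true slope w = 3.0.
def toy_gradient_descent(data, epochs, batch_size, learning_rate):
    w = 0.0
    for _ in range(epochs):
        random.shuffle(data)                    # re-shuffle every epoch
        for k in range(0, len(data), batch_size):
            batch = data[k:k + batch_size]
            # gradient of the mean squared error 0.5 * (w*x - y)^2 w.r.t. w
            grad = sum((w * x - y) * x for x, y in batch) / len(batch)
            w -= learning_rate * grad           # step against the gradient
    return w

random.seed(0)
data = [((i - 25) / 25.0, 3.0 * (i - 25) / 25.0) for i in range(51)]
w = toy_gradient_descent(data, epochs=30, batch_size=10, learning_rate=0.5)
# w converges close to the true slope 3.0
```

A real network runs the same outer loop, but the gradient comes from backpropagation over the weights and biases of every layer rather than a single scalar.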

Example 1: main

# Required import: from NeuralNetwork import NeuralNetwork [as alias]
# Or: from NeuralNetwork.NeuralNetwork import gradient_descent [as alias]
# (the example below also needs: import sys, import numpy as np)
def main():

    if len(sys.argv) != 3:
        print("USAGE: python DigitClassifier "
              "<path_to_training_file> <path_to_testing_file>")
        sys.exit(-1)

    training_data = None
    validation_data = None
    testing_data = None
    # load training file
    print("Loading training data from '" + sys.argv[1] + "'...")
    with open(sys.argv[1], 'r') as f:
        # skip headings
        next(f)
        X = []
        Y = []
        for line in f:
            line = line.strip().split(',', 1)
            Y.append(vectorize_digit(int(line[0])))
            X.append(line[1].split(','))

        # convert X into numpy's float32 representation
        X = np.array(X).astype(np.float32)
        # normalize pixel values to lie between 0 and 1
        # performance is much worse without normalization
        X *= 1.0 / 255.0
        X = [np.reshape(i, (784, 1)) for i in X]

        # split point
        N = int(len(X) * 0.2)

        # split the data 80/20: first 20% for validation, rest for training
        x = X[:N]
        X = X[N:]
        y = [de_vectorize(i) for i in Y[:N]]
        Y = Y[N:]

        training_data = list(zip(X, Y))
        validation_data = list(zip(x, y))

    print("Data Loaded.")
    print("Generating Neural Network...")

    input_layer_neurons = 784
    hidden_layer_neurons = [30]
    output_layer_neurons = 10

    epochs = 30
    batch_size = 10
    learning_rate = 3.0

    net = NeuralNetwork([input_layer_neurons] +
                        hidden_layer_neurons + [output_layer_neurons])

    print("Network Generated...")
    print("\t Input Layer neuron count: " + str(input_layer_neurons))
    print("\t Hidden Layer Count: " + str(len(hidden_layer_neurons)))
    for i in range(len(hidden_layer_neurons)):
        print("\t\tHidden Layer " + str(i + 1) + " neuron count: "
              + str(hidden_layer_neurons[i]))
    print("\t Output Layer neuron count: " + str(output_layer_neurons))

    print("\nTraining for " + str(epochs) + " epochs...")
    net.gradient_descent(training_data, epochs, batch_size,
                         learning_rate, validation_data)

    # load testing file
    print("Loading testing data from '" + sys.argv[2] + "'...")
    with open(sys.argv[2], 'r') as f:
        # skip headings
        next(f)
        X = []
        for line in f:
            X.append(line.split(','))

        X = np.array(X).astype(np.float32)
        X *= 1.0 / 255.0
        X = [np.reshape(i, (784, 1)) for i in X]

        testing_data = X

    # get the classifier predictions
    predictions = net.classify(testing_data)
    print(predictions)
    return
Author: omkarkarande | Project: ML_DigitOCR | Lines: 88 | Source file: DigitClassifier.py
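The example calls two helper functions, vectorize_digit and de_vectorize, that are defined elsewhere in DigitClassifier.py and not shown here. From how they are used above, a plausible reconstruction looks like this (hypothetical code; the originals may differ):

```python
import numpy as np

# Hypothetical reconstruction of the helpers used in Example 1
# (the originals live elsewhere in DigitClassifier.py and may differ).
def vectorize_digit(d):
    """Map a digit label 0-9 to a 10x1 one-hot column vector."""
    v = np.zeros((10, 1), dtype=np.float32)
    v[d] = 1.0
    return v

def de_vectorize(v):
    """Recover the digit label as the index of the largest entry."""
    return int(np.argmax(v))
```

The 10x1 column shape matches the network's 10 output neurons, and de_vectorize inverts vectorize_digit, which is consistent with how the validation labels are built in the example.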


Note: the NeuralNetwork.NeuralNetwork.gradient_descent examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their authors; copyright remains with the original authors, and any distribution or use should follow the corresponding project's license. Do not repost without permission.