This article collects typical usage examples of the Python method pybrain.supervised.trainers.RPropMinusTrainer.trainOnDataset. If you are wondering how to use RPropMinusTrainer.trainOnDataset in Python, the curated code samples below may help. You can also read more about the containing class, pybrain.supervised.trainers.RPropMinusTrainer.
The following presents 2 code examples of RPropMinusTrainer.trainOnDataset, sorted by popularity by default.
Example 1: RecurrentNetwork
# Required import: from pybrain.supervised.trainers import RPropMinusTrainer
# Or: from pybrain.supervised.trainers.RPropMinusTrainer import trainOnDataset
import numpy as np
import matplotlib.pyplot as plt
from pybrain.structure import RecurrentNetwork, LinearLayer, TanhLayer, FullConnection
from pybrain.supervised.trainers import RPropMinusTrainer

# DS is a SupervisedDataSet built earlier (not shown in this snippet)
train_set, test_set = DS.splitWithProportion(0.7)
# build our recurrent network with 10 hidden neurodes, one recurrent
# connection, using tanh activation functions
net = RecurrentNetwork()
hidden_neurodes = 10
net.addInputModule(LinearLayer(len(train_set["input"][0]), name="in"))
net.addModule(TanhLayer(hidden_neurodes, name="hidden1"))
net.addOutputModule(LinearLayer(len(train_set["target"][0]), name="out"))
net.addConnection(FullConnection(net["in"], net["hidden1"], name="c1"))
net.addConnection(FullConnection(net["hidden1"], net["out"], name="c2"))
net.addRecurrentConnection(FullConnection(net["out"], net["hidden1"], name="cout"))
net.sortModules()
net.randomize()
# train for 30 epochs (overkill) using the rprop- training algorithm
trainer = RPropMinusTrainer(net, dataset=train_set, verbose=True)
trainer.trainOnDataset(train_set, 30)
# test on training set
predictions_train = np.array([net.activate(train_set["input"][i])[0] for i in range(len(train_set))])
plt.plot(train_set["target"], c="k")
plt.plot(predictions_train, c="r")
plt.show()
# and on test set
predictions_test = np.array([net.activate(test_set["input"][i])[0] for i in range(len(test_set))])
plt.plot(test_set["target"], c="k")
plt.plot(predictions_test, c="r")
plt.show()
Example 2: len
# Required import: from pybrain.supervised.trainers import RPropMinusTrainer
# Or: from pybrain.supervised.trainers.RPropMinusTrainer import trainOnDataset
from pybrain.tools.shortcuts import buildNetwork
from pybrain.supervised.trainers import RPropMinusTrainer

# pList, interval, inputSize, createDataset3, normalize and denormalize
# are defined elsewhere in the original project (not shown in this snippet)
len_pList = len(pList)
test_set_num = 10 #int(math.floor(len_pList*0.15))
epochs = 35
hiddenNodes = 8
print("======== Settings ========")
print("input_interval: %d, input_vector_size: %d, data_set: %d, test_set_num: %d, epochs: %d"
      % (interval, inputSize, len_pList, test_set_num, epochs))
limit = len_pList-test_set_num
ds = createDataset3(pList[0:limit], limit, inputSize, 1)
#net = buildNetwork(1,6,1,bias=True,recurrent=True)
#trainer = BackpropTrainer(net,ds,batchlearning=False,lrdecay=0.0,momentum=0.0,learningrate=0.01)
net = buildNetwork(inputSize, hiddenNodes, 1, bias=True)
trainer = RPropMinusTrainer(net, verbose=True)
#trainer = BackpropTrainer(net,ds,batchlearning=False,lrdecay=0.0,momentum=0.0,learningrate=0.01, verbose=True)
trainer.trainOnDataset(ds, epochs)
trainer.testOnData(verbose=True)
i = len_pList-test_set_num
last_value = normalize(pList[i-1][1])       # most recent observed value
last_last_value = normalize(pList[i-2][1])  # value before that
out_data = []
print("======== Testing ========")
for i in range(len_pList - test_set_num + 1, len_pList):
    value = denormalize(net.activate([last_last_value, last_value])[0])
    out_datum = (i, pList[i][1], value)
    out_data.append(out_datum)
    print("Index: %d Actual: %f Prediction: %f" % out_datum)
    # shift the lag window: the new prediction becomes the latest value
    last_last_value = last_value
    last_value = normalize(value)