This page collects typical usage examples of the Python method pybrain.supervised.trainers.BackpropTrainer._checkGradient. If you are wondering how to use BackpropTrainer._checkGradient, the curated example below may help. You can also explore the containing class, pybrain.supervised.trainers.BackpropTrainer, for further context.
One code example of BackpropTrainer._checkGradient is shown below.
Example 1:
# Required import: from pybrain.supervised.trainers import BackpropTrainer
# Method used: BackpropTrainer._checkGradient
# Unpack the (inputs, targets) pairs for each data split
train_x, train_y = train
valid_x, valid_y = valid
test_x, test_y = test
trndata = create_dataset(train_x, train_y)
validdata = create_dataset(valid_x, valid_y)
testdata = create_dataset(test_x, test_y)

#### toy example ####
# Convert integer class labels into one-of-many (one-hot) target vectors
trndata._convertToOneOfMany()
testdata._convertToOneOfMany()
validdata._convertToOneOfMany()

# Create the trainer that uses the backpropagation algorithm
trainer = BackpropTrainer(n, dataset=trndata, momentum=0.1, verbose=True)
# Run the gradient check for this network; the results are plotted later
results = trainer._checkGradient(dataset=trndata, silent=True)
# wt_container_sizes = []  # sizes of all the weight connections
# wt_container_sizes.append(784 * hidden_size)  # input-to-hidden weights
# # if more hidden layers exist:
# i = 0
# while i < num_hidden - 1:
#     wt = hidden_size * hidden_size
#     wt_container_sizes.append(wt)
#     i = i + 1
# wt_container_sizes.append(hidden_size * 10)  # hidden-to-output weights
# print("Weight connections are:", wt_container_sizes)
# TODO: later use the wt_container_sizes[] list
print("Using 3 hidden layers containing 3 hidden neurons each")
grads = []
grads.append(find_gradients(results, 0, 7840, 7840))
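The `_checkGradient` call in the example validates the gradients computed by backpropagation against a finite-difference estimate. The idea can be sketched in a self-contained way with NumPy; note this is an illustration of the general technique using a plain linear least-squares model, not PyBrain's actual implementation, and all function names below are hypothetical:

```python
import numpy as np

def loss(w, x, y):
    # mean squared error of a linear model: y_hat = x @ w
    return 0.5 * np.mean((x @ w - y) ** 2)

def analytical_grad(w, x, y):
    # closed-form gradient of the loss above w.r.t. w
    return x.T @ (x @ w - y) / len(y)

def numerical_grad(w, x, y, eps=1e-6):
    # central-difference estimate: perturb one weight at a time
    g = np.zeros_like(w)
    for i in range(len(w)):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[i] += eps
        w_minus[i] -= eps
        g[i] = (loss(w_plus, x, y) - loss(w_minus, x, y)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 5))
y = rng.normal(size=20)
w = rng.normal(size=5)

# the two gradients should agree to high precision
diff = np.max(np.abs(analytical_grad(w, x, y) - numerical_grad(w, x, y)))
print("max |analytical - numerical| =", diff)
```

A large discrepancy here would indicate a bug in the analytical gradient, which is exactly what `_checkGradient` is meant to catch for the network's backprop derivatives.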