This article collects typical usage examples of the Python method pybrain.supervised.BackpropTrainer._checkGradient. If you have been wondering what BackpropTrainer._checkGradient does, how to call it, or where to find usage examples, the selected code example below should help. You can also explore the containing class, pybrain.supervised.BackpropTrainer, for further usage.
One code example of BackpropTrainer._checkGradient is shown below.
Example 1: gradientCheck
# Required import: from pybrain.supervised import BackpropTrainer [as alias]
# Or: from pybrain.supervised.BackpropTrainer import _checkGradient [as alias]
from scipy import zeros

from pybrain.structure.networks import Network
from pybrain.supervised import BackpropTrainer
# buildAppropriateDataset is defined alongside this helper in pybrain.tests.helpers
from pybrain.tests.helpers import buildAppropriateDataset

def gradientCheck(module, tolerance=0.0001, dataset=None):
    """Check the gradient of a module against a numerical estimate on a
    randomly generated dataset (and, in the case of a network, determine
    which component modules contain incorrect derivatives)."""
    if module.paramdim == 0:
        print('Module has no parameters')
        return True
    if dataset:
        d = dataset
    else:
        d = buildAppropriateDataset(module)
    b = BackpropTrainer(module)
    # _checkGradient returns, for each sequence in the dataset, a list of
    # (analytical derivative, numerical derivative) pairs; True = silent
    res = b._checkGradient(d, True)
    # compute average precision on every parameter
    precision = zeros(module.paramdim)
    for seqres in res:
        for i, p in enumerate(seqres):
            if p[0] == 0 and p[1] == 0:
                precision[i] = 0
            else:
                # with PyBrain's sign convention the two derivatives cancel
                # in the numerator when they agree, so this ratio is near
                # zero for a correct gradient
                precision[i] += abs((p[0] + p[1]) / (p[0] - p[1]))
    precision /= len(res)
    if max(precision) < tolerance:
        print('Perfect gradient')
        return True
    else:
        print('Incorrect gradient', precision)
        if isinstance(module, Network):
            # walk the network's components to localize the bad derivatives
            index = 0
            for m in module._containerIterator():
                if max(precision[index:index + m.paramdim]) > tolerance:
                    print('Incorrect module:', m, res[-1][index:index + m.paramdim])
                index += m.paramdim
        else:
            print(res)
        return False
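
For context, here is a minimal usage sketch. The network shape and samples are illustrative assumptions, not part of the original example; buildNetwork and SupervisedDataSet are standard PyBrain APIs:

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet

net = buildNetwork(2, 3, 1)       # 2 inputs, 3 hidden units, 1 output
ds = SupervisedDataSet(2, 1)      # 2-dim inputs, 1-dim targets
ds.addSample((0, 1), (1,))
ds.addSample((1, 0), (1,))
gradientCheck(net, dataset=ds)    # prints 'Perfect gradient' on success

Passing a dataset explicitly skips the call to buildAppropriateDataset; under the hood the check simply wraps BackpropTrainer._checkGradient as shown above.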