This article collects typical usage examples of the Python method blocks.graph.ComputationGraph.items. If you are wondering how ComputationGraph.items works or how to use it, the curated code examples below may help. You can also explore further usage of its containing class, blocks.graph.ComputationGraph.
The following shows 1 code example of ComputationGraph.items, sorted by popularity by default. You can upvote examples you find useful; your feedback helps the system recommend better Python code examples.
Example 1: get_updates
# Required import: from blocks.graph import ComputationGraph [as alias]
# Alternatively: from blocks.graph.ComputationGraph import items [as alias]
# This example also relies on: import theano; from theano import tensor;
# from blocks.utils import unpack
def get_updates(self, learning_rate, grads, lr_scalers):
    """Wraps the respective method of the wrapped learning rule.

    Performs name-based input substitution for the monitored values.
    Currently very hacky: the inputs from the gradients are typically
    named `$ALGO[$SOURCE]` in PyLearn2, where `$ALGO` is the algorithm
    name and `$SOURCE` is a source name from the data specification.
    This convention is exploited to match them with the inputs of
    monitoring values, whose input names are expected to match source
    names.

    """
    updates = self.learning_rule.get_updates(learning_rate, grads,
                                             lr_scalers)
    grad_inputs = ComputationGraph(list(grads.values())).dict_of_inputs()
    for value, accumulator in zip(self.values, self.accumulators):
        value_inputs = ComputationGraph(value).dict_of_inputs()
        replace_dict = dict()
        for name, input_ in value_inputs.items():
            # Match inputs by the `$ALGO[$SOURCE]` convention (see docstring).
            grad_input = grad_inputs[unpack(
                [n for n in grad_inputs
                 if n.endswith('[{}]'.format(name))],
                singleton=True)]
            replace_dict[input_] = tensor.unbroadcast(
                grad_input, *range(grad_input.ndim))
        updates[accumulator] = (
            accumulator + theano.clone(value, replace_dict))
    self._callback_called = True
    updates.update(self.updates)
    return updates
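The name-matching step at the heart of the example can be illustrated without Theano or Blocks. The sketch below uses a plain dict of hypothetical gradient-input names following PyLearn2's `$ALGO[$SOURCE]` convention, and a small `unpack` helper that mimics the behavior of `blocks.utils.unpack` (the algorithm name `SGD` and the variable names are assumptions for illustration only):

def unpack(seq, singleton=False):
    # Mimics blocks.utils.unpack: return the sole element of a
    # one-element sequence; with singleton=True, insist there is
    # exactly one match.
    seq = list(seq)
    if singleton and len(seq) != 1:
        raise ValueError(
            "expected exactly one element, got {}".format(len(seq)))
    return seq[0] if len(seq) == 1 else seq

# Hypothetical gradient inputs named in PyLearn2's $ALGO[$SOURCE] form.
grad_inputs = {'SGD[features]': 'features_var',
               'SGD[targets]': 'targets_var'}

def match_grad_input(name):
    # Find the gradient input whose bracketed source matches `name`,
    # exactly as the list comprehension in get_updates does.
    key = unpack([n for n in grad_inputs
                  if n.endswith('[{}]'.format(name))],
                 singleton=True)
    return grad_inputs[key]

print(match_grad_input('features'))  # -> features_var

A monitored value whose input is named `features` is thus substituted with the variable already used by the gradients, so both computations share the same symbolic inputs.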