This article collects typical usage examples of the `ComputationGraph.has_inputs` method from the Python module `blocks.graph`. If you have been wondering what `ComputationGraph.has_inputs` does, how to call it, or what real code using it looks like, the curated example below may help. You can also read further about its enclosing class, `blocks.graph.ComputationGraph`.
The following shows 1 code example of the `ComputationGraph.has_inputs` method.
Example 1: AggregationBuffer
# Required import: from blocks.graph import ComputationGraph [as alias]
# Or: from blocks.graph.ComputationGraph import has_inputs [as alias]
class AggregationBuffer(object):
    """Intermediate results of aggregating values of Theano variables.

    Encapsulates aggregators for a list of Theano variables. Collects
    the respective updates and provides initialization and readout
    routines.

    Parameters
    ----------
    variables : list of :class:`~tensor.TensorVariable`
        The variable names are used as record names in the logs. Hence, all
        the variable names must be unique.
    use_take_last : bool
        When ``True``, the :class:`TakeLast` aggregation scheme is used
        instead of :class:`_DataIndependent` for those variables that
        do not require data to be computed.

    Attributes
    ----------
    initialization_updates : list of tuples
        Initialization updates of the aggregators.
    accumulation_updates : list of tuples
        Accumulation updates of the aggregators.
    readout_variables : dict
        A dictionary of record names to :class:`~tensor.TensorVariable`
        representing the aggregated values.
    inputs : list of :class:`~tensor.TensorVariable`
        The list of inputs needed for accumulation.

    """
    def __init__(self, variables, use_take_last=False):
        _validate_variable_names(variables)
        self.variables = variables
        self.variable_names = [v.name for v in self.variables]
        self.use_take_last = use_take_last
        self._computation_graph = ComputationGraph(self.variables)
        self.inputs = self._computation_graph.inputs
        self._initialized = False
        self._create_aggregators()
        self._compile()

    def _create_aggregators(self):
        """Create aggregators and collect updates."""
        self.initialization_updates = []
        self.accumulation_updates = []
        self.readout_variables = OrderedDict()
        for v in self.variables:
            logger.debug('variable to evaluate: %s', v.name)
            if not hasattr(v.tag, 'aggregation_scheme'):
                if not self._computation_graph.has_inputs(v):
                    scheme = (TakeLast if self.use_take_last
                              else _DataIndependent)
                    logger.debug('Using %s aggregation scheme'
                                 ' for %s since it does not depend on'
                                 ' the data', scheme.__name__, v.name)
                    v.tag.aggregation_scheme = scheme(v)
                else:
                    logger.debug('Using the default '
                                 ' (average over minibatches)'
                                 ' aggregation scheme for %s', v.name)
                    v.tag.aggregation_scheme = Mean(v, 1.0)
            aggregator = v.tag.aggregation_scheme.get_aggregator()
            self.initialization_updates.extend(
                aggregator.initialization_updates)
            self.accumulation_updates.extend(aggregator.accumulation_updates)
            self.readout_variables[v.name] = aggregator.readout_variable

    def _compile(self):
        """Compiles Theano functions.

        .. todo::

            The current compilation method does not account for updates
            attached to `ComputationGraph` elements. Compiling should
            be out-sourced to `ComputationGraph` to deal with it.

        """
        logger.debug("Compiling initialization and readout functions")
        if self.initialization_updates:
            self._initialize_fun = theano.function(
                [], [], updates=self.initialization_updates)
        else:
            self._initialize_fun = None
        # We need to call `as_tensor_variable` here
        # to avoid returning `CudaNdarray`s to the user, which
        # happens otherwise under some circumstances (see
        # https://groups.google.com/forum/#!topic/theano-users/H3vkDN-Shok)
        self._readout_fun = theano.function(
            [], [tensor.as_tensor_variable(v)
                 for v in self.readout_variables.values()])
        logger.debug("Initialization and readout functions compiled")

    def initialize_aggregators(self):
        """Initialize the aggregators."""
        self._initialized = True
# ......... some code omitted here .........
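The example's scheme-selection logic hinges on `ComputationGraph.has_inputs`, which reports whether a variable's subgraph contains any data inputs. The following self-contained sketch mimics that check on a toy expression graph, without requiring Theano or Blocks; the `Node`, `has_inputs`, and `pick_scheme` names here are hypothetical stand-ins for illustration, not the Blocks API:

```python
class Node:
    """A toy expression-graph node. A leaf (no parents) is a data
    input unless it carries a fixed value (a constant/shared leaf)."""
    def __init__(self, name, parents=(), value=None):
        self.name = name
        self.parents = list(parents)
        self.value = value  # non-None marks a constant/shared leaf

def has_inputs(var):
    """True if any leaf reachable from `var` is a data input."""
    if not var.parents:
        return var.value is None  # leaf without a value = data input
    return any(has_inputs(p) for p in var.parents)

def pick_scheme(var, use_take_last=False):
    """Mirror AggregationBuffer's choice of aggregation scheme."""
    if not has_inputs(var):
        return "TakeLast" if use_take_last else "_DataIndependent"
    return "Mean"

# A constant-only expression vs. one depending on a data input:
const = Node("two", value=2.0)
x = Node("x")  # a data input
doubled = Node("doubled", parents=[x, const])
squared_const = Node("four", parents=[const, const])
```

Here `pick_scheme(squared_const)` yields `"_DataIndependent"` (or `"TakeLast"` with `use_take_last=True`), while `pick_scheme(doubled)` falls back to `"Mean"`, matching the branches in `_create_aggregators` above.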
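The buffer's lifecycle (initialization updates, accumulation updates, readout) can also be illustrated without compiling Theano functions. Below is a hedged pure-Python stand-in for what a `Mean` scheme's aggregator accumulates; the class `MeanAggregator` and its attributes are illustrative names, not the Blocks API:

```python
class MeanAggregator:
    """Pure-Python stand-in for a running-mean aggregator: keeps a
    numerator and denominator and reads out their ratio."""
    def initialize(self):
        # mirrors initialization_updates: reset accumulator state
        self.total = 0.0
        self.count = 0

    def accumulate(self, batch_value, batch_size=1):
        # mirrors accumulation_updates: fold in one minibatch
        self.total += batch_value * batch_size
        self.count += batch_size

    def readout(self):
        # mirrors readout_variables: compute the aggregate
        return self.total / self.count

agg = MeanAggregator()
agg.initialize()
for v in (1.0, 2.0, 3.0):
    agg.accumulate(v)
# agg.readout() -> 2.0
```

In the real class, the same three phases are expressed as Theano update pairs collected from every variable's aggregator and compiled into `_initialize_fun` and `_readout_fun`.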