

Python nest.flatten Method Code Examples

This article collects typical usage examples of the Python method tensorflow.python.util.nest.flatten. If you are unsure what nest.flatten does, how to call it, or what it looks like in practice, the curated code examples below may help. You can also explore further usage examples from tensorflow.python.util.nest.


The following presents 15 code examples of the nest.flatten method, sorted by popularity by default.
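Before the examples, here is a minimal sketch (TensorFlow 1.x assumed; the nested structure below is a made-up illustration) of what nest.flatten and its counterpart nest.pack_sequence_as do:

from tensorflow.python.util import nest

# nest.flatten turns an arbitrarily nested structure into a flat Python list
# (dict keys are traversed in sorted order).
structure = {'a': (1, 2), 'b': [3, {'c': 4}]}
flat = nest.flatten(structure)                     # [1, 2, 3, 4]

# nest.pack_sequence_as restores the original nesting from a flat sequence.
doubled = nest.pack_sequence_as(structure, [x * 2 for x in flat])
# doubled == {'a': (2, 4), 'b': [6, {'c': 8}]}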

Example 1: output_dtype

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def output_dtype(self):
        """Types of output of one step.
        """
        # Assume the dtype of the cell is the output_size structure
        # containing the input_state's first component's dtype.
        # Return that structure and the sample_ids_dtype from the helper.
        dtype = nest.flatten(self._initial_state)[0].dtype
        return AttentionRNNDecoderOutput(
            logits=nest.map_structure(lambda _: dtype, self._rnn_output_size()),
            sample_id=self._helper.sample_ids_dtype,
            cell_output=nest.map_structure(
                lambda _: dtype, self._cell.output_size),
            attention_scores=nest.map_structure(
                lambda _: dtype, self._alignments_size()),
            attention_context=nest.map_structure(
                lambda _: dtype, self._cell.state_size.attention)) 
Developer: qkaren, Project: Counterfactual-StoryRW, Lines of code: 18, Source file: rnn_decoders.py
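The core pattern in this example, taking the dtype of the first flattened state tensor and broadcasting it over an output-size structure, can be sketched in isolation as follows (hypothetical shapes and sizes, TF 1.x assumed):

from tensorflow.python.util import nest
import tensorflow as tf

initial_state = (tf.zeros([8, 64]), tf.zeros([8, 64]))   # a made-up nested state
dtype = nest.flatten(initial_state)[0].dtype             # tf.float32
output_size = (64, 32)                                   # stand-in for an output_size structure
output_dtypes = nest.map_structure(lambda _: dtype, output_size)
# output_dtypes == (tf.float32, tf.float32)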

Example 2: transpose_batch_time

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def transpose_batch_time(inputs):
    """Transposes inputs between time-major and batch-major.

    Args:
        inputs: A Tensor of shape `[batch_size, max_time, ...]` (batch-major)
            or `[max_time, batch_size, ...]` (time-major), or a (possibly
            nested) tuple of such elements.

    Returns:
        A (possibly nested tuple of) Tensor with transposed batch and
        time dimensions of inputs.
    """
    flat_input = nest.flatten(inputs)
    flat_input = [ops.convert_to_tensor(input_) for input_ in flat_input]
    # pylint: disable=protected-access
    flat_input = [rnn._transpose_batch_time(input_) for input_ in flat_input]
    return nest.pack_sequence_as(structure=inputs, flat_sequence=flat_input) 
Developer: qkaren, Project: Counterfactual-StoryRW, Lines of code: 19, Source file: shapes.py
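A short usage sketch of the helper above (shapes are illustrative; TF 1.x graph mode assumed):

import tensorflow as tf

batch_major = (tf.zeros([32, 10, 256]),   # [batch_size, max_time, dim]
               tf.zeros([32, 10]))        # [batch_size, max_time]
time_major = transpose_batch_time(batch_major)
# each tensor now has its first two axes swapped: [10, 32, 256] and [10, 32]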

Example 3: _create

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def _create(self):
        # Concat bridge inputs on the depth dimensions
        bridge_input = nest.map_structure(
            lambda x: tf.reshape(x, [self.batch_size, _total_tensor_depth(x)]),
            self._bridge_input)
        bridge_input_flat = nest.flatten([bridge_input])
        bridge_input_concat = tf.concat(bridge_input_flat, axis=1)

        state_size_splits = nest.flatten(self.decoder_state_size)
        total_decoder_state_size = sum(state_size_splits)

        # Pass bridge inputs through a fully connected layer
        initial_state_flat = tf.contrib.layers.fully_connected(
            bridge_input_concat,
            num_outputs=total_decoder_state_size,
            activation_fn=self._activation_fn,
            weights_initializer=tf.truncated_normal_initializer(
                stddev=self.parameter_init),
            biases_initializer=tf.zeros_initializer(),
            scope=None)

        # Shape back into required state size
        initial_state = tf.split(initial_state_flat, state_size_splits, axis=1)
        return nest.pack_sequence_as(self.decoder_state_size, initial_state) 
Developer: hirofumi0810, Project: tensorflow_end2end_speech_recognition, Lines of code: 26, Source file: bridge.py

Example 4: tile_batch

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def tile_batch(t, multiplier, name=None):
    """Tile the batch dimension of a (possibly nested structure of) tensor(s) t.
    For each tensor t in a (possibly nested structure) of tensors,
    this function takes a tensor t shaped `[batch_size, s0, s1, ...]` composed of
    minibatch entries `t[0], ..., t[batch_size - 1]` and tiles it to have a shape
    `[batch_size * multiplier, s0, s1, ...]` composed of minibatch entries
    `t[0], t[0], ..., t[1], t[1], ...` where each minibatch entry is repeated
    `multiplier` times.
    Args:
      t: `Tensor` shaped `[batch_size, ...]`.
      multiplier: Python int.
      name: Name scope for any created operations.
    Returns:
      A (possibly nested structure of) `Tensor` shaped
      `[batch_size * multiplier, ...]`.
    Raises:
      ValueError: if tensor(s) `t` do not have a statically known rank or
      the rank is < 1.
    """
    flat_t = nest.flatten(t)
    with tf.name_scope(name, "tile_batch", flat_t + [multiplier]):
        return nest.map_structure(lambda t_: _tile_batch(t_, multiplier), t) 
Developer: hirofumi0810, Project: tensorflow_end2end_speech_recognition, Lines of code: 24, Source file: beam_search_decoder_from_tensorflow.py
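A usage sketch for the function above in the typical beam-search setting (the state shapes and beam width are illustrative assumptions):

import tensorflow as tf

encoder_state = (tf.zeros([8, 128]), tf.zeros([8, 128]))   # e.g. an LSTM state tuple
beam_width = 4
tiled_state = tile_batch(encoder_state, multiplier=beam_width)
# every tensor in tiled_state now has shape [8 * 4, 128], with each batch
# entry repeated beam_width times in a row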

Example 5: get_next

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def get_next(self, name=None):
    """Returns a nested structure of `tf.Tensor`s containing the next element.

    Args:
      name: (Optional.) A name for the created operation.

    Returns:
      A nested structure of `tf.Tensor` objects.
    """
    return nest.pack_sequence_as(
        self._output_types,
        gen_dataset_ops.iterator_get_next(
            self._iterator_resource,
            output_types=nest.flatten(self._output_types),
            output_shapes=nest.flatten(self._output_shapes),
            name=name)) 
Developer: ryfeus, Project: lambda-packs, Lines of code: 18, Source file: dataset_ops.py

Example 6: make_one_shot_iterator

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def make_one_shot_iterator(self):
    """Creates an `Iterator` for enumerating the elements of this dataset.

    **N.B.** The returned iterator will be initialized automatically.
    A "one-shot" iterator does not currently support re-initialization.

    Returns:
      An `Iterator` over the elements of this dataset.
    """
    # NOTE(mrry): We capture by value here to ensure that `_make_dataset()` is
    # a 0-argument function.
    @function.Defun(capture_by_value=True)
    def _make_dataset():
      return self.make_dataset_resource()

    _make_dataset.add_to_graph(ops.get_default_graph())

    return Iterator(
        gen_dataset_ops.one_shot_iterator(
            dataset_factory=_make_dataset,
            output_types=nest.flatten(self.output_types),
            output_shapes=nest.flatten(self.output_shapes)), None,
        self.output_types, self.output_shapes) 
Developer: ryfeus, Project: lambda-packs, Lines of code: 25, Source file: dataset_ops.py

Example 7: make_dataset_resource

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def make_dataset_resource(self):
    input_resource = self._input_dataset.make_dataset_resource()
    if self._num_threads is None:
      return gen_dataset_ops.map_dataset(
          input_resource,
          self._map_func.captured_inputs,
          f=self._map_func,
          output_types=nest.flatten(self.output_types),
          output_shapes=nest.flatten(self.output_shapes))
    else:
      return gen_dataset_ops.parallel_map_dataset(
          input_resource,
          self._map_func.captured_inputs,
          f=self._map_func,
          num_threads=self._num_threads,
          output_buffer_size=self._output_buffer_size,
          output_types=nest.flatten(self.output_types),
          output_shapes=nest.flatten(self.output_shapes)) 
Developer: ryfeus, Project: lambda-packs, Lines of code: 20, Source file: dataset_ops.py
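Examples 5 through 7 all rely on the same idea: the low-level dataset ops accept flat lists of types and shapes, so nested element structures are flattened before the op call and packed back when elements are produced. A small sketch of that flattening step (the element structure is a made-up example):

from tensorflow.python.util import nest
import tensorflow as tf

output_types = {'image': tf.float32, 'label': tf.int64}
output_shapes = {'image': tf.TensorShape([28, 28]), 'label': tf.TensorShape([])}

flat_types = nest.flatten(output_types)     # [tf.float32, tf.int64] (sorted by key)
flat_shapes = nest.flatten(output_shapes)   # [TensorShape([28, 28]), TensorShape([])]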

Example 8: initialize

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def initialize(self, name=None):
    """Initialize the decoder.

    Args:
      name: Name scope for any created operations.

    Returns:
      `(finished, start_inputs, initial_state)`.
    """
    finished, start_inputs = self._finished, self._start_inputs

    initial_state = BeamSearchDecoderState(
        cell_state=self._initial_cell_state,
        log_probs=array_ops.zeros(
            [self._batch_size, self._beam_width],
            dtype=nest.flatten(self._initial_cell_state)[0].dtype),
        finished=finished,
        lengths=array_ops.zeros(
            [self._batch_size, self._beam_width], dtype=dtypes.int32))

    return (finished, start_inputs, initial_state) 
Developer: ryfeus, Project: lambda-packs, Lines of code: 23, Source file: beam_search_decoder.py

Example 9: state_tuple_to_dict

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def state_tuple_to_dict(state):
  """Returns a dict containing flattened `state`.

  Args:
    state: A `Tensor` or a nested tuple of `Tensors`. All of the `Tensor`s must
    have the same rank and agree on all dimensions except the last.

  Returns:
    A dict containing the `Tensor`s that make up `state`. The keys of the dict
    are of the form "STATE_PREFIX_i" where `i` is the place of this `Tensor`
    in a depth-first traversal of `state`.
  """
  with ops.name_scope('state_tuple_to_dict'):
    flat_state = nest.flatten(state)
    state_dict = {}
    for i, state_component in enumerate(flat_state):
      state_name = _get_state_name(i)
      state_value = (None if state_component is None else array_ops.identity(
          state_component, name=state_name))
      state_dict[state_name] = state_value
  return state_dict 
Developer: ryfeus, Project: lambda-packs, Lines of code: 23, Source file: state_saving_rnn_estimator.py
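A usage sketch for the function above (a hypothetical LSTM cell and batch size; TF 1.x assumed):

import tensorflow as tf

cell = tf.nn.rnn_cell.LSTMCell(64)
state = cell.zero_state(batch_size=16, dtype=tf.float32)   # LSTMStateTuple(c, h)
state_dict = state_tuple_to_dict(state)
# two entries, one per flattened component; keys follow the "STATE_PREFIX_i"
# pattern described in the docstring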

Example 10: state_tuple_to_dict

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def state_tuple_to_dict(state):
  """Returns a dict containing flattened `state`.

  Args:
    state: A `Tensor` or a nested tuple of `Tensors`. All of the `Tensor`s must
    have the same rank and agree on all dimensions except the last.

  Returns:
    A dict containing the `Tensor`s that make up `state`. The keys of the dict
    are of the form "STATE_PREFIX_i" where `i` is the place of this `Tensor`
    in a depth-first traversal of `state`.
  """
  with ops.name_scope('state_tuple_to_dict'):
    flat_state = nest.flatten(state)
    state_dict = {}
    for i, state_component in enumerate(flat_state):
      state_name = _get_state_name(i)
      state_value = (None if state_component is None
                     else array_ops.identity(state_component, name=state_name))
      state_dict[state_name] = state_value
  return state_dict 
Developer: ryfeus, Project: lambda-packs, Lines of code: 23, Source file: dynamic_rnn_estimator.py

Example 11: __init__

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def __init__(self, inpt, n_hidden, n_output, transfer_hidden=tf.nn.elu, transfer=None,
                 hidden_weight_init=None, hidden_bias_init=None, weight_init=None, bias_init=None,
                 name=None):
        """
        :param inpt: input tensor
        :param n_hidden: scalar or list, number of hidden units
        :param n_output: scalar, number of output units
        :param transfer_hidden: scalar or list, transfer functions for the hidden units. If a list, its length must equal len(n_hidden).
        :param transfer: tf.Op or None
        """

        self.n_hidden = nest.flatten(n_hidden)
        self.n_output = n_output
        self.hidden_weight_init = hidden_weight_init
        self.hidden_bias_init = hidden_bias_init

        transfer_hidden = nest.flatten(transfer_hidden)
        if len(transfer_hidden) == 1:
            transfer_hidden *= len(self.n_hidden)
        self.transfer_hidden = transfer_hidden

        self.transfer = transfer
        super(MLP, self).__init__(inpt, name, weight_init, bias_init) 
Developer: akosiorek, Project: hart, Lines of code: 25, Source file: nn.py
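The nest.flatten calls in this constructor are a normalization trick: a scalar becomes a one-element list while a list passes through unchanged, so a single transfer function can be broadcast across all hidden layers. A minimal sketch of the same idea (the sizes are illustrative):

from tensorflow.python.util import nest
import tensorflow as tf

n_hidden = nest.flatten([128, 64])          # [128, 64]  (a scalar 128 would give [128])
transfer_hidden = nest.flatten(tf.nn.elu)   # [tf.nn.elu] -- a single callable
if len(transfer_hidden) == 1:
    transfer_hidden *= len(n_hidden)        # one transfer function per hidden layer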

Example 12: _zero_state

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def _zero_state(self, img, att, presence, state, transform_features, transform_state=False):

        with tf.variable_scope(self.__class__.__name__) as vs:
            features = self.extract_features(img, att)[1]

            if transform_features:
                features_flat = tf.reshape(features, (-1, self.n_units))
                features_flat = AffineLayer(features_flat, self.n_units, name='init_feature_transform').output
                features = tf.reshape(features_flat, tf.shape(features))

            rnn_outputs, hidden_state = self._propagate(features, state)

            hidden_state = nest.flatten(hidden_state)

            if transform_state:
                for i, hs in enumerate(hidden_state):
                    name = 'init_state_transform_{}'.format(i)
                    hidden_state[i] = AffineLayer(hs, self.n_units, name=name).output

            state = nest.pack_sequence_as(structure=state, flat_sequence=hidden_state)
        self.rnn_vs = vs
        return state, rnn_outputs 
Developer: akosiorek, Project: hart, Lines of code: 24, Source file: attention_ops.py

Example 13: output_dtype

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def output_dtype(self):
        # Assume the dtype of the cell is the output_size structure
        # containing the input_state's first component's dtype.
        # Return that structure and int32 (the id)
        dtype = nest.flatten(self._initial_state)[0].dtype
        return BasicDecoderOutput(
            nest.map_structure(lambda _: dtype, self._rnn_output_size()),
            dtypes.int32) 
Developer: vineetjohn, Project: linguistic-style-transfer, Lines of code: 10, Source file: custom_decoder.py

Example 14: initialize

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def initialize(self, name=None):
        helper_init = self._helper.initialize()

        flat_initial_state = nest.flatten(self._initial_state)
        dtype = flat_initial_state[0].dtype
        initial_state = self._cell.zero_state(
            batch_size=tf.shape(flat_initial_state[0])[0], dtype=dtype)
        initial_state = initial_state.clone(cell_state=self._initial_state)

        return [helper_init[0], helper_init[1], initial_state] 
Developer: qkaren, Project: Counterfactual-StoryRW, Lines of code: 12, Source file: rnn_decoders.py

Example 15: flatten

# Required module: from tensorflow.python.util import nest [as alias]
# Alternatively: from tensorflow.python.util.nest import flatten [as alias]
def flatten(tensor, preserve_dims, flattened_dim=None):
    """Flattens a tensor whiling keeping several leading dimensions.

    :attr:`preserve_dims` must < tensor's rank

    Args:
        tensor: A Tensor to flatten.
        preserve_dims (int): The number of leading dimensions to preserve.
        flatterned_dim (int, optional): The size of the resulting flattened
            dimension. If not given, infer automatically, which can cause
            a statically unknown dimension size.

    Returns:
        A Tensor with rank :attr:`perserve_dims`+1.

    Example:
        .. code-block:: python

            x = tf.ones(shape=[d_1, d_2, d_3, d_4])
            y = flatten(x, 2) # y.shape == [d_1, d_2, d_3 * d_4]
    """
    if flattened_dim is None:
        flattened_dim = -1
    shape = tf.concat([tf.shape(tensor)[:preserve_dims], [flattened_dim]],
                      axis=0)
    tensor_ = tf.reshape(tensor, shape)
    return tensor_ 
Developer: qkaren, Project: Counterfactual-StoryRW, Lines of code: 29, Source file: shapes.py
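A concrete, runnable version of the docstring example above (the dimensions are chosen arbitrarily):

import tensorflow as tf

x = tf.ones(shape=[2, 3, 4, 5])
y = flatten(x, preserve_dims=2)    # values are reshaped to [2, 3, 20]
# with flattened_dim left as None, the last dimension may be statically unknown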


Note: The tensorflow.python.util.nest.flatten examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors, and the source code copyright remains with the original authors. For distribution and use, please refer to the corresponding project's license; do not reproduce without permission.