

Python Dense.get_weights Method Code Examples

This article collects typical usage examples of the Python method keras.layers.Dense.get_weights. If you have been wondering exactly what Dense.get_weights does, how to call it, or what it looks like in real code, the curated example below may help. You can also browse further usage examples of keras.layers.Dense, the class this method belongs to.
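Before the full example, here is a minimal sketch (not taken from the example below) of what Dense.get_weights returns, assuming a recent standalone Keras; the layer sizes are arbitrary:

import numpy as np
from keras.layers import Dense

layer = Dense(4)
layer.build((None, 3))        # a layer only has weights once it has been built
W, b = layer.get_weights()    # list of numpy arrays: [kernel, bias]
print(W.shape, b.shape)       # (3, 4) (4,)
layer.set_weights([np.zeros((3, 4)), np.ones(4)])   # inverse operation, same shapes required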


One code example of the Dense.get_weights method is shown below.

Example 1: Attention

# Required import: from keras.layers import Dense
# (the method under discussion is then Dense.get_weights)
import tensorflow as tf
from keras.layers import Dense
# `Model` and `SequenceBatch` are defined in the source project (lang2program)
class Attention(Model):
    """Implements standard attention.

    Given some memory, a memory mask and a query, outputs the weighted memory cells.
    """

    def __init__(self, memory_cells, query, project_query=False):
        """Define Attention.

        Args:
            memory_cells (SequenceBatch): a SequenceBatch containing a Tensor of shape (batch_size, num_cells, cell_dim)
            query (Tensor): a tensor of shape (batch_size, query_dim).
            project_query (bool): defaults to False. If True, the query goes through an extra projection layer to
                coerce it to cell_dim.
        """
        cell_dim = memory_cells.values.get_shape().as_list()[2]
        if project_query:
            # project the query up/down to cell_dim
            self._projection_layer = Dense(cell_dim, activation='linear')
            query = self._projection_layer(query)  # (batch_size, cell_dim)

        memory_values, memory_mask = memory_cells.values, memory_cells.mask

        # batch matrix multiply to compute logit scores for all choices in all batches
        # (tf.batch_matmul is the pre-1.0 TensorFlow API; later versions use tf.matmul for batched inputs)
        query = tf.expand_dims(query, 2)  # (batch_size, cell_dim, 1)
        logit_values = tf.batch_matmul(memory_values, query)  # (batch_size, num_cells, 1)
        logit_values = tf.squeeze(logit_values, [2])  # (batch_size, num_cells)

        # set all pad logits to negative infinity
        logits = SequenceBatch(logit_values, memory_mask)
        logits = logits.with_pad_value(-float('inf'))

        # normalize to get probs
        probs = tf.nn.softmax(logits.values)  # (batch_size, num_cells)

        retrieved = tf.batch_matmul(tf.expand_dims(probs, 1), memory_values)  # (batch_size, 1, cell_dim)
        retrieved = tf.squeeze(retrieved, [1])  # (batch_size, cell_dim)

        self._logits = logits.values
        self._probs = probs
        self._retrieved = retrieved

    @property
    def logits(self):
        return self._logits  # (batch_size, num_cells)

    @property
    def probs(self):
        return self._probs  # (batch_size, num_cells)

    @property
    def retrieved(self):
        return self._retrieved  # (batch_size, cell_dim)

    @property
    def projection_weights(self):
        """Get projection weights.

        Returns:
            (np.array, np.array): a pair of numpy arrays, (W, b) used to project the query tensor to
                match the predicate embedding dimension.
        """
        return self._projection_layer.get_weights()

    @projection_weights.setter
    def projection_weights(self, value):
        W, b = value
        self._projection_layer.set_weights([W, b])
Developer: siddk | Project: lang2program | Lines of code: 70 | Source file: model.py
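The projection_weights getter/setter above is a thin wrapper around Keras's get_weights/set_weights. A minimal sketch of that round trip with plain Dense layers (layer sizes are arbitrary and not taken from the example):

import numpy as np
from keras.layers import Dense

src = Dense(8, activation='linear')
dst = Dense(8, activation='linear')
src.build((None, 16))
dst.build((None, 16))

W, b = src.get_weights()    # analogous to reading attention.projection_weights
dst.set_weights([W, b])     # analogous to attention.projection_weights = (W, b)

assert all(np.array_equal(a, c) for a, c in zip(src.get_weights(), dst.get_weights()))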

