

Python Variable.type_as Method Code Examples

This article collects typical usage examples of the Python method torch.autograd.Variable.type_as. If you have been wondering what Variable.type_as does, how to call it, or what it looks like in real code, the hand-picked examples below may help. You can also browse further usage examples of the containing class, torch.autograd.Variable.


Two code examples of Variable.type_as are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system surface better Python code examples.
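Before the examples, a minimal self-contained sketch (written for this article, not drawn from the projects below) of what type_as does: x.type_as(y) casts x to the same type as y, equivalent to x.type(y.type()); in the pre-0.4 Variable API this also covers the CPU/CUDA distinction, since CUDA tensors have distinct types.

import torch
from torch.autograd import Variable

x = Variable(torch.FloatTensor([1.5, 2.5]))
y = Variable(torch.DoubleTensor([0.0]))

z = x.type_as(y)          # equivalent to x.type(y.type())
print(z.data.type())      # torch.DoubleTensor
print(x.data.type())      # torch.FloatTensor -- x itself is unchanged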

Example 1: test_type_as

# Required import: from torch.autograd import Variable [as alias]
# Or: from torch.autograd.Variable import type_as [as alias]
def test_type_as(self):
    x = Variable(torch.Tensor([0]), requires_grad=True)
    # trace the lambda to an ONNX graph and check it against the expected output
    self.assertONNX(lambda x: x.type_as(x), x)
Author: inkawhich | Project: pytorch | Lines: 5 | Source file: test_operators.py
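Example 1 only checks the traced ONNX graph. For the runtime behavior of the same call, here is a short illustration written for this article (not part of the test suite); it casts to a different type so that the effect is visible:

import torch
from torch.autograd import Variable

x = Variable(torch.FloatTensor([0]), requires_grad=True)
template = Variable(torch.DoubleTensor([0]))

y = x.type_as(template)               # cast float -> double
y.backward(torch.DoubleTensor([1]))   # gradient supplied in y's (double) type
print(x.grad)                         # the gradient flows back through the cast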

Example 2: InverseAutoregressiveFlow

# Required import: from torch.autograd import Variable [as alias]
# Or: from torch.autograd.Variable import type_as [as alias]
class InverseAutoregressiveFlow(Bijector):
    """
    An implementation of an Inverse Autoregressive Flow. Together with the `TransformedDistribution` this
    provides a way to create richer variational approximations.

    Example usage::

        >>> base_dist = Normal(...)
        >>> iaf = InverseAutoregressiveFlow(...)
        >>> pyro.module("my_iaf", iaf)
        >>> iaf_dist = TransformedDistribution(base_dist, iaf)

    Note that this implementation is only meant to be used in settings where the inverse of the Bijector
    is never explicitly computed (rather the result is cached from the forward call). In the context of
    variational inference, this means that the InverseAutoregressiveFlow should only be used in the guide,
    i.e. in the variational distribution. In other contexts the inverse could in principle be computed but
    this would be a (potentially) costly computation that scales with the dimension of the input (and in
    any case support for this is not included in this implementation).

    :param input_dim: dimension of input
    :type input_dim: int
    :param hidden_dim: hidden dimension (number of hidden units)
    :type hidden_dim: int
    :param sigmoid_bias: bias on the hidden units fed into the sigmoid; default=`2.0`
    :type sigmoid_bias: float
    :param permutation: whether the order of the inputs should be permuted (by default the conditional
        dependence structure of the autoregression follows the sequential order)
    :type permutation: bool

    References:

    1. Improving Variational Inference with Inverse Autoregressive Flow [arXiv:1606.04934]
    Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling

    2. Variational Inference with Normalizing Flows [arXiv:1505.05770]
    Danilo Jimenez Rezende, Shakir Mohamed

    3. MADE: Masked Autoencoder for Distribution Estimation [arXiv:1502.03509]
    Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle
    """

    def __init__(self, input_dim, hidden_dim, sigmoid_bias=2.0, permutation=None):
        super(InverseAutoregressiveFlow, self).__init__()
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.arn = AutoRegressiveNN(input_dim, hidden_dim, output_dim_multiplier=2, permutation=permutation)
        self.sigmoid = nn.Sigmoid()
        self.sigmoid_bias = Variable(torch.Tensor([sigmoid_bias]))
        self._intermediates_cache = {}
        self.add_inverse_to_cache = True

    def get_arn(self):
        """
        :rtype: pyro.nn.AutoRegressiveNN

        Return the AutoRegressiveNN associated with the InverseAutoregressiveFlow
        """
        return self.arn

    def __call__(self, x, *args, **kwargs):
        """
        :param x: the input into the bijection
        :type x: torch.autograd.Variable

        Invokes the bijection x=>y; in the prototypical context of a TransformedDistribution `x` is a
        sample from the base distribution (or the output of a previous flow)
        """
        hidden = self.arn(x)
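        # the autoregressive network emits 2 * input_dim units per element:
        # the first input_dim parameterize the gate sigma, the rest the mean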
        sigma = self.sigmoid(hidden[:, 0:self.input_dim] + self.sigmoid_bias.type_as(hidden))
        mean = hidden[:, self.input_dim:]
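        # gated update y = sigma * x + (1 - sigma) * mean, as in reference [1];
        # the ones tensor is built via type_as so it matches sigma's type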
        y = sigma * x + (Variable(torch.ones(sigma.size())).type_as(sigma) - sigma) * mean
        self._add_intermediate_to_cache(sigma, y, 'sigma')
        return y

    def inverse(self, y, *args, **kwargs):
        """
        :param y: the output of the bijection
        :type y: torch.autograd.Variable

        Inverts y => x. As noted above, this implementation is incapable of inverting arbitrary values
        `y`; rather it assumes `y` is the result of a previously computed application of the bijector
        to some `x` (which was cached on the forward call)
        """
        if (y, 'x') in self._intermediates_cache:
            x = self._intermediates_cache.pop((y, 'x'))
            return x
        else:
            raise KeyError("Bijector InverseAutoregressiveFlow expected to find " +
                           "key in intermediates cache but didn't")

    def _add_intermediate_to_cache(self, intermediate, y, name):
        """
        Internal function used to cache intermediate results computed during the forward call
        """
        assert (y, name) not in self._intermediates_cache, \
            "key collision in _add_intermediate_to_cache"
        self._intermediates_cache[(y, name)] = intermediate

    def log_det_jacobian(self, y, *args, **kwargs):
        """
#.........the rest of this code is omitted.........
Author: Magica-Chen | Project: pyro | Lines: 103 | Source file: transformed_distribution.py
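The listing above truncates log_det_jacobian, and the omitted pyro code is not reproduced here. The underlying mathematics is standard for an IAF (reference [1]): since y = sigma * x + (1 - sigma) * mean and the autoregressive structure makes the Jacobian of y with respect to x triangular with diagonal sigma, log|det dy/dx| = sum_i log(sigma_i). The following self-contained sketch, written for this article, illustrates the forward transform together with that log-determinant:

import torch
from torch.autograd import Variable

def iaf_step(x, sigma, mean):
    # gated IAF update; the Jacobian dy/dx is triangular with diagonal sigma
    y = sigma * x + (1 - sigma) * mean
    log_det = torch.log(sigma).sum(1)   # log|det J| = sum_i log(sigma_i)
    return y, log_det

x = Variable(torch.randn(4, 3))
sigma = Variable(torch.rand(4, 3)).clamp(min=0.1)   # gates in (0, 1)
mean = Variable(torch.randn(4, 3))

y, log_det = iaf_step(x, sigma, mean)
print(log_det)   # one log-determinant per batch element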


Note: The torch.autograd.Variable.type_as examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are taken from open source projects contributed by their developers; copyright of the source code remains with the original authors, and any distribution or use must follow the corresponding project's license. Please do not reproduce without permission.