This article collects typical usage examples of the Python method torch.Tensor.uniform_. If you have been wondering what Tensor.uniform_ does, how to call it, or where it is useful, the curated code examples below may help. You can also explore other methods of the torch.Tensor class.
One code example of the Tensor.uniform_ method is shown below; examples are sorted by popularity by default. You can upvote examples you like or find useful, and your ratings help the system recommend better Python code examples.
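Before the curated example, here is a minimal sketch (not taken from the example below; the tensor shape and bounds are arbitrary) of what Tensor.uniform_ itself does: it fills a tensor in place with samples drawn from a uniform distribution and returns that same tensor.

import torch

t = torch.empty(2, 3)
t.uniform_(-0.5, 0.5)  # fill t in place with samples from U(-0.5, 0.5)
print(t.min() >= -0.5, t.max() < 0.5)  # tensor(True) tensor(True)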
Example 1: uniform_unit_scaling
# Required import: from torch import Tensor [as alias]
# Alternatively: from torch.Tensor import uniform_ [as alias]
import math

import torch
def uniform_unit_scaling(tensor: torch.Tensor, nonlinearity: str = "linear"):
    """
    An initialiser which preserves output variance for approximately Gaussian
    distributed inputs. This boils down to initialising layers using a uniform
    distribution in the range ``(-sqrt(3 / dim[0]) * scale, sqrt(3 / dim[0]) * scale)``,
    where ``dim[0]`` is equal to the input dimension of the parameter and ``scale``
    is a constant scaling factor which depends on the non-linearity used.

    See `Random Walk Initialisation for Training Very Deep Feedforward Networks
    <https://www.semanticscholar.org/paper/Random-Walk-Initialization-for-Training-Very-Deep-Sussillo-Abbott/be9728a0728b6acf7a485225b1e41592176eda0b>`_
    for more information.

    Parameters
    ----------
    tensor : ``torch.Tensor``, required.
        The tensor to initialise.
    nonlinearity : ``str``, optional (default = "linear")
        The non-linearity which is performed after the projection that this
        tensor is involved in. This must be the name of a function contained
        in the ``torch.nn.functional`` package.

    Returns
    -------
    The initialised tensor, modified in place.
    """
    size = 1.0
    # Estimate the input size. This won't work perfectly,
    # but it covers almost all use cases where this initialiser
    # would be expected to be useful, i.e. in large linear and
    # convolutional layers, as the last dimension will almost
    # always be the output size.
    for dimension in list(tensor.size())[:-1]:
        size *= dimension
    # calculate_gain's second argument is only meaningful for leaky_relu
    # (the negative slope), so it is omitted here.
    activation_scaling = torch.nn.init.calculate_gain(nonlinearity)
    max_value = math.sqrt(3 / size) * activation_scaling
    return tensor.uniform_(-max_value, max_value)
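A minimal usage sketch (not part of the original example; the weight shape is arbitrary). It follows the function's assumption that the last dimension is the output size, so for a (256, 128) weight the estimated input size is 256 and, with the default "linear" gain of 1, values are drawn from (-sqrt(3 / 256), sqrt(3 / 256)):

import math

import torch

weight = torch.empty(256, 128)
uniform_unit_scaling(weight, nonlinearity="linear")

# A uniform distribution on (-a, a) has variance a**2 / 3, so here the
# variance is (3 / 256) / 3 = 1 / 256 -- exactly 1 / fan_in, which is what
# "preserves output variance" means for this initialiser.
bound = math.sqrt(3 / 256)
assert float(weight.abs().max()) <= bound
print(weight.var())  # approximately 1 / 256 ~= 0.0039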