

Python nn.batch_normalization Method Code Examples

This article collects typical usage examples of the Python method tensorflow.python.ops.nn.batch_normalization. If you are wondering what nn.batch_normalization does, or how and where to call it, the curated examples below should help. You can also browse further usage examples from its home module, tensorflow.python.ops.nn.


Three code examples of the nn.batch_normalization method are presented below, ordered by popularity.
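For orientation before the examples: the op has the signature nn.batch_normalization(x, mean, variance, offset, scale, variance_epsilon, name=None) and computes (x - mean) / sqrt(variance + variance_epsilon) * scale + offset, where offset and scale may be None. Below is a minimal sketch of a direct call; the shapes are illustrative assumptions, and it presumes eager execution (or a session to evaluate the result):

import tensorflow as tf
from tensorflow.python.ops import nn

x = tf.random.normal([8, 4])              # hypothetical batch: 8 samples, 4 features
mean, variance = nn.moments(x, axes=[0])  # per-feature statistics over the batch axis
y = nn.batch_normalization(
    x, mean, variance,
    offset=tf.zeros([4]),                 # beta: re-centers the normalized values
    scale=tf.ones([4]),                   # gamma: re-scales the normalized values
    variance_epsilon=1e-3)                # fuzz factor guarding against zero variance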

Example 1: batch_normalization

# Required import: from tensorflow.python.ops import nn [as alias]
# Alternatively: from tensorflow.python.ops.nn import batch_normalization [as alias]
def batch_normalization(x, mean, var, beta, gamma, epsilon=1e-3):
  """Applies batch normalization on x given mean, var, beta and gamma.

  I.e. returns:
  `output = (x - mean) / sqrt(var + epsilon) * gamma + beta`

  Arguments:
      x: Input tensor or variable.
      mean: Mean of batch.
      var: Variance of batch.
      beta: Tensor with which to center the input.
      gamma: Tensor by which to scale the input.
      epsilon: Fuzz factor.

  Returns:
      A tensor.
  """
  return nn.batch_normalization(x, mean, var, beta, gamma, epsilon)


Developer: ryfeus; project: lambda-packs; lines: 23; source file: backend.py
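As a quick numeric sanity check of the formula in the docstring, the op can be compared against a manual computation. The values below are made up for illustration, and eager execution is assumed:

import tensorflow as tf
from tensorflow.python.ops import nn

x = tf.constant([[1.0, 2.0], [3.0, 6.0]])
mean = tf.constant([2.0, 4.0])    # per-feature mean
var = tf.constant([1.0, 4.0])     # per-feature variance
beta = tf.constant([0.0, 1.0])    # offset
gamma = tf.constant([1.0, 0.5])   # scale

out = nn.batch_normalization(x, mean, var, beta, gamma, 1e-3)
manual = (x - mean) / tf.sqrt(var + 1e-3) * gamma + beta
# out and manual agree elementwise, up to floating-point rounding.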

Example 2: normalize

# Required import: from tensorflow.python.ops import nn [as alias]
# Alternatively: from tensorflow.python.ops.nn import batch_normalization [as alias]
def normalize(self, inputs):
    """Apply normalization to input.

    The shape must match the declared shape in the constructor.
    [This is copied from tf.contrib.rnn.LayerNormBasicLSTMCell.]

    Args:
      inputs: Input tensor

    Returns:
      Normalized version of input tensor.

    Raises:
      ValueError: if inputs has undefined rank.
    """
    inputs_shape = inputs.get_shape()
    inputs_rank = inputs_shape.ndims
    if inputs_rank is None:
      raise ValueError('Inputs %s has undefined rank.' % inputs.name)
    axis = list(range(1, inputs_rank))  # normalize over all non-batch axes

    beta = self._component.get_variable('beta_%s' % self._name)
    gamma = self._component.get_variable('gamma_%s' % self._name)

    with tf.variable_scope('layer_norm_%s' % self._name):
      # Calculate the moments on the last axis (layer activations).
      mean, variance = nn.moments(inputs, axis, keep_dims=True)

      # Compute layer normalization using the batch_normalization function.
      variance_epsilon = 1E-12
      outputs = nn.batch_normalization(
          inputs, mean, variance, beta, gamma, variance_epsilon)
      outputs.set_shape(inputs_shape)
      return outputs 
Developer: ringringyi; project: DOTA_models; lines: 36; source file: network_units.py (an identical snippet also appears in rky0930/yolo_v2, network_units.py)
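Stripped of the component and variable-scope plumbing, the same layer-normalization recipe reduces to a few lines. The following self-contained sketch uses illustrative names, shapes, and defaults (they are not part of the original project) and keeps the TF 1.x keep_dims argument used by the snippet above, which TF 2.x renamed to keepdims:

import tensorflow as tf
from tensorflow.python.ops import nn

def layer_norm(inputs, beta, gamma, epsilon=1e-12):
  """Layer normalization via nn.batch_normalization (minimal sketch)."""
  axes = list(range(1, inputs.get_shape().ndims))    # all non-batch axes
  mean, variance = nn.moments(inputs, axes, keep_dims=True)
  return nn.batch_normalization(inputs, mean, variance, beta, gamma, epsilon)

x = tf.random.normal([2, 5])
out = layer_norm(x, beta=tf.zeros([5]), gamma=tf.ones([5]))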

Example 3: normalize_batch_in_training

# Required import: from tensorflow.python.ops import nn [as alias]
# Alternatively: from tensorflow.python.ops.nn import batch_normalization [as alias]
# This example also needs: from tensorflow.python.ops import array_ops, plus the
# Keras backend helper ndim() for the tensor rank.
def normalize_batch_in_training(x, gamma, beta, reduction_axes, epsilon=1e-3):
  """Computes mean and std for batch then apply batch_normalization on batch.

  Arguments:
      x: Input tensor or variable.
      gamma: Tensor by which to scale the input.
      beta: Tensor with which to center the input.
      reduction_axes: iterable of integers,
          axes over which to normalize.
      epsilon: Fuzz factor.

  Returns:
      A tuple length of 3, `(normalized_tensor, mean, variance)`.
  """
  mean, var = nn.moments(
      x, reduction_axes, shift=None, name=None, keep_dims=False)
  if sorted(reduction_axes) == list(range(ndim(x)))[:-1]:
    normed = nn.batch_normalization(x, mean, var, beta, gamma, epsilon)
  else:
    # The statistics and parameters lack the reduced axes, so reshape them to
    # a broadcastable target shape before applying batch_normalization.
    target_shape = []
    for axis in range(ndim(x)):
      if axis in reduction_axes:
        target_shape.append(1)
      else:
        target_shape.append(array_ops.shape(x)[axis])
    target_shape = array_ops.stack(target_shape)

    broadcast_mean = array_ops.reshape(mean, target_shape)
    broadcast_var = array_ops.reshape(var, target_shape)
    if gamma is None:
      broadcast_gamma = None
    else:
      broadcast_gamma = array_ops.reshape(gamma, target_shape)
    if beta is None:
      broadcast_beta = None
    else:
      broadcast_beta = array_ops.reshape(beta, target_shape)
    normed = nn.batch_normalization(x, broadcast_mean, broadcast_var,
                                    broadcast_beta, broadcast_gamma, epsilon)
  return normed, mean, var 
Developer: ryfeus; project: lambda-packs; lines: 43; source file: backend.py
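The broadcasting branch is taken whenever sorted(reduction_axes) is not exactly the leading axes. A hypothetical channels-first (NCHW) call that exercises it, assuming normalize_batch_in_training above and the Keras backend helpers (ndim, array_ops) are in scope:

import tensorflow as tf

# NCHW input: reducing over axes [0, 2, 3] leaves the channel axis, so
# sorted(reduction_axes) != [0, 1, 2] and the statistics and parameters are
# reshaped to a broadcastable [1, C, 1, 1] before normalizing.
x = tf.random.normal([8, 3, 16, 16])  # hypothetical channels-first batch
gamma = tf.ones([3])                  # per-channel scale
beta = tf.zeros([3])                  # per-channel offset
normed, mean, var = normalize_batch_in_training(x, gamma, beta,
                                                reduction_axes=[0, 2, 3])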



Note: The tensorflow.python.ops.nn.batch_normalization examples in this article were compiled by 纯净天空 from open-source code hosting and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective authors, who retain copyright; consult each project's license before redistributing or using the code, and do not reproduce this article without permission.