

Python layers.layer_norm Method Code Examples

This article collects typical code examples of the layer_norm method from the Python module tensorflow.contrib.layers.python.layers. If you are wondering what layers.layer_norm does, how to use it, or where to find real-world usage, the curated code examples below may help. You can also explore further usage examples from the containing module, tensorflow.contrib.layers.python.layers.


The following presents 7 code examples of the layers.layer_norm method, sorted by popularity by default.
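
Before the project examples, here is a minimal, self-contained sketch of calling layer_norm directly. It assumes TensorFlow 1.x (tf.contrib was removed in TensorFlow 2.x), and the tensor shapes are illustrative:

# A minimal sketch of calling layer_norm directly (TensorFlow 1.x only).
import numpy as np
import tensorflow as tf
from tensorflow.contrib.layers.python.layers import layer_norm

# A batch of 4 feature vectors with 8 channels each.
inp = tf.placeholder(tf.float32, shape=[4, 8])

# Normalizes each sample over its feature axis, applies the learned
# per-channel gain (gamma) and shift (beta), then ReLU.
out = layer_norm(inp, activation_fn=tf.nn.relu, scope='ln_demo')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    x = np.random.randn(4, 8).astype(np.float32)
    print(sess.run(out, feed_dict={inp: x}).shape)  # (4, 8)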

Example 1: normalize

# Required import: from tensorflow.contrib.layers.python import layers [as alias]
# Or: from tensorflow.contrib.layers.python.layers import layer_norm [as alias]
def normalize(inp, activation, reuse, scope):
    """The function to forward the normalization.
    Args:
      inp: the input feature maps.
      reuse: whether reuse the variables for the batch norm.
      scope: the label for this conv layer.
      activation: the activation function for this conv layer.
    Return:
      The processed feature maps.
    """
    if FLAGS.norm == 'batch_norm':
        return tf_layers.batch_norm(inp, activation_fn=activation, reuse=reuse, scope=scope)
    elif FLAGS.norm == 'layer_norm':
        return tf_layers.layer_norm(inp, activation_fn=activation, reuse=reuse, scope=scope)
    elif FLAGS.norm == 'None':
        if activation is not None:
            return activation(inp)        
        return inp
    else:
        raise ValueError('Please set correct normalization.')

## Loss functions 
Developer: yaoyao-liu | Project: meta-transfer-learning | Lines: 24 | Source: misc.py
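
A hypothetical usage sketch for normalize() follows; the flag definition and tensor shape below are illustrative and not taken from the project:

# Hypothetical usage of normalize(); flag and shapes are illustrative.
import tensorflow as tf
from tensorflow.contrib.layers.python import layers as tf_layers

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('norm', 'layer_norm',
                           "one of 'batch_norm', 'layer_norm', or 'None'")

# Output of a hypothetical conv layer: NHWC feature maps.
conv_out = tf.placeholder(tf.float32, shape=[None, 28, 28, 32])

# Layer-normalize the feature maps and apply ReLU in the same call.
normed = normalize(conv_out, activation=tf.nn.relu, reuse=False,
                   scope='conv1_norm')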

Example 2: __init__

# Required import: from tensorflow.contrib.layers.python import layers [as alias]
# Or: from tensorflow.contrib.layers.python.layers import layer_norm [as alias]
def __init__(self, num_units, forget_bias=1.0, reuse_norm=False,
               input_size=None, activation=nn_ops.relu,
               layer_norm=True, norm_gain=1.0, norm_shift=0.0,
               loop_steps=1, decay_rate=0.9, learning_rate=0.5,
               dropout_keep_prob=1.0, dropout_prob_seed=None):

    if input_size is not None:
      logging.warn("%s: The input_size parameter is deprecated.", self)

    self._num_units = num_units
    self._activation = activation
    self._forget_bias = forget_bias
    self._reuse_norm = reuse_norm
    self._keep_prob = dropout_keep_prob
    self._seed = dropout_prob_seed
    self._layer_norm = layer_norm
    self._S = loop_steps          # number of inner-loop steps over the fast weights
    self._eta = learning_rate     # fast-weights learning rate (eta)
    self._lambda = decay_rate     # fast-weights decay rate (lambda)
    self._g = norm_gain           # layer-norm gain (gamma) initial value
    self._b = norm_shift          # layer-norm shift (beta) initial value
Developer: jxwufan | Project: AssociativeRetrieval | Lines: 23 | Source: FastWeightsRNN.py
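
For context, loop_steps, learning_rate, and decay_rate configure the fast-weights update of Ba et al. (2016), "Using Fast Weights to Attend to the Recent Past", which this cell implements. A minimal NumPy sketch of that inner loop, with the slow-weight preactivation simplified to h and all names and shapes illustrative:

# NumPy sketch of the fast-weights inner loop (Ba et al., 2016).
# The slow-weight preactivation is simplified to h; names are illustrative.
import numpy as np

def layer_norm_np(x, gain=1.0, shift=0.0, eps=1e-5):
    # Zero-mean / unit-variance normalization, then scale and shift.
    return gain * (x - x.mean()) / np.sqrt(x.var() + eps) + shift

num_units, S, eta, lam = 8, 1, 0.5, 0.9   # mirror the constructor defaults
A = np.zeros((num_units, num_units))      # fast-weights matrix
h = np.random.randn(num_units)            # hidden state at this time step

# Decay-and-update rule: A <- lambda * A + eta * h h^T
A = lam * A + eta * np.outer(h, h)

# Inner loop: repeatedly attend to the recent past through A; layer
# norm keeps the iterates well scaled before the ReLU activation.
hs = h.copy()
for _ in range(S):
    hs = np.maximum(0.0, layer_norm_np(h + A @ hs))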

Example 3: normalize

# Required import: from tensorflow.contrib.layers.python import layers [as alias]
# Or: from tensorflow.contrib.layers.python.layers import layer_norm [as alias]
def normalize(inp, activation, reuse, scope):
    if FLAGS.norm == 'batch_norm':
        return tf_layers.batch_norm(inp, activation_fn=activation, reuse=reuse, scope=scope)
    elif FLAGS.norm == 'layer_norm':
        return tf_layers.layer_norm(inp, activation_fn=activation, reuse=reuse, scope=scope)
    elif FLAGS.norm == 'None':
        if activation is not None:
            return activation(inp)
        else:
            return inp


# Loss functions 
Developer: kylehkhsu | Project: cactus-maml | Lines: 15 | Source: utils.py

Example 4: normalize

# Required import: from tensorflow.contrib.layers.python import layers [as alias]
# Or: from tensorflow.contrib.layers.python.layers import layer_norm [as alias]
def normalize(inp, activation, reuse, scope):
    if FLAGS.norm == 'batch_norm':
        return tf_layers.batch_norm(inp, activation_fn=activation, reuse=reuse, scope=scope)
    elif FLAGS.norm == 'layer_norm':
        return tf_layers.layer_norm(inp, activation_fn=activation, reuse=reuse, scope=scope)
    elif FLAGS.norm == 'None':
        if activation is not None:
            return activation(inp)
        else:
            return inp

## Loss functions 
Developer: cbfinn | Project: maml | Lines: 14 | Source: utils.py

Example 5: normalize

# Required import: from tensorflow.contrib.layers.python import layers [as alias]
# Or: from tensorflow.contrib.layers.python.layers import layer_norm [as alias]
def normalize(inp, activation, reuse, scope):
    if FLAGS.norm == 'batch_norm':
        return tf_layers.batch_norm(inp, activation_fn=activation, reuse=reuse, scope=scope)
    elif FLAGS.norm == 'layer_norm':
        return tf_layers.layer_norm(inp, activation_fn=activation, reuse=reuse, scope=scope)
    elif FLAGS.norm == 'None':
        # activation may be None when no nonlinearity is requested.
        return activation(inp) if activation is not None else inp

## Loss functions 
Developer: yoonholee | Project: MT-net | Lines: 11 | Source: utils.py

Example 6: _norm

# Required import: from tensorflow.contrib.layers.python import layers [as alias]
# Or: from tensorflow.contrib.layers.python.layers import layer_norm [as alias]
def _norm(self, inp, scope=None):
    # Mirror the enclosing scope's reuse flag so that repeated calls
    # share a single set of layer-norm parameters (gamma and beta).
    reuse = tf.get_variable_scope().reuse
    with vs.variable_scope(scope or "Norm") as scope:
      normalized = layer_norm(inp, reuse=reuse, scope=scope)
      return normalized 
Developer: jxwufan | Project: AssociativeRetrieval | Lines: 7 | Source: FastWeightsRNN.py
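
A hypothetical sketch of why _norm threads the reuse flag through: two calls inside the same variable scope should share one gamma/beta pair rather than create duplicates (scope names here are illustrative):

# Sketch of layer_norm variable sharing across calls (TensorFlow 1.x).
import tensorflow as tf
from tensorflow.contrib.layers.python.layers import layer_norm

x = tf.placeholder(tf.float32, shape=[2, 8])
with tf.variable_scope('cell'):
    a = layer_norm(x, scope='Norm')              # creates gamma and beta
with tf.variable_scope('cell', reuse=True):
    b = layer_norm(x, reuse=True, scope='Norm')  # reuses the same pair

print(len(tf.trainable_variables()))  # 2: one gamma, one beta in total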

Example 7: normalize

# Required import: from tensorflow.contrib.layers.python import layers [as alias]
# Or: from tensorflow.contrib.layers.python.layers import layer_norm [as alias]
def normalize(inp, activation, reuse, scope):
    if FLAGS.norm == 'batch_norm':
        return tf_layers.batch_norm(inp, activation_fn=activation, reuse=reuse, scope=scope)
    elif FLAGS.norm == 'layer_norm':
        return tf_layers.layer_norm(inp, activation_fn=activation, reuse=reuse, scope=scope)
    elif FLAGS.norm == 'None':
        if activation is not None:
            return activation(inp)
        else:
            return inp


## Loss functions 
Developer: xinzheli1217 | Project: learning-to-self-train | Lines: 15 | Source: misc.py


Note: The tensorflow.contrib.layers.python.layers.layer_norm method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. For distribution and use, please refer to the corresponding project's license. Do not reproduce without permission.