

Python network_units.maybe_apply_dropout Method Code Examples

This article collects typical usage examples of the Python method network_units.maybe_apply_dropout from the dragnn.python module. If you are unsure what network_units.maybe_apply_dropout does or how to call it, the curated examples below should help; for broader context, see the other members of dragnn.python.network_units.


Three code examples of network_units.maybe_apply_dropout are shown below, ordered by popularity.
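Before diving into the examples, here is a rough sketch of what the helper does. This is an assumption inferred from its call sites below (input tensor, keep rate, per-sequence flag), not dragnn's actual implementation; all three examples pass False for the flag, so only element-wise dropout is sketched.

import tensorflow as tf

def maybe_apply_dropout_sketch(inputs, keep_prob, per_sequence):
  """Hypothetical sketch: apply dropout only when keep_prob < 1."""
  del per_sequence  # per-sequence noise shapes are omitted in this sketch
  if keep_prob >= 1.0:
    return inputs
  return tf.nn.dropout(inputs, keep_prob=keep_prob)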

Example 1: residual

# Required import: from dragnn.python import network_units [as alias]
# Alternatively: from dragnn.python.network_units import maybe_apply_dropout [as alias]
def residual(old_input, new_input, dropout_keep_rate, layer_norm):
  """Residual layer combining old_input and new_input.

  Computes old_input + dropout(new_input) if layer_norm is None; otherwise:
  layer_norm(old_input + dropout(new_input)).

  Args:
    old_input: old float32 Tensor input to residual layer
    new_input: new float32 Tensor input to residual layer
    dropout_keep_rate: dropout proportion of units to keep
    layer_norm: network_units.LayerNorm to apply to residual output, or None

  Returns:
    float32 Tensor output of residual layer.
  """
  res_sum = old_input + network_units.maybe_apply_dropout(new_input,
                                                          dropout_keep_rate,
                                                          False)
  return layer_norm.normalize(res_sum) if layer_norm else res_sum 
Developer: rky0930 | Project: yolo_v2 | Source: transformer_units.py
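To try the residual layer on its own, here is a minimal usage sketch. The TF 1.x imports and dummy tensors are assumptions; passing layer_norm=None sidesteps constructing a network_units.LayerNorm, whose constructor is not shown in this article.

import tensorflow as tf
from dragnn.python import network_units

# Hypothetical activations: [batch, steps, depth].
old = tf.random_normal([8, 16, 64])
new = tf.random_normal([8, 16, 64])

# With layer_norm=None this reduces to old + dropout(new).
out = residual(old, new, dropout_keep_rate=0.9, layer_norm=None)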

Example 2: dot_product_attention

# Required import: from dragnn.python import network_units [as alias]
# Alternatively: from dragnn.python.network_units import maybe_apply_dropout [as alias]
# Also required: import tensorflow as tf
def dot_product_attention(queries, keys, values, dropout_keep_rate, bias=None):
  """Computes dot-product attention.

  Args:
    queries: a Tensor with shape [batch, heads, seq_len, depth_keys]
    keys: a Tensor with shape [batch, heads, seq_len, depth_keys]
    values: a Tensor with shape [batch, heads, seq_len, depth_values]
    dropout_keep_rate: dropout proportion of units to keep
    bias: A bias to add before applying the softmax, or None. This can be used
          for masking padding in the batch.

  Returns:
    A Tensor with shape [batch, heads, seq_len, depth_values].
  """
  # [batch, num_heads, seq_len, seq_len]
  logits = tf.matmul(queries, keys, transpose_b=True)
  if bias is not None:
    logits += bias

  attn_weights = tf.nn.softmax(logits)

  # Dropping out the attention links for each of the heads
  attn_weights = network_units.maybe_apply_dropout(attn_weights,
                                                   dropout_keep_rate,
                                                   False)
  return tf.matmul(attn_weights, values) 
Developer: rky0930 | Project: yolo_v2 | Source: transformer_units.py
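A usage sketch follows, again assuming TF 1.x and made-up shapes. The padding bias shown is one common way to realize the bias argument from the docstring (a large negative value on padded key positions), not something prescribed by the example itself.

import tensorflow as tf

batch, heads, seq_len, depth = 2, 4, 10, 32
q = tf.random_normal([batch, heads, seq_len, depth])
k = tf.random_normal([batch, heads, seq_len, depth])
v = tf.random_normal([batch, heads, seq_len, depth])

# Hypothetical padding mask: valid positions get bias 0, padded key
# positions get -1e9, driving their softmax weight toward zero.
valid = tf.sequence_mask([10, 7], maxlen=seq_len, dtype=tf.float32)
bias = tf.reshape((valid - 1.0) * 1e9, [batch, 1, 1, seq_len])

out = dot_product_attention(q, k, v, dropout_keep_rate=1.0, bias=bias)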

Example 3: mlp

# Required import: from dragnn.python import network_units [as alias]
# Alternatively: from dragnn.python.network_units import maybe_apply_dropout [as alias]
# Also required: import tensorflow as tf
def mlp(component, input_tensor, dropout_keep_rate, depth):
  """Feed the input through an MLP.

  Each layer except the last is followed by a ReLU activation and dropout.

  Args:
    component: the DRAGNN Component containing parameters for the MLP
    input_tensor: the float32 Tensor input to the MLP.
    dropout_keep_rate: dropout proportion of units to keep
    depth: depth of the MLP.

  Returns:
    the float32 output Tensor
  """
  for i in range(depth):
    ff_weights = component.get_variable('ff_weights_%d' % i)
    input_tensor = tf.nn.conv2d(input_tensor,
                                ff_weights,
                                [1, 1, 1, 1],
                                padding='SAME')
    # Apply ReLU and dropout to all but the last layer
    if i < depth - 1:
      input_tensor = tf.nn.relu(input_tensor)
      input_tensor = network_units.maybe_apply_dropout(input_tensor,
                                                       dropout_keep_rate,
                                                       False)
  return input_tensor 
Developer: rky0930 | Project: yolo_v2 | Source: transformer_units.py
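A design note: with a 1x1 kernel, tf.nn.conv2d applies the same dense transform independently at every spatial position, which is why the MLP can be written as a convolution here. Running it standalone requires a stand-in for the DRAGNN component; the stub below is purely hypothetical and just serves randomly initialized 1x1 kernels under the ff_weights_%d naming the example expects.

import tensorflow as tf

class StubComponent(object):
  """Hypothetical DRAGNN-component stand-in serving 1x1 conv kernels."""

  def __init__(self, dims):
    self._vars = {}
    for i in range(len(dims) - 1):
      name = 'ff_weights_%d' % i
      self._vars[name] = tf.get_variable(name, [1, 1, dims[i], dims[i + 1]])

  def get_variable(self, name):
    return self._vars[name]

# [batch, height, width, channels]; the MLP acts per position.
x = tf.random_normal([2, 1, 10, 64])
component = StubComponent([64, 128, 64])
y = mlp(component, x, dropout_keep_rate=0.9, depth=2)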

