

Python network_units.maybe_apply_dropout method code examples

This article collects typical usage examples of the Python method dragnn.python.network_units.maybe_apply_dropout. If you are wondering how to use network_units.maybe_apply_dropout, or what it is useful for, the curated examples below may help. You can also explore other usage examples from the dragnn.python.network_units module.


The following presents 3 code examples of the network_units.maybe_apply_dropout method, sorted by popularity by default.
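For context, here is a minimal sketch of what a helper like maybe_apply_dropout might do, inferred only from how the examples below call it (a tensor, a keep rate, and a per-sequence flag). This is an assumption, not the actual DRAGNN implementation, and it assumes TensorFlow 1.x.

import tensorflow as tf

def maybe_apply_dropout_sketch(inputs, keep_prob, per_sequence):
  """Sketch: apply dropout to `inputs` only when keep_prob < 1.0.

  `per_sequence` is accepted to mirror the call sites below, but the
  sequence-level masking it presumably controls is not modeled here.
  """
  del per_sequence  # Not modeled in this sketch.
  if keep_prob >= 1.0:
    return inputs
  return tf.nn.dropout(inputs, keep_prob=keep_prob)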

Example 1: residual

# Required import: from dragnn.python import network_units [as alias]
# Or: from dragnn.python.network_units import maybe_apply_dropout [as alias]
def residual(old_input, new_input, dropout_keep_rate, layer_norm):
  """Residual layer combining old_input and new_input.

  Computes old_input + dropout(new_input) if layer_norm is None; otherwise:
  layer_norm(old_input + dropout(new_input)).

  Args:
    old_input: old float32 Tensor input to residual layer
    new_input: new float32 Tensor input to residual layer
    dropout_keep_rate: dropout proportion of units to keep
    layer_norm: network_units.LayerNorm to apply to residual output, or None

  Returns:
    float32 Tensor output of residual layer.
  """
  res_sum = old_input + network_units.maybe_apply_dropout(new_input,
                                                          dropout_keep_rate,
                                                          False)
  return layer_norm.normalize(res_sum) if layer_norm else res_sum 
Developer ID: rky0930, Project: yolo_v2, Lines of code: 21, Source: transformer_units.py
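A hypothetical call of residual, assuming TensorFlow 1.x; the tensor shapes and keep rate are illustrative only, and passing layer_norm=None skips the normalization branch.

import tensorflow as tf

old_hidden = tf.random_normal([8, 16, 64])  # e.g. [batch, seq_len, depth]
new_hidden = tf.random_normal([8, 16, 64])
# dropout(new_hidden) is added to old_hidden; no layer norm is applied.
combined = residual(old_hidden, new_hidden, dropout_keep_rate=0.9,
                    layer_norm=None)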

Example 2: dot_product_attention

# Required import: from dragnn.python import network_units [as alias]
# Or: from dragnn.python.network_units import maybe_apply_dropout [as alias]
def dot_product_attention(queries, keys, values, dropout_keep_rate, bias=None):
  """Computes dot-product attention.

  Args:
    queries: a Tensor with shape [batch, heads, seq_len, depth_keys]
    keys: a Tensor with shape [batch, heads, seq_len, depth_keys]
    values: a Tensor with shape [batch, heads, seq_len, depth_values]
    dropout_keep_rate: dropout proportion of units to keep
    bias: A bias to add before applying the softmax, or None. This can be used
          for masking padding in the batch.

  Returns:
    A Tensor with shape [batch, heads, seq_len, depth_values].
  """
  # [batch, num_heads, seq_len, seq_len]
  logits = tf.matmul(queries, keys, transpose_b=True)
  if bias is not None:
    logits += bias

  attn_weights = tf.nn.softmax(logits)

  # Dropping out the attention links for each of the heads
  attn_weights = network_units.maybe_apply_dropout(attn_weights,
                                                   dropout_keep_rate,
                                                   False)
  return tf.matmul(attn_weights, values) 
Developer ID: rky0930, Project: yolo_v2, Lines of code: 28, Source: transformer_units.py
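A hypothetical call illustrating the expected shapes, again assuming TensorFlow 1.x; the dimensions are placeholders.

import tensorflow as tf

batch, heads, seq_len, depth_k, depth_v = 2, 4, 10, 32, 64
queries = tf.random_normal([batch, heads, seq_len, depth_k])
keys = tf.random_normal([batch, heads, seq_len, depth_k])
values = tf.random_normal([batch, heads, seq_len, depth_v])
# Result has shape [batch, heads, seq_len, depth_v].
context = dot_product_attention(queries, keys, values, dropout_keep_rate=1.0)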

Example 3: mlp

# Required import: from dragnn.python import network_units [as alias]
# Or: from dragnn.python.network_units import maybe_apply_dropout [as alias]
def mlp(component, input_tensor, dropout_keep_rate, depth):
  """Feed the input through an MLP.

  Each layer except the last is followed by a ReLU activation and dropout.

  Args:
    component: the DRAGNN Component containing parameters for the MLP
    input_tensor: the float32 Tensor input to the MLP.
    dropout_keep_rate: dropout proportion of units to keep
    depth: depth of the MLP.

  Returns:
    the float32 output Tensor
  """
  for i in range(depth):
    ff_weights = component.get_variable('ff_weights_%d' % i)
    input_tensor = tf.nn.conv2d(input_tensor,
                                ff_weights,
                                [1, 1, 1, 1],
                                padding='SAME')
    # Apply ReLU and dropout to all but the last layer
    if i < depth - 1:
      input_tensor = tf.nn.relu(input_tensor)
      input_tensor = network_units.maybe_apply_dropout(input_tensor,
                                                       dropout_keep_rate,
                                                       False)
  return input_tensor 
Developer ID: rky0930, Project: yolo_v2, Lines of code: 29, Source: transformer_units.py
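A hypothetical driver for mlp, using a minimal stand-in for the DRAGNN Component (the real Component API is much richer than this); the shapes match the 1x1 conv2d used above, and TensorFlow 1.x is assumed.

import tensorflow as tf

class FakeComponent(object):
  """Stand-in exposing only the get_variable() call that mlp() needs."""

  def __init__(self, depth, dim):
    self._weights = {}
    for i in range(depth):
      # 1x1 convolution weights: [height, width, in_channels, out_channels].
      self._weights['ff_weights_%d' % i] = tf.get_variable(
          'ff_weights_%d' % i, shape=[1, 1, dim, dim])

  def get_variable(self, name):
    return self._weights[name]

dim, depth = 64, 2
inputs = tf.random_normal([2, 1, 10, dim])  # [batch, 1, seq_len, dim]
outputs = mlp(FakeComponent(depth, dim), inputs,
              dropout_keep_rate=1.0, depth=depth)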


Note: The dragnn.python.network_units.maybe_apply_dropout examples in this article were collected from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective developers, and copyright of the source code remains with the original authors. Please consult each project's license before redistributing or reusing the code; do not repost without permission.