This page collects typical usage examples of the Python method dragnn.python.network_units.LayerNorm. If you are unsure what network_units.LayerNorm does, how to call it, or where to find usage examples, the curated code sample below may help. You can also explore the enclosing module, dragnn.python.network_units, for related utilities.
The following presents 1 code example of the network_units.LayerNorm method.
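Before the example, a minimal orientation sketch may help: a LayerNorm object is constructed once (registering trainable scale/shift variables with its owning component) and then applied through its normalize() method. The constructor signature (component, name, shape, dtype) and the helper function below are assumptions for illustration, not something taken from this page:

import tensorflow as tf
from dragnn.python import network_units

def layer_normed(component, name, acts, width):
  """Sketch: layer-normalize `acts`, a [batch, width] float32 Tensor.

  Assumes a LayerNorm(component, name, shape, dtype) constructor; the
  helper name and arguments here are illustrative only.
  """
  ln = network_units.LayerNorm(component, name, width, tf.float32)
  # LayerNorm creates trainable variables; the caller is responsible for
  # tracking ln.params so they are included in optimization.
  return ln.normalize(acts), ln.params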
Example 1: residual
# Required import: from dragnn.python import network_units [as alias]
# Or: from dragnn.python.network_units import LayerNorm [as alias]
def residual(old_input, new_input, dropout_keep_rate, layer_norm):
  """Residual layer combining old_input and new_input.

  Computes old_input + dropout(new_input) if layer_norm is None; otherwise:
    layer_norm(old_input + dropout(new_input)).

  Args:
    old_input: old float32 Tensor input to residual layer
    new_input: new float32 Tensor input to residual layer
    dropout_keep_rate: dropout proportion of units to keep
    layer_norm: network_units.LayerNorm to apply to residual output, or None

  Returns:
    float32 Tensor output of residual layer.
  """
  res_sum = old_input + network_units.maybe_apply_dropout(new_input,
                                                          dropout_keep_rate,
                                                          False)
  return layer_norm.normalize(res_sum) if layer_norm else res_sum
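To make the example concrete, here is a small, hypothetical usage sketch. The tensors and keep rate are placeholders, and the third argument to maybe_apply_dropout is, to the best of our knowledge, a per_sequence flag (an assumption, not stated on this page):

import tensorflow as tf

# Hypothetical stand-ins for a sublayer's input and output activations.
old = tf.zeros([8, 64], dtype=tf.float32)
new = tf.zeros([8, 64], dtype=tf.float32)

# Without normalization: returns old + dropout(new).
combined = residual(old, new, dropout_keep_rate=0.8, layer_norm=None)

# With normalization, pass a constructed network_units.LayerNorm instead:
#   combined = residual(old, new, 0.8, some_layer_norm)

Passing layer_norm=None makes the normalization step optional at graph-construction time, which is why the function guards the final call with a conditional expression rather than always invoking normalize().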