This article collects typical usage examples of the Python method tensorflow.python.ops.gen_nn_ops.fused_batch_norm_grad. If you are wondering what gen_nn_ops.fused_batch_norm_grad does, how to use it, or where to find examples of it in practice, the curated samples below may help. You can also read further about the containing module, tensorflow.python.ops.gen_nn_ops.
The following shows 2 code examples of gen_nn_ops.fused_batch_norm_grad, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code samples.
Example 1: _FusedBatchNormGrad
# Required import: from tensorflow.python.ops import gen_nn_ops [as alias]
# Or: from tensorflow.python.ops.gen_nn_ops import fused_batch_norm_grad [as alias]
def _FusedBatchNormGrad(op, *grad):
  """Return the gradients for the 3 inputs of BatchNorm.

  Args:
    op: The BatchNormOp for which we need to compute gradients.
    *grad: An argument list for tensors of gradients wrt the outputs,
      with grad[0] as grad_y.

  Returns:
    grad_x: gradient for x, which is scale * rsqrt(variance + epsilon) *
            [grad_y - mean(grad_y) - (x - mean(x)) *
             mean(grad_y * (x - mean(x))) / (variance + epsilon)]
    grad_scale: gradient for scale, which is sum(grad_y * (x - mean(x)) *
                rsqrt(variance + epsilon))
    grad_offset: gradient for offset, which is sum(grad_y)
  """
  # op.outputs[3] and op.outputs[4] hold the batch mean and variance that
  # the forward op saved specifically for reuse in the gradient computation.
  return gen_nn_ops.fused_batch_norm_grad(
      grad[0],        # grad_y
      op.inputs[0],   # x
      op.inputs[1],   # scale
      op.outputs[3],  # saved (batch) mean
      op.outputs[4],  # saved (batch) variance
      epsilon=op.get_attr("epsilon"),
      data_format=op.get_attr("data_format"),
      is_training=op.get_attr("is_training"))
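As a quick numerical check of the grad_offset = sum(grad_y) identity stated in the docstring, the same relationship can be observed through the public API. This is a minimal sketch, assuming TensorFlow 2.x; it uses tf.nn.batch_normalization (the unfused public op, which computes the same math) and made-up tensor shapes:

import tensorflow as tf

# Illustrative shapes: an NHWC batch with 3 channels.
x = tf.random.normal([2, 4, 4, 3])
scale = tf.ones([3])
offset = tf.zeros([3])
mean, variance = tf.nn.moments(x, axes=[0, 1, 2])

with tf.GradientTape() as tape:
  tape.watch(offset)
  y = tf.nn.batch_normalization(x, mean, variance, offset, scale,
                                variance_epsilon=1e-3)
  # Reducing with a plain sum makes the upstream gradient grad_y == 1
  # everywhere, so grad_offset = sum(grad_y) = 2 * 4 * 4 = 32 per channel.
  loss = tf.reduce_sum(y)

print(tape.gradient(loss, offset))  # approximately [32. 32. 32.]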
Example 2: _BaseFusedBatchNormGrad
# Required import: from tensorflow.python.ops import gen_nn_ops [as alias]
# Or: from tensorflow.python.ops.gen_nn_ops import fused_batch_norm_grad [as alias]
# This example additionally uses: from tensorflow.python.ops import array_ops
def _BaseFusedBatchNormGrad(op, use_v2, *grad):
  """Return the gradients for the 3 inputs of BatchNorm.

  Args:
    op: The BatchNormOp for which we need to compute gradients.
    use_v2: Boolean indicating whether to use the V2 version of the fused
      batch norm gradient.
    *grad: An argument list for tensors of gradients wrt the outputs,
      with grad[0] as grad_y.

  Returns:
    grad_x: gradient for x, which is scale * rsqrt(variance + epsilon) *
            [grad_y - mean(grad_y) - (x - mean(x)) *
             mean(grad_y * (x - mean(x))) / (variance + epsilon)]
            in training mode; grad_y * scale * rsqrt(pop_variance + epsilon)
            in freeze mode.
    grad_scale: gradient for scale, which is sum(grad_y * (x - mean(x)) *
                rsqrt(variance + epsilon)) in training mode;
                sum(grad_y * (x - pop_mean) * rsqrt(pop_variance + epsilon))
                in freeze mode.
    grad_offset: gradient for offset, which is sum(grad_y) in both training
                 and freeze mode.
  """
  x = op.inputs[0]
  grad_y = grad[0]
  scale = op.inputs[1]
  epsilon = op.get_attr("epsilon")
  data_format = op.get_attr("data_format")
  is_training = op.get_attr("is_training")
  grad_fun = (gen_nn_ops.fused_batch_norm_grad_v2 if use_v2
              else gen_nn_ops.fused_batch_norm_grad)
  if is_training:
    # In training mode the forward op saved the batch statistics in
    # outputs[3] and outputs[4]; pass them straight to the grad kernel.
    return grad_fun(
        grad_y,
        x,
        scale,
        op.outputs[3],
        op.outputs[4],
        epsilon=epsilon,
        data_format=data_format,
        is_training=is_training)
  else:
    # In freeze (inference) mode the population statistics are op inputs.
    pop_mean = op.inputs[3]
    pop_var = op.inputs[4]
    if data_format == b"NCHW":
      # The grad kernel only supports NHWC when is_training=False, so
      # transpose NCHW data to NHWC here and transpose grad_x back below.
      x = array_ops.transpose(x, [0, 2, 3, 1])
      grad_y = array_ops.transpose(grad_y, [0, 2, 3, 1])
    dx, dscale, doffset, _, _ = grad_fun(
        grad_y,
        x,
        scale,
        pop_mean,
        pop_var,
        epsilon=epsilon,
        data_format="NHWC",
        is_training=is_training)
    if data_format == b"NCHW":
      dx = array_ops.transpose(dx, [0, 3, 1, 2])
    # No gradients flow to the pop_mean / pop_var inputs.
    return dx, dscale, doffset, None, None
Developer ID: PacktPublishing, Project: Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda, Lines of code: 63, Source file: nn_grad.py
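For context, inside TensorFlow's nn_grad.py a shared helper like _BaseFusedBatchNormGrad is not called directly; it is invoked from thin wrappers registered as the gradients of the V1 and V2 ops. A sketch of how that wiring typically looks (the decorator strings are assumed to match the standard op type names):

from tensorflow.python.framework import ops

# The registered gradient functions are thin wrappers that pick the fused
# grad kernel via the use_v2 flag of the shared helper above.
@ops.RegisterGradient("FusedBatchNorm")
def _FusedBatchNormGrad(op, *grad):
  return _BaseFusedBatchNormGrad(op, False, *grad)

@ops.RegisterGradient("FusedBatchNormV2")
def _FusedBatchNormGradV2(op, *grad):
  return _BaseFusedBatchNormGrad(op, True, *grad)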