

Python array_ops.prevent_gradient Function Code Examples

This article collects typical usage examples of the Python function tensorflow.python.ops.array_ops.prevent_gradient. If you are wondering what prevent_gradient does and how to use it in practice, the selected code examples below should help.


Four code examples of the prevent_gradient function are shown below, sorted by popularity by default.
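Before the TensorFlow examples, here is a framework-free sketch of the idea behind prevent_gradient: the forward pass is the identity, but any attempt to differentiate through the marked tensor raises a LookupError instead of silently producing a wrong (or zero) gradient. All names in this sketch are illustrative, not TensorFlow API.

```python
# Conceptual sketch only; assumes nothing beyond the Python standard library.

class Tensor:
    """A toy tensor that carries a 'gradient blocked' flag."""
    def __init__(self, value, grad_blocked=False, message=""):
        self.value = value
        self.grad_blocked = grad_blocked
        self.message = message

def prevent_gradient(t, message="Gradient explicitly disabled"):
    # Identity in the forward direction; poisons the backward direction.
    return Tensor(t.value, grad_blocked=True, message=message)

def gradients(output):
    # A real autodiff engine would walk the whole graph; this sketch only
    # checks the flag on the output tensor.
    if output.grad_blocked:
        raise LookupError(output.message)
    return 1.0  # d(identity)/dx

x = Tensor(3.0)
y = prevent_gradient(x)
assert y.value == 3.0  # forward pass is unchanged
try:
    gradients(y)
except LookupError as e:
    print("gradient blocked:", e)
```

This mirrors the behavior tested in Example 3 below, where calling gradients() on a prevent_gradient output raises a LookupError.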

Example 1: _SoftmaxCrossEntropyWithLogitsGrad

def _SoftmaxCrossEntropyWithLogitsGrad(op, grad_0, _):
  """Gradient function for SoftmaxCrossEntropyWithLogits."""
  # grad_0 is the backprop for cost, and we multiply it with the gradients
  # (which is output[1])
  # There is no gradient for the labels
  #
  # Currently there is no way to take the second derivative of this op
  # due to the fused implementation's interaction with tf.gradients(),
  # so we make sure we prevent silently incorrect results by raising
  # an error if the second derivative is requested via prevent_gradient.
  softmax_grad_without_gradient = array_ops.prevent_gradient(op.outputs[1])
  return _BroadcastMul(grad_0, softmax_grad_without_gradient), None
Developer: BloodD, project: tensorflow, lines: 12, source file: nn_grad.py
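Examples 1 and 2 both pass the result through _BroadcastMul, a private helper in nn_grad.py whose definition is not shown on this page. Assuming it expands the per-example cost gradient on a trailing axis and multiplies it against the fused op's per-class gradient, the broadcasting involved can be sketched in NumPy (shapes and the helper name here are assumptions, not the actual TensorFlow implementation):

```python
import numpy as np

def broadcast_mul(vec, mat):
    # Assumed shape convention: vec is [batch] (the backprop for the scalar
    # cost per example), mat is [batch, num_classes] (op.outputs[1], the
    # fused op's per-class gradient). Expanding vec to [batch, 1] lets
    # NumPy broadcast the multiplication across the class axis.
    return vec[:, np.newaxis] * mat

grad_0 = np.array([1.0, 0.5])                 # backprop for the cost
softmax_grad = np.array([[0.2, -0.1, -0.1],
                         [0.3,  0.4, -0.7]])  # stand-in for op.outputs[1]
print(broadcast_mul(grad_0, softmax_grad))
```

Each row of the result is that example's class-wise gradient scaled by its incoming cost gradient.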

Example 2: _CTCLossGrad

def _CTCLossGrad(op, grad_loss, _):
  """The derivative provided by CTC Loss.

  Args:
     op: the CTCLoss op.
     grad_loss: The backprop for cost.

  Returns:
     The CTC Loss gradient.
  """
  # Outputs are: loss, grad
  #
  # Currently there is no way to take the second derivative of this op
  # due to the fused implementation's interaction with tf.gradients(),
  # so we make sure we prevent silently incorrect results by raising
  # an error if the second derivative is requested via prevent_gradient.
  grad_without_gradient = array_ops.prevent_gradient(op.outputs[1])
  # Return gradient for inputs and None for
  # labels_indices, labels_values and sequence_length
  return [_BroadcastMul(grad_loss, grad_without_gradient), None, None, None]
Developer: AliMiraftab, project: tensorflow, lines: 20, source file: ctc_ops.py

Example 3: testPreventGradient

def testPreventGradient(self):
  with ops.Graph().as_default():
    inp = constant(1.0, shape=[100, 32], name="in")
    out = array_ops.prevent_gradient(inp)
    with self.assertRaisesRegexp(LookupError, "explicitly disabled"):
      _ = gradients.gradients(out, inp)
Developer: didukhle, project: tensorflow, lines: 6, source file: gradients_test.py

Example 4: _FuzzyCTCLossGrad

def _FuzzyCTCLossGrad(op, grad_loss, _):
  grad_without_gradient = array_ops.prevent_gradient(
      op.outputs[1],
      message="Currently there is no way to take the second derivative of "
              "ctc_loss due to the fused implementation's interaction with "
              "tf.gradients()")
  return [_BroadcastMul(tf.expand_dims(grad_loss, -1), grad_without_gradient),
          None, None, None]
Developer: AIRob, project: calamari, lines: 6, source file: tensorflow_fuzzy_ctc_loader.py


Note: the tensorflow.python.ops.array_ops.prevent_gradient examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects, and copyright remains with the original authors; consult each project's license before redistributing or using them. Do not reproduce without permission.