

Python gen_math_ops._rsqrt_grad method: code examples

This article collects typical usage examples of the tensorflow.python.ops.gen_math_ops._rsqrt_grad method in Python. If you are wondering what gen_math_ops._rsqrt_grad does and how to use it, the code examples selected below should help. You can also explore further usage of the module it belongs to, tensorflow.python.ops.gen_math_ops.


The following shows 5 code examples of the gen_math_ops._rsqrt_grad method, sorted by popularity by default.

Example 1: _RsqrtGrad

# Required import: from tensorflow.python.ops import gen_math_ops [as alias]
# Or: from tensorflow.python.ops.gen_math_ops import _rsqrt_grad [as alias]
def _RsqrtGrad(op, grad):
  """Returns -0.5 * grad * conj(y)^3."""
  y = op.outputs[0]  # y = x^(-1/2)
  return gen_math_ops._rsqrt_grad(y, grad) 
Author: ryfeus, Project: lambda-packs, Lines: 6, Source: math_grad.py
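
The formula in the docstring is just the derivative of the forward op: for y = x^(-1/2), dy/dx = -0.5 * x^(-3/2) = -0.5 * y^3 (conjugated for complex dtypes). Below is a minimal verification sketch, assuming a TF 1.x graph-mode environment where gen_math_ops still exposes the protected _rsqrt_grad wrapper (newer releases expose the same generated op without the leading underscore):

import numpy as np
import tensorflow as tf
from tensorflow.python.ops import gen_math_ops

x = np.array([0.25, 1.0, 4.0], dtype=np.float32)
grad = np.array([1.0, 2.0, 3.0], dtype=np.float32)  # upstream gradient dL/dy
y = 1.0 / np.sqrt(x)                                # forward output y = x^(-1/2)

with tf.Session() as sess:
  # pylint: disable=protected-access
  dx = sess.run(gen_math_ops._rsqrt_grad(y, grad))

# dL/dx should equal grad * dy/dx = -0.5 * grad * y^3.
np.testing.assert_allclose(dx, -0.5 * grad * y ** 3, rtol=1e-5)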

Example 2: _RsqrtGradGrad

# Required import: from tensorflow.python.ops import gen_math_ops [as alias]
# Or: from tensorflow.python.ops.gen_math_ops import _rsqrt_grad [as alias]
def _RsqrtGradGrad(op, grad):
  """Returns backprop gradient for f(a,b) = -0.5 * b * conj(a)^3."""
  a = op.inputs[0]  # a = x^{-1/2}
  b = op.inputs[1]  # backprop gradient for a
  with ops.control_dependencies([grad.op]):
    ca = math_ops.conj(a)
    cg = math_ops.conj(grad)
    grad_a = -1.5 * cg * b * math_ops.square(ca)
    # pylint: disable=protected-access
    grad_b = gen_math_ops._rsqrt_grad(ca, grad)
    return grad_a, grad_b 
Author: ryfeus, Project: lambda-packs, Lines: 13, Source: math_grad.py
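
Where the two returned gradients come from (a derivation sketch, not part of the original example; the real-valued case is shown, with conj(·) supplying the complex version): treat the first-order gradient as a function f(a, b) = -0.5 * b * a^3 of its two inputs and apply the chain rule:

f(a, b) = -\tfrac{1}{2}\, b\, a^{3}, \qquad a = x^{-1/2}
\frac{\partial f}{\partial a} = -\tfrac{3}{2}\, b\, a^{2}
  \;\Rightarrow\; \mathrm{grad}_a = -\tfrac{3}{2}\, \mathrm{grad}\, b\, a^{2}
\frac{\partial f}{\partial b} = -\tfrac{1}{2}\, a^{3}
  \;\Rightarrow\; \mathrm{grad}_b = -\tfrac{1}{2}\, \mathrm{grad}\, a^{3} = \texttt{\_rsqrt\_grad}(a, \mathrm{grad})

This is exactly what the code computes, with ca = conj(a) and cg = conj(grad) substituted for a and grad so that complex dtypes follow TensorFlow's conjugate-gradient convention.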

Example 3: testGradGrad

# Required import: from tensorflow.python.ops import gen_math_ops [as alias]
# Or: from tensorflow.python.ops.gen_math_ops import _rsqrt_grad [as alias]
def testGradGrad(self):
    np.random.seed(7)
    shape = (5,)
    dtype_tols = [(np.float32, 5e-4), (np.float64, 1e-6), (np.complex64, 5e-4),
                  (np.complex128, 1e-6)]
    op_range = [(gen_math_ops._inv_grad, [-2, 2]),
                (gen_math_ops._rsqrt_grad, [0.1, 3]),
                (gen_math_ops._sigmoid_grad, [-2, 2]),
                (gen_math_ops._sqrt_grad, [0.1, 3]),
                (gen_math_ops._tanh_grad, [-2, 2]),]

    def rand(dtype):
      x = np.random.uniform(
          real_range[0], real_range[1], size=shape[0]).astype(dtype)
      if dtype in (np.complex64, np.complex128):
        x += 1j * np.random.uniform(-2, 2, size=shape[0]).astype(dtype)
      return x

    for op, real_range in op_range:
      with self.test_session():
        for dtype, tol in dtype_tols:
          x = tf.constant(rand(dtype))
          y = tf.constant(rand(dtype))
          z = op(x, y)
          grads = tf.test.compute_gradient(
              [x, y], [shape, shape],
              z,
              shape,
              x_init_value=[rand(dtype), rand(dtype)])
          if isinstance(grads, tuple):
            grads = [grads]
          for analytical, numerical in grads:
            self.assertAllClose(analytical, numerical, rtol=tol, atol=tol) 
Author: tobegit3hub, Project: deep_image_model, Lines: 35, Source: cwise_ops_test.py
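
To gradient-check _rsqrt_grad on its own rather than through the whole op/range table above, a smaller sketch can use tf.test.compute_gradient_error, which returns the maximum difference between the analytical and numerically estimated Jacobians (again assuming TF 1.x graph-mode test APIs):

import numpy as np
import tensorflow as tf
from tensorflow.python.ops import gen_math_ops

shape = (5,)
x_val = np.random.uniform(0.1, 3, size=shape).astype(np.float64)
g_val = np.random.uniform(0.1, 3, size=shape).astype(np.float64)

with tf.Session():
  x = tf.constant(x_val)  # stands in for y = rsqrt(input)
  g = tf.constant(g_val)  # stands in for the upstream gradient
  # pylint: disable=protected-access
  z = gen_math_ops._rsqrt_grad(x, g)
  # Max |analytical - numerical| Jacobian entry of z with respect to x.
  err = tf.test.compute_gradient_error(x, shape, z, shape, x_init_value=x_val)
  print("max gradient error:", err)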

Example 4: _RsqrtGrad

# Required import: from tensorflow.python.ops import gen_math_ops [as alias]
# Or: from tensorflow.python.ops.gen_math_ops import _rsqrt_grad [as alias]
def _RsqrtGrad(op, grad):
  """Returns -0.5 * grad * conj(y)^3."""
  y = op.outputs[0]  # y = x^(-1/2)
  # pylint: disable=protected-access
  return gen_math_ops._rsqrt_grad(y, grad)
  # pylint: enable=protected-access 
Author: PacktPublishing, Project: Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda, Lines: 8, Source: math_grad.py

Example 5: _RsqrtGradGrad

# Required import: from tensorflow.python.ops import gen_math_ops [as alias]
# Or: from tensorflow.python.ops.gen_math_ops import _rsqrt_grad [as alias]
def _RsqrtGradGrad(op, grad):
  """Returns backprop gradient for f(a,b) = -0.5 * b * conj(a)^3."""
  a = op.inputs[0]  # a = x^{-1/2}
  b = op.inputs[1]  # backprop gradient for a
  with ops.control_dependencies([grad]):
    ca = math_ops.conj(a)
    cg = math_ops.conj(grad)
    grad_a = -1.5 * cg * b * math_ops.square(ca)
    # pylint: disable=protected-access
    grad_b = gen_math_ops._rsqrt_grad(ca, grad)
    return grad_a, grad_b 
Author: PacktPublishing, Project: Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda, Lines: 13, Source: math_grad.py


Note: the tensorflow.python.ops.gen_math_ops._rsqrt_grad examples in this article were compiled by 纯净天空 from open-source code and documentation hosted on platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective developers; copyright remains with the original authors, and distribution and use should follow each project's license. Do not repost without permission.