

Python math_ops.cumsum Method Code Examples

This article collects typical usage examples of the Python method tensorflow.python.ops.math_ops.cumsum. If you are wondering what exactly math_ops.cumsum does, how to call it, or what real-world usage looks like, the curated code examples below should help. You can also explore further usage examples from the enclosing module, tensorflow.python.ops.math_ops.


The following presents 12 code examples of the math_ops.cumsum method, sorted by popularity by default.
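Before diving in, a brief sketch of the `exclusive` and `reverse` flags that several of the snippets below rely on, shown here through `tf.math.cumsum`, the public alias backed by `math_ops.cumsum`:

import tensorflow as tf

x = tf.constant([1, 2, 3, 4])
print(tf.math.cumsum(x))                  # [1 3 6 10]
print(tf.math.cumsum(x, exclusive=True))  # [0 1 3 6]  -- sum of strictly earlier elements
print(tf.math.cumsum(x, reverse=True))    # [10 9 7 4] -- accumulate from the right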

Example 1: safe_cumprod

# Required import: from tensorflow.python.ops import math_ops [as alias]
# Or: from tensorflow.python.ops.math_ops import cumsum [as alias]
# Also needed: from tensorflow.python.framework import ops; from tensorflow.python.ops import clip_ops; import numpy as np
def safe_cumprod(x, *args, **kwargs):
    """Computes cumprod of x in logspace using cumsum to avoid underflow.

    The cumprod function and its gradient can result in numerical instabilities
    when its argument has very small and/or zero values.  As long as the argument
    is all positive, we can instead compute the cumulative product as
    exp(cumsum(log(x))).  This function can be called identically to tf.cumprod.

    Args:
      x: Tensor to take the cumulative product of.
      *args: Passed on to cumsum; these are identical to those in cumprod.
      **kwargs: Passed on to cumsum; these are identical to those in cumprod.
    Returns:
      Cumulative product of x.
    """
    with ops.name_scope(None, "SafeCumprod", [x]):
        x = ops.convert_to_tensor(x, name="x")
        tiny = np.finfo(x.dtype.as_numpy_dtype).tiny
        return math_ops.exp(math_ops.cumsum(
            math_ops.log(clip_ops.clip_by_value(x, tiny, 1)), *args, **kwargs)) 
Author: HareeshBahuleyan, project: tf-var-attention, source file: attention_wrapper.py
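To see why the log-space form is safe, here is a NumPy sketch of the same identity (the helper name `safe_cumprod_np` is made up for this illustration): for positive `x`, `cumprod(x) == exp(cumsum(log(x)))`, and clipping to `tiny` keeps the log finite when `x` contains exact zeros.

import numpy as np

def safe_cumprod_np(x):
    # Same identity as above, in NumPy: clip to `tiny` so log(0) never yields -inf.
    tiny = np.finfo(x.dtype).tiny
    return np.exp(np.cumsum(np.log(np.clip(x, tiny, 1.0))))

x = np.array([0.5, 0.25, 0.0, 0.125], dtype=np.float32)
print(np.cumprod(x))       # [0.5   0.125 0.    0.   ]
print(safe_cumprod_np(x))  # [0.5   0.125 ~1e-39 ~2e-40] -- tiny but finite, so
                           # downstream logs and gradients stay well-defined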

Example 2: safe_cumprod

# Required import: from tensorflow.python.ops import math_ops [as alias]
# Or: from tensorflow.python.ops.math_ops import cumsum [as alias]
# Also needed: from tensorflow.python.framework import ops; from tensorflow.python.ops import clip_ops; import numpy as np
def safe_cumprod(x, *args, **kwargs):
  """Computes cumprod of x in logspace using cumsum to avoid underflow.

  The cumprod function and its gradient can result in numerical instabilities
  when its argument has very small and/or zero values.  As long as the argument
  is all positive, we can instead compute the cumulative product as
  exp(cumsum(log(x))).  This function can be called identically to tf.cumprod.

  Args:
    x: Tensor to take the cumulative product of.
    *args: Passed on to cumsum; these are identical to those in cumprod.
    **kwargs: Passed on to cumsum; these are identical to those in cumprod.
  Returns:
    Cumulative product of x.
  """
  with ops.name_scope(None, "SafeCumprod", [x]):
    x = ops.convert_to_tensor(x, name="x")
    tiny = np.finfo(x.dtype.as_numpy_dtype).tiny
    return math_ops.exp(math_ops.cumsum(
        math_ops.log(clip_ops.clip_by_value(x, tiny, 1)), *args, **kwargs)) 
Author: yanghoonkim, project: NQG_ASs2s, source file: attention_wrapper_mod.py

Example 3: safe_cumprod

# Required import: from tensorflow.python.ops import math_ops [as alias]
# Or: from tensorflow.python.ops.math_ops import cumsum [as alias]
# Also needed: from tensorflow.python.framework import ops; from tensorflow.python.ops import clip_ops; import numpy as np
def safe_cumprod(x, *args, **kwargs):
  """Computes cumprod of x in logspace using cumsum to avoid underflow.

  The cumprod function and its gradient can result in numerical instabilities
  when its argument has very small and/or zero values.  As long as the argument
  is all positive, we can instead compute the cumulative product as
  exp(cumsum(log(x))).  This function can be called identically to tf.cumprod.

  Args:
    x: Tensor to take the cumulative product of.
    *args: Passed on to cumsum; these are identical to those in cumprod.
    **kwargs: Passed on to cumsum; these are identical to those in cumprod.
  Returns:
    Cumulative product of x.
  """
  with ops.name_scope(None, "SafeCumprod", [x]):
    x = ops.convert_to_tensor(x, name="x")
    tiny = np.finfo(x.dtype.as_numpy_dtype).tiny
    return math_ops.exp(
        math_ops.cumsum(
            math_ops.log(clip_ops.clip_by_value(x, tiny, 1)), *args, **kwargs
        )
    ) 
Author: NVIDIA, project: OpenSeq2Seq, source file: attention_wrapper.py

Example 4: _CumsumGrad

# Required import: from tensorflow.python.ops import math_ops [as alias]
# Or: from tensorflow.python.ops.math_ops import cumsum [as alias]
def _CumsumGrad(op, grad):
  """Gradient for the cumsum op: cumsum of `grad` in the opposite direction."""
  axis = op.inputs[1]
  exclusive = op.get_attr("exclusive")
  reverse = op.get_attr("reverse")
  # Each input element x[j] feeds every output at or past j, so its gradient
  # is the sum of upstream gradients from j onward: cumsum(grad) with `reverse`
  # flipped. The trailing None is the gradient w.r.t. the integer `axis` input.
  return [
      math_ops.cumsum(
          grad, axis, exclusive=exclusive, reverse=not reverse), None
  ] 
Author: ryfeus, project: lambda-packs, source file: math_grad.py
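A quick numeric check of this identity with the public TF 2.x API (rather than the internal gradient registration above): the vector-Jacobian product of `cumsum` is the reversed cumsum of the upstream gradient.

import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0, 4.0])
g = tf.constant([10.0, 20.0, 30.0, 40.0])  # upstream gradient
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.math.cumsum(x)
grad = tape.gradient(y, x, output_gradients=g)
print(grad)                             # [100.  90.  70.  40.]
print(tf.math.cumsum(g, reverse=True))  # same thing: grad[j] = sum_{i >= j} g[i]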

Example 5: _CumprodGrad

# Required import: from tensorflow.python.ops import math_ops [as alias]
# Or: from tensorflow.python.ops.math_ops import cumsum [as alias]
def _CumprodGrad(op, grad):
  """Gradient for the cumprod op, accumulated with a reversed cumsum."""
  x = op.inputs[0]
  axis = op.inputs[1]
  exclusive = op.get_attr("exclusive")
  reverse = op.get_attr("reverse")

  # TODO This fails when x contains 0 and should be fixed
  # dL/dx[j] = (1/x[j]) * sum_{i>=j} grad[i] * cumprod(x)[i]: accumulate the
  # products in the opposite direction, then divide by x once at the end.
  prod = math_ops.cumprod(x, axis, exclusive=exclusive, reverse=reverse)
  out = math_ops.cumsum(
      prod * grad, axis, exclusive=exclusive, reverse=not reverse)
  return [out / x, None] 
Author: ryfeus, project: lambda-packs, source file: math_grad.py
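The same kind of check for the cumprod gradient, again via the public API; note the division by `x` that the TODO above warns about (it produces NaN when `x` contains zeros).

import tensorflow as tf

x = tf.constant([2.0, 3.0, 4.0])
g = tf.constant([1.0, 1.0, 1.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.math.cumprod(x)  # [2. 6. 24.]
grad = tape.gradient(y, x, output_gradients=g)
# grad[j] = (1/x[j]) * sum_{i>=j} g[i] * cumprod(x)[i]  ->  [16. 10. 6.]
print(grad)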

Example 6: cumsum

# Required import: from tensorflow.python.ops import math_ops [as alias]
# Or: from tensorflow.python.ops.math_ops import cumsum [as alias]
# Note: `ndim` and `_normalize_axis` below are helpers defined in the same Keras backend module.
def cumsum(x, axis=0):
  """Cumulative sum of the values in a tensor, alongside the specified axis.

  Arguments:
      x: A tensor or variable.
      axis: An integer, the axis to compute the sum.

  Returns:
      A tensor of the cumulative sum of values of `x` along `axis`.
  """
  axis = _normalize_axis(axis, ndim(x))
  return math_ops.cumsum(x, axis=axis) 
Author: ryfeus, project: lambda-packs, source file: backend.py
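For reference, the public counterpart of this backend helper behaves the same way:

import tensorflow as tf

x = tf.constant([[1., 2., 3.],
                 [4., 5., 6.]])
print(tf.keras.backend.cumsum(x, axis=1))
# [[ 1.  3.  6.]
#  [ 4.  9. 15.]]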

Example 7: _strict_1d_cumsum

# Required import: from tensorflow.python.ops import math_ops [as alias]
# Or: from tensorflow.python.ops.math_ops import cumsum [as alias]
def _strict_1d_cumsum(tensor, len_tensor):
  """Cumsum of a 1D tensor with fully defined shape, truncated to `len_tensor` entries."""
  return math_ops.cumsum(tensor)[:len_tensor] 
Author: google-research, project: tf-slim, source file: histogram_ops.py

Example 8: _CumsumGrad

# Required import: from tensorflow.python.ops import math_ops [as alias]
# Or: from tensorflow.python.ops.math_ops import cumsum [as alias]
def _CumsumGrad(op, grad):
  axis = op.inputs[1]
  exclusive = op.get_attr("exclusive")
  reverse = op.get_attr("reverse")
  return [math_ops.cumsum(grad, axis, exclusive=exclusive,
                          reverse=not reverse), None] 
Author: tobegit3hub, project: deep_image_model, source file: math_grad.py

Example 9: _CumprodGrad

# Required import: from tensorflow.python.ops import math_ops [as alias]
# Or: from tensorflow.python.ops.math_ops import cumsum [as alias]
def _CumprodGrad(op, grad):
  x = op.inputs[0]
  axis = op.inputs[1]
  exclusive = op.get_attr("exclusive")
  reverse = op.get_attr("reverse")

  # TODO This fails when x contains 0 and should be fixed
  prod = math_ops.cumprod(x, axis, exclusive=exclusive, reverse=reverse)
  out = math_ops.cumsum(prod * grad, axis, exclusive=exclusive,
                        reverse=not reverse)
  return [out / x, None] 
Author: tobegit3hub, project: deep_image_model, source file: math_grad.py

Example 10: cumsum

# Required import: from tensorflow.python.ops import math_ops [as alias]
# Or: from tensorflow.python.ops.math_ops import cumsum [as alias]
def cumsum(x, axis=0):
  """Cumulative sum of the values in a tensor, alongside the specified axis.

  Arguments:
      x: A tensor or variable.
      axis: An integer, the axis to compute the sum.

  Returns:
      A tensor of the cumulative sum of values of `x` along `axis`.
  """
  return math_ops.cumsum(x, axis=axis) 
Author: PacktPublishing, project: Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda, source file: backend.py

Example 11: sparsemax

# Required import: from tensorflow.python.ops import math_ops [as alias]
# Or: from tensorflow.python.ops.math_ops import cumsum [as alias]
# Also needed: from tensorflow.python.framework import dtypes, ops; from tensorflow.python.ops import array_ops, nn
def sparsemax(logits, name=None):
  """Computes sparsemax activations [1].

  For each batch `i` and class `j` we have
    sparsemax[i, j] = max(logits[i, j] - tau(logits[i, :]), 0)

  [1]: https://arxiv.org/abs/1602.02068

  Args:
    logits: A `Tensor`. Must be one of the following types: `half`, `float32`,
      `float64`.
    name: A name for the operation (optional).

  Returns:
    A `Tensor`. Has the same type as `logits`.
  """

  with ops.name_scope(name, "sparsemax", [logits]) as name:
    logits = ops.convert_to_tensor(logits, name="logits")
    obs = array_ops.shape(logits)[0]
    dims = array_ops.shape(logits)[1]

    z = logits - math_ops.reduce_mean(logits, axis=1)[:, array_ops.newaxis]

    # sort z
    z_sorted, _ = nn.top_k(z, k=dims)

    # calculate k(z)
    z_cumsum = math_ops.cumsum(z_sorted, axis=1)
    k = math_ops.range(
        1, math_ops.cast(dims, logits.dtype) + 1, dtype=logits.dtype)
    z_check = 1 + k * z_sorted > z_cumsum
    # because the z_check vector is always [1,1,...1,0,0,...0] finding the
    # (index + 1) of the last `1` is the same as just summing the number of 1.
    k_z = math_ops.reduce_sum(math_ops.cast(z_check, dtypes.int32), axis=1)

    # calculate tau(z)
    indices = array_ops.stack([math_ops.range(0, obs), k_z - 1], axis=1)
    tau_sum = array_ops.gather_nd(z_cumsum, indices)
    tau_z = (tau_sum - 1) / math_ops.cast(k_z, logits.dtype)

    # calculate p
    return math_ops.maximum(
        math_ops.cast(0, logits.dtype), z - tau_z[:, array_ops.newaxis]) 
Author: ryfeus, project: lambda-packs, source file: sparsemax.py
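To make the cumsum-based support computation concrete, here is a single-row NumPy sketch of the same algorithm (the name `sparsemax_np` is made up for this illustration):

import numpy as np

def sparsemax_np(z):
    # One-row NumPy version of the projection above.
    z = z - z.mean()                       # same translation as the TF code
    z_sorted = np.sort(z)[::-1]            # sort descending
    z_cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > z_cumsum  # always of the form [1,1,...,1,0,...,0]
    k_z = support.sum()                    # size of the support, k(z)
    tau = (z_cumsum[k_z - 1] - 1) / k_z    # threshold tau(z)
    return np.maximum(z - tau, 0.0)

p = sparsemax_np(np.array([2.0, 1.5, 0.1, -1.0]))
print(p, p.sum())  # [0.75 0.25 0.   0.  ] 1.0 -- sparse and sums to one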

Example 12: _broadcast_ragged_sources_for_overlap

# Required import: from tensorflow.python.ops import math_ops [as alias]
# Or: from tensorflow.python.ops.math_ops import cumsum [as alias]
# Also needed: from tensorflow.python.ops import array_ops; from tensorflow.python.ops.ragged import ragged_gather_ops, ragged_tensor, segment_id_ops
def _broadcast_ragged_sources_for_overlap(source_start, source_limit,
                                          target_splits):
  """Repeats source indices for each target item in the same batch.

  Args:
    source_start: `<int>[batch_size, (source_size)]`
    source_limit: `<int>[batch_size, (source_size)]`
    target_splits: `<int64>[batch_size, (target_size+1)]`

  Returns:
    A tuple of tensors `(tiled_source_start, tiled_source_limit)`, each of
    shape `<int>[batch_size, (source_size), (target_size)]`, where:

    * `tiled_source_start[b, s, t] = source_start[b, s]`
    * `tiled_source_limit[b, s, t] = source_limit[b, s]`
  """
  source_splits = source_start.row_splits
  target_rowlens = target_splits[1:] - target_splits[:-1]
  source_batch_ids = segment_id_ops.row_splits_to_segment_ids(source_splits)

  # <int64>[sum(source_size[b] for b in range(batch_size))]
  # source_repeats[i] is the number of target spans in the batch that contains
  # source span i.  We need to add a new ragged dimension that repeats each
  # source span this number of times.
  source_repeats = ragged_gather_ops.gather(target_rowlens, source_batch_ids)

  # <int64>[sum(source_size[b] for b in range(batch_size)) + 1]
  # The row_splits tensor for the inner ragged dimension of the result tensors.
  inner_splits = array_ops.concat([[0], math_ops.cumsum(source_repeats)],
                                  axis=0)

  # <int64>[sum(source_size[b] * target_size[b] for b in range(batch_size))]
  # Indices for gathering source indices.
  source_indices = segment_id_ops.row_splits_to_segment_ids(inner_splits)

  source_start = ragged_tensor.RaggedTensor.from_nested_row_splits(
      array_ops.gather(source_start.values, source_indices),
      [source_splits, inner_splits])
  source_limit = ragged_tensor.RaggedTensor.from_nested_row_splits(
      array_ops.gather(source_limit.values, source_indices),
      [source_splits, inner_splits])

  return source_start, source_limit 
Author: tensorflow, project: text, source file: pointer_ops.py
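The `concat([[0], cumsum(row_lengths)])` pattern used to build `inner_splits` above is the standard way to turn ragged row lengths into a row-splits tensor; a minimal sketch with the public `tf.RaggedTensor` API:

import tensorflow as tf

row_lengths = tf.constant([2, 0, 3])
row_splits = tf.concat([[0], tf.math.cumsum(row_lengths)], axis=0)
print(row_splits)  # [0 2 2 5]

rt = tf.RaggedTensor.from_row_splits([10, 20, 30, 40, 50], row_splits)
print(rt)  # <tf.RaggedTensor [[10, 20], [], [30, 40, 50]]>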

