This article collects typical usage examples of the Python method tensorflow.python.ops.gen_nn_ops._softmax. If you are wondering what gen_nn_ops._softmax does, how to call it, or what it looks like in real code, the hand-picked examples below may help. You can also read more about the module tensorflow.python.ops.gen_nn_ops in which the method is defined.
Three code examples of the gen_nn_ops._softmax method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Python code examples.
Example 1: softmax
# Required import: from tensorflow.python.ops import gen_nn_ops [as alias]
# Or: from tensorflow.python.ops.gen_nn_ops import _softmax [as alias]
def softmax(logits, dim=-1, name=None):
  """Computes softmax activations.

  For each batch `i` and class `j` we have

      softmax = exp(logits) / reduce_sum(exp(logits), dim)

  Args:
    logits: A non-empty `Tensor`. Must be one of the following types: `half`,
      `float32`, `float64`.
    dim: The dimension softmax would be performed on. The default is -1 which
      indicates the last dimension.
    name: A name for the operation (optional).

  Returns:
    A `Tensor`. Has the same type as `logits`. Same shape as `logits`.

  Raises:
    InvalidArgumentError: if `logits` is empty or `dim` is beyond the last
      dimension of `logits`.
  """
  return _softmax(logits, gen_nn_ops._softmax, dim, name)
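Note that this wrapper delegates to a module-level helper `_softmax` that is defined elsewhere in nn_ops.py and is not shown in the snippet. As a rough illustration of how such a helper can dispatch to the generated op, here is a minimal sketch assuming TF 1.x APIs and a statically known rank; it is not the exact TensorFlow implementation:

from tensorflow.python.ops import array_ops

def _softmax(logits, compute_op, dim=-1, name=None):
  # Applies compute_op (e.g. gen_nn_ops._softmax), which always operates on
  # the last dimension, along the requested `dim`.
  rank = logits.get_shape().ndims
  if dim == -1 or dim == rank - 1:
    return compute_op(logits, name=name)
  # Swap `dim` with the last axis, apply the op, then swap back.
  perm = list(range(rank))
  perm[dim], perm[-1] = perm[-1], perm[dim]
  transposed = array_ops.transpose(logits, perm)
  return array_ops.transpose(compute_op(transposed), perm, name=name)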
Example 2: log_softmax
# Required import: from tensorflow.python.ops import gen_nn_ops [as alias]
# Or: from tensorflow.python.ops.gen_nn_ops import _softmax [as alias]
def log_softmax(logits, dim=-1, name=None):
  """Computes log softmax activations.

  For each batch `i` and class `j` we have

      logsoftmax = logits - log(reduce_sum(exp(logits), dim))

  Args:
    logits: A non-empty `Tensor`. Must be one of the following types: `half`,
      `float32`, `float64`.
    dim: The dimension softmax would be performed on. The default is -1 which
      indicates the last dimension.
    name: A name for the operation (optional).

  Returns:
    A `Tensor`. Has the same type as `logits`. Same shape as `logits`.

  Raises:
    InvalidArgumentError: if `logits` is empty or `dim` is beyond the last
      dimension of `logits`.
  """
  return _softmax(logits, gen_nn_ops._log_softmax, dim, name)
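The public counterpart of this wrapper is tf.nn.log_softmax. The short sketch below (TF 1.x graph mode assumed, values illustrative) shows that it matches log(softmax(logits)) while being the more numerically stable way to compute that quantity:

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
log_probs = tf.nn.log_softmax(logits)  # uses the fused log-softmax op
naive = tf.log(tf.nn.softmax(logits))  # same values, less stable for extreme logits
with tf.Session() as sess:
  print(sess.run([log_probs, naive]))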
Example 3: softmax
# Required import: from tensorflow.python.ops import gen_nn_ops [as alias]
# Or: from tensorflow.python.ops.gen_nn_ops import _softmax [as alias]
def softmax(logits, dim=-1, name=None):
  """Computes softmax activations.

  This function performs the equivalent of

      softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), dim)

  Args:
    logits: A non-empty `Tensor`. Must be one of the following types: `half`,
      `float32`, `float64`.
    dim: The dimension softmax would be performed on. The default is -1 which
      indicates the last dimension.
    name: A name for the operation (optional).

  Returns:
    A `Tensor`. Has the same type and shape as `logits`.

  Raises:
    InvalidArgumentError: if `logits` is empty or `dim` is beyond the last
      dimension of `logits`.
  """
  return _softmax(logits, gen_nn_ops._softmax, dim, name)
Developer ID: PacktPublishing, Project: Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda, Lines of code: 24, Source file: nn_ops.py
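None of the examples above calls gen_nn_ops._softmax directly: the generated op is private and only softmaxes over the last dimension, so the public tf.nn.softmax is normally preferred. For completeness, a direct call would look roughly like this, assuming an older TF 1.x release where the underscore-prefixed name still exists (later releases renamed it to gen_nn_ops.softmax):

import tensorflow as tf
from tensorflow.python.ops import gen_nn_ops

logits = tf.constant([[2.0, 1.0, 0.1]])
probs = gen_nn_ops._softmax(logits)  # always softmaxes over the last dimension
with tf.Session() as sess:
  print(sess.run(probs))  # roughly [[0.66, 0.24, 0.10]]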