This article collects typical usage examples of the Python method tensorflow.python.util.deprecation.deprecated_argument_lookup. If you are wondering what deprecation.deprecated_argument_lookup does, how to call it, or what real-world usage looks like, the curated examples below may help. You can also browse further usage examples for the enclosing module, tensorflow.python.util.deprecation.
Three code examples of deprecation.deprecated_argument_lookup are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
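Before the examples, here is a minimal sketch (not from the original page; the scale function and its factor/multiplier arguments are hypothetical) of what deprecated_argument_lookup does: it picks between a new keyword argument and its deprecated alias, returning whichever was supplied, and TensorFlow's implementation raises a ValueError if both are given.

from tensorflow.python.util import deprecation

# Hypothetical wrapper for illustration only.
def scale(value, factor=None, multiplier=None):
  # Prefer the new name `factor`; fall back to the deprecated `multiplier`.
  # Passing both names with non-None values raises a ValueError.
  factor = deprecation.deprecated_argument_lookup("factor", factor,
                                                  "multiplier", multiplier)
  return value * factor

print(scale(3.0, multiplier=2.0))  # old name still resolves -> 6.0
print(scale(3.0, factor=2.0))      # new name -> 6.0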
Example 1: reverse_sequence
# Required import: from tensorflow.python.util import deprecation [as alias]
# Or: from tensorflow.python.util.deprecation import deprecated_argument_lookup [as alias]
from tensorflow.python.ops import gen_array_ops
from tensorflow.python.util import deprecation


# pylint: disable=redefined-builtin
def reverse_sequence(input,
                     seq_lengths,
                     seq_axis=None,
                     batch_axis=None,
                     name=None,
                     seq_dim=None,
                     batch_dim=None):
  # Resolve the deprecated seq_dim/batch_dim names onto seq_axis/batch_axis.
  seq_axis = deprecation.deprecated_argument_lookup("seq_axis", seq_axis,
                                                    "seq_dim", seq_dim)
  batch_axis = deprecation.deprecated_argument_lookup("batch_axis", batch_axis,
                                                      "batch_dim", batch_dim)
  return gen_array_ops.reverse_sequence(
      input=input,
      seq_lengths=seq_lengths,
      seq_dim=seq_axis,
      batch_dim=batch_axis,
      name=name)
# pylint: enable=redefined-builtin
Example 2: reverse_sequence
# Required import: from tensorflow.python.util import deprecation [as alias]
# Or: from tensorflow.python.util.deprecation import deprecated_argument_lookup [as alias]
from tensorflow.python.ops import gen_array_ops
from tensorflow.python.util import deprecation


# pylint: disable=redefined-builtin
def reverse_sequence(input,
                     seq_lengths,
                     seq_axis=None,
                     batch_axis=None,
                     name=None,
                     seq_dim=None,
                     batch_dim=None):
  # Resolve the deprecated seq_dim/batch_dim names onto seq_axis/batch_axis.
  seq_axis = deprecation.deprecated_argument_lookup("seq_axis", seq_axis,
                                                    "seq_dim", seq_dim)
  batch_axis = deprecation.deprecated_argument_lookup("batch_axis", batch_axis,
                                                      "batch_dim", batch_dim)
  return gen_array_ops.reverse_sequence(
      input=input,
      seq_lengths=seq_lengths,
      seq_dim=seq_axis,
      batch_dim=batch_axis,
      name=name)
# pylint: enable=redefined-builtin
Developer: PacktPublishing | Project: Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda | Lines of code: 22 | Source file: array_ops.py
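For context, a brief usage sketch of the wrapper above (the tensors are made-up toy data, and running it as-is assumes an eager TensorFlow setup): calling it with the deprecated seq_dim/batch_dim names still works, because deprecated_argument_lookup maps them onto seq_axis/batch_axis.

import tensorflow as tf

# Toy batch of 2 sequences padded to length 3 (illustrative values).
x = tf.constant([[1, 2, 3],
                 [4, 5, 6]])
lengths = tf.constant([2, 3])

# Old-style call with the deprecated names...
reversed_old = reverse_sequence(x, lengths, seq_dim=1, batch_dim=0)
# ...is equivalent to the new-style call.
reversed_new = reverse_sequence(x, lengths, seq_axis=1, batch_axis=0)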
Example 3: cosine_distance
# Required import: from tensorflow.python.util import deprecation [as alias]
# Or: from tensorflow.python.util.deprecation import deprecated_argument_lookup [as alias]
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import math_ops
from tensorflow.python.util.deprecation import deprecated_argument_lookup


def cosine_distance(predictions,
                    labels=None,
                    axis=None,
                    weights=1.0,
                    scope=None,
                    dim=None):
  """Adds a cosine-distance loss to the training procedure.

  Note that the function assumes that `predictions` and `labels` are already
  unit-normalized.

  Args:
    predictions: An arbitrary matrix.
    labels: A `Tensor` whose shape matches `predictions`.
    axis: The dimension along which the cosine distance is computed.
    weights: Coefficients for the loss: a scalar, a tensor of shape
      `[batch_size]`, or a tensor whose shape matches `predictions`.
    scope: The scope for the operations performed in computing the loss.
    dim: The old (deprecated) name for `axis`.

  Returns:
    A scalar `Tensor` representing the loss value.

  Raises:
    ValueError: If `predictions` shape doesn't match `labels` shape, or if
      `weights` is `None`.
  """
  # Resolve the deprecated `dim` argument onto `axis`.
  axis = deprecated_argument_lookup("axis", axis, "dim", dim)
  if axis is None:
    raise ValueError("You must specify 'axis'.")
  with ops.name_scope(scope, "cosine_distance_loss",
                      [predictions, labels, weights]) as scope:
    predictions.get_shape().assert_is_compatible_with(labels.get_shape())
    predictions = math_ops.cast(predictions, dtypes.float32)
    labels = math_ops.cast(labels, dtypes.float32)
    # Cosine distance is 1 - sum(predictions * labels) along `axis`,
    # which holds because both inputs are assumed unit-normalized.
    radial_diffs = math_ops.multiply(predictions, labels)
    losses = 1 - math_ops.reduce_sum(radial_diffs, axis=[axis])
    # compute_weighted_loss is defined in the same losses module.
    return compute_weighted_loss(losses, weights, scope=scope)
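To make the computation concrete, here is a small NumPy sketch (the vectors are made-up, unit-normalized toy data, not from the original code) of the core of this loss: 1 minus the sum of element-wise products along axis.

import numpy as np

# Two pairs of unit-normalized 2-D vectors (toy data).
predictions = np.array([[1.0, 0.0],
                        [0.6, 0.8]])
labels = np.array([[0.0, 1.0],
                   [0.6, 0.8]])

# Core of cosine_distance with axis=1:
losses = 1.0 - np.sum(predictions * labels, axis=1)
print(losses)  # [1. 0.]: orthogonal pair -> 1, identical pair -> 0
# compute_weighted_loss then reduces these per-example values to a single
# scalar (for the default weights=1.0, roughly their mean).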