This page collects typical usage examples of the Python method torch.Tensor.sigmoid. If you have been wondering how exactly to use Tensor.sigmoid, or what it looks like in real code, the curated examples below may help. You can also explore further usage examples of the class it belongs to, torch.Tensor.
The following shows 1 code example of the Tensor.sigmoid method, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Python code examples.
Example 1: _greedy_decode
# Required import: from torch import Tensor [as alias]
# Or: from torch.Tensor import sigmoid [as alias]
from typing import Tuple

import numpy
import torch

def _greedy_decode(arc_scores: torch.Tensor,
                   arc_tag_logits: torch.Tensor,
                   mask: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Decodes the head and head tag predictions by decoding the unlabeled arcs
independently for each word and then again, predicting the head tags of
these greedily chosen arcs independently.
Parameters
----------
arc_scores : ``torch.Tensor``, required.
A tensor of shape (batch_size, sequence_length, sequence_length) used to generate
a distribution over attachments of a given word to all other words.
arc_tag_logits : ``torch.Tensor``, required.
A tensor of shape (batch_size, sequence_length, sequence_length, num_tags) used to
generate a distribution over tags for each arc.
mask : ``torch.Tensor``, required.
A mask of shape (batch_size, sequence_length).
Returns
-------
arc_probs : ``torch.Tensor``
A tensor of shape (batch_size, sequence_length, sequence_length) representing the
probability of an arc being present for this edge.
arc_tag_probs : ``torch.Tensor``
A tensor of shape (batch_size, sequence_length, sequence_length, sequence_length)
representing the distribution over edge tags for a given edge.
"""
    # Mask the diagonal, because we don't want self edges.
    inf_diagonal_mask = torch.diag(arc_scores.new(mask.size(1)).fill_(-numpy.inf))
    arc_scores = arc_scores + inf_diagonal_mask
    # shape (batch_size, sequence_length, sequence_length, num_tags)
    arc_tag_logits = arc_tag_logits + inf_diagonal_mask.unsqueeze(0).unsqueeze(-1)
    # Mask padded tokens, because we only want to consider actual word -> word edges.
    minus_mask = (1 - mask).byte().unsqueeze(2)
    arc_scores.masked_fill_(minus_mask, -numpy.inf)
    arc_tag_logits.masked_fill_(minus_mask.unsqueeze(-1), -numpy.inf)
    # shape (batch_size, sequence_length, sequence_length)
    arc_probs = arc_scores.sigmoid()
    # shape (batch_size, sequence_length, sequence_length, num_tags)
    arc_tag_probs = torch.nn.functional.softmax(arc_tag_logits, dim=-1)
    return arc_probs, arc_tag_probs
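Outside the full parser, the mask-then-sigmoid pattern the example relies on can be tried on its own. The sketch below uses made-up shapes and random tensors (batch size, sequence length, and tag count are arbitrary here, not from the original model) to show how the diagonal and padding masks drive the corresponding sigmoid probabilities to zero:

```python
import torch

batch_size, seq_len, num_tags = 2, 4, 3
arc_scores = torch.randn(batch_size, seq_len, seq_len)
arc_tag_logits = torch.randn(batch_size, seq_len, seq_len, num_tags)
# 1 marks a real token, 0 marks padding.
mask = torch.tensor([[1, 1, 1, 0],
                     [1, 1, 0, 0]])

# Put -inf on the diagonal so no token attaches to itself.
diag = torch.diag(torch.full((seq_len,), float("-inf")))
arc_scores = arc_scores + diag
arc_tag_logits = arc_tag_logits + diag.unsqueeze(0).unsqueeze(-1)

# Put -inf on rows that correspond to padded tokens.
minus_mask = (mask == 0).unsqueeze(2)
arc_scores = arc_scores.masked_fill(minus_mask, float("-inf"))
arc_tag_logits = arc_tag_logits.masked_fill(minus_mask.unsqueeze(-1), float("-inf"))

# sigmoid(-inf) == 0, so masked arcs get probability exactly 0;
# every other arc gets an independent Bernoulli probability.
arc_probs = arc_scores.sigmoid()
arc_tag_probs = torch.nn.functional.softmax(arc_tag_logits, dim=-1)

print(arc_probs.shape)      # (2, 4, 4)
print(arc_tag_probs.shape)  # (2, 4, 4, 3)
```

Note the design choice the example illustrates: sigmoid (not softmax) is applied over the arc scores, so each potential edge is scored independently, which is what allows a graph parser to predict multiple heads per word.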