

Python Tensor.masked_select Method Code Examples

This article collects typical usage examples of the Python method torch.Tensor.masked_select. If you are wondering what Tensor.masked_select does, how to call it, or how it is used in practice, the curated examples below should help. You can also explore further usage examples of its containing class, torch.Tensor.


Two code examples of the Tensor.masked_select method are shown below; by default they are ordered by popularity.
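
Before the two extracted examples, here is a minimal self-contained sketch of what masked_select does; the tensor values are made up purely for illustration:

import torch

x = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
mask = x > 3  # boolean mask with the same shape as x

# masked_select always returns a new, flattened 1-D tensor containing
# the elements of x at the positions where mask is True
selected = x.masked_select(mask)
print(selected)  # tensor([4, 5, 6])

Because the result is flattened, code that needs to keep a trailing dimension (as in the examples below) typically follows masked_select with a view() call.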

Example 1: _loss_helper

# Required import: from torch import Tensor [as alias]
# Or: from torch.Tensor import masked_select [as alias]
# (the snippet below also assumes a plain import torch)
    def _loss_helper(self,  # pylint: disable=inconsistent-return-statements
                     direction: int,
                     direction_embeddings: torch.Tensor,
                     direction_targets: torch.Tensor,
                     token_embeddings: torch.Tensor) -> torch.Tensor:
        mask = direction_targets > 0
        # we need to subtract 1 to undo the padding id since the softmax
        # does not include a padding dimension

        # shape (batch_size * timesteps, )
        non_masked_targets = direction_targets.masked_select(mask) - 1

        # shape (batch_size * timesteps, embedding_dim)
        non_masked_embeddings = direction_embeddings.masked_select(
                mask.unsqueeze(-1)
        ).view(-1, self._forward_dim)
        # note: need to return average loss across forward and backward
        # directions, but total sum loss across all batches.
        # Assuming batches include full sentences, forward and backward
        # directions have the same number of samples, so sum up loss
        # here then divide by 2 just below
        if not self._softmax_loss.tie_embeddings or not self._use_character_inputs:
            return self._softmax_loss(non_masked_embeddings, non_masked_targets)
        else:
            # we also need the token embeddings corresponding to the targets
            raise NotImplementedError("This requires SampledSoftmaxLoss, which isn't implemented yet.")
            # pylint: disable=unreachable
            non_masked_token_embeddings = self._get_target_token_embeddings(token_embeddings, mask, direction)
            return self._softmax(non_masked_embeddings,
                                 non_masked_targets,
                                 non_masked_token_embeddings)
Author: apmoore1 | Project: allennlp | Lines of code: 34 | Source: language_model.py
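
The select-then-reshape pattern above is easy to miss, so here is a standalone sketch of the same idea with hypothetical shapes and a padding id of 0 (none of these variable values come from the AllenNLP source):

import torch

batch_size, timesteps, embedding_dim = 2, 4, 3
embeddings = torch.randn(batch_size, timesteps, embedding_dim)
targets = torch.tensor([[0, 5, 7, 0],
                        [3, 0, 0, 9]])  # 0 is the padding id

mask = targets > 0  # shape (batch_size, timesteps)

# selecting with the raw mask keeps one scalar per non-padded position,
# and subtracting 1 undoes the padding offset, as in _loss_helper above
non_masked_targets = targets.masked_select(mask) - 1

# unsqueeze(-1) lets the mask broadcast over the embedding dimension, so
# the flat 1-D result can be reshaped into (num_non_padded, embedding_dim)
non_masked_embeddings = embeddings.masked_select(
    mask.unsqueeze(-1)
).view(-1, embedding_dim)

assert non_masked_embeddings.shape == (int(mask.sum()), embedding_dim)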

Example 2: _get_target_token_embeddings

# Required import: from torch import Tensor [as alias]
# Or: from torch.Tensor import masked_select [as alias]
# (the snippet below also assumes a plain import torch)
    def _get_target_token_embeddings(self,
                                     token_embeddings: torch.Tensor,
                                     mask: torch.Tensor,
                                     direction: int) -> torch.Tensor:
        # Need to shift the mask in the correct direction.
        # Note: byte masks are deprecated for masked_select in newer
        # PyTorch releases, where a bool mask (.bool()) is preferred.
        zero_col = token_embeddings.new_zeros(mask.size(0), 1).byte()
        if direction == 0:
            # forward direction, get token to right
            shifted_mask = torch.cat([zero_col, mask[:, 0:-1]], dim=1)
        else:
            # backward direction, get token to left
            shifted_mask = torch.cat([mask[:, 1:], zero_col], dim=1)
        return token_embeddings.masked_select(shifted_mask.unsqueeze(-1)).view(-1, self._forward_dim)
Author: apmoore1 | Project: allennlp | Lines of code: 14 | Source: language_model.py
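
To make the shifting logic concrete, here is a toy illustration (with invented mask values) of how concatenating a zero column shifts the mask one position, so that masked_select then picks up the neighboring token's embedding:

import torch

mask = torch.tensor([[1, 1, 1, 0],
                     [1, 1, 0, 0]], dtype=torch.bool)
zero_col = torch.zeros(mask.size(0), 1, dtype=torch.bool)

# forward direction (direction == 0): the target of position i is the
# token at i + 1, so shift the mask one step to the right
shifted_forward = torch.cat([zero_col, mask[:, :-1]], dim=1)
# tensor([[False,  True,  True,  True],
#         [False,  True,  True, False]])

# backward direction: the target is the token at i - 1, so shift left
shifted_backward = torch.cat([mask[:, 1:], zero_col], dim=1)
# tensor([[ True,  True, False, False],
#         [ True, False, False, False]])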


Note: the torch.Tensor.masked_select examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective developers; copyright remains with the original authors, and any use or redistribution should follow the corresponding project's license.