

Python Variable.masked_fill_ Method Code Examples

This article collects typical usage examples of the `masked_fill_` method of `torch.autograd.Variable` in Python. If you are wondering what `Variable.masked_fill_` does, or how to use it in practice, the curated examples below may help. You can also explore other usage examples of `torch.autograd.Variable`.


Three code examples of the `Variable.masked_fill_` method are shown below, ordered by popularity.
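Before the full examples, a minimal sketch of what `masked_fill_` does. Note that since PyTorch 0.4 `Variable` has been merged into `Tensor`, so in modern code the method is called directly on a tensor; the trailing underscore marks it as an in-place operation:

```python
import torch

# masked_fill_ replaces, in place, every element where the boolean
# mask is True with the given scalar value.
x = torch.tensor([1.0, 2.0, 3.0, 4.0])
mask = x > 2.5            # tensor([False, False, True, True])
x.masked_fill_(mask, 0.0)
print(x.tolist())         # [1.0, 2.0, 0.0, 0.0]
```

The non-in-place variant `masked_fill` returns a new tensor and leaves the original unchanged.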

Example 1: backward

# Module to import: from torch.autograd import Variable
# Method used: torch.autograd.Variable.masked_fill_
    def backward(ctx, grad_output):
        input1, input2, y = ctx.saved_variables
        grad_input1 = Variable(input1.data.new(input1.size()).zero_())
        grad_input2 = Variable(input1.data.new(input1.size()).zero_())

        dist = ((input1 - input2).mul_(-1) * y).add_(ctx.margin)
        mask = dist.ge(0)

        grad_input1.masked_fill_(mask, 1)
        grad_input1 = grad_input1.mul_(-1) * y
        grad_input2.masked_fill_(mask, 1)  # in-place; the original had a stray "* y" whose result was discarded
        grad_input2 = grad_input2 * y

        if ctx.size_average:
            grad_input1.div_(y.size(0))
            grad_input2.div_(y.size(0))

        return grad_input1 * grad_output, grad_input2 * grad_output, None, None, None
Author: MaheshBhosale, project: pytorch, code lines: 20, source: loss.py
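The pattern in Example 1 — allocate a zero gradient tensor, then use `masked_fill_` to mark only the positions where the hinge condition is active — can be sketched standalone on plain tensors (the values below are made-up illustration data, not from the original project):

```python
import torch

# Hinge-style gradient masking: the gradient is nonzero only where
# the margin condition dist >= 0 holds.
input1 = torch.tensor([0.2, 0.9, 0.4])
input2 = torch.tensor([0.5, 0.1, 0.4])
y = torch.tensor([1.0, -1.0, 1.0])
margin = 0.3

dist = (-(input1 - input2) * y) + margin
mask = dist.ge(0)                      # positions where the hinge is active

grad_input1 = torch.zeros_like(input1)
grad_input1.masked_fill_(mask, 1)      # 1 where active, 0 elsewhere
grad_input1 = -grad_input1 * y
print(grad_input1.tolist())            # [-1.0, 1.0, -1.0]
```

Positions where `mask` is `False` keep their zero from `zeros_like`, so no gradient flows through them.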

Example 2: word_dropout

# Module to import: from torch.autograd import Variable
# Method used: torch.autograd.Variable.masked_fill_
def word_dropout(
        inp, target_code, p=0.0, reserved_codes=(), training=True):
    """
    Applies word dropout to an input Variable. Dropout isn't constant
    across batch examples.

    Parameters:
    -----------
    - inp: torch.Tensor
    - target_code: int, code to use as replacement for dropped timesteps
    - dropout: float, dropout rate
    - reserved_codes: tuple of ints, ints in the input that should never
        be dropped
    - training: bool
    """
    if not training or not p > 0:
        return inp
    inp = Variable(inp.data.new(*inp.size()).copy_(inp.data))
    mask = variable_length_dropout_mask(
        inp.data, dropout_rate=p, reserved_codes=reserved_codes)
    inp.masked_fill_(Variable(mask), target_code)
    return inp
Author: mikekestemont, project: seqmod, code lines: 24, source: custom.py
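Example 2 depends on a helper, `variable_length_dropout_mask`, that is not shown here. A self-contained sketch of the same idea on plain tensors, using an inline Bernoulli mask (the `UNK` and `PAD` codes are hypothetical placeholders, not from the original project):

```python
import torch

UNK = 0    # hypothetical replacement code for dropped tokens
PAD = 99   # hypothetical reserved code that must never be dropped

def word_dropout(inp, target_code=UNK, p=0.5, reserved_codes=(PAD,),
                 training=True):
    """Randomly replace token ids with target_code, per position."""
    if not training or p <= 0:
        return inp
    out = inp.clone()                    # don't mutate the caller's tensor
    mask = torch.rand(out.shape) < p     # True where a token is dropped
    for code in reserved_codes:
        mask &= out != code              # protect reserved codes
    out.masked_fill_(mask, target_code)
    return out
```

With `p=1.0` every non-reserved token is replaced, which makes the behavior easy to verify; with `training=False` or `p=0` the input is returned untouched.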

Example 3: nllloss_double_backwards

# Module to import: from torch.autograd import Variable
# Method used: torch.autograd.Variable.masked_fill_
def nllloss_double_backwards(ctx, ggI):
    t = ctx.saved_variables
    target = t[1]
    weights = Variable(ctx.additional_args[1])
    size_average = ctx.additional_args[0]
    ignore_index = ctx.additional_args[3]
    reduce = ctx.additional_args[4]

    gI = None

    # can't scatter/gather on indices outside of range, let's just put them in range
    # and 0 out the weights later (so it doesn't matter where in range we put them)
    target_mask = target == ignore_index
    safe_target = target.clone()
    safe_target.masked_fill_(target_mask, 0)

    if weights.dim() == 0:
        weights_to_scatter = Variable(ggI.data.new(safe_target.size()).fill_(1))
    else:
        weights_maybe_resized = weights
        while weights_maybe_resized.dim() < target.dim():
            weights_maybe_resized = weights_maybe_resized.unsqueeze(1)

        weights_maybe_resized = weights_maybe_resized.expand(weights.size()[0:1] + target.size()[1:])
        weights_to_scatter = weights_maybe_resized.gather(0, safe_target)

    weights_to_scatter.masked_fill_(target_mask, 0)
    divisor = weights_to_scatter.sum() if size_average and reduce else 1
    weights_to_scatter = -1 * weights_to_scatter / divisor
    zeros = Variable(ggI.data.new(ggI.size()).zero_())
    mask = zeros.scatter_(1, safe_target.unsqueeze(1), weights_to_scatter.unsqueeze(1))

    if reduce:
        ggO = (ggI * mask).sum()
    else:
        ggO = (ggI * mask).sum(dim=1)

    return gI, None, ggO, None, None, None
Author: Jsmilemsj, project: pytorch, code lines: 40, source: auto_double_backwards.py
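The "safe target" trick in Example 3 is worth isolating: indices equal to `ignore_index` are out of range for `gather`/`scatter`, so they are first remapped to a valid index with `masked_fill_`, and their contribution is zeroed out afterwards with a second `masked_fill_`. A minimal sketch with made-up values:

```python
import torch

ignore_index = -100                       # PyTorch's default for NLLLoss
target = torch.tensor([2, -100, 1])
weights = torch.tensor([0.5, 1.0, 2.0])   # per-class weights

# Step 1: move ignored indices into a valid range (0) so gather works.
target_mask = target == ignore_index
safe_target = target.clone()
safe_target.masked_fill_(target_mask, 0)

# Step 2: gather per-example weights, then zero the ignored positions,
# so it does not matter which in-range index they were mapped to.
w = weights.gather(0, safe_target)
w.masked_fill_(target_mask, 0)
print(w.tolist())  # [2.0, 0.0, 1.0]
```

Without step 1, `gather` would raise an index error on the `-100` entry; without step 2, the ignored position would wrongly contribute `weights[0]`.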


Note: the `torch.autograd.Variable.masked_fill_` examples in this article were compiled by vTianKong from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Refer to each project's license before redistributing or using the code. Do not reproduce without permission.