

Python Variable.zerograd Method Code Examples

This article collects typical usage examples of the Python method chainer.Variable.zerograd. If you are wondering what Variable.zerograd does, how to call it, or what real-world usage looks like, the curated examples below should help. You can also explore the other methods of chainer.Variable for further context.


Three code examples of Variable.zerograd are shown below, ordered by popularity.
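Before the examples, here is a minimal stand-in (not Chainer itself; the `ToyVariable` class is invented purely for illustration) showing the semantics of `zerograd()`: it fills the variable's gradient array with zeros so that gradients left over from a previous `backward()` pass do not accumulate into the next one. Note that in later Chainer releases `zerograd()` was deprecated in favor of `cleargrad()`.

```python
import numpy as np

class ToyVariable:
    """Toy illustration of Chainer's Variable.zerograd() semantics."""

    def __init__(self, data):
        self.data = np.asarray(data)
        self.grad = None

    def zerograd(self):
        # Allocate the gradient array if needed, then zero-fill it in place.
        if self.grad is None:
            self.grad = np.zeros_like(self.data)
        else:
            self.grad.fill(0)

v = ToyVariable(np.array([1.0, 2.0, 3.0]))
v.grad = np.array([0.5, -0.5, 1.5])  # leftover gradient from an earlier pass
v.zerograd()
print(v.grad)  # → [0. 0. 0.]
```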

Example 1: _apply_backward

# Required import: from chainer import Variable
# Method demonstrated: Variable.zerograd
 # Taken from a test class; self is the test case instance.
 def _apply_backward(self, x, grid, grads, use_cudnn):
     # Wrap the raw arrays in Variables so the graph records gradients.
     x = Variable(x)
     grid = Variable(grid)
     y = functions.spatial_transformer_sampler(
         x, grid, use_cudnn=use_cudnn)
     # Reset any stale gradients before backpropagating.
     x.zerograd()
     grid.zerograd()
     y.grad = grads
     y.backward()
     return x, grid, y
Author: delta2323 | Project: chainer | Lines: 12 | Source: test_spatial_transformer_sampler.py

Example 2: update_step

# Required import: from chainer import Variable
# Method demonstrated: Variable.zerograd
def update_step(net, images, step_size=1.5, end='inception_4c/output', jitter=32, clip=True):
    # Apply a random spatial jitter before the gradient step (a DeepDream
    # regularization trick).
    offset_x, offset_y = np.random.randint(-jitter, jitter + 1, 2)
    data = np.roll(np.roll(images, offset_x, -1), offset_y, -2)

    # xp is the numpy/cupy module chosen elsewhere depending on GPU use.
    x = Variable(xp.asarray(data))
    x.zerograd()
    dest, = net(x, outputs=[end])
    # objective() (defined elsewhere) scores the activations; its gradient
    # drives the update.
    objective(dest).backward()
    g = cuda.to_cpu(x.grad)

    # Gradient ascent step, normalized by the mean absolute gradient.
    data[:] += step_size / np.abs(g).mean() * g
    # Undo the jitter.
    data = np.roll(np.roll(data, -offset_x, -1), -offset_y, -2)
    if clip:
        bias = net.mean.reshape((1, 3, 1, 1))
        data[:] = np.clip(data, -bias, 255 - bias)
    return data
Author: dsanno | Project: chainer-deepdream | Lines: 18 | Source: deepdream.py
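The jitter trick in update_step relies on np.roll being lossless: elements that shift past the edge wrap around, so rolling by an offset and then by the negated offset restores the original array exactly. A small standalone sketch (array shapes and offsets are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.standard_normal((1, 3, 8, 8))  # NCHW-shaped dummy image batch
offset_x, offset_y = 3, -2

# Shift along width (axis -1) and height (axis -2), as update_step does.
shifted = np.roll(np.roll(images, offset_x, -1), offset_y, -2)
# Undo the shift with the negated offsets.
restored = np.roll(np.roll(shifted, -offset_x, -1), -offset_y, -2)
print(np.array_equal(restored, images))  # → True
```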

Example 3: Variable

# Required import: from chainer import Variable
# Method demonstrated: Variable.zerograd
        # Inner loop of a training epoch: perm, num_batches, the weights
        # (w_1, b_1, w_2, b_2) and learning_rate are defined in enclosing code.
        for batch_indexes in np.array_split(perm, num_batches):
            x_batch = x_train[batch_indexes]
            t_batch = t_train[batch_indexes]

            x = Variable(x_batch)
            t = Variable(t_batch)

            # Forward pass
            a_z = F.linear(x, w_1, b_1)
            z = F.tanh(a_z)
            a_y = F.linear(z, w_2, b_2)

            loss = F.softmax_cross_entropy(a_y, t)

            # Backward pass: zero the gradients first so they do not
            # accumulate across batches.
            w_1.zerograd()
            w_2.zerograd()
            b_1.zerograd()
            b_2.zerograd()

            loss.backward(retain_grad=True)
            grad_w_1 = w_1.grad
            grad_w_2 = w_2.grad
            grad_b_1 = b_1.grad
            grad_b_2 = b_2.grad

            # Manual SGD update.
            w_1.data = w_1.data - learning_rate * grad_w_1
            w_2.data = w_2.data - learning_rate * grad_w_2
            b_1.data = b_1.data - learning_rate * grad_b_1
            b_2.data = b_2.data - learning_rate * grad_b_2
Author: matsumishoki | Project: machine_learning | Lines: 32 | Source: mnist_chainer_neural_network.py
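The zerograd/backward/update pattern in Example 3 can be replicated in pure NumPy, which makes the gradients it computes explicit. Below is a hand-derived sketch of one training step for the same linear → tanh → linear network with softmax cross-entropy (no Chainer; all shapes, data, and the learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_hid, d_out = 16, 10, 8, 3
learning_rate = 0.1

x = rng.standard_normal((n, d_in))
t = rng.integers(0, d_out, size=n)
w_1 = rng.standard_normal((d_hid, d_in)) * 0.1
b_1 = np.zeros(d_hid)
w_2 = rng.standard_normal((d_out, d_hid)) * 0.1
b_2 = np.zeros(d_out)

def forward_loss():
    a_z = x @ w_1.T + b_1                    # F.linear(x, w_1, b_1)
    z = np.tanh(a_z)                         # F.tanh
    a_y = z @ w_2.T + b_2                    # F.linear(z, w_2, b_2)
    a_y = a_y - a_y.max(axis=1, keepdims=True)  # numerically stable softmax
    p = np.exp(a_y)
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(n), t]).mean()   # softmax cross-entropy
    return loss, z, p

loss_before, z, p = forward_loss()

# Backward pass: softmax + cross-entropy gives (p - onehot(t)) / n at the logits.
d_ay = p.copy()
d_ay[np.arange(n), t] -= 1
d_ay /= n
grad_w_2 = d_ay.T @ z
grad_b_2 = d_ay.sum(axis=0)
d_z = d_ay @ w_2
d_az = d_z * (1 - z ** 2)                    # derivative of tanh
grad_w_1 = d_az.T @ x
grad_b_1 = d_az.sum(axis=0)

# Manual SGD update, mirroring the explicit updates in Example 3.
w_1 -= learning_rate * grad_w_1
b_1 -= learning_rate * grad_b_1
w_2 -= learning_rate * grad_w_2
b_2 -= learning_rate * grad_b_2

loss_after, _, _ = forward_loss()
print(loss_after < loss_before)  # → True
```

Because fresh gradient arrays are built each step, no explicit zeroing is needed here; Chainer needs `zerograd()` precisely because it accumulates into persistent `.grad` buffers.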


Note: the chainer.Variable.zerograd examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective authors, who retain copyright; consult each project's license before distributing or using the code, and do not reproduce this article without permission.