Usage:
backward(out_grads=None, is_train=True)
Do a backward pass to compute the gradients of the arguments.

Parameters:
- out_grads (NDArray or list of NDArray or dict of str to NDArray, optional) - Gradients on the outputs to be propagated back. This parameter is only needed when bind is called on outputs that are not a loss function.
- is_train (bool, default True) - Whether this is for training or inference. Note that in rare cases you may want to call backward with is_train=False to get gradients during inference; a minimal sketch of that case follows this list.
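The is_train=False case mentioned above is uncommon. The following is only a sketch of what it could look like, assuming texec is an executor bound on a loss symbol with args_grad supplied (as in the first example below), so no out_grads is needed:

>>> # Sketch only: run operators such as Dropout/BatchNorm in inference mode
>>> # and still retrieve gradients for the bound arguments.
>>> out = texec.forward(is_train=False)[0]
>>> texec.backward(is_train=False)
>>> grads = texec.grad_arrays    # gradients computed from the inference-mode pass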
Examples:
>>> # Example for binding on loss function symbol, which gives the loss value of the model.
>>> # Equivalently it gives the head gradient for backward pass.
>>> # In this example the built-in SoftmaxOutput is used as loss function.
>>> # MakeLoss can be used to define customized loss function symbol.
>>> net = mx.sym.Variable('data')
>>> net = mx.sym.FullyConnected(net, name='fc', num_hidden=6)
>>> net = mx.sym.Activation(net, name='relu', act_type="relu")
>>> net = mx.sym.SoftmaxOutput(net, name='softmax')
>>> args = {'data': mx.nd.ones((1, 4)), 'fc_weight': mx.nd.ones((6, 4)),
>>>         'fc_bias': mx.nd.array((1, 4, 4, 4, 5, 6)), 'softmax_label': mx.nd.ones((1))}
>>> args_grad = {'fc_weight': mx.nd.zeros((6, 4)), 'fc_bias': mx.nd.zeros((6))}
>>> texec = net.bind(ctx=mx.cpu(), args=args, args_grad=args_grad)
>>> out = texec.forward(is_train=True)[0].copy()
>>> print(out.asnumpy())
[[ 0.00378404  0.07600445  0.07600445  0.07600445  0.20660152  0.5616011 ]]
>>> texec.backward()
>>> print(texec.grad_arrays[1].asnumpy())
[[ 0.00378404  0.00378404  0.00378404  0.00378404]
 [-0.92399555 -0.92399555 -0.92399555 -0.92399555]
 [ 0.07600445  0.07600445  0.07600445  0.07600445]
 [ 0.07600445  0.07600445  0.07600445  0.07600445]
 [ 0.20660152  0.20660152  0.20660152  0.20660152]
 [ 0.5616011   0.5616011   0.5616011   0.5616011 ]]
>>>
>>> # Example for binding on non-loss function symbol.
>>> # Here the binding symbol is neither built-in loss function
>>> # nor customized loss created by MakeLoss.
>>> # As a result the head gradient is not automatically provided.
>>> a = mx.sym.Variable('a')
>>> b = mx.sym.Variable('b')
>>> # c is not a loss function symbol
>>> c = 2 * a + b
>>> args = {'a': mx.nd.array([1,2]), 'b': mx.nd.array([2,3])}
>>> args_grad = {'a': mx.nd.zeros((2)), 'b': mx.nd.zeros((2))}
>>> texec = c.bind(ctx=mx.cpu(), args=args, args_grad=args_grad)
>>> out = texec.forward(is_train=True)[0].copy()
>>> print(out.asnumpy())
[ 4. 7.]
>>> # out_grads is the head gradient in backward pass.
>>> # Here we define 'c' as loss function.
>>> # Then 'out' is passed as head gradient of backward pass.
>>> texec.backward(out)
>>> print(texec.grad_arrays[0].asnumpy())
[ 8. 14.]
>>> print(texec.grad_arrays[1].asnumpy())
[ 4. 7.]
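As a sanity check on the last example, the same numbers can be reproduced by hand: for c = 2*a + b the partial derivatives are dc/da = 2 and dc/db = 1, so each argument gradient is just the head gradient out scaled by the corresponding constant. A minimal sketch (not part of the original example), reusing out from above:

>>> g = out.asnumpy()    # head gradient passed to texec.backward(out)
>>> print(2 * g)         # matches grad_arrays[0], the gradient w.r.t. 'a'
[ 8. 14.]
>>> print(1 * g)         # matches grad_arrays[1], the gradient w.r.t. 'b'
[ 4. 7.]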