This page collects typical usage examples of the Python method nervanagpu.NervanaGPU.dropout. If you have been wondering what exactly NervanaGPU.dropout does, how to call it, or where to find usage examples, the curated method example here may help. You can also explore further usage examples of its containing class, nervanagpu.NervanaGPU.
Shown below is 1 code example of the NervanaGPU.dropout method; examples are sorted by popularity by default. You can upvote the examples you like or find useful, and your ratings help the system recommend better Python code examples.
Example 1: GPU
# Module required: from nervanagpu import NervanaGPU [as alias]
# Or alternatively: from nervanagpu.NervanaGPU import dropout [as alias]
#.........part of the code omitted here.........
    def softmax(self, x, out):
        """
        Softmax nonlinearity. Computes exp(x-max(x)) / sum_i exp(x_i-max(x_i))

        Arguments:
            x (GPUTensor): input tensor.
            out (GPUTensor): where the result will be stored.

        Returns:
            GPUTensor: reference to out
        """
        out[:] = (self.ng.reciprocal(self.ng.sum(
                  self.ng.exp(x - self.ng.max(x, axis=0)), axis=0)) *
                  self.ng.exp(x - self.ng.max(x, axis=0)))
        return out
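    # A minimal usage sketch (hypothetical names; assumes a constructed GPU
    # backend `be` and the NervanaGPU array/empty helpers):
    #   acts = be.ng.array(np.random.rand(10, 128))  # 10 classes x 128 examples
    #   probs = be.ng.empty(acts.shape)
    #   be.softmax(acts, probs)  # each column of probs now sums to 1
    # Subtracting the per-column max before exponentiating keeps the
    # exponentials numerically stable without changing the result.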
    def softmax_gradient(self, y, err, out):
        """
        Gradient of the softmax nonlinearity.

        Arguments:
            y (GPUTensor): input tensor.
            err (GPUTensor): backpropagated error.
            out (GPUTensor): where the result will be stored.

        Raises:
            NotImplementedError: the gradient is expected to be computed
                via the cross-entropy shortcut rather than explicitly.
        """
        raise NotImplementedError("Softmax gradient should use shortcut")
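    # Note: when softmax feeds a cross-entropy loss, the combined gradient
    # simplifies to (output - target), so the explicit softmax Jacobian is
    # never needed; that is the "shortcut" the exception above refers to.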
    def make_binary_mask(self, tsr, keepthresh=0.5, dtype=default_dtype):
        """
        Create a binary mask for dropout layers.

        Arguments:
            tsr (GPUTensor): Output tensor
            keepthresh (float): fraction of ones
            dtype (dtype): datatype for the mask (unused in this body)
        """
        self.ng.dropout(keep=keepthresh, out=tsr)
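    # Hypothetical usage sketch: fill a preallocated tensor with a binary
    # keep-mask in which roughly 80% of entries are 1 (tensor names assumed):
    #   mask = be.ng.empty((100, 128))
    #   be.make_binary_mask(mask, keepthresh=0.8)
    # ng.dropout effectively draws uniform randoms and writes 1.0 where they
    # fall below `keep` and 0.0 elsewhere, so the mask can be multiplied
    # elementwise into activations at dropout time.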
    def gdm_compound(self, ps_item, us_item, vs_item, momentum_coef,
                     learning_rate, epoch):
        """
        Perform gradient descent update with momentum.

        Arguments:
            ps_item (GPUTensor): parameter tensor (e.g. a weight matrix)
            us_item (GPUTensor): update tensor, contains gradient wrt. weights
            vs_item (GPUTensor): velocity tensor.
            momentum_coef (float): momentum coefficient.
            learning_rate (float): learning rate.
            epoch (int): epoch (used in conjunction with diagnostics).

        Outputs are written to vs_item (updated velocity)
        and ps_item (updated weights)
        """
        vs_item[:] = vs_item * momentum_coef - us_item * learning_rate
        ps_item[:] = ps_item + vs_item
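    # The two assignments above implement classical momentum, with
    # mu = momentum_coef, eta = learning_rate and g = gradient (us_item):
    #   v <- mu * v - eta * g
    #   W <- W + v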
    def gdmwd_compound(self, ps_item, us_item, vs_item, momentum_coef,
                       learning_rate, wd, epoch):
        """
        Perform gradient descent update with momentum and weight decay.

        Arguments: