This article collects typical usage examples of the Python method nervanagpu.NervanaGPU.dropout. If you are wondering how to use NervanaGPU.dropout, or what it is good for, the curated example below may help. You can also explore the containing class, nervanagpu.NervanaGPU, for related usage.
The following shows 1 code example of the NervanaGPU.dropout method.
Example 1: GPU
# Required import: from nervanagpu import NervanaGPU [as alias]
# Or: from nervanagpu.NervanaGPU import dropout [as alias]
#.........part of the code omitted here.........
def softmax(self, x, out):
    """
    Softmax nonlinearity. Computes exp(x - max(x)) / sum_i exp(x_i - max(x_i)).

    Arguments:
        x (GPUTensor): input tensor.
        out (GPUTensor): where the result will be stored.

    Returns:
        GPUTensor: reference to out
    """
    out[:] = (self.ng.reciprocal(self.ng.sum(
              self.ng.exp(x - self.ng.max(x, axis=0)), axis=0)) *
              self.ng.exp(x - self.ng.max(x, axis=0)))
    return out
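
Subtracting max(x) before exponentiating guards against overflow without changing the result. As a sanity check, here is a minimal NumPy sketch of the same numerically stable formula (softmax_ref and logits are illustrative names, not part of nervanagpu):

import numpy as np

def softmax_ref(x):
    # Subtract the per-column max so exp() cannot overflow;
    # axis=0 mirrors the backend's column-wise reduction.
    e = np.exp(x - x.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

logits = np.array([[1.0, 1000.0], [2.0, 1001.0]])
probs = softmax_ref(logits)           # no overflow despite large logits
assert np.allclose(probs.sum(axis=0), 1.0)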
def softmax_gradient(self, y, err, out):
    """
    Gradient of the softmax nonlinearity.

    Arguments:
        y (GPUTensor): input tensor.
        err (GPUTensor): backpropagated error.
        out (GPUTensor): where the result will be stored.

    Returns:
        GPUTensor: reference to out
    """
    # This backend expects the softmax/cross-entropy shortcut to be taken
    # upstream, so the explicit softmax gradient is never computed here.
    raise NotImplementedError("Softmax gradient should use shortcut")
    return out  # unreachable; kept for interface symmetry
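
The "shortcut" the exception refers to is the standard simplification when softmax feeds a cross-entropy loss: the combined gradient with respect to the pre-softmax input collapses to prediction minus target, so the softmax Jacobian never needs to be formed. A minimal NumPy sketch under that assumption (all names are illustrative, not part of nervanagpu):

import numpy as np

def softmax_xent_shortcut(probs, targets):
    # Combined gradient of cross_entropy(softmax(x), targets) wrt x:
    # dL/dx = probs - targets, avoiding the full softmax Jacobian.
    return probs - targets

probs = np.array([[0.7], [0.2], [0.1]])    # softmax outputs (columns = samples)
targets = np.array([[1.0], [0.0], [0.0]])  # one-hot labels
grad = softmax_xent_shortcut(probs, targets)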
def make_binary_mask(self, tsr, keepthresh=0.5, dtype=default_dtype):
    """
    Create a binary mask for dropout layers.

    Arguments:
        tsr (GPUTensor): output tensor, filled in place with the mask.
        keepthresh (float): fraction of entries that are ones.
    """
    self.ng.dropout(keep=keepthresh, out=tsr)
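
ng.dropout fills the output tensor with a random binary mask in which roughly a keep fraction of the entries are ones; multiplying activations by such a mask is what dropout does at training time. A minimal NumPy sketch of the same semantics (binary_mask_ref is an illustrative stand-in, not the nervanagpu kernel):

import numpy as np

def binary_mask_ref(shape, keepthresh=0.5, rng=np.random.default_rng()):
    # Each entry is 1 with probability keepthresh, else 0, matching
    # the semantics of ng.dropout(keep=keepthresh, out=tsr).
    return (rng.uniform(size=shape) < keepthresh).astype(np.float32)

mask = binary_mask_ref((4, 6), keepthresh=0.8)
activations = np.ones((4, 6), dtype=np.float32)
dropped = activations * mask   # apply dropout during training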
def gdm_compound(self, ps_item, us_item, vs_item, momentum_coef,
                 learning_rate, epoch):
    """
    Perform gradient descent update with momentum.

    Arguments:
        ps_item (GPUTensor): parameter tensor (e.g. a weight matrix).
        us_item (GPUTensor): update tensor, contains gradient wrt. weights.
        vs_item (GPUTensor): velocity tensor.
        momentum_coef (float): momentum coefficient.
        learning_rate (float): learning rate.
        epoch (int): epoch (used in conjunction with diagnostics).

    Outputs are written to vs_item (updated velocity)
    and ps_item (updated weights).
    """
    vs_item[:] = vs_item * momentum_coef - us_item * learning_rate
    ps_item[:] = ps_item + vs_item
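
The two in-place assignments implement classical momentum: v <- momentum_coef * v - learning_rate * grad, then w <- w + v. A minimal NumPy sketch of one such step (gdm_step and its parameter names are illustrative):

import numpy as np

def gdm_step(weights, grad, velocity, momentum_coef=0.9, learning_rate=0.01):
    # v <- momentum * v - lr * grad; w <- w + v (in place, like the backend)
    velocity *= momentum_coef
    velocity -= learning_rate * grad
    weights += velocity

w = np.zeros(3)
v = np.zeros(3)
g = np.array([1.0, -2.0, 0.5])
gdm_step(w, g, v)   # first step from rest: w == -0.01 * g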
def gdmwd_compound(self, ps_item, us_item, vs_item, momentum_coef,
                   learning_rate, wd, epoch):
    """
    Perform gradient descent update with momentum and weight decay.

    Arguments: