This page collects typical usage examples of the Python method chainer.cuda.cudnn_enabled. If you are wondering what cuda.cudnn_enabled does in practice and how to use it, the curated code examples below may help. You can also explore the containing module, chainer.cuda, for related usage.
The following shows 2 code examples of cuda.cudnn_enabled, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
Example 1: backward_log_softmax
# Required import: from chainer import cuda [as alias]
# or: from chainer.cuda import cudnn_enabled [as alias]
import numpy

import chainer
from chainer import cuda


def backward_log_softmax(self, x, y, gy):
    if cuda.cudnn_enabled:
        cudnn = cuda.cudnn
        libcudnn = cuda.cuda.cudnn
        _algorithm = libcudnn.CUDNN_SOFTMAX_LOG
        _mode = libcudnn.CUDNN_SOFTMAX_MODE_CHANNEL
    xp = cuda.get_array_module(x)
    if xp is not numpy and chainer.should_use_cudnn('>=auto', 3000):
        oz_dtype = 'd' if x.dtype == 'd' else 'f'
        one = numpy.array(1, dtype=oz_dtype).ctypes
        zero = numpy.array(0, dtype=oz_dtype).ctypes
        handle = cudnn.get_handle()
        gx = xp.empty(x.shape, dtype=x.dtype)
        gx_cube = gx.reshape(gx.shape[:2] + (-1, 1))
        desc = cudnn.create_tensor_descriptor(gx_cube)
        libcudnn.softmaxBackward(
            handle, _algorithm, _mode, one.data, desc.value,
            y.data.ptr, desc.value, gy.data.ptr, zero.data,
            desc.value, gx.data.ptr)
    else:
        gx = gy - xp.exp(y) * gy.sum(axis=1, keepdims=True)
    return gx
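The non-cuDNN fallback branch above computes the log-softmax gradient directly as gx = gy - exp(y) * sum(gy). A minimal NumPy-only sketch of that same formula (the helper names log_softmax and backward_log_softmax_cpu are illustrative, not part of Chainer's API):

```python
import numpy

def log_softmax(x):
    # Numerically stable log-softmax along the channel axis (axis=1).
    m = x.max(axis=1, keepdims=True)
    return x - m - numpy.log(numpy.exp(x - m).sum(axis=1, keepdims=True))

def backward_log_softmax_cpu(y, gy):
    # Same formula as the fallback branch above.
    return gy - numpy.exp(y) * gy.sum(axis=1, keepdims=True)

x = numpy.random.randn(2, 3).astype('f')
y = log_softmax(x)
gy = numpy.ones_like(y)
gx = backward_log_softmax_cpu(y, gy)
```

Since exp(y) is the softmax of x and sums to 1 along the channel axis, each row of gx sums to zero when gy is all ones, which is a quick sanity check on the gradient.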
Example 2: __init__
# Required import: from chainer import cuda [as alias]
# or: from chainer.cuda import cudnn_enabled [as alias]
from chainer import cuda


def __init__(self, eps=2e-5, mean=None, var=None, train=False,
             decay=0.9, use_cudnn=True):
    self.running_mean = mean
    self.running_var = var
    self.train = train
    self.eps = eps
    if cuda.cudnn_enabled and use_cudnn:
        if eps <= 1e-5:
            msg = 'cuDNN does not allow an eps value less than 1e-5.'
            raise RuntimeError(msg)
    self.use_cudnn = use_cudnn
    self.mean_cache = None
    self.decay = decay
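The eps check above exists because cuDNN's batch normalization rejects epsilon values below its minimum (CUDNN_BN_MIN_EPSILON, 1e-5). A standalone sketch of the same validation logic with no Chainer dependency (the class name BNConfig and the cudnn_enabled parameter are illustrative assumptions, not Chainer API):

```python
CUDNN_BN_MIN_EPSILON = 1e-5  # minimum eps accepted by cuDNN batch normalization

class BNConfig:
    def __init__(self, eps=2e-5, use_cudnn=True, cudnn_enabled=False):
        # Mirror the guard above: only validate eps when cuDNN would be used.
        if cudnn_enabled and use_cudnn and eps <= CUDNN_BN_MIN_EPSILON:
            raise RuntimeError(
                'cuDNN does not allow an eps value less than 1e-5.')
        self.eps = eps
        self.use_cudnn = use_cudnn

BNConfig(eps=2e-5, cudnn_enabled=True)   # accepted: 2e-5 > 1e-5
try:
    BNConfig(eps=1e-6, cudnn_enabled=True)
except RuntimeError as e:
    print(e)  # rejected when cuDNN is in use
```

Note that when cuDNN is disabled (or unavailable), any positive eps passes, matching the original code path.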