This article collects typical code examples of the Python method nervanagpu.NervanaGPU.exp. If you are wondering what NervanaGPU.exp does in practice, how to call it, or what a real use of NervanaGPU.exp looks like, the selected example below may help. You can also explore further usage examples of the class this method belongs to, nervanagpu.NervanaGPU.
The following shows 1 code example of the NervanaGPU.exp method. Examples are ordered by popularity by default; upvoting the ones you like or find useful helps the system recommend better Python code examples.
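Before the full example, a minimal sketch of how NervanaGPU.exp is typically invoked may be useful. It assumes a CUDA-capable GPU with the nervanagpu package installed; the tensor names and shapes are illustrative and not taken from the example below.

import numpy as np
from nervanagpu import NervanaGPU

ng = NervanaGPU()                                 # create the backend
x = ng.array(np.linspace(-1.0, 1.0, 8).reshape(2, 4), dtype=np.float32)
y = ng.empty((2, 4), dtype=np.float32)            # GPU output buffer
ng.exp(x, out=y)                                  # elementwise exponential, written into y
print(y.get())                                    # copy the result back to the host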
Example 1: GPU
# Required import: from nervanagpu import NervanaGPU [as alias]
# Or: from nervanagpu.NervanaGPU import exp [as alias]
# ......... part of the code has been omitted here .........
        self.start = drv.Event()
        self.end = drv.Event()
        self.flop_timer = FlopsDecorator(self)
        self.flop_timer.decorate(decorate_fc=decorate_fc,
                                 decorate_conv=decorate_conv,
                                 decorate_ew=decorate_ew)
    def flop_timing_start(self):
        """
        Start a new FLOP timer.

        Returns:
            None: dummy value (not used)
        """
        return self.start.record()
    def flop_timing_finish(self, start_time):
        """
        Complete current FLOP timing.

        Arguments:
            start_time (unused): ignored.

        Returns:
            float: elapsed time in seconds since prior flop_timing_start call.
        """
        self.end.record()
        self.end.synchronize()
        return self.end.time_since(self.start)
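    # Usage sketch (not part of the original example): flop_timing_start and
    # flop_timing_finish above wrap a pair of CUDA events to time a GPU
    # operation. The constructor arguments below are assumptions for
    # illustration only:
    #
    #     backend = GPU(rng_seed=0)
    #     a = backend.uniform(shape=(1024, 1024))
    #     b = backend.uniform(shape=(1024, 1024))
    #     c = backend.uniform(shape=(1024, 1024))    # reused as the output buffer
    #     start = backend.flop_timing_start()
    #     backend.ng.dot(a, b, c)                    # the GPU op being timed
    #     elapsed = backend.flop_timing_finish(start)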
    def uniform(self, low=0.0, high=1.0, shape=1, dtype=default_dtype,
                persist_values=True, name=None, allocator=drv.mem_alloc):
        """
        Generate a numpy array of uniform random numbers and convert it to a
        GPUTensor. If called with dtype=None it will probably explode.
        """
        ary = np.random.uniform(low, high, shape)
        return GPUTensor(ary.shape, dtype, allocator=allocator, name=name,
                         rounding=self.ng.round_mode).set(ary)
    def normal(self, loc=0.0, scale=1.0, size=1, dtype=default_dtype,
               persist_values=True, name=None, allocator=drv.mem_alloc):
        """
        Gaussian/Normal random number sample generation.
        """
        ary = np.random.normal(loc, scale, size)
        return GPUTensor(ary.shape, dtype, allocator=allocator, name=name,
                         rounding=self.ng.round_mode).set(ary)
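    # Usage sketch (not part of the original example): uniform() and normal()
    # above draw the sample on the host with NumPy and copy it into a freshly
    # allocated GPUTensor. Illustrative calls, assuming `backend` is an
    # instance of this GPU class:
    #
    #     w = backend.uniform(low=-0.1, high=0.1, shape=(784, 100))
    #     b = backend.normal(loc=0.0, scale=0.01, size=(100, 1))
    #     print(w.shape, b.shape)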
    def fprop_fc(self, out, inputs, weights, layer=None):
        """
        Forward propagate the inputs of a fully connected network layer to
        produce output pre-activations (ready for transformation by an
        activation function).

        Arguments:
            out (GPUTensor): Where to store the forward propagated results.
            inputs (GPUTensor): Will be either the dataset input values (first
                                layer), or the outputs from the previous layer.
            weights (GPUTensor): The weight coefficient values for this layer.
            layer (Layer): The layer object.
        """
        self.ng.dot(weights, inputs, out)
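    # Usage sketch (not part of the original example): fprop_fc reduces to a
    # single GEMM on the nervanagpu backend, out = weights . inputs, so the
    # shapes must line up as (nout, nin) x (nin, batch) -> (nout, batch).
    # Illustrative sizes, with `backend` again an instance of this GPU class:
    #
    #     nin, nout, batch = 784, 100, 128
    #     inputs  = backend.uniform(shape=(nin, batch))
    #     weights = backend.normal(scale=0.01, size=(nout, nin))
    #     out     = backend.uniform(shape=(nout, batch))  # reused as output storage
    #     backend.fprop_fc(out, inputs, weights)          # out <- weights . inputs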
    def bprop_fc(self, out, weights, deltas, layer=None):
        """
        Backward propagate the error through a fully connected network layer.