This article collects typical usage examples of the Python method nervanagpu.NervanaGPU.bench. If you are wondering how to use NervanaGPU.bench, the curated code example below may help. You can also explore further usage of its containing class, nervanagpu.NervanaGPU.
The following shows 1 code example of the NervanaGPU.bench method.
Example 1: print
# Required import: from nervanagpu import NervanaGPU
# or: from nervanagpu.NervanaGPU import bench
    if i > 1:
        layer.init_deltas(shared=shared_deltas)

remain, total = drv.mem_get_info()
print("%.3fGB of %.3fGB Allocated (%.3fGB Remaining)" %
      ((total - remain) / 1024.**3, total / 1024.**3, remain / 1024.**3))

if zeros:
    layers[0].init_data()
else:
    # give the first layer some data
    layers[0].init_data(np.random.uniform(0.0, 1.0, layers[0].dimO))

# Scale the initial weights so activations are bound around 1.0.
# We do this by running it through the forward pass and collecting mean stats.
ng.bench = False
propagation = None
for layer in layers:
    propagation = layer.fprop(propagation, scale_weights=.5)
ng.bench = layer_bench

start = drv.Event()
end = drv.Event()
fprop_time  = 0
bprop_time  = 0
fprop_flops = 0
bprop_flops = 0

# We throw away the first two runs as they include pycuda kernel loading
# times and clock warmup. So add 1 to our loop count.
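The warmup-discard pattern in the example above (throw away the first runs because they include kernel compilation and clock ramp-up, then average the rest) can be sketched in plain Python. This is a hypothetical CPU-side helper, not part of the nervanagpu API; in the GPU benchmark, the `time.perf_counter` calls would be replaced by `drv.Event` record/synchronize pairs as in the example.

```python
import time

def bench(fn, loops=10, warmup=2):
    """Time fn over `loops` measured runs, discarding `warmup`
    initial runs whose cost includes one-time setup (e.g. kernel
    compilation) and clock ramp-up. Returns mean seconds per run."""
    for _ in range(warmup):
        fn()                      # untimed warmup runs
    start = time.perf_counter()
    for _ in range(loops):
        fn()                      # timed runs
    return (time.perf_counter() - start) / loops
```

For example, `bench(lambda: sum(range(1000)), loops=20)` returns the mean wall-clock time of the summation, with the first two runs excluded from the measurement.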