This article collects typical usage examples of the `bench` method of Python's `nervanagpu.NervanaGPU` class. If you are wondering how `NervanaGPU.bench` is used in practice, the curated code example below may help. You can also explore the containing class, `nervanagpu.NervanaGPU`, for further context.

One code example of the `NervanaGPU.bench` method is shown below.
Example 1: print
# Required import: from nervanagpu import NervanaGPU
# Or: from nervanagpu.NervanaGPU import bench
if i > 1:
    layer.init_deltas(shared=shared_deltas)

remain, total = drv.mem_get_info()
print("%.3fGB of %.3fGB Allocated (%.3fGB Remaining)" %
      ((total - remain) / 1024.**3, total / 1024.**3, remain / 1024.**3))

if zeros:
    layers[0].init_data()
else:
    # give the first layer some data
    layers[0].init_data(np.random.uniform(0.0, 1.0, layers[0].dimO))

# Scale the initial weights so activations are bound around 1.0
# We do this by running it through the forward pass and collecting mean stats
ng.bench = False
propagation = None
for layer in layers:
    propagation = layer.fprop(propagation, scale_weights=.5)
ng.bench = layer_bench

start = drv.Event()
end = drv.Event()
fprop_time = 0
bprop_time = 0
fprop_flops = 0
bprop_flops = 0

# We throw away the first two runs as it includes pycuda kernel loading times and clock warmup.
# So add 1 to our loop count.
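The comment above describes a common benchmarking pattern: run a few warmup iterations to absorb one-time costs (kernel compilation, clock ramp-up), then average only the remaining timed runs. Here is a minimal CPU-side sketch of that discard-and-average pattern, using `time.perf_counter` in place of CUDA events; `run_pass` is a hypothetical stand-in for a layer's forward pass, not part of the nervanagpu API:

```python
import time

def bench_avg(run_pass, loops, warmup=2):
    """Average runtime over `loops` timed runs, discarding `warmup` runs."""
    times = []
    for i in range(loops + warmup):
        start = time.perf_counter()
        run_pass()
        end = time.perf_counter()
        if i >= warmup:  # throw away warmup iterations
            times.append(end - start)
    return sum(times) / len(times)

# Usage: time a cheap stand-in workload over 5 measured runs
avg = bench_avg(lambda: sum(range(10000)), loops=5)
print("avg: %.6f sec" % avg)
```

On a GPU, the same structure applies, but each pass would be bracketed by `start.record()` / `end.record()` on pycuda events, with a synchronize before reading the elapsed time.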