

Python NervanaGPU.add Method Code Examples

This article collects typical usage examples of the Python method nervanagpu.NervanaGPU.add. If you are unsure what NervanaGPU.add does, how to call it, or want to see it used in real code, the hand-picked examples below should help. You can also explore further usage examples of its containing class, nervanagpu.NervanaGPU.


Two code examples of the NervanaGPU.add method are shown below, sorted by popularity by default.
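Before diving into the full examples, here is a minimal standalone sketch of calling NervanaGPU.add. It is illustrative only: it assumes a working CUDA installation with pycuda available (nervanagpu needs an active CUDA context), and the shapes, dtype, and values are arbitrary choices, not library requirements.

# Minimal sketch; assumes CUDA and pycuda are installed and a GPU is present.
import numpy as np
import pycuda.autoinit  # creates the CUDA context nervanagpu runs in
from nervanagpu import NervanaGPU

ng = NervanaGPU()

# Illustrative shapes and values; any matching shapes work.
a = ng.array(np.ones((128, 32), dtype=np.float32))
b = ng.array(np.full((128, 32), 2.0, dtype=np.float32))
c = ng.empty((128, 32), dtype=np.float32)

ng.add(a, b, out=c)    # elementwise c = a + b, computed on the GPU
print(c.get()[0, :4])  # -> [ 3.  3.  3.  3.]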

Example 1: GPU

# Required import: from nervanagpu import NervanaGPU [as alias]
# Or: from nervanagpu.NervanaGPU import add [as alias]

#......... part of the code omitted here .........
            inputs (GPUTensor): Will be either the dataset input values (first
                                layer), or the outputs from the previous layer.
            weights (GPUTensor): The weight coefficient values for this layer.
            layer (Layer): The layer object.
        """
        self.ng.dot(weights, inputs, out)

    def bprop_fc(self, out, weights, deltas, layer=None):
        """
        Backward propagate the error through a fully connected network layer.

        Arguments:
            out (GPUTensor): Where to store the backward propagated errors.
            weights (GPUTensor): The weight coefficient values for this layer.
            deltas (GPUTensor): The error values for this layer.
            layer (Layer): The layer object.
        """
        self.ng.dot(weights.T, deltas, out)

    def update_fc(self, out, inputs, deltas, layer=None):
        """
        Compute the updated gradient for a fully connected network layer.

        Arguments:
            out (GPUTensor): Where to store the updated gradient value.
            inputs (GPUTensor): Will be either the dataset input values (first
                                layer), or the outputs from the previous layer.
            deltas (GPUTensor): The error values for this layer.
            layer (Layer): The layer object.
        """
        self.ng.dot(deltas, inputs.T, out)

    def fprop_conv(self, out, inputs, weights, ofmshape, ofmsize, ofmlocs,
                   ifmshape, links, nifm, padding, stride, ngroups, fpropbuf,
                   local=False):
        """
        Forward propagate the inputs of a convolutional network layer to
        produce output pre-activations (ready for transformation by an
        activation function).

        Arguments:
            out (GPUTensor): Where to store the forward propagated results.
            inputs (GPUTensor): Will be either the dataset input values (first
                             layer), or the outputs from the previous layer.
            weights (GPUTensor): The weight coefficient values for this layer.
            ofmshape (tuple): Dimensions of each output feature map (typically
                              number of height and width neurons).
            ofmsize (int): Total size of each output feature map.
            ofmlocs (GPUTensor): Indices giving the location of each element
                                  in each output feature map stored in out.
            ifmshape (tuple): Dimensions of each input feature map (typically
                              number of height and width neurons).  For this
                              backend we expect these values to be square.
            links (GPUTensor): Input receptive field indices.
            nifm (int): Total number of input feature maps.
            padding (int): Number of additional elements to include along each
                           dimension of each local receptive field during the
                           convolution operation.
            stride (int): Number of neurons to shift the filter at each step.
            ngroups (int): Number of groups.
            fpropbuf (GPUTensor): Temporary storage buffer used to hold the
                                  convolved outputs for a single receptive
                                  field.  Not used for this backend.
            local (bool, optional): Whether to do local filtering (True) or
                                    convolution (False, the default)
        """
Developer: YouVentures, Project: neon, Lines: 70, Source: gpu.py
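The body of fprop_conv is cut off above (the listing stops at 70 lines), but the padding and stride arguments follow the usual convolution arithmetic for the output feature-map size. A hedged illustration: fsize below is an assumed name for the filter edge length, which is not among the arguments shown (it is implied by the weights tensor).

# Hypothetical helper showing the standard output-size formula;
# fsize is an assumed name, not a parameter of fprop_conv itself.
def conv_output_dim(ifmdim, fsize, padding, stride):
    return (ifmdim + 2 * padding - fsize) // stride + 1

# e.g. a 32x32 input map with a 5x5 filter, padding=2, stride=1
# keeps the spatial size: conv_output_dim(32, 5, 2, 1) == 32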

Example 2: MGPU

# Required import: from nervanagpu import NervanaGPU [as alias]
# Or: from nervanagpu.NervanaGPU import add [as alias]

#......... part of the code omitted here .........
        assert hbuf.dtype == dbuf.dtype
        ndata = dbuf.size
        starts = [i * ndata for i in range(self.num_dev)]
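        # Device i receives the contiguous host slice hbuf[i*ndata : (i+1)*ndata]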

        for dest, strm, ctx, doff in zip(dbuf.tlist, self.strms, self.ctxs,
                                         starts):
            src = hbuf.reshape((hbuf.size))[doff:(doff + ndata)]
            ctx.push()
            drv.memcpy_htod_async(dest.ptr, src, strm)
            ctx.pop()

        self.synchronize()

    def fprop_fc(self, out, inputs, weights, layer=None):
        """
        In this case, the weights are sharded across devices and the
        activations are replicated; ubuf should be of size
        (nout / num_dev) x mbsz.
        """
        ubuf = layer.mempool[0]
        assert ubuf.shape == (weights.shape[0], inputs.shape[1])

        if layer.use_biases:
            biases = layer.biases.tlist
        else:
            biases = [None for i in range(self.num_dev)]

        for dbuf, ibuf, wt, bs, strm, ctx in zip(ubuf.tlist, inputs.tlist,
                                                 weights.tlist, biases,
                                                 self.strms, self.ctxs):
            ctx.push()
            self.ng.stream = strm
            self.ng.dot(wt, ibuf, dbuf)
            if layer.use_biases:
                self.ng.add(dbuf, bs, out=dbuf)
            ctx.pop()

        # Note: it should be safe not to sync, because each fragment is
        # computed on the same stream that originates the copy
        # self.synchronize()
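        # Each device now holds its nout/num_dev fragment of the output;
        # fragment_to_replica gathers the fragments into a full replica
        # on every device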
        self.fragment_to_replica(ubuf, out)

    def bprop_fc(self, out, weights, deltas, layer=None):
        """
        Backward propagate the error through a fully connected network layer.

        Arguments:
            out (GPUTensor): Where to store the backward propagated errors.
            weights (GPUTensor): The weight coefficient values for this layer.
            deltas (GPUTensor): The error values for this layer.
            layer (Layer): The layer object.
        """
        ubuf = layer.mempool[1]
        wtsz = weights.shape[0]
        starts = [i * wtsz for i in range(self.num_dev)]
        assert out.shape == (weights.shape[1], deltas.shape[1])
        assert ubuf.shape == out.shape

        for dbuf, ibuf, wt, strm, ctx, off in zip(out.tlist, deltas.tlist,
                                                  weights.tlist, self.strms,
                                                  self.ctxs, starts):
            ctx.push()
            self.ng.stream = strm
            self.ng.dot(wt.T, ibuf[off:(off + wtsz)], dbuf)
            ctx.pop()

        # Note: it should be safe not to sync, because each fragment is
        # computed on the same stream that originates the copy
Developer: neuroidss, Project: neon, Lines: 70, Source: mgpu.py
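The listing above is truncated before the reduction step, but the reason the MGPU fragments compose correctly is a simple identity: splitting the weight matrix by rows splits the forward product W.x by rows, and W.T . d is the sum of per-shard partial products W_i.T . d_i. A small NumPy sketch with illustrative shapes:

import numpy as np

num_dev, nout, nin, mbsz = 4, 8, 6, 5
W = np.random.rand(nout, nin)   # full weight matrix
x = np.random.rand(nin, mbsz)   # replicated activations
d = np.random.rand(nout, mbsz)  # deltas

W_shards = np.split(W, num_dev, axis=0)  # row shards, one per device
d_rows = np.split(d, num_dev, axis=0)    # matching delta slices

# fprop_fc: stacking per-device fragments reproduces the full product
assert np.allclose(np.vstack([w.dot(x) for w in W_shards]), W.dot(x))

# bprop_fc: summing per-device partial products reproduces W.T . d
partial = sum(w.T.dot(dr) for w, dr in zip(W_shards, d_rows))
assert np.allclose(partial, W.T.dot(d))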

