

Python NervanaGPU.bprop_conv Method Code Examples

This article collects typical usage examples of the Python method nervanagpu.NervanaGPU.bprop_conv. If you are struggling with questions such as how exactly NervanaGPU.bprop_conv is used, or where it applies, the hand-picked code examples below may help. You can also explore further usage examples of the class this method belongs to, nervanagpu.NervanaGPU.


The sections below show 3 code examples of NervanaGPU.bprop_conv, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Python code examples.

Example 1:

# Required module import: from nervanagpu import NervanaGPU [as alias]
# Or alternatively: from nervanagpu.NervanaGPU import bprop_conv [as alias]
    nlF = ng.empty(dimF, dtype=dtype)
    nlF[:] = cuF.T
    cuF = None

    nlE = ng.empty(dimO, dtype=dtype)
    nlE[:] = cuE.T
    cuE = None

    nlB = ng.empty(dimI, dtype=dtype)
    nlU = ng.empty(dimF, dtype=dtype)
    nlO = ng.empty(dimO, dtype=dtype)
    #print drv.mem_get_info()

    ng.fprop_conv (conv, nlI, nlF, nlO, alpha=alpha, repeat=repeat)
    ng.bprop_conv (conv, nlF, nlE, nlB, alpha=alpha, repeat=repeat)
    ng.update_conv(conv, nlI, nlE, nlU, alpha=alpha, repeat=repeat)

    nlI = nlF = nlE = None

    print("\ncudnn vs nervanaLib:")

    parO = ng.empty((N,1), dtype=np.float32)
    parB = ng.empty((N,1), dtype=np.float32)
    parU = ng.empty((K,1), dtype=np.float32)
    maxO = parO[0:1,0:1]
    maxB = parB[0:1,0:1]
    maxU = parU[0:1,0:1]

    maxo  = ng.max(abs(cuO - nlO.T), partial=parO, out=maxO).get()[0,0]
    maxb  = ng.max(abs(cuB - nlB.T), partial=parB, out=maxB).get()[0,0]
Contributor: KayneWest, Project: nervanagpu, Lines: 32, Source file: cudnn.py
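Example 1 ends by comparing the cuDNN and nervanaLib results via the maximum absolute element-wise difference (`ng.max(abs(cuO - nlO.T), ...)`), using a partial-reduction buffer on the GPU. The host-side NumPy sketch below illustrates the same correctness check; the arrays and tolerance here are hypothetical stand-ins, not values from cudnn.py.

```python
import numpy as np

# Hypothetical stand-ins for the cuDNN output (cuO) and the nervanagpu
# output (nlO, stored transposed, as in the example above).
rng = np.random.default_rng(0)
cuO = rng.standard_normal((8, 16)).astype(np.float32)
nlO = (cuO.T + rng.normal(0.0, 1e-6, (16, 8))).astype(np.float32)

# Host-side equivalent of ng.max(abs(cuO - nlO.T), partial=..., out=...):
# the largest element-wise discrepancy between the two implementations.
maxo = float(np.max(np.abs(cuO - nlO.T)))
agree = maxo < 1e-4  # small difference means the two backends agree
```

On the GPU, the `partial`/`out` buffers let the reduction run in two stages without a host round-trip; in NumPy a single `np.max` suffices.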

Example 2: GPU

# Required module import: from nervanagpu import NervanaGPU [as alias]
# Or alternatively: from nervanagpu.NervanaGPU import bprop_conv [as alias]

#......... part of the code omitted here .........
            ifmshape (tuple): Dimensions of each input feature map (typically
                              number of height and width neurons).  For this
                              backend we expect these values to be square.
            links (GPUTensor): Input receptive field indices.
            nifm (int): Total number of input feature maps.
            padding (int): Number of additional elements to include along each
                           dimension of each local receptive field during the
                           convolution operation.
            stride (int): Number of neurons to shift the filter at each step.
            ngroups (int): Number of groups.
            fpropbuf (GPUTensor): Temporary storage buffer used to hold the
                                  convolved outputs for a single receptive
                                  field.  Not used for this backend.
            local (bool, optional): Whether to do local filtering (True) or
                                    convolution (False, the default)
        """

        '''
        N: Number of images in mini-batch
        C: Number of input feature maps
        K: Number of output feature maps

        D: Depth  of input image
        H: Height of input image
        W: Width  of input image

        T: Depth  of filter kernel
        R: Height of filter kernel
        S: Width  of filter kernel
        '''
        self.ng.fprop_conv(layer=fpropbuf, I=inputs, F=weights, O=out,
                           alpha=1.0, repeat=1)

    def bprop_conv(self, out, weights, deltas, ofmshape, ofmsize, ofmlocs,
                   ifmshape, links, padding, stride, nifm, ngroups, bpropbuf,
                   local=False):
        """
        Backward propagate the error through a convolutional network layer.

        Arguments:
            out (GPUTensor): Where to store the backward propagated errors.
            weights (GPUTensor): The weight coefficient values for this layer.
            deltas (GPUTensor): The error values for this layer.
            ofmshape (tuple): Dimensions of each output feature map (typically
                              height and width).
            ofmsize (int): Total size of each output feature map.
            ofmlocs (GPUTensor): Indices giving the location of each element in
                                 each output feature map stored in out.
            ifmshape (tuple): Dimensions of each input feature map (typically
                              height and width).
            links (GPUTensor): Input receptive field indices.
            nifm (int): Total number of input feature maps.
            padding (int): Number of additional elements to include along each
                           dimension of each local receptive field during the
                           convolution operation.
            stride (int): Number of neurons to shift the filter at each step.
            ngroups (int): Number of groups.
            bpropbuf (GPUTensor): Temporary storage buffer used to hold the
                                  backpropagated error for a single receptive
                                  field.
            local (bool, optional): Whether to do local filtering (True) or
                                    convolution (False, the default)
        """
        self.ng.bprop_conv(layer=bpropbuf, F=weights, E=deltas, grad_I=out,
                           alpha=1.0, repeat=1)
Contributor: YouVentures, Project: neon, Lines: 69, Source file: gpu.py
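Example 2's `bprop_conv` delegates to `ng.bprop_conv(layer=..., F=weights, E=deltas, grad_I=out, ...)`: given the filter F and the upstream deltas E, it produces the gradient with respect to the layer input. The NumPy sketch below is an illustrative 1D reference for that computation (not the nervanagpu kernels): for a valid, stride-1, unpadded cross-correlation, the input gradient scatters each output delta back through the filter taps, which equals `np.convolve(deltas, filter, 'full')`. All names and values are hypothetical.

```python
import numpy as np

def fprop_conv_1d(x, f):
    # Forward pass: valid cross-correlation, stride 1, no padding.
    return np.array([np.dot(x[i:i + len(f)], f)
                     for i in range(len(x) - len(f) + 1)])

def bprop_conv_1d(f, e, input_len):
    # Backward pass w.r.t. the input: scatter each output delta e[i]
    # back through the filter taps it touched in the forward pass.
    grad_I = np.zeros(input_len)
    for i, err in enumerate(e):
        grad_I[i:i + len(f)] += err * f
    return grad_I

x = np.arange(6, dtype=np.float64)      # hypothetical input
f = np.array([1.0, -2.0, 0.5])          # hypothetical filter
y = fprop_conv_1d(x, f)
e = np.ones_like(y)                     # stand-in upstream deltas
loss = float(np.dot(y, e))              # scalar loss so e is d(loss)/dy
g = bprop_conv_1d(f, e, len(x))
```

The same scatter pattern generalizes to the 2D/3D case that `ng.bprop_conv` implements, with padding and stride folded into which taps each delta touches.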

Example 3: padding

# Required module import: from nervanagpu import NervanaGPU [as alias]
# Or alternatively: from nervanagpu.NervanaGPU import bprop_conv [as alias]
cpuU = np.zeros(slicable(dimF),   dtype=np.float32)

# give gpu the input array without zero padding (not needed)
devI = ng.array(cpuI[:-1,:].reshape(dimI), dtype=dtype)
devF = ng.array(cpuF.reshape(dimF), dtype=dtype)
devE = ng.array(cpuE, dtype=dtype)

devO = devB = devU = 0

if "fprop"  in ops:
    devO = ng.empty(dimO, dtype=dtype)
    ng.fprop_conv(conv,  devI, devF, devO, alpha=1.0, repeat=repeat)

if "bprop"  in ops:
    devB = ng.empty(dimI, dtype=dtype)
    ng.bprop_conv(conv,  devF, devE, devB, alpha=1.0, repeat=repeat)

if "update" in ops:
    devU = ng.empty(dimF, dtype=dtype)
    ng.update_conv(conv, devI, devE, devU, alpha=1.0, repeat=repeat)


def pixel_indices(mt, pr, qs):

    T,R,S = conv.TRS
    D,H,W = conv.DHW
    C     = conv.C
    HW    = H*W
    DHW   = D*H*W
    imax  = C*DHW
Contributor: KayneWest, Project: nervanagpu, Lines: 32, Source file: conv_test.py
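Example 3's `pixel_indices` helper works with the DHW/TRS dimension triples and a flat index bound `imax = C*D*H*W`. The sketch below shows the standard output-size arithmetic those dimensions rely on; the formula is the usual one for padded, strided convolution, but the concrete values and padding convention are hypothetical, not taken from conv_test.py.

```python
# Standard output size for one spatial axis: input extent x, filter
# extent s, symmetric zero padding, and stride.
def out_dim(x, s, padding, stride):
    return (x + 2 * padding - s) // stride + 1

C, D, H, W = 4, 8, 32, 32   # hypothetical channels and input depth/height/width
T, R, S = 3, 3, 3           # hypothetical filter depth/height/width

M = out_dim(D, T, padding=1, stride=1)   # output depth
P = out_dim(H, R, padding=1, stride=1)   # output height
Q = out_dim(W, S, padding=1, stride=1)   # output width

# Flat (channel, voxel) index bound, as computed in pixel_indices above.
HW, DHW = H * W, D * H * W
imax = C * DHW
```

With "same" padding (padding 1 for a 3-wide filter at stride 1), the output extents M, P, Q equal the input extents D, H, W, which is why the test can slice input and output tensors with matching index arithmetic.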


Note: The nervanagpu.NervanaGPU.bprop_conv examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs, with snippets selected from open-source projects contributed by various programmers. Copyright of the source code remains with the original authors; consult the corresponding project's license before distributing or using it. Do not reproduce without permission.