

Python sparse.structured_dot Function Code Examples

This article collects typical usage examples of the theano.sparse.structured_dot function in Python. If you are wondering how structured_dot is called, what its arguments look like, or how it is used in real code, the curated examples below should help.


Fifteen code examples of the structured_dot function are shown below, sorted by popularity by default.
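
Before diving into the collected examples, here is a minimal usage sketch (not taken from any of the projects below, and assuming a Theano installation with SciPy support): a symbolic sparse CSC matrix is multiplied by a symbolic dense matrix with structured_dot, compiled into a function, and evaluated on concrete SciPy/NumPy values. Shapes and names are illustrative.

import numpy
import scipy.sparse as sp
import theano
import theano.tensor as tensor
from theano.sparse import SparseType, structured_dot

# symbolic inputs: a CSC sparse matrix and a dense matrix
x = SparseType('csc', dtype='float64')()
y = tensor.matrix('y', dtype='float64')

# sparse-by-dense product; the result is a dense tensor variable
z = structured_dot(x, y)

f = theano.function([x, y], z)

# evaluate on concrete values
x_val = sp.csc_matrix(numpy.eye(4, dtype='float64'))
y_val = numpy.random.randn(4, 2).astype('float64')
print(f(x_val, y_val))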

Example 1: buildgraph

 def buildgraph(spdata, sym_mat):
     csr = CSR(spdata, spmat.indices[:spmat.size],
             spmat.indptr, spmat.shape)
     assert csr.type.dtype == 'float64'
     rval = structured_dot(csr, sym_mat)
     assert rval.type.dtype == 'float64'
     return rval
Author: HaniAlmousli, Project: Theano, Lines: 7, Source: test_basic.py

Example 2: applySparseFilter

def applySparseFilter(kerns, kshp, nkern, images, imgshp, step=(1,1), bias=None, mode='valid'):
    """
    "images" is assumed to be a matrix of shape batch_size x img_size, where the second
    dimension represents each image in raster order

    Output feature map will have shape:

    .. code-block:: python

       batch_size x number of kernels * output_size

    .. note::

        IMPORTANT: note that this means that each feature map is contiguous in memory.
        The memory layout will therefore be:
        [ <feature_map_0> <feature_map_1> ... <feature_map_n>],
        where <feature_map> represents a "feature map" in raster order

    Note that the concept of feature map doesn't really apply to sparse filters without
    weight sharing. Basically, nkern=1 will generate one output img/feature map,
    nkern=2 a second feature map, etc.

    kerns is a 1D tensor, and is assumed to be of shape:

    .. code-block:: python

       nkern * N.prod(outshp) x N.prod(kshp)

    Each filter is applied separately to consecutive output pixels.

    :param kerns: nkern*outsize*ksize vector containing kernels
    :param kshp: tuple containing actual dimensions of kernel (not symbolic)
    :param nkern: number of kernels to apply at each pixel in the input image.
                  nkern=1 will apply a single unique filter for each input pixel.
    :param images: bsize x imgsize matrix containing images on which to apply filters
    :param imgshp: tuple containing actual image dimensions (not symbolic)
    :param step: determines number of pixels between adjacent receptive fields
                 (tuple containing dx,dy values)
    :param mode: 'full' or 'valid'; see CSM.evaluate function for details
    :return: out1, symbolic result
    :return: out2, logical shape of the output img (nkern,height,width)
             (after dot product, not of the sparse matrix!)
    """

    # inshp contains either 2 entries (height,width) or 3 (nfeatures,h,w)
    # in the first case, default nfeatures to 1
    if numpy.size(imgshp)==2:
        imgshp = (1,)+imgshp

    # construct indices and index pointers for sparse matrix
    indices, indptr, spmat_shape, sptype, outshp, kmap = \
        convolution_indices.sparse_eval(imgshp, kshp, nkern, step, mode)

    # build a sparse weight matrix
    sparsew = theano.sparse.CSM(sptype, kmap)(kerns, indices, indptr, spmat_shape)
    output = sparse.structured_dot(sparsew, images.T).T
    if bias is not None:
        output += bias

    return output, numpy.hstack((nkern,outshp))
Author: glorotxa, Project: Theano, Lines: 60, Source: sp.py
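
For orientation, a hypothetical call to applySparseFilter might look like the sketch below. The kernel shape, kernel count, and image shape are made up, and the function is assumed to be importable from the sp module shown above; the kernel vector stays symbolic because its required length (nkern * prod(outshp) * prod(kshp)) depends on the output shape computed inside the function.

import theano
import theano.tensor as tensor

kerns = tensor.vector('kerns')    # flattened kernel values, length nkern*prod(outshp)*prod(kshp)
images = tensor.matrix('images')  # batch_size x img_size, raster order

out, outshp = applySparseFilter(kerns, kshp=(5, 5), nkern=4,
                                images=images, imgshp=(28, 28))
f = theano.function([kerns, images], out)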

Example 3: __call__

    def __call__(self, inputs):
        """
        Compute and return the PCA transformation of sparse data.

        Precondition: self.mean has been subtracted from inputs.  The reason
        for this is that, as far as I can tell, there is no way to subtract a
        vector from a sparse matrix without constructing an intermediary dense
        matrix, in theano; even the hack used in train() won't do, because
        there is no way to symbolically construct a sparse matrix by repeating
        a vector (again, as far as I can tell).

        :type inputs: scipy.sparse matrix object, shape (n, d)
        :param inputs: sparse matrix on which to compute PCA

        TODO: docstring upgrade. Make it consistent with the numpy/pylearn
        standard.
        """

        # Update component cutoff, in case min_variance or num_components has
        # changed (or both).
        self._update_cutoff()

        Y = structured_dot(inputs, self.W[:, :self.component_cutoff])
        if self.whiten:
            Y /= tensor.sqrt(self.v[:self.component_cutoff])
        return Y
Author: Alienfeel, Project: pylearn2, Lines: 26, Source: pca.py

Example 4: buildgraphCSC

 def buildgraphCSC(spdata, sym_mat):
     csc = CSC(spdata, spmat.indices[:spmat.size],
             spmat.indptr, spmat.shape)
     assert csc.type.dtype == 'float32'
     rval = structured_dot(csc, sym_mat)
     assert rval.type.dtype == 'float32'
     return rval
Author: HaniAlmousli, Project: Theano, Lines: 7, Source: test_basic.py

Example 5: __call__

    def __call__(self, inputs):
        """
        Compute and return the PCA transformation of sparse data.

        Precondition: `self.mean` has been subtracted from inputs. The reason
        for this is that, as far as I can tell, there is no way to subtract a
        vector from a sparse matrix without constructing an intermediary dense
        matrix, in theano; even the hack used in `train()` won't do, because
        there is no way to symbolically construct a sparse matrix by repeating
        a vector (again, as far as I can tell).

        Parameters
        ----------
        inputs : scipy.sparse matrix object
            Sparse matrix of shape (n, d) on which to compute PCA

        Returns
        -------
        WRITEME
        """

        # Update component cutoff, in case min_variance or num_components has
        # changed (or both).
        self._update_cutoff()

        Y = structured_dot(inputs, self.W[:, :self.component_cutoff])
        if self.whiten:
            Y /= tensor.sqrt(self.v[:self.component_cutoff])
        return Y
Author: AlexArgus, Project: pylearn2, Lines: 29, Source: pca.py

Example 6: conv2d_channel_minor

def conv2d_channel_minor(images, kerns, ishp4, kshp4, subsample=(1,1),
             border_mode='valid'):
    # start by computing output dimensions, size, etc
    B, IR, IC, C = ishp4
    K, KR, KC, CH = kshp4
    assert C == CH # number of channels must match

    OR, OC = conv_out_shp(IR, IC, KR, KC, border_mode, subsample)
    oshp = (B, OR, OC, K)

    # construct indices and index pointers for sparse matrix, which, when multiplied
    # with input images will generate a stack of image patches
    patch_extractor = sp_extract_patches(IR, IC, KR, KC, CH,
            RasterOrders.row_col_channel,
            RasterOrders.row_col_channel,
            subsample,
            border_mode,
            flip_patches=True).tocsc()

    #print IR, IC, KR, KC, CH, patch_extractor.shape, patch_extractor.nnz
    patches = sparse.structured_dot(
            images.flatten(2),
            patch_extractor)

    # compute output of linear classifier
    patch_stack = patches.reshape((B*OR*OC, KR*KC*CH))

    # kern is of shape: nkern x ksize*number_of_input_features
    # output is thus of shape: bsize*outshp x nkern
    output = tensor.dot(patch_stack, kerns.flatten(2).T).reshape((B, OR, OC, K))

    return output, oshp
Author: HaniAlmousli, Project: pylearn, Lines: 32, Source: spconv.py
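
A rough, hypothetical call sketch for conv2d_channel_minor follows; shapes are made up, and the 4D inputs are assumed to be laid out channel-minor, i.e. (batch, rows, cols, channels) for images and (kernels, kernel_rows, kernel_cols, channels) for kerns, matching ishp4 and kshp4.

import theano
import theano.tensor as tensor

B, IR, IC, C = 8, 32, 32, 3   # batch, image rows, image cols, channels
K, KR, KC = 16, 5, 5          # number of kernels, kernel rows, kernel cols

images = tensor.tensor4('images')   # assumed shape (B, IR, IC, C)
kerns = tensor.tensor4('kerns')     # assumed shape (K, KR, KC, C)

out, oshp = conv2d_channel_minor(images, kerns,
                                 ishp4=(B, IR, IC, C),
                                 kshp4=(K, KR, KC, C))
f = theano.function([images, kerns], out)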

Example 7: test_structured_dot_grad

 def test_structured_dot_grad(self):
     # We also need the grad of CSM to be implemented.
     raise SkipTest("infer_shape not implemented for the grad" " of structured_dot")
     for format, op in [("csc", StructuredDotGradCSC), ("csr", StructuredDotGradCSR)]:
         x = SparseType(format, dtype=config.floatX)()
         y = SparseType(format, dtype=config.floatX)()
         grads = tensor.grad(dense_from_sparse(structured_dot(x, y)).sum(), [x, y])
         self._compile_and_check(
             [x, y],
             [grads[0]],
             [
                 as_sparse_format(random_lil((4, 5), config.floatX, 3), format),
                 as_sparse_format(random_lil((5, 3), config.floatX, 3), format),
             ],
             op,
         )
         self._compile_and_check(
             [x, y],
             [grads[1]],
             [
                 as_sparse_format(random_lil((4, 5), config.floatX, 3), format),
                 as_sparse_format(random_lil((5, 3), config.floatX, 3), format),
             ],
             op,
         )
Author: daien, Project: Theano, Lines: 25, Source: test_basic.py

Example 8: test_upcast

    def test_upcast(self):

        typenames = ("float32", "int64", "int8", "int32", "int16", "float64", "complex64", "complex128")
        for dense_dtype in typenames:
            for sparse_dtype in typenames:
                correct_dtype = theano.scalar.upcast(sparse_dtype, dense_dtype)
                a = SparseType("csc", dtype=sparse_dtype)()
                b = tensor.matrix(dtype=dense_dtype)
                d = structured_dot(a, b)
                assert d.type.dtype == correct_dtype

                # compile and run a function

                f = theano.function([a, b], d)

                M, N, K, nnz = (4, 3, 5, 3)
                spmat = sp.csc_matrix(random_lil((M, N), sparse_dtype, nnz))
                # the following madness is necessary to workaround
                # an intc vs. int32 bug.
                # The lil makes an intc on my computer when sparse_dtype
                # is int32.
                spmat.dtype = numpy.dtype(sparse_dtype)
                mat = numpy.asarray(numpy.random.randn(N, K) * 9, dtype=dense_dtype)
                print("DTYPES", sparse_dtype, dense_dtype)
                print("sym types", a.type, b.type)
                print("dtype strings", spmat.dtype, mat.dtype)
                print("numpy dtype num", mat.dtype.num)
                print("scipy dtype num", spmat.data.dtype.num)
                theano_result = f(spmat, mat)
                scipy_result = spmat * mat
                assert theano_result.shape == scipy_result.shape
                assert theano_result.dtype == scipy_result.dtype
                assert _allclose(theano_result, scipy_result)
Author: daien, Project: Theano, Lines: 33, Source: test_basic.py

Example 9: test_structured_dot

 def test_structured_dot(self):
     x = SparseType("csc", dtype=config.floatX)()
     y = SparseType("csc", dtype=config.floatX)()
     self._compile_and_check(
         [x, y],
         [structured_dot(x, y)],
         [sp.csc_matrix(random_lil((4, 5), config.floatX, 3)), sp.csc_matrix(random_lil((5, 3), config.floatX, 3))],
         StructuredDot,
     )
Author: daien, Project: Theano, Lines: 9, Source: test_basic.py

Example 10: test_infer_shape

 def test_infer_shape(self):
     a = SparseType('csc', dtype=config.floatX)()
     b = SparseType('csc', dtype=config.floatX)()
     f = theano.function([a, b], structured_dot(a, b).shape)
     topo = f.maker.env.toposort()
     assert not any(isinstance(t, self.__class__) for t in topo)
     x = sp.csc_matrix((4, 5), dtype=config.floatX)
     y = sp.csc_matrix((5, 3), dtype=config.floatX)
     assert numpy.all(f(x, y) == numpy.array((4, 3)))
Author: mesnilgr, Project: Theano, Lines: 9, Source: test_basic.py

Example 11: max_pool

def max_pool(images, imgshp, maxpoolshp):
    """Implements a max pooling layer

    Takes as input a 2D tensor of shape batch_size x img_size and
    performs max pooling.  Max pooling downsamples by taking the max
    value in a given area, here defined by maxpoolshp. Outputs a 2D
    tensor of shape batch_size x output_size.

    :param images: 2D tensor containing images on which to apply max pooling.
                   Assumed to be of shape batch_size x img_size
    :param imgshp: tuple containing image dimensions
    :param maxpoolshp: tuple containing shape of area to max pool over

    :return: out1, symbolic result (2D tensor)
    :return: out2, logical shape of the output
    """
    N = numpy
    poolsize = N.int64(N.prod(maxpoolshp))

    # imgshp contains either 2 entries (height,width) or 3 (nfeatures,h,w)
    # in the first case, default nfeatures to 1
    if N.size(imgshp) == 2:
        imgshp = (1,) + imgshp

    # construct indices and index pointers for sparse matrix, which,
    # when multiplied with input images will generate a stack of image
    # patches
    indices, indptr, spmat_shape, sptype, outshp = \
            convolution_indices.conv_eval(imgshp, maxpoolshp,
                                          maxpoolshp, mode='valid')

#    print 'XXXXXXXXXXXXXXXX MAX POOLING LAYER XXXXXXXXXXXXXXXXXXXX'
#    print 'imgshp = ', imgshp
#    print 'maxpoolshp = ', maxpoolshp
#    print 'outshp = ', outshp

    # build sparse matrix, then generate stack of image patches
    csc = theano.sparse.CSM(sptype)(N.ones(indices.size), indices,
                                    indptr, spmat_shape)
    patches = sparse.structured_dot(csc, images.T).T

    pshape = tensor.stack([images.shape[0] *\
                               tensor.as_tensor(N.prod(outshp)),
                           tensor.as_tensor(imgshp[0]),
                           tensor.as_tensor(poolsize)])
    patch_stack = tensor.reshape(patches, pshape, ndim=3)

    out1 = tensor.max(patch_stack, axis=2)

    pshape = tensor.stack([images.shape[0],
                           tensor.as_tensor(N.prod(outshp)),
                           tensor.as_tensor(imgshp[0])])
    out2 = tensor.reshape(out1, pshape, ndim=3)

    out3 = tensor.DimShuffle(out2.broadcastable, (0, 2, 1))(out2)

    return tensor.flatten(out3, 2), outshp
Author: 12190143, Project: Theano, Lines: 57, Source: sp.py
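
A hypothetical call to max_pool, with made-up shapes, might look like the following; the images argument is expected as a flattened batch_size x img_size matrix in raster order, and max_pool is assumed to be importable from the sp module shown above.

import theano
import theano.tensor as tensor

images = tensor.matrix('images')   # batch_size x (28*28), raster order
pooled, outshp = max_pool(images, imgshp=(28, 28), maxpoolshp=(2, 2))
f = theano.function([images], pooled)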

Example 12: get_output_for

    def get_output_for(self, input, **kwargs):
        if input.ndim > 2:
            # if the input has more than two dimensions, flatten it into a
            # batch of feature vectors.
            input = input.flatten(2)

        activation = sp.structured_dot(input, self.W)
        if self.b is not None:
            activation = activation + self.b.dimshuffle('x', 0)
        return self.nonlinearity(activation)
Author: jkramar, Project: kaggle-walmart, Lines: 10, Source: sparse_layers.py

Example 13: __call__

    def __call__(self, inputs):

        self._update_cutoff()

        Y = structured_dot(inputs, self.W[:, :self.component_cutoff])
        Z = Y - tensor.dot(self.mean, self.W[:, :self.component_cutoff])

        if self.whiten:
            Z /= tensor.sqrt(self.v[:self.component_cutoff])
        return Z
Author: jaberg, Project: pylearn, Lines: 10, Source: pca.py
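
The trick in Example 13 is worth spelling out: subtracting the mean from a sparse matrix would densify it, so the code relies on the linearity of the dot product, (X - mean) W = X W - mean W, and subtracts dot(self.mean, W) after the sparse structured_dot. A plain NumPy sketch of that identity, using dense stand-ins and illustrative shapes:

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((10, 5))        # stands in for the sparse inputs
mean = X.mean(axis=0)
W = rng.random((5, 3))         # stands in for self.W[:, :self.component_cutoff]

Z_direct = (X - mean) @ W                 # what we would like to compute
Z_sparse_friendly = X @ W - mean @ W      # what the code above computes
assert np.allclose(Z_direct, Z_sparse_friendly)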

Example 14: test_infer_shape_csr_csc_grad

 def test_infer_shape_csr_csc_grad(self):
     for sparsetype in ('csr', 'csc'):
         a = SparseType(sparsetype, dtype=config.floatX)()
         b = SparseType(sparsetype, dtype=config.floatX)()
         grads = tensor.grad(dense_from_sparse(structured_dot(a, b)).sum(),
                             [a, b])
         f = theano.function([a, b], [g.shape for g in grads])
         topo = f.maker.env.toposort()
         assert not any(isinstance(t, self.__class__) for t in topo)
         call = getattr(sp, sparsetype + '_matrix')
         x = call(random_lil((500, 300), config.floatX, 10))
         y = call(random_lil((300, 400), config.floatX, 5))
         out1, out2 = f(x, y)
         assert numpy.all(out1 == x.shape)
         assert numpy.all(out2 == y.shape)
Author: mesnilgr, Project: Theano, Lines: 15, Source: test_basic.py

Example 15: test_opt_unpack

    def test_opt_unpack(self):
        #
        # Test that a graph involving
        # structured_dot(assembled_csc_matrix) is optimized to be just
        # a structured_dot_csc Op and no assembly of a csc_matrix.
        #
        # The optimization from structured_dot -> structured_dot_csc
        # is currently disabled, so this test is not expected to pass.

        return
        #
        kerns = tensor.Tensor(dtype='int64', broadcastable=[False])('kerns')
        spmat = sp.lil_matrix((4, 6), dtype='int64')
        for i in range(5):
            # set non-zeros in random locations (row x, col y)
            x = numpy.floor(numpy.random.rand() * spmat.shape[0])
            y = numpy.floor(numpy.random.rand() * spmat.shape[1])
            spmat[x, y] = numpy.random.rand() * 10
        spmat = sp.csc_matrix(spmat)

        images = tensor.Tensor(dtype='float32',
                               broadcastable=[False, False])(
            'images')

        cscmat = CSC(kerns, spmat.indices[:spmat.size],
                     spmat.indptr, spmat.shape)
        f = theano.function([kerns, images], structured_dot(cscmat, images.T))

        sdcscpresent = False
        for node in f.maker.env.toposort():
            print(node.op)
            assert not isinstance(node.op, CSM)
            assert not isinstance(node.op, CSMProperties)
            if isinstance(node.op, StructuredDotCSC):
                sdcscpresent = True
        assert sdcscpresent

        kernvals = numpy.array(spmat.data[:spmat.size])
        #print 'kdtype', kernvals.dtype, kernvals.shape,
        #print kernvals.ndim, kernvals.dtype.num
        #print 'type of kernvals = ', kernvals.dtype
        bsize = 3
        imvals = 1.0 * numpy.array(numpy.arange(bsize * spmat.shape[1]).\
                reshape(bsize, spmat.shape[1]), dtype='float32')
        outvals = f(kernvals, imvals)
        print(outvals)
Author: HaniAlmousli, Project: Theano, Lines: 46, Source: test_basic.py


Note: The theano.sparse.structured_dot examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by their respective authors, and copyright of the source code remains with the original authors. For distribution and use, please refer to the license of the corresponding project; do not reproduce without permission.