

Python PatchExtractor.reshape Method Code Examples

This article collects typical usage examples of the Python method sklearn.feature_extraction.image.PatchExtractor.reshape. If you are unsure what PatchExtractor.reshape does or how to use it, the curated example below may help. You can also explore further usage examples of sklearn.feature_extraction.image.PatchExtractor.


The following shows 1 code example of the PatchExtractor.reshape method, taken from an open-source project.
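Before the full example, here is a minimal sketch (not taken from the project below) of where the reshape call actually lives: PatchExtractor.transform returns a NumPy array of patches, and reshape is the ndarray method used to flatten each patch into a row vector, e.g. before feeding the patches to PCA. The image sizes and patch counts are illustrative assumptions.

import numpy as np
from sklearn.feature_extraction.image import PatchExtractor

images = np.random.rand(4, 32, 32, 3)              # hypothetical batch of 4 RGB images
patches = PatchExtractor(patch_size=(9, 9),
                         max_patches=100).transform(images)
print(patches.shape)                                # (n_patches, 9, 9, 3)
flat = patches.reshape(patches.shape[0], -1)        # one flattened patch per row
print(flat.shape)                                   # (n_patches, 243)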

Example 1: convolutional_zca

# Required import: from sklearn.feature_extraction.image import PatchExtractor [as alias]
# Or: from sklearn.feature_extraction.image.PatchExtractor import reshape [as alias]
import numpy as np
import theano
import theano.tensor as T
import theano.tensor.nlinalg  # makes T.nlinalg.diag available
from theano import function, shared
from theano.tensor.nnet import conv2d
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import PatchExtractor

floatX = theano.config.floatX


def convolutional_zca(input, patch_size=(9, 9), max_patches=int(1e5)):
    """
    This is an implementation of the convolutional ZCA whitening presented by
    David Eigen in his PhD thesis
    http://www.cs.nyu.edu/~deigen/deigen-thesis.pdf

    "Predicting Images using Convolutional Networks:
     Visual Scene Understanding with Pixel Maps"

    From paragraph 8.4:
    A simple adaptation of ZCA to convolutional application is to find the
    ZCA whitening transformation for a sample of local image patches across
    the dataset, and then apply this transform to every patch in a larger image.
    We then use the center pixel of each ZCA patch to create the conv-ZCA
    output image. The operations of applying local ZCA and selecting the center
    pixel can be combined into a single convolution kernel,
    resulting in the following algorithm
    (explained using RGB inputs and 9x9 kernel):

    1. Sample 10M random 9x9 image patches (each with 3 colors)
    2. Perform PCA on these to get eigenvectors V and eigenvalues D.
    3. Optionally remove small eigenvalues, so V has shape [npca x 3 x 9 x 9].
    4. Construct the whitening kernel k:
        for each pair of colors (ci,cj),
        set k[j,i, :, :] = V[:, j, x0, y0]^T * D^{-1/2} * V[:, i, :, :]

    where (x0, y0) is the center pixel location (e.g. (5,5) for a 9x9 kernel)


    :param input: 4D tensor of shape [batch_size, rows, cols, channels]
    :param patch_size: (rows, cols) of the patches extracted from the dataset,
        or an int for square patches
    :param max_patches: max number of patches extracted from the dataset

    :return: conv-ZCA whitened dataset
    """

    # I don't know if it's correct or not.. but it seems to work
    mean = np.mean(input, axis=(0, 1, 2))
    input -= mean  # center the data

    n_imgs, h, w, n_channels = input.shape
    if isinstance(patch_size, int):
        # the default is already a (rows, cols) tuple; only wrap a bare int
        patch_size = (patch_size, patch_size)
    patches = PatchExtractor(patch_size=patch_size,
                             max_patches=max_patches).transform(input)
    pca = PCA()
    pca.fit(patches.reshape(patches.shape[0], -1))

    # Transpose the components into theano convolution filter type
    dim = (-1,) + patch_size + (n_channels,)
    V = shared(pca.components_.reshape(dim)
               .transpose(0, 3, 1, 2).astype(input.dtype))
    D = T.nlinalg.diag(1. / np.sqrt(pca.explained_variance_))

    x_0 = int(np.floor(patch_size[0] / 2))
    y_0 = int(np.floor(patch_size[1] / 2))

    filter_shape = [n_channels, n_channels, patch_size[0], patch_size[1]]
    image_shape = [n_imgs, n_channels, h, w]
    kernel = T.zeros(filter_shape)
    VT = V.dimshuffle(2, 3, 1, 0)

    # V : 243 x 3 x 9 x 9
    # VT : 9 x 9 x 3 x 243

    # build the kernel
    for i in range(n_channels):
        for j in range(n_channels):
            a = T.dot(VT[x_0, y_0, j, :], D).reshape([1, -1])
            b = V[:, i, :, :].reshape([-1, patch_size[0] * patch_size[1]])
            c = T.dot(a, b).reshape([patch_size[0], patch_size[1]])
            kernel = T.set_subtensor(kernel[j, i, :, :], c)

    kernel = kernel.astype(floatX)
    input = input.astype(floatX)
    input_images = T.tensor4(dtype=floatX)
    conv_whitening = conv2d(input_images.dimshuffle((0, 3, 1, 2)),
                            kernel,
                            input_shape=image_shape,
                            filter_shape=filter_shape,
                            border_mode='full')
    s_crop = [(patch_size[0] - 1) // 2,
              (patch_size[1] - 1) // 2]
    # e_crop = [s_crop[0] if (s_crop[0] % 2) != 0 else s_crop[0] + 1,
    #           s_crop[1] if (s_crop[1] % 2) != 0 else s_crop[1] + 1]

    conv_whitening = conv_whitening[:, :, s_crop[0]:-s_crop[0],
                                    s_crop[1]:-s_crop[1]]
    conv_whitening = conv_whitening.dimshuffle(0, 2, 3, 1)
    f_convZCA = function([input_images], conv_whitening)

    return f_convZCA(input)
Developer ID: 12190143, Project: reseg, Code lines: 93, Source file: helper_dataset.py
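To make the kernel construction in steps 2-4 of the docstring easier to follow, here is a NumPy-only sketch of the same computation. The shapes and random patches are illustrative assumptions; the function above performs the equivalent steps with Theano ops and then applies the kernel with a full convolution.

import numpy as np
from sklearn.decomposition import PCA

n_channels, ph, pw = 3, 9, 9
patches = np.random.randn(1000, ph, pw, n_channels)   # stand-in for sampled, centered patches

pca = PCA()
pca.fit(patches.reshape(patches.shape[0], -1))

# V: [npca, channels, ph, pw]; D^{-1/2} as a diagonal matrix
V = pca.components_.reshape(-1, ph, pw, n_channels).transpose(0, 3, 1, 2)
D_inv_sqrt = np.diag(1.0 / np.sqrt(pca.explained_variance_))

x0, y0 = ph // 2, pw // 2                              # center pixel of the patch
kernel = np.zeros((n_channels, n_channels, ph, pw))
for i in range(n_channels):
    for j in range(n_channels):
        a = V[:, j, x0, y0] @ D_inv_sqrt               # row vector of length npca
        b = V[:, i, :, :].reshape(-1, ph * pw)         # npca x (ph*pw)
        kernel[j, i] = (a @ b).reshape(ph, pw)

# Convolving an image with this kernel (in 'full' mode, then cropping the
# borders) yields the conv-ZCA whitened image described in the docstring.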


Note: The sklearn.feature_extraction.image.PatchExtractor.reshape examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by their respective authors, and copyright remains with the original authors; please consult each project's license before redistributing or using the code. Do not reproduce without permission.