

Python Dataset.fa['center_ids'] Code Examples

This article collects typical usage examples of Dataset.fa['center_ids'] from mvpa2.datasets.base in Python. Strictly speaking, fa is the Dataset's per-feature attribute collection, so ds.fa['center_ids'] = ... is an item assignment rather than a method call. If you are wondering what Dataset.fa['center_ids'] does, how to use it, or want to see it in real code, the curated example below may help. You can also explore other usage examples of mvpa2.datasets.base.Dataset.


The following shows 1 code example of Dataset.fa['center_ids'], drawn from an open-source project.
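As a quick orientation before the full example, here is a minimal sketch of the assignment itself, using toy data and assuming PyMVPA (mvpa2) is installed:

import numpy as np
from mvpa2.datasets.base import Dataset

# Toy dataset: 4 samples x 3 features (made-up values)
ds = Dataset(np.random.rand(4, 3))

# fa is the per-feature attribute collection; the assigned sequence must
# have one entry per feature (here: an id for each searchlight center)
ds.fa['center_ids'] = np.arange(ds.nfeatures)

print(ds.fa.center_ids)   # -> [0 1 2]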

Example 1: _sl_call

# Required import: from mvpa2.datasets.base import Dataset [as alias]
# Or: from mvpa2.datasets.base.Dataset import fa['center_ids'] [as alias]

#......... part of the code omitted here .........
                        "roi's are expensive at this point.  Get them from the "
                        ".ca value of the original instance before "
                        "calling again and using reuse_neighbors")
        else:
            raise RuntimeError("Should not be reachable")

        # Since this is an ad-hoc implementation of the searchlight, we are
        # not passing those via ds.a but rather assigning them directly to
        # self.ca
        self.ca.roi_sizes = roi_sizes

        indexsum = self._indexsum
        if indexsum == 'sparse':
            if not self.reuse_neighbors or self.__roi_fids is None:
                if __debug__:
                    debug('SLC',
                          'Phase 4b. Converting neighbors to sparse matrix '
                          'representation')
                # convert to "sparse representation" where column j contains
                # 1s only at the roi_fids[j] indices
                roi_fids = inds_to_coo(roi_fids,
                                       shape=(dataset.nfeatures, nroi_fids))
            indexsum_fx = lastdim_columnsums_spmatrix
        elif indexsum == 'fancy':
            indexsum_fx = lastdim_columnsums_fancy_indexing
        else:
            raise ValueError(
                "Do not know how to deal with indexsum=%s" % indexsum)

        # Store roi_fids
        if self.reuse_neighbors and self.__roi_fids is None:
            self.__roi_fids = roi_fids

        # 5. Let's do the actual "splitting" and "classification"
        if __debug__:
            debug('SLC', 'Phase 5. Major loop')


        for isplit, split in enumerate(splits):
            if __debug__:
                debug('SLC', ' Split %i out of %i' % (isplit, nsplits))
            # figure out, for a given split, the sample blocks we want
            # to work with (sample indices)
            training_sis = split[0].samples[:, 0]
            testing_sis = split[1].samples[:, 0]

            # This is the GNB-specific part
            targets, predictions = self._sl_call_on_a_split(
                split, X,               # X2 might like to go
                training_sis, testing_sis,
                ## training_nsamples,      # GO? == np.sum(pl.nsamples)
                ## training_non0labels,
                ## pl.sums, pl.means, pl.sums2, pl.variances,
                # passing nroi_fids as well, since in the 'sparse'
                # representation it has no 'length'
                nroi_fids, roi_fids,
                indexsum_fx,
                labels_numeric,
                )

            # assess the errors
            if __debug__:
                debug('SLC', "  Assessing accuracies")

            if errorfx is mean_mismatch_error:
                results[isplit, :] = \
                    (predictions != targets[:, None]).sum(axis=0) \
                    / float(len(targets))
                all_cvfolds += [isplit]
            elif errorfx:
                # somewhat silly, but this allows using pre-crafted
                # error functions without a chance to screw up
                results.append(
                    np.array([errorfx(fpredictions, targets)
                              for fpredictions in predictions.T]))
                all_cvfolds += [isplit] * len(targets)

            else:
                # and if there is no errorfx -- we just need to assign the
                # original labels to the predictions, but keep in mind that
                # it is a matrix
                results.append(assign_ulabels(predictions))
                all_targets += [ulabels[i] for i in targets]
                all_cvfolds += [isplit] * len(targets)

            pass  # end of the split loop

        if isinstance(results, list):
            # we have just collected them, now they need to be vstacked
            results = np.vstack(results)
            assert results.ndim >= 2

        if __debug__:
            debug('SLC', "%s._call() is done in %.3g sec" %
                  (self.__class__.__name__, time.time() - time_start))

        out = Dataset(results)
        if all_targets:
            out.sa['targets'] = all_targets
        out.sa['cvfolds'] = all_cvfolds
        out.fa['center_ids'] = roi_ids
        return out
Developer ID: beausievers, Project: PyMVPA, Lines of code: 104, Source file: adhocsearchlightbase.py
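A note on the 'sparse' indexsum path above: inds_to_coo builds an indicator matrix in which column j holds 1s exactly at the feature indices of ROI j, so a single sparse product collapses per-feature counts into per-ROI sums. Below is a minimal, self-contained illustration of that idea using scipy.sparse directly; the roi_fids values are made up, and inds_to_coo / lastdim_columnsums_spmatrix themselves are PyMVPA helpers not reproduced here.

import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical neighborhoods: ROI j is the list of feature ids it contains
roi_fids = [[0, 1], [1, 2, 3], [3, 4]]
nfeatures, nroi_fids = 5, len(roi_fids)

# Indicator matrix: entry (i, j) == 1 iff feature i belongs to ROI j
rows = np.concatenate([np.asarray(fids) for fids in roi_fids])
cols = np.concatenate([np.full(len(fids), j) for j, fids in enumerate(roi_fids)])
indicator = coo_matrix((np.ones(len(rows)), (rows, cols)),
                       shape=(nfeatures, nroi_fids))

# Per-feature mismatch counts, e.g. over the test samples of one split
mismatches = np.array([2., 0., 1., 3., 1.])

# One sparse product sums the counts within each ROI
per_roi = indicator.T.dot(mismatches)
print(per_roi)   # -> [2. 4. 4.]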

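Similarly, the mean_mismatch_error branch relies on numpy broadcasting: predictions is an nsamples x nROI matrix and targets a vector, so comparing against targets[:, None] yields a per-ROI error rate in one expression. A standalone illustration with made-up numbers:

import numpy as np

# Hypothetical predictions for 4 test samples across 3 ROIs,
# and the true numeric labels for those samples
predictions = np.array([[0, 1, 0],
                        [1, 1, 0],
                        [0, 0, 1],
                        [1, 1, 1]])
targets = np.array([0, 1, 0, 1])

# targets[:, None] has shape (4, 1) and broadcasts against (4, 3):
# each column is compared to the same target vector
error_rates = (predictions != targets[:, None]).sum(axis=0) / float(len(targets))
print(error_rates)   # -> [0.   0.25 0.5 ]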

Note: the mvpa2.datasets.base.Dataset.fa['center_ids'] examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are selected from open-source projects contributed by their original authors, and copyright remains with those authors. Please consult the corresponding project's License before distributing or using the code; do not reproduce without permission.