

Python Bunch.keys Method Code Examples

This article collects typical usage examples of the Python method sklearn.datasets.base.Bunch.keys. If you are wondering exactly how Bunch.keys is used, how to call it, or what it looks like in real code, the selected examples here may help. You can also explore further usage examples of the containing class, sklearn.datasets.base.Bunch.
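Before the full example, here is a minimal, self-contained sketch (not taken from the project below; the file names are purely illustrative) of what a Bunch is and what its keys method returns. Bunch is a small dict subclass whose entries are also reachable as attributes; note that in recent scikit-learn releases the class is imported from sklearn.utils rather than sklearn.datasets.base.

# Minimal sketch: construct a Bunch and inspect its fields with keys().
from sklearn.datasets.base import Bunch  # in newer scikit-learn: from sklearn.utils import Bunch

ds = Bunch(func='bold.nii.gz', conditions_target='labels.txt')  # illustrative values
print(ds.keys())                 # the stored field names: 'func' and 'conditions_target'
print(ds.func)                   # attribute access -> 'bold.nii.gz'
print(ds['conditions_target'])   # ordinary dict indexing -> 'labels.txt'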


One code example of the Bunch.keys method is shown below.

Example 1: create_dataset

# Required module import: from sklearn.datasets.base import Bunch [as alias]
# Or: from sklearn.datasets.base.Bunch import keys [as alias]

#......... part of the code has been omitted here .........
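        # NOTE: the helper functions `remove` and `remove_range` used below are defined
        # in the code omitted above. A plausible sketch, inferred only from how they are
        # called here (an assumption, not the author's actual implementation):
        #     def remove(lst, value):
        #         return [x for x in lst if x != value]      # drop every occurrence of `value`
        #     def remove_range(lst, threshold):
        #         return [x for x in lst if x < threshold]   # drop codes >= threshold (response codes)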
        rv = remove(rv, 'remove')
        rv = remove(rv, 'boundary')
        rv = remove(rv, 'SyncOn')
        rv = remove(rv, 'Start')
        rv = remove(rv, 'Userdefined')
        rv = remove(rv, 'LowCorrelation')
        rv = remove(rv, 'TSTART')
        rv = remove(rv, 'TPEAK')
        rv = remove(rv, 'TEND')
        for i in range(len(rv)):
            if rv[i] == 'R128':
                rv[i] = '-99'
            rv[i] = rv[i].lstrip('S')
            rv[i] = int(rv[i])
        # remove stimulus codes for responses
        rv = remove_range(rv, 240)
        for idx, i in enumerate(rv):
            for idx2, i2 in enumerate(eegcodes):
                if i == i2:
                    rv[idx] = binary[idx2]            
        for idx, i in enumerate(rv):
            if i != -99:
                rv[idx-1] = i
                rv[idx] = 0
        # remove last TR as it was apparently not recorded
        rv[-1] = 0
        rv = remove(rv, 0)
        for idx, i in enumerate(rv):
            if i == -99:
                rv[idx] = 0
        
        # until now the list with negative / neutral labels also contains zeros, which we will want to get rid of. 
        # To do this, we will replace the zeros with the code shown prior
        # First two values will be deleted as well as first two TRs (after fmri_data_i gets assigned
        
        for idx, z in enumerate(rv):
            if idx <= 2 and z == 0:
                rv[idx] = -77
            if idx > 2 and z == 0:
                rv[idx] = rv[idx-1]
                
        for idx, z in enumerate(rv):
            if idx <= 1 and z != -77:
                print 'Warning, non-empty first two TRs were deleted.'
        
        rv = remove(rv, -77)
        unique = sorted(list(set(rv)))
        print 'Unique values in RV', unique  
        
        t = open('/gablab/p/eegfmri/analysis/iaps/pilot%s/machine_learning/neg-neutr_attributes_run%s.txt' %(subject_id, r), 'w')
        for i in range(len(rv)):
            t.write("%s %s" %(rv[i], r))
            t.write('\n')  
        t.close()
        
        print 'Labels Length:', len(rv)
        file_name = ['neg-neutr_attributes_run%s.txt' %(r), 'pilot%s_r0%s_bandpassed.nii.gz' %(subject_id, r)]
        fil = _get_dataset(dataset_name, file_name, data_dir='/gablab/p/eegfmri/analysis/iaps/pilot%s' %(subject_id), folder=None)
        ds_i = Bunch(func=fil[1], conditions_target=fil[0])
        labels_i = np.loadtxt(ds_i.conditions_target, dtype=np.str)
        bold_i = nb.load(ds_i.func)
        fmri_data_i = np.copy(bold_i.get_data())
        print 'Original fMRI data', fmri_data_i.shape
        
        fmri_data_i = fmri_data_i[...,2:]
        print fmri_data_i.shape
        
        affine = bold_i.get_affine()
        mean_img_i = np.mean(fmri_data_i, axis=3)
        session_data = np.append(session_data, labels_i[:,1])
        lab_data = np.append(lab_data, labels_i[:,0])
        img_data = np.concatenate((img_data, fmri_data_i), axis=3)        
        print '__________________________________________________________________________________________________________'
        
        
        if r == 3:
            img_data = img_data[...,1:]
            print 'fMRI image', img_data.shape
            print 'Label Vector Length:', len(lab_data), 'Session Vector Length:', len(session_data)
            ni_img = nb.Nifti1Image(img_data, affine=None, header=None)
            nb.save(ni_img, '/gablab/p/eegfmri/analysis/iaps/pilot%s/machine_learning/all_runs.nii' %(subject_id))
            f = open('/gablab/p/eegfmri/analysis/iaps/pilot%s/machine_learning/neg-neutr_attributes_all_runs.txt' %(subject_id), 'w')
            for i in range(len(lab_data)):
                f.write("%s %s" %(lab_data[i], session_data[i]))
                f.write('\n')  
            f.close()
            # set up concatenated dataset in nilearn format
            file_names = ['neg-neutr_attributes_all_runs.txt', 'all_runs.nii']
            files = _get_dataset(dataset_name, file_names, data_dir='/gablab/p/eegfmri/analysis/iaps/pilot%s' %(subject_id), folder=None)
            ds = Bunch(func=files[1], conditions_target=files[0])
            print ds.keys(), ds
            labels = np.loadtxt(ds.conditions_target, dtype=np.str)
            bold = nb.load(ds.func)
            fmri_data = np.copy(bold.get_data())
            print fmri_data.shape
            affine = bold_i.get_affine() # just choose one
            # Compute the mean EPI: we do the mean along the axis 3, which is time
            mean_img = np.mean(fmri_data, axis=3)
            
    return (ds, labels, bold, fmri_data, affine, mean_img) # later 'ds' will be sufficient
Developer: doreenr, Project: eegfmri, Lines of code: 104, Source file: create_dataset.py
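In this example, Bunch.keys itself appears only once: after the concatenated dataset is wrapped as Bunch(func=files[1], conditions_target=files[0]), the line `print ds.keys(), ds` lists the fields stored in the Bunch, i.e. 'func' and 'conditions_target', followed by the Bunch contents themselves.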


Note: the sklearn.datasets.base.Bunch.keys examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs; the snippets were selected from open-source projects contributed by their respective developers. Copyright of the source code remains with the original authors; for distribution and use, please refer to the corresponding project's license. Do not republish without permission.