

Python Dataset.get_ids_by_features Method Code Examples

This article collects and summarizes typical usage examples of the Python method neurosynth.base.dataset.Dataset.get_ids_by_features. If you are unsure what Dataset.get_ids_by_features does or how to call it, the curated examples below should help. You can also explore the other documented uses of neurosynth.base.dataset.Dataset.


The following presents 5 code examples of Dataset.get_ids_by_features, ordered by popularity.
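Before the full examples, here is a minimal sketch of the basic call pattern. It assumes the database.txt and features.txt files used in Example 3 below; the threshold value is illustrative:

from neurosynth.base.dataset import Dataset

# Load the activation database and the per-study term frequencies.
dataset = Dataset('database.txt')
dataset.add_features('features.txt')

# Return the IDs of all studies whose frequency for terms matching
# 'emo*' is at or above the threshold.
ids = dataset.get_ids_by_features('emo*', threshold=0.001)
print(len(ids))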

Example 1: generate_maps

# Required import: from neurosynth.base.dataset import Dataset
# Method demonstrated: Dataset.get_ids_by_features
import pickle

from neurosynth.base.dataset import Dataset
from neurosynth.analysis import meta


def generate_maps(terms, output_dir):

    # download_data() is defined elsewhere in functions.py; it returns the
    # paths of the Neurosynth features file (f) and database file (d).
    f, d = download_data()

    output_dir = "%s/maps" % output_dir

    print("Deriving pickled maps to extract relationships from...")
    dataset = Dataset(d)
    dataset.add_features(f)
    for t, term in enumerate(terms):
        print("Generating P(term|activation) for term %s, %s of %s" % (term, t + 1, len(terms)))
        ids = dataset.get_ids_by_features(term)
        maps = meta.MetaAnalysis(dataset, ids)
        term_name = term.replace(" ", "_")
        with open("%s/%s_pFgA_z.pkl" % (output_dir, term_name), "wb") as fp:
            pickle.dump(maps.images["pFgA_z"], fp)
Contributor: word-fish | Project: wordfish-plugins | Lines: 20 | Source file: functions.py

Example 2: extract_relations

# Required import: from neurosynth.base.dataset import Dataset
# Method demonstrated: Dataset.get_ids_by_features
import os
import pickle

import pandas
from scipy.stats import pearsonr

from neurosynth.base.dataset import Dataset
from neurosynth.analysis import meta


def extract_relations(terms, maps_dir, output_dir):

    if isinstance(terms, str):
        terms = [terms]

    # download_data() is defined elsewhere in functions.py; it returns the
    # paths of the Neurosynth features file (f) and database file (d).
    f, d = download_data()
    features = pandas.read_csv(f, sep="\t")
    allterms = features.columns.tolist()
    allterms.pop(0)  # drop the leading pmid column

    dataset = Dataset(d)
    dataset.add_features(f)

    # One row per term, one column per voxel of the dataset's brain mask.
    image_matrix = pandas.DataFrame(columns=range(228453))
    for term in allterms:
        term_name = term.replace(" ", "_")
        pickled_map = "%s/%s_pFgA_z.pkl" % (maps_dir, term_name)
        if not os.path.exists(pickled_map):
            print("Generating P(term|activation) for term %s" % term)
            ids = dataset.get_ids_by_features(term)
            maps = meta.MetaAnalysis(dataset, ids)
            with open(pickled_map, "wb") as fp:
                pickle.dump(maps.images["pFgA_z"], fp)
        with open(pickled_map, "rb") as fp:
            image_matrix.loc[term] = pickle.load(fp)

    # Pearson correlation between the z-score maps of each pair of terms.
    tuples = []
    for t1, term1 in enumerate(terms):
        print("Extracting NeuroSynth relationships for term %s..." % term1)
        for term2 in terms[t1 + 1:]:
            score = pearsonr(image_matrix.loc[term1], image_matrix.loc[term2])[0]
            tuples.append((term1, term2, score))

    # save_relations() is defined elsewhere in functions.py.
    save_relations(output_dir=output_dir, relations=tuples)
Contributor: word-fish | Project: wordfish-plugins | Lines: 40 | Source file: functions.py
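Examples 1 and 2 come from the same wordfish-plugins module and are designed to be chained: generate_maps pickles one map per term into <output_dir>/maps, and extract_relations then correlates those maps. A hypothetical driver (the term list and paths are illustrative, not from the original project):

terms = ["emotion", "memory", "pain"]  # illustrative term list
output_dir = "/tmp/wordfish"           # illustrative output location

generate_maps(terms, output_dir)
# generate_maps writes into <output_dir>/maps, which is what
# extract_relations expects as its maps_dir argument.
extract_relations(terms, "%s/maps" % output_dir, output_dir)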

Example 3: Dataset

# Required import: from neurosynth.base.dataset import Dataset
# Method demonstrated: Dataset.get_ids_by_features
from neurosynth.base.dataset import Dataset
from neurosynth.analysis import meta

# Load the activation database and add per-study term frequencies.
dataset = Dataset('database.txt')
dataset.add_features('features.txt')
print(dataset.get_feature_names())

# Select all studies that use terms starting with 'emo' at a
# frequency of at least 1 in 1,000 words.
ids = dataset.get_ids_by_features('emo*', threshold=0.001)
print(len(ids))

# Run a meta-analysis over those studies, prefixing all output
# images with 'emotion'.
ma = meta.MetaAnalysis(dataset, ids)
ma.save_results('emotion')
Contributor: MQMQ0229 | Project: neurosynth | Lines: 12 | Source file: example.py

Example 4: print

# Required import: from neurosynth.base.dataset import Dataset
# Method demonstrated: Dataset.get_ids_by_features
from os import path, makedirs

from neurosynth.base.dataset import Dataset, FeatureTable
from neurosynth.base import transformations  # bregma_to_whs() exists only in the wmpauli fork
from neurosynth.analysis import meta

resource_dir = path.join(path.pardir, 'resources')

# Make sure we have the data.
dataset_dir = path.join(path.expanduser('~'), 'Documents', 'neurosynth-data')
database_path = path.join(dataset_dir, 'database_bregma.txt')
neurosynth_data_url = 'https://github.com/wmpauli/neurosynth-data'
if not path.exists(database_path):
    print("Please download dataset from %s and store it in %s" % (neurosynth_data_url, dataset_dir))

# Load the dataset, both image table and feature table.
r = 1.0  # 1mm smoothing kernel
transform = {'BREGMA': transformations.bregma_to_whs()}
target = 'WHS'
masker_filename = path.join(resource_dir, 'WHS_SD_rat_brainmask_sm_v2.nii.gz')
dataset = Dataset(path.join(dataset_dir, 'database_bregma.txt'), masker=masker_filename, r=r, transform=transform, target=target)
dataset.feature_table = FeatureTable(dataset)
dataset.add_features(path.join(dataset_dir, "features_bregma.txt"))  # add features
fn = dataset.get_feature_names()

# Get the IDs of studies where this feature occurs.
feature = 'amygdala'  # illustrative; the original script sets `feature` elsewhere
ids = dataset.get_ids_by_features(('%s*' % feature), threshold=0.1)
ma = meta.MetaAnalysis(dataset, ids)
results_path = path.join('results', 'meta', feature)
if not path.exists(results_path):
    makedirs(results_path)

print("saving results to: %s" % results_path)
ma.save_results(results_path)

# Note: figure 2 of the manuscript was generated by plotting the z-score
# statistical maps for forward inference (pAgF_z.nii.gz) and reverse
# inference (pFgA_z.nii.gz).
Contributor: wmpauli | Project: neurosynth | Lines: 32 | Source file: basic_ma.py
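To look at the maps mentioned in the closing comment, the saved NIfTI images could be rendered with nilearn. A minimal sketch, assuming save_results() wrote pAgF_z.nii.gz and pFgA_z.nii.gz into results_path as the comment suggests:

from os import path
from nilearn import plotting

# Forward-inference and reverse-inference z-score maps; the file names
# come from the comment in the example above.
for img in ('pAgF_z.nii.gz', 'pFgA_z.nii.gz'):
    # bg_img=False skips nilearn's default human MNI background,
    # which would not match this rat-brain data.
    plotting.plot_stat_map(path.join(results_path, img), bg_img=False, title=img)
plotting.show()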

Example 5: greater

# Required import: from neurosynth.base.dataset import Dataset
# Method demonstrated: Dataset.get_ids_by_features
# Now that our Dataset has both activation data and some features, we're ready to start doing some analyses! By design, Neurosynth focuses on facilitating simple, fast, and modestly useful analyses. This means you probably won't break any new ground using Neurosynth, but you should be able to supplement results you've generated using other approaches with a bunch of nifty analyses that take just 2 - 3 lines of code.
# 
# ### Simple feature-based meta-analyses
# The most straightforward thing you can do with Neurosynth is use the features we just loaded above to perform automated large-scale meta-analyses of the literature. Let's see what features we have:

# <codecell>

dataset.get_feature_names()

# <markdowncell>

# If the loading process went smoothly, this should return a list of about 500 terms. We can use these terms--either in isolation or in combination--to select articles for inclusion in a meta-analysis. For example, suppose we want to run a meta-analysis of emotion studies. We could operationally define a study of emotion as one in which the authors used words starting with 'emo' with high frequency:

# <codecell>

ids = dataset.get_ids_by_features('emo*', threshold=0.001)

# <markdowncell>

# Here we're asking for a list of IDs of all studies that use words starting with 'emo' (e.g.,'emotion', 'emotional', 'emotionally', etc.) at a frequency of 1 in 1,000 words or greater (in other words, if an article has 5,000 words of text, it will only be included in our set if it uses words starting with 'emo' at least 5 times). Let's find out how many studies are in our list:

# <codecell>

len(ids)

# <markdowncell>

# The resulting set includes 639 studies.
# 
# Once we've got a set of studies we're happy with, we can run a simple meta-analysis, prefixing all output files with the string 'emotion' to distinguish them from other analyses we might run:
Contributor: MQMQ0229 | Project: neurosynth | Lines: 32 | Source file: neurosynth_demo.py
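The demo snippet is truncated just before the meta-analysis step; judging from Example 3 above, the next cell presumably mirrors it (a sketch, not part of the quoted file):

# <codecell>

# Run the meta-analysis over the selected studies, prefixing all
# output images with 'emotion' (as in Example 3 above).
ma = meta.MetaAnalysis(dataset, ids)
ma.save_results('emotion')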


Note: The neurosynth.base.dataset.Dataset.get_ids_by_features examples in this article were compiled by 纯净天空 from open-source code hosted on GitHub, MSDocs, and similar platforms. The snippets were selected from community-contributed open-source projects; copyright remains with the original authors, and redistribution or reuse should follow each project's license. Please do not repost without permission.