This article collects typical usage examples of the Python method mvpa2.datasets.Dataset.fa['name']. If you are wondering what Dataset.fa['name'] does, how to use it, or are looking for usage examples, the selected code examples below may help. You can also explore further usage examples of the containing class mvpa2.datasets.Dataset.
One code example of the Dataset.fa['name'] method is shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
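As a quick orientation before the full example, here is a minimal sketch of how a feature attribute such as 'name' is attached to a PyMVPA Dataset; the sample values and labels are made up for illustration and are not part of the example below.

import numpy as np
from mvpa2.datasets import Dataset

# A tiny 2-samples x 3-features dataset; the values are arbitrary.
ds = Dataset(np.array([[1.0, 2.0, 3.0],
                       [4.0, 5.0, 6.0]]))
# fa['name'] assigns one label per feature (i.e. per column).
ds.fa['name'] = ('x', 'y', 'pupil')
print(ds.fa.name)  # -> ['x' 'y' 'pupil']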
Example 1: movie_dataset
# Required import: from mvpa2.datasets import Dataset [as alias]
# Or: from mvpa2.datasets.Dataset import fa['name'] [as alias]
# Imports needed to make the example self-contained:
import os
import numpy as np
from mvpa2.datasets import Dataset, vstack
def movie_dataset(
        subj, preproc=None,
        base_path=os.curdir,
        fname_tmpl='sub-%(subj)s/ses-movie/func/sub-%(subj)s_ses-movie_task-movie_run-%(run)i_recording-eyegaze_physio.tsv.gz'):
"""
Load eyegaze recordings from all runs a merge into a consecutive timeseries
When merging intersegment-overlap is removed.
Parameters
----------
subj : str
Subject code.
preproc : callable or None
Callable to preprocess a record array of the raw timeseries. The record
array has the field 'x', 'y', 'pupil', and 'movie_frame'. It needs to
return a record array with the same fields and must not change the
sampling rate or number of samples.
base_path : path
Base directory for input file discovery.
fname_tmpl : str
Template expression to match input files. Support dict expansion with
'subj' and 'run' keys.
Returns
-------
Dataset
The dataset contains a number of attributes, most of which should be
self-explanatory. The `ds.a.run_duration_deviation` attribute quantifies
the eyegaze recording duration difference from the expected value (in
seconds).
"""
    # in frames (hand-verified by re-assembling in kdenlive -- using MELT
    # underneath)
    seg_offsets = (0, 22150, 43802, 65304, 89305, 112007, 133559, 160261)
    movie_fps = 25.0
    eyegaze_sr = 1000.0  # Hz
    intersegment_overlap = 400  # frames
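    # note: 400 frames at 25 fps correspond to 16 s, i.e. 16000 eyegaze
    # samples at 1000 Hz -- this is the amount trimmed from each segment end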
    segments = []
    for seg, offset in enumerate(seg_offsets):
        raw = np.recfromcsv(
            os.path.join(base_path, fname_tmpl % dict(subj=subj, run=seg + 1)),
            delimiter='\t',
            names=('x', 'y', 'pupil', 'movie_frame'))
        if preproc is not None:
            raw = preproc(raw)
        # glue together to form a dataset
        ds = Dataset(np.array((raw.x, raw.y, raw.pupil)).T,
                     sa=dict(movie_frame=raw.movie_frame))
        ds.sa['movie_run_frame'] = ds.sa.movie_frame.copy()
        # turn into movie frame ID for the entire unsegmented movie
        ds.sa.movie_frame += offset
        ## truncate segment time series to remove overlap
        if seg < 7:
            # cut the end in a safe distance to the actual end, but inside the
            # overlap
            ds = ds[:-int(intersegment_overlap / movie_fps * eyegaze_sr)]
        if seg > 0:
            # cut the beginning to have a seamless start after the previous
            # segment
            ds = ds[ds.sa.movie_frame > segments[-1].sa.movie_frame.max()]
        ds.sa['movie_run'] = [seg + 1] * len(ds)
        segments.append(ds)
    ds = vstack(segments)
    # column names
    ds.fa['name'] = ('x', 'y', 'pupil')
    ds.a['sampling_rate'] = eyegaze_sr
    ds.a['movie_fps'] = movie_fps
    return ds
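A hypothetical invocation of the function above might look like the following sketch; the subject code, data directory, and the pass-through preprocessing callable are placeholders for illustration, not part of the original example.

def no_preproc(raw):
    # placeholder preprocessor: returns the record array unchanged
    return raw

ds = movie_dataset('01', preproc=no_preproc, base_path='/data/studyforrest')
print(ds.shape)            # (n_samples, 3) -- the columns x, y, pupil
print(ds.fa.name)          # ['x' 'y' 'pupil']
print(ds.a.sampling_rate)  # 1000.0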