This page collects typical usage examples of the Python method neo.core.SpikeTrain.annotations['waveform_features']. If you are wondering how to use SpikeTrain.annotations['waveform_features'] in Python, the curated code examples below may help. You can also explore further examples of the containing class, neo.core.SpikeTrain.
Shown below is 1 code example of the SpikeTrain.annotations['waveform_features'] method; examples are sorted by popularity by default. You can upvote the examples you like or find useful, and your ratings help the system recommend better Python code examples.
Example 1: read_block
# Required import: from neo.core import SpikeTrain [as alias]
# Or: from neo.core.SpikeTrain import annotations['waveform_features'] [as alias]
# Note: this method is an excerpt from an IO class (it uses self._fp,
# self.filename and self.sampling_rate); it also needs the imports below.
import numpy as np
from neo.core import Block, Segment, SpikeTrain, Unit  # Unit exists in neo < 0.10
def read_block(self, lazy=False):
"""Returns a Block containing spike information.
There is no obvious way to infer the segment boundaries from
raw spike times, so for now all spike times are returned in one
big segment. The way around this would be to specify the segment
boundaries, and then change this code to put the spikes in the right
segments.
"""
assert not lazy, 'lazy loading is not supported'
# Create block and segment to hold all the data
block = Block()
# Search data directory for KlustaKwik files.
# If nothing found, return empty block
self._fetfiles = self._fp.read_filenames('fet')
self._clufiles = self._fp.read_filenames('clu')
if len(self._fetfiles) == 0:
return block
# Create a single segment to hold all of the data
seg = Segment(name='seg0', index=0, file_origin=self.filename)
block.segments.append(seg)
# Load spike times from each group and store in a dict, keyed
# by group number
self.spiketrains = dict()
for group in sorted(self._fetfiles.keys()):
# Load spike times
fetfile = self._fetfiles[group]
spks, features = self._load_spike_times(fetfile)
# Load cluster ids or generate
if group in self._clufiles:
clufile = self._clufiles[group]
uids = self._load_unit_id(clufile)
else:
# unclustered data, assume all zeros
uids = np.zeros(spks.shape, dtype=np.int32)
# error check
if len(spks) != len(uids):
raise ValueError("lengths of fet and clu files are different")
# Create Unit for each cluster
unique_unit_ids = np.unique(uids)
for unit_id in sorted(unique_unit_ids):
# Initialize the unit
u = Unit(name=('unit %d from group %d' % (unit_id, group)),
index=unit_id, group=group)
# Initialize a new SpikeTrain for the spikes from this unit
st = SpikeTrain(
times=spks[uids == unit_id] / self.sampling_rate,
units='sec', t_start=0.0,
t_stop=spks.max() / self.sampling_rate,
name=('unit %d from group %d' % (unit_id, group)))
st.annotations['cluster'] = unit_id
st.annotations['group'] = group
# put features in
if len(features) != 0:
st.annotations['waveform_features'] = features
# Link
u.spiketrains.append(st)
seg.spiketrains.append(st)
block.create_many_to_one_relationship()
return block
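The pattern the example relies on is that annotations is a plain dict attached to each SpikeTrain, so waveform features are read back by key after read_block returns. Since neo may not be installed, the sketch below illustrates the same access pattern with MockSpikeTrain, a hypothetical stand-in class that is not part of neo:

# Minimal sketch of the annotation access pattern used above.
# MockSpikeTrain is a hypothetical stand-in for neo.core.SpikeTrain;
# in neo, annotations is likewise a plain dict on each object.

class MockSpikeTrain:
    def __init__(self, times, **annotations):
        self.times = times
        self.annotations = dict(annotations)

# Build a train the way read_block does: attach cluster/group ids
# and (optionally) the per-spike waveform feature rows.
features = [[0.1, -0.3], [0.2, -0.1], [0.0, 0.4]]   # one row per spike
st = MockSpikeTrain([0.001, 0.015, 0.042],
                    cluster=3, group=1,
                    waveform_features=features)

# Downstream code reads the features back through the annotations dict,
# guarding for trains built from groups that had no feature file:
feats = st.annotations.get('waveform_features')
if feats is not None:
    print(len(feats), 'feature rows')

Using .get() rather than indexing mirrors the conditional in the example above: read_block only sets 'waveform_features' when the fet file actually contained features, so consumers should not assume the key is present.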