This article collects typical usage examples of the Python method neo.core.Segment.file_datetime, taken from open-source projects: what the method does, how to call it, and sample code. For more context, you can also look at the documentation of the class it belongs to, neo.core.Segment.
One code example of Segment.file_datetime is shown below.
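Before the full example, here is a minimal, standard-library-only sketch of the pattern behind Segment.file_datetime: stamping a data container with the source file's last-modification time. The helper name file_modification_datetime is illustrative, not part of neo; in the example below, the equivalent value is assigned directly to seg.file_datetime.

```python
import os
import tempfile
from datetime import datetime


def file_modification_datetime(path):
    """Return a file's last-modification time as a datetime object."""
    return datetime.fromtimestamp(os.stat(path).st_mtime)


# Demo on a throwaway temporary file; a real reader would pass
# the path of the data file it just opened.
with tempfile.NamedTemporaryFile(delete=False) as f:
    tmp_path = f.name
stamp = file_modification_datetime(tmp_path)
os.remove(tmp_path)
```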
Example 1: read_segment
# Required import: from neo.core import Segment [as alias]
# Or: from neo.core.Segment import file_datetime [as alias]
def read_segment(self, gid_list=None, time_unit=pq.ms, t_start=None,
                 t_stop=None, sampling_period=None, id_column_dat=0,
                 time_column_dat=1, value_columns_dat=2,
                 id_column_gdf=0, time_column_gdf=1, value_types=None,
                 value_units=None, lazy=False):
    """
    Reads a Segment which contains SpikeTrain(s) with specified neuron IDs
    from the GDF data.

    Parameters
    ----------
    gid_list : list, default: None
        A list of GDF IDs for which to return SpikeTrain(s). gid_list must
        be specified if the GDF file contains neuron IDs; the default None
        then raises an error. Specify an empty list [] to retrieve the
        spike trains of all neurons.
    time_unit : Quantity (time), optional, default: quantities.ms
        The time unit of recorded time stamps in DAT as well as GDF files.
    t_start : Quantity (time), optional, default: 0 * pq.ms
        Start time of SpikeTrain.
    t_stop : Quantity (time), default: None
        Stop time of SpikeTrain. t_stop must be specified; the default None
        raises an error.
    sampling_period : Quantity (frequency), optional, default: None
        Sampling period of the recorded data.
    id_column_dat : int, optional, default: 0
        Column index of neuron IDs in the DAT file.
    time_column_dat : int, optional, default: 1
        Column index of time stamps in the DAT file.
    value_columns_dat : int, optional, default: 2
        Column index of the analog values recorded in the DAT file.
    id_column_gdf : int, optional, default: 0
        Column index of neuron IDs in the GDF file.
    time_column_gdf : int, optional, default: 1
        Column index of time stamps in the GDF file.
    value_types : str, optional, default: None
        NEST data type of the recorded analog values, e.g. 'V_m', 'I', 'g_e'.
    value_units : Quantity (amplitude), default: None
        The physical unit of the recorded signal values.
    lazy : bool, optional, default: False
        Lazy loading is not supported; must be False.

    Returns
    -------
    seg : Segment
        The Segment contains one SpikeTrain and one AnalogSignal for
        each ID in gid_list.
    """
    assert not lazy, 'Do not support lazy'

    # A (first, last) tuple is expanded into the inclusive range of IDs.
    if isinstance(gid_list, tuple):
        if gid_list[0] > gid_list[1]:
            raise ValueError('The second entry in gid_list must be '
                             'greater or equal to the first entry.')
        gid_list = range(gid_list[0], gid_list[1] + 1)

    # __read_xxx() needs a list of IDs
    if gid_list is None:
        gid_list = [None]

    # create an empty Segment
    seg = Segment(file_origin=",".join(self.filenames))
    seg.file_datetime = datetime.fromtimestamp(os.stat(self.filenames[0]).st_mtime)
    # todo: rather than take the first file for the timestamp, we should
    #       take the oldest; in practice, there won't be much difference

    # Load analogsignals and attach to Segment
    if 'dat' in self.avail_formats:
        seg.analogsignals = self.__read_analogsignals(
            gid_list,
            time_unit,
            t_start,
            t_stop,
            sampling_period=sampling_period,
            id_column=id_column_dat,
            time_column=time_column_dat,
            value_columns=value_columns_dat,
            value_types=value_types,
            value_units=value_units)
    if 'gdf' in self.avail_formats:
        seg.spiketrains = self.__read_spiketrains(
            gid_list,
            time_unit,
            t_start,
            t_stop,
            id_column=id_column_gdf,
            time_column=time_column_gdf)

    return seg
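The todo comment in the example notes that seg.file_datetime should really come from the oldest of the input files rather than the first one. A possible standard-library sketch of that fix (the helper name oldest_file_datetime is an illustration, not neo API):

```python
import os
import tempfile
from datetime import datetime


def oldest_file_datetime(filenames):
    """Return the earliest modification time among the given files."""
    return datetime.fromtimestamp(min(os.stat(f).st_mtime for f in filenames))


# Demo with two temporary files whose modification times are set explicitly,
# so the "oldest" one is unambiguous.
paths = []
for mtime in (2000.0, 1000.0):
    with tempfile.NamedTemporaryFile(delete=False) as f:
        paths.append(f.name)
    os.utime(paths[-1], (mtime, mtime))

oldest = oldest_file_datetime(paths)
for p in paths:
    os.remove(p)
```

Swapping this in for the datetime.fromtimestamp(os.stat(self.filenames[0]).st_mtime) line would resolve the todo with a one-line change.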