This article collects typical usage examples of the Python method obspy.core.Stream.count. If you are wondering how to use Stream.count, or what it is good for, the curated code examples below may help. You can also read further about its containing class, obspy.core.Stream.
Shown below is 1 code example of the Stream.count method. Examples are sorted by popularity by default; you can upvote the ones you like or find useful, and your votes help the system recommend better Python code examples.
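Before the full example, it helps to pin down what `Stream.count()` does: it returns the number of `Trace` objects currently held by the stream, equivalent to `len(stream)`. The sketch below illustrates that semantics with a minimal stand-in class rather than ObsPy itself, so it runs without an ObsPy install; the stand-in `Stream` and the string placeholders for traces are illustrative assumptions, not ObsPy code.

```python
# Minimal stand-in for obspy.core.Stream, used only to illustrate the
# count() semantics; the real class holds Trace objects with data and stats.
class Stream:
    def __init__(self, traces=None):
        self.traces = list(traces) if traces else []

    def append(self, trace):
        self.traces.append(trace)

    def count(self):
        # count() is simply the number of traces, same as len(stream)
        return len(self.traces)

    def __len__(self):
        return self.count()


st = Stream()
st.append("trace-1")  # placeholder objects stand in for obspy Trace
st.append("trace-2")
print(st.count())  # → 2
```

In the example below, `st.count() > 1` after a cleanup merge is therefore used as a cheap test for gaps: if adjacent traces could not be merged into one, more than one trace remains.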
Example 1: read_from_SDS
# Required import: from obspy.core import Stream [as alias]
# Or: from obspy.core.Stream import count [as alias]
def read_from_SDS(self, sds_root, net_name, sta_name, comp_name,
                  starttime=None, endtime=None, rmean=False, taper=False,
                  pad_value=None):
    """
    Read waveform data from an SDS structured archive. Simple overlaps and
    adjacent traces are merged if possible.

    :param sds_root: root of the SDS archive
    :param net_name: network name
    :param sta_name: station name
    :param comp_name: component name
    :param starttime: Start time of data to be read.
    :param endtime: End time of data to be read.
    :param rmean: If ``True`` removes the mean from the data upon reading.
        If data are segmented, the mean will be removed from all segments
        individually.
    :param taper: If ``True`` applies a cosine taper to the data upon
        reading. If data are segmented, tapers are applied to all segments
        individually.
    :param pad_value: If this parameter is set, points between
        ``starttime`` and the first point in the file, and points between
        the last point in the file and ``endtime``, will be set to
        ``pad_value``. You may want to also use the ``rmean`` and
        ``taper`` parameters, depending on the nature of the data.
    :type sds_root: string
    :type net_name: string
    :type sta_name: string
    :type comp_name: string
    :type starttime: ``obspy.core.utcdatetime.UTCDateTime`` object,
        optional
    :type endtime: ``obspy.core.utcdatetime.UTCDateTime`` object, optional
    :type rmean: boolean, optional
    :type taper: boolean, optional
    :type pad_value: float, optional
    :raises UserWarning: If there are no data between ``starttime`` and
        ``endtime``
    """
    logging.info("Reading from SDS structure %s %s %s ..." %
                 (net_name, sta_name, comp_name))
    # Get the complete file list. If a directory, get all the filenames.
    filename = os.path.join(sds_root, net_name, sta_name,
                            "%s.D" % comp_name, "*")
    logging.debug("Reading %s between %s and %s" %
                  (filename, starttime.isoformat(), endtime.isoformat()))
    if os.path.isdir(glob.glob(filename)[0]):
        filename = os.path.join(filename, "*")
    file_glob = glob.glob(filename)
    # read headers from all files to keep only those within the time limits
    fnames_within_times = []
    for fname in file_glob:
        st_head = stream.read(fname, headonly=True)
        # retrieve first_start and last_end time for the stream
        # without making any assumptions on order of traces
        first_start = st_head[0].stats.starttime
        last_end = st_head[0].stats.endtime
        # find earliest start time and latest end time in stream
        for tr in st_head:
            if tr.stats.starttime < first_start:
                first_start = tr.stats.starttime
            if tr.stats.endtime > last_end:
                last_end = tr.stats.endtime
        # add to list if start or end time are within our requested limits
        if (first_start < endtime and last_end > starttime):
            fnames_within_times.append(fname)
    logging.debug("Found %d files to read" % len(fnames_within_times))
    # now read the full data only for the relevant files
    st = Stream()
    for fname in fnames_within_times:
        st_tmp = read(fname, starttime=starttime, endtime=endtime)
        for tr in st_tmp:
            st.append(tr)
    # and merge nicely
    st.merge(method=-1)
    if st.count() > 1:  # There are gaps after sensible cleanup merging
        logging.info("File contains gaps:")
        st.printGaps()
    # apply rmean if requested
    if rmean:
        logging.info("Removing the mean from single traces.")
        st = stream_rmean(st)
    # apply taper if requested
    if taper:
        logging.info("Tapering single traces.")
        st = stream_taper(st)
    if pad_value is not None:
        try:
            first_tr = st.traces[0]
            # save delta (to save typing)
# ......... remainder of this code omitted .........
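The header-scan step in the example is a standard interval-overlap test: a file is kept whenever `first_start < endtime and last_end > starttime`. The logic can be checked in isolation with plain floats standing in for `UTCDateTime` objects (which support the same comparisons); the helper name `overlaps` is hypothetical, not part of the original code.

```python
def overlaps(first_start, last_end, starttime, endtime):
    """Return True if [first_start, last_end] intersects the request window.

    Mirrors the file-selection test in read_from_SDS; plain floats stand
    in for obspy UTCDateTime objects here.
    """
    return first_start < endtime and last_end > starttime


# A file ending before the window, one inside it, and one straddling it:
print(overlaps(0.0, 5.0, 10.0, 20.0))    # → False (ends too early)
print(overlaps(12.0, 18.0, 10.0, 20.0))  # → True  (fully inside)
print(overlaps(5.0, 15.0, 10.0, 20.0))   # → True  (straddles the start)
```

Reading headers only (`headonly=True`) keeps this scan cheap: trace metadata are parsed, but waveform samples are loaded later, and only for the files that pass this test.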