This page collects typical usage examples of the Python method nltk.tokenize.RegexpTokenizer.span_tokenize. If you are unsure how RegexpTokenizer.span_tokenize works or how to call it, the curated example below may help. You can also read more about the class it belongs to, nltk.tokenize.RegexpTokenizer.
One code example of RegexpTokenizer.span_tokenize is shown below.
Example 1: tokenize_mail
# Module to import: from nltk.tokenize import RegexpTokenizer [as alias]
# Or: from nltk.tokenize.RegexpTokenizer import span_tokenize [as alias]
def tokenize_mail(self, mailtext):
    """
    Use RegexpTokenizer to split the mail text on the separator
    pattern defined below. Each split is a separate mail.
    Returns a list of the mails contained in the given mailtext.
    """
    mails = []
    # Matches the 'On <Date Time> <address> wrote:' reply-header pattern;
    # with gaps=True the tokens are the stretches of text *between* matches.
    tokenizer = RegexpTokenizer(
        r'\n[>|\s]*On[\s]* ([a-zA-Z0-9, :/<>@\.\"\[\]\r\n]*[\s]* wrote:)',
        gaps=True)
    mail_indices = tokenizer.span_tokenize(mailtext)
    # Use the (start, end) offsets from span_tokenize to slice the
    # actual mail content; each slice becomes one element of 'mails'.
    start = end = 0
    for index in mail_indices:
        end = index[1] + 1
        mails.append(mailtext[start:end])
        start = end
    return mails  # list of the mails contained in a single mailtext
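To see what `span_tokenize` with `gaps=True` actually yields, here is a minimal stdlib-only sketch that mirrors its behavior (it reimplements the gap-span logic with `re`, so NLTK is not required to run it; the helper name `span_gaps` and the sample pattern and mail text are illustrative, not part of the example above):

```python
import re

def span_gaps(text, pattern):
    """Yield (start, end) spans of the text *between* regex matches,
    mirroring RegexpTokenizer(pattern, gaps=True).span_tokenize."""
    left = 0
    for m in re.finditer(pattern, text):
        right, nxt = m.span()
        if right != left:  # skip empty gaps between adjacent matches
            yield left, right
        left = nxt
    yield left, len(text)  # trailing gap after the last match

# A simplified reply-header pattern and a toy two-part mail thread.
mailtext = "Thanks!\nOn Mon <a@b.com> wrote:\n> earlier message"
spans = list(span_gaps(mailtext, r'\nOn [^\n]* wrote:'))
parts = [mailtext[s:e] for s, e in spans]
print(parts)  # ['Thanks!', '\n> earlier message']
```

Because the pattern marks the *separators*, the spans cover everything except the `On ... wrote:` headers, which is why `tokenize_mail` can slice `mailtext` with those offsets to recover each individual mail.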