With the help of the NLTK tokenize.regexp() module, we can extract tokens from a string using a regular expression via the RegexpTokenizer() method.
Syntax: tokenize.RegexpTokenizer()
Return: Returns a list of tokens extracted from the string using the regular expression
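The constructor also accepts a gaps flag that controls how the pattern is interpreted. The short sketch below (a minimal example, assuming only that NLTK is installed; the sample sentence is illustrative) contrasts the default token-matching mode with the gap-splitting mode used in the examples that follow.

# Import RegexpTokenizer from nltk
from nltk.tokenize import RegexpTokenizer

text = "Natural Language Processing, 2nd edition!"

# Default mode (gaps=False): the pattern describes the tokens themselves,
# so r'\w+' keeps alphanumeric runs and drops the punctuation
word_tk = RegexpTokenizer(r'\w+')
print(word_tk.tokenize(text))   # ['Natural', 'Language', 'Processing', '2nd', 'edition']

# gaps=True: the pattern describes the separators instead, so r'\s+'
# splits on whitespace and punctuation stays attached to the words
gap_tk = RegexpTokenizer(r'\s+', gaps=True)
print(gap_tk.tokenize(text))    # ['Natural', 'Language', 'Processing,', '2nd', 'edition!']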
Example 1:
In this example, we use the RegexpTokenizer() method to extract a stream of tokens with the help of a regular expression.
# Import RegexpTokenizer from nltk
from nltk.tokenize import RegexpTokenizer

# Create a tokenizer that splits on runs of whitespace
# (gaps=True means the pattern describes the separators, not the tokens)
tk = RegexpTokenizer(r'\s+', gaps=True)

# Create a string input
gfg = "I love Python"

# Use the tokenize method
geek = tk.tokenize(gfg)

print(geek)
Output:
['I', 'love', 'Python']
Example 2:
# Import RegexpTokenizer from nltk
from nltk.tokenize import RegexpTokenizer

# Create a whitespace-gap tokenizer, as in Example 1
tk = RegexpTokenizer(r'\s+', gaps=True)

# Create a string input
gfg = "Geeks for Geeks"

# Use the tokenize method
geek = tk.tokenize(gfg)

print(geek)
Output:
['Geeks', 'for', 'Geeks']
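For one-off calls, nltk.tokenize also exposes a module-level regexp_tokenize() helper that applies a pattern directly to a string without constructing a RegexpTokenizer object first. A minimal sketch, reusing the whitespace-gap pattern from the examples above:

# Import the module-level helper from nltk
from nltk.tokenize import regexp_tokenize

gfg = "Geeks for Geeks"

# Pass the text and the pattern directly; gaps=True again treats
# the pattern as the separator between tokens
print(regexp_tokenize(gfg, pattern=r'\s+', gaps=True))   # ['Geeks', 'for', 'Geeks']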