With the help of the nltk.tokenize.WordPunctTokenizer() class and its tokenize() method, we can extract tokens from a string of words or sentences, splitting it into runs of alphabetic and non-alphabetic characters.

Syntax: tokenize.WordPunctTokenizer()
Returns: the tokens from a string of alphabetic or non-alphabetic characters.
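Under the hood, WordPunctTokenizer is documented as a regexp-based tokenizer using the pattern \w+|[^\w\s]+: runs of word characters and runs of punctuation become separate tokens. As a minimal sketch, the same split can be approximated with only the standard-library re module (no nltk install needed); the function name word_punct_tokenize below is illustrative, not part of NLTK:

```python
import re

# WordPunctTokenizer's documented pattern: a token is either a run of
# word characters (\w+) or a run of non-word, non-space characters.
PATTERN = r"\w+|[^\w\s]+"

def word_punct_tokenize(text):
    """Stdlib approximation of nltk's WordPunctTokenizer (illustrative)."""
    return re.findall(PATTERN, text)

print(word_punct_tokenize("Hello, World!"))
# → ['Hello', ',', 'World', '!']
```

This is why punctuation such as commas and exclamation marks comes out as tokens of its own, separate from the surrounding words.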
Example 1:
In this example, using WordPunctTokenizer().tokenize(), we extract tokens from a stream of alphabetic and non-alphabetic characters.
# import WordPunctTokenizer() method from nltk
from nltk.tokenize import WordPunctTokenizer
# Create a reference variable for Class WordPunctTokenizer
tk = WordPunctTokenizer()
# Create a string input
gfg = "GeeksforGeeks...$$&* \nis\t for geeks"
# Use tokenize method
geek = tk.tokenize(gfg)
print(geek)
Output:
['GeeksforGeeks', '...$$&*', 'is', 'for', 'geeks']
Example 2:
# import WordPunctTokenizer() method from nltk
from nltk.tokenize import WordPunctTokenizer
# Create a reference variable for Class WordPunctTokenizer
tk = WordPunctTokenizer()
# Create a string input
gfg = "The price\t of burger \nin BurgerKing is Rs.36.\n"
# Use tokenize method
geek = tk.tokenize(gfg)
print(geek)
Output:
['The', 'price', 'of', 'burger', 'in', 'BurgerKing', 'is', 'Rs', '.', '36', '.']
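Note how "Rs.36." becomes four tokens: the tokenizer splits on every boundary between word characters and punctuation, so decimal-style strings are broken apart. A small sketch using the same \w+|[^\w\s]+ pattern via the stdlib re module (nltk itself is not used here; split_word_punct is an illustrative name) makes this visible:

```python
import re

def split_word_punct(text):
    # Mirrors WordPunctTokenizer's documented \w+|[^\w\s]+ split.
    return re.findall(r"\w+|[^\w\s]+", text)

# "Rs.36." from Example 2 becomes four tokens, not one price token.
print(split_word_punct("Rs.36."))      # ['Rs', '.', '36', '.']
print(split_word_punct("Pi is 3.14"))  # ['Pi', 'is', '3', '.', '14']
```

If you need decimal numbers kept whole, a different tokenizer (for example a custom RegexpTokenizer pattern that matches digits with an optional decimal point) would be a better fit.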
Related methods
- Python nltk.tokenize.SExprTokenizer() usage and code examples
- Python nltk.tokenize.LineTokenizer usage and code examples
- Python NLTK nltk.tokenize.ConditionalFreqDist() usage and code examples
- Python nltk.tokenize.SpaceTokenizer() usage and code examples
- Python nltk.tokenize.TabTokenizer() usage and code examples
- Python nltk.tokenize.StanfordTokenizer() usage and code examples
- Python nltk.tokenize.word_tokenize() usage and code examples
- Python nltk.tokenize.mwe() usage and code examples
- Python nltk.WhitespaceTokenizer usage and code examples
- Python nltk.TweetTokenizer() usage and code examples
- Python NLTK tokenize.regexp() usage and code examples
Note: This article was curated and translated by 纯净天空 from the original English article Python NLTK | tokenize.WordPunctTokenizer() by Jitender_1998. Unless otherwise stated, copyright in the original code belongs to the original author; please do not reprint or copy this translation without permission.