With the help of the nltk.tokenize.TabTokenizer() method, we are able to extract tokens from a string of words using the tab characters between them.
Syntax: tokenize.TabTokenizer()
Returns: the tokens of words.
Example 1:
In this example, by using the tokenize.TabTokenizer() method, we are able to extract tokens from a stream of words that have tabs between them.
# import TabTokenizer() method from nltk
from nltk.tokenize import TabTokenizer
# Create a reference variable for Class TabTokenizer
tk = TabTokenizer()
# Create a string input
gfg = "Geeksfor\tGeeks..\t.$$&* \nis\t for geeks"
# Use tokenize method
geek = tk.tokenize(gfg)
print(geek)
Output:
['Geeksfor', 'Geeks..', '.$$&* \nis', ' for geeks']
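Note that TabTokenizer splits only on the tab character; spaces and newlines are kept inside the tokens, which is why '.$$&* \nis' above comes back as a single token. The short sketch below illustrates this by comparing the result with Python's built-in str.split('\t'); the equivalence is an assumption based on how NLTK's string tokenizers generally behave, so treat it as a sketch rather than a guarantee.
# A minimal sketch: compare TabTokenizer with a plain split on '\t'
from nltk.tokenize import TabTokenizer

tk = TabTokenizer()
gfg = "Geeksfor\tGeeks..\t.$$&* \nis\t for geeks"

# Assumption: tokenize() behaves like splitting on the tab character
print(tk.tokenize(gfg) == gfg.split('\t'))
If the assumption holds, this prints True.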
Example 2:
# import TabTokenizer() method from nltk
from nltk.tokenize import TabTokenizer
# Create a reference variable for Class TabTokenizer
tk = TabTokenizer()
# Create a string input
gfg = "The price\t of burger \tin BurgerKing is Rs.36.\n"
# Use tokenize method
geek = tk.tokenize(gfg)
print(geek)
Output:
['The price', ' of burger ', 'in BurgerKing is Rs.36.\n']
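In addition to tokenize(), TabTokenizer also inherits span_tokenize() from NLTK's StringTokenizer, which yields the (start, end) character offsets of each token rather than the token strings themselves. The following is a minimal sketch of that usage; the offsets it prints depend on the input string.
# A minimal sketch of span_tokenize(), which yields character offsets
from nltk.tokenize import TabTokenizer

tk = TabTokenizer()
gfg = "The price\t of burger \tin BurgerKing is Rs.36.\n"

# Each span is a (start, end) pair; slicing recovers the token text
for start, end in tk.span_tokenize(gfg):
    print((start, end), repr(gfg[start:end]))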
Related articles
- Python nltk.tokenize.SpaceTokenizer() usage and code examples
- Python nltk.tokenize.LineTokenizer usage and code examples
- Python nltk.tokenize.SExprTokenizer() usage and code examples
- Python NLTK nltk.tokenize.ConditionalFreqDist() usage and code examples
- Python nltk.tokenize.StanfordTokenizer() usage and code examples
- Python nltk.tokenizer.word_tokenize() usage and code examples
- Python nltk.TweetTokenizer() usage and code examples
- Python nltk.tokenize.mwe() usage and code examples
- Python nltk.WhitespaceTokenizer usage and code examples
- Python NLTK tokenize.regexp() usage and code examples
- Python NLTK tokenize.WordPunctTokenizer() usage and code examples
Note: This article was selected and compiled by 純淨天空 from the original English work Python NLTK | nltk.tokenize.TabTokenizer() by Jitender_1998. Unless otherwise stated, the copyright of the original code belongs to the original author; please do not reproduce or copy this translation without permission or authorization.