

Python NLTK tokenize.WordPunctTokenizer() Usage and Code Examples


With the help of the nltk.tokenize.WordPunctTokenizer() class, we can extract tokens from a string of words or sentences made up of alphabetic or non-alphabetic characters by calling its tokenize() method.

Usage: tokenize.WordPunctTokenizer().tokenize(text)
Returns: the tokens from a string of alphabetic or non-alphabetic characters.

Example 1:
In this example, using the tokenize.WordPunctTokenizer().tokenize() method, we extract tokens from a stream of alphabetic or non-alphabetic characters.


# import WordPunctTokenizer() method from nltk 
from nltk.tokenize import WordPunctTokenizer 
     
# Create a reference variable for Class WordPunctTokenizer 
tk = WordPunctTokenizer() 
     
# Create a string input 
gfg = "GeeksforGeeks...$$&* \nis\t for geeks"
     
# Use tokenize method 
geek = tk.tokenize(gfg) 
     
print(geek)

Output:

['GeeksforGeeks', '...$$&*', 'is', 'for', 'geeks']
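For reference, WordPunctTokenizer behaves like a simple regular-expression tokenizer that splits text into runs of alphanumeric characters and runs of non-whitespace punctuation. The sketch below assumes the pattern r"\w+|[^\w\s]+" (which matches the behaviour shown above) and reproduces the same result with nltk.tokenize.RegexpTokenizer:

# A minimal sketch: the assumption here is that WordPunctTokenizer splits
# text with the pattern r"\w+|[^\w\s]+" (alphanumeric runs OR punctuation runs).
from nltk.tokenize import RegexpTokenizer

# Build an equivalent tokenizer from the assumed pattern
regex_tk = RegexpTokenizer(r"\w+|[^\w\s]+")

gfg = "GeeksforGeeks...$$&* \nis\t for geeks"

# Expected to match the WordPunctTokenizer output above:
# ['GeeksforGeeks', '...$$&*', 'is', 'for', 'geeks']
print(regex_tk.tokenize(gfg))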

Example 2:

# import WordPunctTokenizer() method from nltk 
from nltk.tokenize import WordPunctTokenizer 
     
# Create a reference variable for Class WordPunctTokenizer 
tk = WordPunctTokenizer() 
     
# Create a string input 
gfg = "The price\t of burger \nin BurgerKing is Rs.36.\n"
     
# Use tokenize method 
geek = tk.tokenize(gfg) 
     
print(geek)

Output:

['The', 'price', 'of', 'burger', 'in', 'BurgerKing', 'is', 'Rs', '.', '36', '.']
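If you also need the character offsets of each token (for example, to highlight tokens in the original string), WordPunctTokenizer inherits span_tokenize() from RegexpTokenizer. A brief sketch, assuming the same input string as Example 2:

# A minimal sketch: span_tokenize() (inherited from RegexpTokenizer)
# yields (start, end) offsets for each token instead of the token text.
from nltk.tokenize import WordPunctTokenizer

tk = WordPunctTokenizer()

gfg = "The price\t of burger \nin BurgerKing is Rs.36.\n"

for start, end in tk.span_tokenize(gfg):
    # Each span indexes back into the original string
    print((start, end), repr(gfg[start:end]))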





Note: This article was selected and adapted by 純淨天空 from the original English work by Jitender_1998, Python NLTK | tokenize.WordPunctTokenizer(). Unless otherwise stated, copyright of the original code belongs to the original author; please do not reproduce or copy this translation without permission or authorization.