This article collects typical usage examples of the Python whoosh.analysis.Token class. If you are unsure how to use analysis.Token in practice, the curated code examples below may help; you can also explore the rest of the whoosh.analysis module for more context.
The following presents 2 code examples of the analysis.Token class.
Example 1: __call__
# Required import: from whoosh import analysis [as alias]
# or: from whoosh.analysis import Token [as alias]
def __call__(self, text, **kargs):
    # Segment the text with jieba in "search" mode (finer-grained segmentation).
    words = jieba.tokenize(text, mode="search")
    # whoosh analyzers conventionally reuse a single Token object across yields.
    token = Token()
    for (w, start_pos, stop_pos) in words:
        # accepted_chars is a module-level regex defined elsewhere; skip
        # single characters that fall outside it (punctuation, whitespace, ...).
        if not accepted_chars.match(w) and len(w) <= 1:
            continue
        token.original = token.text = w
        token.pos = start_pos
        token.startchar = start_pos
        token.endchar = stop_pos
        yield token
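The pattern above can be exercised without jieba or whoosh installed. The sketch below is a minimal, self-contained approximation: the `Token` class is a hypothetical stand-in exposing only the attributes used above (the real one lives in whoosh.analysis), `accepted_chars` is an assumed regex standing in for the original module's definition, and a plain `\w+` regex replaces jieba's segmenter.

```python
import re

# Hypothetical stand-in for whoosh.analysis.Token, exposing only the
# attributes the tokenizer above actually sets.
class Token:
    def __init__(self):
        self.original = self.text = None
        self.pos = self.startchar = self.endchar = None

# Assumed definition of accepted_chars: word characters plus CJK ideographs.
accepted_chars = re.compile(r"[\w\u4e00-\u9fff]+")

class SimpleTokenizer:
    """Same yield-a-reused-Token pattern as the jieba-based tokenizer,
    but segmenting with a plain regex instead of jieba."""
    def __call__(self, text, **kargs):
        token = Token()
        for m in re.finditer(r"\w+", text):
            w = m.group()
            if not accepted_chars.match(w) and len(w) <= 1:
                continue
            token.original = token.text = w
            token.pos = m.start()
            token.startchar = m.start()
            token.endchar = m.end()
            yield token

# Usage: each yielded object is the SAME Token instance, so copy the
# fields out immediately rather than storing the Token itself.
tokens = [(t.text, t.startchar, t.endchar)
          for t in SimpleTokenizer()("hello whoosh")]
print(tokens)  # [('hello', 0, 5), ('whoosh', 6, 12)]
```

Reusing one Token object is why the list comprehension copies the fields per iteration: keeping references to the Token itself would leave every entry pointing at the last token's state.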
Example 2: __call__
# Required import: from whoosh import analysis [as alias]
# or: from whoosh.analysis import Token [as alias]
def __call__(self, text, **kargs):
    # Same logic as Example 1, but using jieba_fast, a faster
    # drop-in reimplementation of jieba.
    words = jieba_fast.tokenize(text, mode="search")
    token = Token()
    for (w, start_pos, stop_pos) in words:
        if not accepted_chars.match(w) and len(w) <= 1:
            continue
        token.original = token.text = w
        token.pos = start_pos
        token.startchar = start_pos
        token.endchar = stop_pos
        yield token