This article collects typical usage examples of the Python whoosh.analysis.Token class. If you are wondering what whoosh.analysis.Token is for, how to use it, or where to find working examples, the curated code samples below may help. You can also explore the rest of the whoosh.analysis module.
The following presents 2 code examples of analysis.Token.
Example 1: __call__
```python
# Required import: from whoosh import analysis [as alias]
# or: from whoosh.analysis import Token [as alias]
import re
import jieba
from whoosh.analysis import Token, Tokenizer

# CJK pattern; matches the definition in jieba's own bundled whoosh analyzer.
accepted_chars = re.compile(r"[\u4E00-\u9FD5]+")

# In jieba's whoosh integration this method lives on a Tokenizer subclass.
class ChineseTokenizer(Tokenizer):
    def __call__(self, text, **kargs):
        # "search" mode yields (word, startchar, endchar) tuples.
        words = jieba.tokenize(text, mode="search")
        token = Token()  # whoosh tokenizers conventionally reuse one Token
        for (w, start_pos, stop_pos) in words:
            # Skip lone characters that are not CJK (punctuation etc.).
            if not accepted_chars.match(w) and len(w) <= 1:
                continue
            token.original = token.text = w
            token.pos = start_pos
            token.startchar = start_pos
            token.endchar = stop_pos
            yield token
```
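In practice a tokenizer like this is attached to a whoosh schema field through an analyzer. Below is a minimal sketch, assuming the ChineseTokenizer class from Example 1 is in scope; the field names, the sample document, and the LowercaseFilter composition are illustrative additions, not part of the original example:

```python
import os
from whoosh.analysis import LowercaseFilter
from whoosh.fields import ID, TEXT, Schema
from whoosh.index import create_in

# Compose the tokenizer with a built-in filter using whoosh's | operator.
analyzer = ChineseTokenizer() | LowercaseFilter()

# Hypothetical schema: a stored document path plus an analyzed content field.
schema = Schema(path=ID(stored=True), content=TEXT(analyzer=analyzer))

os.makedirs("indexdir", exist_ok=True)  # create_in needs an existing directory
ix = create_in("indexdir", schema)
writer = ix.writer()
writer.add_document(path="/doc1", content="我们来到北京清华大学")
writer.commit()
```

Note that jieba itself ships a ready-made analyzer built around this same tokenizer (from jieba.analyse import ChineseAnalyzer), which additionally applies lowercasing, stop-word filtering, and stemming.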
Example 2: __call__
```python
# Required import: from whoosh import analysis [as alias]
# or: from whoosh.analysis import Token [as alias]
import re
import jieba_fast
from whoosh.analysis import Token, Tokenizer

accepted_chars = re.compile(r"[\u4E00-\u9FD5]+")  # CJK pattern, as in jieba

class ChineseTokenizer(Tokenizer):
    def __call__(self, text, **kargs):
        # Same logic as Example 1, but delegates to C-accelerated jieba_fast.
        words = jieba_fast.tokenize(text, mode="search")
        token = Token()
        for (w, start_pos, stop_pos) in words:
            # Drop lone non-CJK characters.
            if not accepted_chars.match(w) and len(w) <= 1:
                continue
            token.original = token.text = w
            token.pos = start_pos
            token.startchar = start_pos
            token.endchar = stop_pos
            yield token
```
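jieba_fast is a C-accelerated reimplementation that mirrors jieba's Python API, so this tokenizer is a drop-in replacement for the one in Example 1. A quick way to inspect its raw output, assuming the class above is in scope (the sample sentence is illustrative):

```python
tokenizer = ChineseTokenizer()
for t in tokenizer("我们来到北京清华大学"):
    # Each yielded token carries the matched text and its character offsets.
    print(t.text, t.startchar, t.endchar)
```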