This article collects typical code examples of the Dictionary.token2id attribute (a dict mapping token strings to integer ids) from the Python module gensim.corpora.dictionary. If you are wondering what Dictionary.token2id does, how to use it, or where to find usage examples, the curated code example below may help. You can also look further into the class it belongs to, gensim.corpora.dictionary.Dictionary.
One code example of Dictionary.token2id is shown below.
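Before diving into the full example, a minimal sketch may help clarify what token2id actually holds; the two toy documents below are hypothetical and used only for illustration:

from gensim.corpora.dictionary import Dictionary

# Two hypothetical, already-tokenized documents.
docs = [["human", "machine", "interface"],
        ["machine", "learning", "survey"]]

gdict = Dictionary(docs)
print(gdict.token2id)             # e.g. {'human': 0, 'interface': 1, 'machine': 2, ...}
print(gdict.token2id["machine"])  # integer id gensim assigned to 'machine'

Example 1 below builds an equivalent token2id mapping by hand instead of letting Dictionary populate it from tokenized documents.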
Example 1: docs_to_gensim
# Required import: from gensim.corpora.dictionary import Dictionary [as alias]
# Alternatively: from gensim.corpora.dictionary.Dictionary import token2id [as alias]
# Additional imports implied by the example body:
from collections import Counter
from operator import itemgetter

from gensim.corpora.dictionary import Dictionary
from spacy import attrs
from spacy.strings import StringStore


def docs_to_gensim(spacy_docs, spacy_vocab, lemmatize=True,
                   filter_stops=True, filter_punct=True, filter_nums=False):
"""
Convert multiple ``spacy.Doc`` s into a gensim dictionary and bag-of-words corpus.
Args:
spacy_docs (list(``spacy.Doc``))
spacy_vocab (``spacy.Vocab``)
lemmatize (bool): if True, use lemmatized strings for words; otherwise,
use the original form of the string as it appears in ``doc``
filter_stops (bool): if True, remove stop words from word list
filter_punct (bool): if True, remove punctuation from word list
filter_nums (bool): if True, remove numbers from word list
Returns:
:class:`gensim.Dictionary <gensim.corpora.dictionary.Dictionary>`:
integer word ID to word string mapping
list(list((int, int))): list of bag-of-words documents, where each doc is
a list of (integer word ID, word count) 2-tuples
"""
    gdict = Dictionary()
    gcorpus = []
    stringstore = StringStore()
    doc_freqs = Counter()

    for spacy_doc in spacy_docs:
        if lemmatize is True:
            bow = ((spacy_vocab[tok_id], count)
                   for tok_id, count in spacy_doc.count_by(attrs.LEMMA).items())
        else:
            bow = ((spacy_vocab[tok_id], count)
                   for tok_id, count in spacy_doc.count_by(attrs.ORTH).items())

        if filter_stops is True:
            bow = ((lex, count) for lex, count in bow if not lex.is_stop)
        if filter_punct is True:
            bow = ((lex, count) for lex, count in bow if not lex.is_punct)
        if filter_nums is True:
            bow = ((lex, count) for lex, count in bow if not lex.like_num)

        bow = sorted(((stringstore[lex.orth_], count) for lex, count in bow),
                     key=itemgetter(0))

        doc_freqs.update(tok_id for tok_id, _ in bow)
        gdict.num_docs += 1
        gdict.num_pos += sum(count for _, count in bow)
        gdict.num_nnz += len(bow)
        gcorpus.append(bow)

    gdict.token2id = {s: i for i, s in enumerate(stringstore)}
    gdict.dfs = dict(doc_freqs)
    return (gdict, gcorpus)
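For context, here is a hedged usage sketch of the helper above. It assumes an English spaCy model is installed (the "en" shortcut dates from the spaCy 1.x era this example targets) and that docs_to_gensim is importable; the sample texts and variable names are illustrative only:

import spacy

nlp = spacy.load("en")  # assumes an English spaCy model/shortcut is available
texts = ["The quick brown fox jumps over the lazy dog.",
         "Dogs and foxes are mammals."]
spacy_docs = [nlp(text) for text in texts]

gdict, gcorpus = docs_to_gensim(spacy_docs, nlp.vocab,
                                lemmatize=True, filter_stops=True)
print(gdict.token2id)  # token string -> integer id mapping built by the helper
print(gcorpus[0])      # first document as (word id, count) 2-tuples

Returning both the Dictionary and the bag-of-words corpus together mirrors what gensim models such as LdaModel expect as input: an id-to-word mapping plus a list of (id, count) documents.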