This article collects typical usage examples of the Python method sklearn.datasets.base.Bunch.stpwrdlst. If you are wondering how Bunch.stpwrdlst is used in practice, the curated code example below may help. You can also explore other usages of the containing class, sklearn.datasets.base.Bunch.
One code example of the Bunch.stpwrdlst method is shown below.
Example 1: fetch_20newsgroups
# Required import: from sklearn.datasets.base import Bunch [as alias]
# Or: from sklearn.datasets.base.Bunch import stpwrdlst [as alias]
import pickle
from sklearn.datasets import fetch_20newsgroups
from sklearn.datasets.base import Bunch
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
##################################################################
## Load the data
categories = ["alt.atheism", "soc.religion.christian", "comp.graphics", "sci.med"] # news categories to download
data_set = fetch_20newsgroups(subset="train", categories=categories, shuffle=True, random_state=42) # download and fetch the training data; the full dataset is downloaded first, then the selected subset is extracted
print(data_set.target_names) # ['alt.atheism', 'comp.graphics', 'sci.med', 'soc.religion.christian']
##################################################################
## Define the word-bag data structure
# tdm: the tf-idf weighted term-document matrix
stpwrdlst = [] # empty stop-word list
wordbag = Bunch(target_name=[], label=[], filenames=[], tdm=[], vocabulary={}, stpwrdlst=[])
wordbag.target_name = data_set.target_names
wordbag.label = data_set.target
wordbag.filenames = data_set.filenames
wordbag.stpwrdlst = stpwrdlst
vectorizer = CountVectorizer(stop_words=stpwrdlst) # use CountVectorizer to build the vector space model (bag of words)
transformer = TfidfTransformer() # computes the tf-idf weight of each term
fea_train = vectorizer.fit_transform(data_set.data) # convert the texts to a term-frequency matrix
print(fea_train.shape) # (2257, 35788): 2257 documents, 35788 terms
wordbag.tdm = transformer.fit_transform(fea_train) # assign the tf-idf matrix to tdm
wordbag.vocabulary = vectorizer.vocabulary_
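The tf-idf weighting that TfidfTransformer applies to the term-frequency matrix can be sketched in pure Python. This is a minimal illustration on a hypothetical toy corpus, assuming sklearn's default settings (smooth idf, idf(t) = ln((1 + n) / (1 + df(t))) + 1, followed by L2 row normalization); the real transformer operates on sparse matrices:

```python
import math

# Hypothetical toy corpus of tokenized documents
docs = [["cat", "sat"], ["cat", "ran"], ["dog", "ran"]]
n = len(docs)
vocab = sorted({w for d in docs for w in d})

# Document frequency and smoothed idf for each term
df = {w: sum(w in d for d in docs) for w in vocab}
idf = {w: math.log((1 + n) / (1 + df[w])) + 1 for w in vocab}

# tf * idf per document, then L2-normalize each row
rows = []
for d in docs:
    raw = [d.count(w) * idf[w] for w in vocab]
    norm = math.sqrt(sum(x * x for x in raw))
    rows.append([x / norm for x in raw])
```

Each row of `rows` corresponds to one row of the tdm matrix above: terms appearing in fewer documents receive a higher idf, and every document vector has unit L2 norm.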
##################################################################
## Persist the word bag
with open("tmp.data", "wb") as file_obj:
    pickle.dump(wordbag, file_obj)
##################################################################
## Read back
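The read-back step is cut off in the source, but restoring the persisted word bag only needs pickle.load. A minimal sketch, using a plain dict as a hypothetical stand-in for the Bunch wordbag so the example is self-contained:

```python
import pickle

# Hypothetical stand-in for the wordbag Bunch persisted above
wordbag = {"target_name": ["alt.atheism"], "vocabulary": {"god": 0}}

with open("tmp.data", "wb") as f:
    pickle.dump(wordbag, f)

# Load the object back from disk; it round-trips unchanged
with open("tmp.data", "rb") as f:
    loaded = pickle.load(f)
```

After loading, fields such as loaded tdm and vocabulary can be reused directly, e.g. to vectorize test documents against the same vocabulary.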