This article collects typical usage examples of the Python method nltk.tokenize.RegexpTokenizer.texts_to_sequences. If you are wondering what RegexpTokenizer.texts_to_sequences does or how to use it, the curated code example below may help; you can also explore other usages of the nltk.tokenize.RegexpTokenizer class. (Note that texts_to_sequences is actually a method of keras.preprocessing.text.Tokenizer, not of nltk's RegexpTokenizer, and the example below calls it on a Keras Tokenizer.)
The following shows 1 code example of texts_to_sequences, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
Example 1: Tokenizer
# Imports needed to run this example; the code below uses Keras's
# Tokenizer and pad_sequences rather than nltk's RegexpTokenizer
import re
import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

# Surround punctuation with spaces so each mark becomes its own token
def clean_string(string):
    string = re.sub(r",", " , ", string)
    string = re.sub(r"!", " ! ", string)
    string = re.sub(r"\(", " ( ", string)
    string = re.sub(r"\)", " ) ", string)
    string = re.sub(r"\?", " ? ", string)
    return string.strip()
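# A quick sanity check of clean_string (hypothetical input):
#   clean_string("Hi,there!")  ->  "Hi , there !"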
# Clean every raw document in X before tokenizing
myTexts = []
for each in X:
    myEach = clean_string(each)
    myTexts.append(myEach)
# Tokenize the texts and map each word to an integer index
# (nb_words keeps only the 800 most frequent words; Keras 2 renamed it num_words)
tokenizer = Tokenizer(nb_words=800)
tokenizer.fit_on_texts(myTexts)
sequences = tokenizer.texts_to_sequences(myTexts)
word_index = tokenizer.word_index
# Pad (or truncate) every sequence to the same length, 300 here
data = pad_sequences(sequences, maxlen=300)
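# A minimal illustration of the two calls above; the integer indices are
# hypothetical, since the real mapping depends on word frequency in myTexts:
#   tokenizer.texts_to_sequences(["the cat sat"])  ->  [[1, 27, 54]]
#   pad_sequences([[1, 27, 54]], maxlen=5)         ->  array([[ 0,  0,  1, 27, 54]])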
# Build binary labels: the first 1000 documents are positive, the rest negative
y = np.zeros((len(myTexts), 1))
for i in range(len(myTexts)):
    if i < 1000:
        y[i] = 1  # positive
    else:
        y[i] = 0  # negative
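# Equivalently, the label loop can be vectorized with NumPy slicing
# (same result, assuming the first 1000 documents are the positive class):
#   y = np.zeros((len(myTexts), 1))
#   y[:1000] = 1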
# Build the embedding matrix: row i holds the pre-trained 50-dim vector for
# the word with index i (myDictionary maps word -> vector)
embedding_matrix = np.zeros((len(word_index) + 1, 50))
for word, i in word_index.items():
    embedding_vector = myDictionary.get(word)  # look up the pre-trained vector
    if embedding_vector is not None:
        # words missing from myDictionary keep all-zero rows
        embedding_matrix[i] = embedding_vector
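The embedding matrix built above is typically passed to a Keras Embedding layer as frozen initial weights. A minimal sketch of that final step, reusing the values from the example (the weights= keyword matches the Keras 1.x-era API implied by nb_words above; the layer is not part of the original snippet):

from keras.layers import Embedding

embedding_layer = Embedding(len(word_index) + 1,         # vocabulary size (row 0 is the padding index)
                            50,                          # must match the 50-dim vectors in myDictionary
                            weights=[embedding_matrix],  # initialize from the pre-trained vectors
                            input_length=300,            # the maxlen used in pad_sequences
                            trainable=False)             # keep the pre-trained vectors frozen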