

Python RegexpTokenizer.remove Method Code Examples

This article collects typical usage examples of the Python method nltk.tokenize.RegexpTokenizer.remove. If you are asking how RegexpTokenizer.remove is used in practice, or looking for concrete examples of it, the curated code below may help. You can also explore other usage examples of nltk.tokenize.RegexpTokenizer.


One code example of the RegexpTokenizer.remove method is shown below; examples are ordered by popularity by default.
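Before the full example, it helps to see what the tokenizer pattern it uses actually produces. By default, nltk's `RegexpTokenizer(pattern).tokenize(text)` behaves like `re.findall(pattern, text)`, so the pattern can be sketched with only the stdlib `re` module (the sample sentence is an assumption for illustration):

```python
import re

# Same pattern as in the example below: runs of word characters and
# apostrophes (so contractions like "Don't" stay whole), or a single
# punctuation mark as its own token.
pattern = r"[\w']+|[.,!?;:-]"
tokens = re.findall(pattern, "Don't panic! It's really gross.")
print(tokens)  # ["Don't", 'panic', '!', "It's", 'really', 'gross', '.']

# Note: nltk's RegexpTokenizer has no remove() method of its own. The
# "remove" this article indexes is plain list.remove(), called on the
# token list that RegexpTokenizer.tokenize() returns.
tokens.remove('!')
print(tokens)  # ["Don't", 'panic', "It's", 'really', 'gross', '.']
```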

Example 1: CASpeakTranslate

# Required import: from nltk.tokenize import RegexpTokenizer [as alias]
# Or: from nltk.tokenize.RegexpTokenizer import remove [as alias]
import random
import re
from nltk.tokenize import RegexpTokenizer

class CASpeakTranslate():
    def __init__(self, story):
        self.story = story
        self.locations = []
        self.text_split = []
        self.joined_text = None
        self.translated_story = None

    # Split the story into a token list without undoing contractions, using NLTK
    def deconstruct_text(self):
        tokenizer = RegexpTokenizer(r"[\w']+|[.,!?;:-]")
        self.text_split = tokenizer.tokenize(self.story)
        return self.text_split

    # Replace words with slang from the dictionary (could be expanded to read a big spreadsheet)
    def replace_word_slang(self):
        slang_dict = {'really': ['hella', 'totally', 'fully'],
                      'gross': ['grody', 'gag me']}
        for i, word in enumerate(self.text_split):
            if word in slang_dict:
                self.text_split[i] = random.choice(slang_dict[word])
        return self.text_split

    # Choose random positions at which add-in words will be inserted
    def random_location(self):
        for _ in range(5):
            self.locations.append(random.randint(1, len(self.text_split)))
        return self.locations

    # Use the positions chosen above to insert add-ins
    def random_add(self):
        add_ins = ['so,', 'like,', 'OMG,']
        for location in self.locations:
            self.text_split.insert(location, random.choice(add_ins))
        return self.text_split

    # Replace each '!' with ', Dude!'
    def add_in_dude(self):
        while '!' in self.text_split:
            self.text_split.insert(self.text_split.index('!'), ', Dude!')
            self.text_split.remove('!')  # drops the first (just-displaced) '!'
        return self.text_split

    # Join the tokens back into a single string
    def join_text(self):
        self.joined_text = ' '.join(self.text_split)
        return self.joined_text

    # Remove the spaces that ' '.join() put before punctuation
    def fix_punct(self):
        fix_1 = re.sub(r"\s+\.", ".", self.joined_text)
        fix_2 = re.sub(r"\s+!", "!", fix_1)
        fix_3 = re.sub(r"\s+,", ",", fix_2)
        fix_4 = re.sub(r"\s+\?", "?", fix_3)
        fix_5 = re.sub(r",\.", ".", fix_4)   # an add-in comma landed right before a period
        fix_6 = re.sub(r",,", ",", fix_5)    # ...or right before another comma
        self.translated_story = re.sub(r"\s+:", ";", fix_6)
        return self.translated_story


story = ca_speak_method()  # defined elsewhere in the project
ca_speak = CASpeakTranslate(story)
ca_speak.deconstruct_text()
ca_speak.replace_word_slang()
ca_speak.random_location()
ca_speak.random_add()
ca_speak.add_in_dude()
ca_speak.join_text()
ca_speak.fix_punct()
print(ca_speak.translated_story)
Developer: KBratland | Project: Bootcamp | Lines: 70 | Source: test.py


Note: the nltk.tokenize.RegexpTokenizer.remove examples in this article were collected by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by their authors, and copyright remains with the original authors; refer to each project's License before redistributing or using the code. Do not repost without permission.