

Python corpus.stopwords method: code examples

This article collects typical usage examples of the Python nltk.corpus.stopwords method. If you are wondering what corpus.stopwords does, or how to use it in practice, the curated examples below may help. You can also explore further usage examples from the nltk.corpus module.


The following presents 4 code examples of the corpus.stopwords method, ordered by popularity by default.

Example 1: remove_stopwords

# Required module: from nltk import corpus [as alias]
# Or: from nltk.corpus import stopwords [as alias]
def remove_stopwords(tokens):
    """
    Return a list of all words in tokens not found in the stopword file.

    PATH_TO_STOPWORDS is a module-level constant: the path to a stopwords
    text file with one word per line.

    Args:
        tokens: tokens to remove stopwords from

    Returns:
        list with stopwords removed
    """
    with open(PATH_TO_STOPWORDS, 'r') as f:
        # use a set: membership lookup is O(1) instead of O(n)
        stopwords_set = set(x.strip() for x in f)
    return [word for word in tokens if word not in stopwords_set]
Author: JohnGiorgi | Project: Alfred | Source lines: 23 | Source file: process_input.py
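The file reading and the set lookup above can be separated for easier testing; a hypothetical `filter_stopwords` variant that takes the stopword set directly (the name and signature are illustrative, not from the original project):

```python
def filter_stopwords(tokens, stopwords_set):
    """Return the tokens that are not in stopwords_set."""
    return [w for w in tokens if w not in stopwords_set]

# usage with an inline stopword set instead of PATH_TO_STOPWORDS
print(filter_stopwords(["the", "cat", "is", "black"], {"the", "a", "is"}))
# ['cat', 'black']
```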

Example 2: preprocess

# Required module: from nltk import corpus [as alias]
# Or: from nltk.corpus import stopwords [as alias]
import re
import string

from nltk.stem.snowball import SnowballStemmer

def preprocess(tweet, stopwords):
    # strip the hashtags used as sarcasm labels
    tweet = tweet.replace("#sarcasm", "")
    tweet = tweet.replace("#sarcastic", "")
    # remove @mentions
    tweet = re.sub(r"(?<=^|(?<=[^a-zA-Z0-9-_\.]))@([A-Za-z]+[A-Za-z0-9]+)", "", tweet)
    # remove URLs at the start of a line
    tweet = re.sub(r'^https?:\/\/.*[\r\n]*', '', tweet, flags=re.MULTILINE)
    # the original Python 2 code deleted a literal punctuation string with
    # string.maketrans/translate; str.maketrans is the Python 3 equivalent
    tweet = tweet.translate(str.maketrans("", "", string.punctuation))
    stemmer = SnowballStemmer("english", ignore_stopwords=True)
    tokens = tweet.split()
    tokens = [w for w in tokens if w not in stopwords]
    tokens = [item for item in tokens if item.isalpha()]
    return [stemmer.stem(w) for w in tokens]
Author: priyanshu-bajpai | Project: Sarcasm-Detection-on-Twitter | Source lines: 16 | Source file: preproc.py
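The punctuation-stripping step in preprocess changed between Python versions: Python 2's string.maketrans/translate pair took a deletion string, while Python 3 builds a deletion table with str.maketrans. A standalone sketch of the Python 3 form:

```python
import string

# map every punctuation character to None, i.e. delete it
table = str.maketrans("", "", string.punctuation)
print("don't #stop me!".translate(table))  # dont stop me
```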

Example 3: __init__

# Required module: from nltk import corpus [as alias]
# Or: from nltk.corpus import stopwords [as alias]
def __init__(self, text, stopwords):
    self.rawtext = self.filter(text)
    self.uniqueWords = []
    self.isContextBuilt = False
    self.unicodeErrors = 0
    self.stopwords = stopwords
    self.rawTokens = []
    self.tokenizeRawText()
Author: andersonpaac | Project: smmry-alternate | Source lines: 10 | Source file: Sentence.py

Example 4: buildContext

# Required module: from nltk import corpus [as alias]
# Or: from nltk.corpus import stopwords [as alias]
# also requires: import string
# and: from nltk.stem.lancaster import LancasterStemmer
def buildContext(self):
    if not self.isContextBuilt:
        # Python 3: str.translate takes a table built by str.maketrans
        # (the original used the Python 2 form translate(None, chars))
        sometext = self.rawtext.translate(
            str.maketrans("", "", string.punctuation))
        st = LancasterStemmer()
        for each in sometext.split():
            try:
                ev = st.stem(each.lower())
                if ev not in self.stopwords:
                    self.uniqueWords.append(ev)
            except UnicodeDecodeError:
                self.unicodeErrors += 1
        self.isContextBuilt = True
    return self.uniqueWords
Author: andersonpaac | Project: smmry-alternate | Source lines: 16 | Source file: Sentence.py
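The stem-then-filter loop in buildContext can be exercised outside the class; a minimal sketch using nltk's LancasterStemmer (which needs no corpus download) and a hypothetical stopword set:

```python
import string

from nltk.stem.lancaster import LancasterStemmer

st = LancasterStemmer()
stops = {"the", "is"}  # illustrative stopword set, not from the project

text = "The runner is running fast!"
cleaned = text.translate(str.maketrans("", "", string.punctuation))
# stem each lowercased token, keeping stems that are not stopwords
unique_words = [st.stem(w.lower()) for w in cleaned.split()
                if st.stem(w.lower()) not in stops]
```

Note that, as in the original, the filter compares the *stemmed* form against the stopword set, so a caller's stopwords should be in the same normalized form.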


Note: the nltk.corpus.stopwords examples in this article were collected by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective authors; copyright remains with the original authors, and use or redistribution should follow each project's license. Do not reproduce without permission.