Python FreqDist.freq Method Code Examples

This article collects typical usage examples of the Python method nltk.FreqDist.freq. If you are wondering what FreqDist.freq does, how to call it, or what it looks like in real code, the curated examples below may help. You can also explore further usage examples of nltk.FreqDist, the class the method belongs to.


Fifteen code examples of the FreqDist.freq method are shown below, sorted by popularity.
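Before the examples, a minimal sketch of what the method computes: FreqDist.freq(sample) returns the relative frequency of a sample, that is, its count divided by the total number of outcomes N (and 0.0 for unseen samples).

from nltk import FreqDist

fdist = FreqDist(["a", "b", "a", "c"])
print(fdist["a"])        # 2    raw count
print(fdist.N())         # 4    total number of outcomes
print(fdist.freq("a"))   # 0.5  relative frequency, 2 / 4
print(fdist.freq("z"))   # 0.0  unseen samples have zero relative frequency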

Example 1: vectorize_string_tfidf

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
def vectorize_string_tfidf(doc, idf):
	words = word_tokenize(doc)
	words = [word.lower() for word in words]
	words = [word for word in words if word not in stops]
	fdist = FreqDist(words)
	
	freqs = []
	# to address sparsity issues: currently uses dictionaries 
	for word in set(words):
		try: freqs += [(word, fdist.freq(word) / idf[word])]
		except KeyError: freqs += [(word, fdist.freq(word))]
	return dict(freqs)
Developer: zucxjo0415, Project: babywiki, Lines of code: 14, Source: commons.py
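A minimal standalone sketch of the weighting used in Example 1 (the stops set and idf mapping below are toy stand-ins, not the module-level objects from commons.py): each word's relative frequency fdist.freq(word) is divided by its idf value when one is known, and left unweighted otherwise, which is what the try/except above achieves.

from nltk import FreqDist
from nltk.tokenize import word_tokenize  # requires the NLTK 'punkt' tokenizer data

stops = {"the", "a", "of", "and", "to", "on"}   # toy stop-word set
idf = {"cat": 2.0, "mat": 1.0}                  # toy idf values

doc = "The cat sat on the mat and the cat slept."
words = [w.lower() for w in word_tokenize(doc) if w.lower() not in stops]
fdist = FreqDist(words)

vector = {}
for word in set(words):
    # freq(word) is count(word) / fdist.N(); divide by idf where available
    vector[word] = fdist.freq(word) / idf.get(word, 1.0)
print(vector)   # e.g. 'cat' -> (2/6) / 2.0 ~ 0.167, 'slept' -> 1/6 ~ 0.167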

Example 2: proto

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
    def proto(self, num, language, authors, token_vocab, token_df, lemma_vocab,
              pos_vocab, synset_vocab, stemmer):
        d = Document()
        assert language == self.lang

        if self._id:
            d.id = self._id
        else:
            d.id = num

        d.language = language
        d.title = self.title.strip()
        num_sentences = max(self._sentences) + 1

        tf_token = FreqDist()
        for ii in self.tokens():
            tf_token.inc(ii)

        for ii in xrange(num_sentences):
            s = d.sentences.add()
            for jj in self._sentences[ii]:
                w = s.words.add()
                w.token = token_vocab[jj.word]
                w.lemma = lemma_vocab[jj.lemma]
                w.pos = pos_vocab[jj.pos]
                w.relation = pos_vocab[jj.rel]
                w.parent = jj.parent
                w.offset = jj.offset
                w.tfidf = token_df.compute_tfidf(jj.word,
                                                 tf_token.freq(jj.word))
        return d
Developer: NetBUG, Project: topicmod, Lines of code: 33, Source: wacky.py
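One caveat on the loop above: FreqDist.inc is the NLTK 2 API and was removed in NLTK 3, where FreqDist behaves like a collections.Counter. A minimal sketch of the NLTK 3 equivalent (not part of the original project):

from nltk import FreqDist

tf_token = FreqDist()
for token in ["the", "cat", "the"]:
    tf_token[token] += 1         # NLTK 3 replacement for tf_token.inc(token)
print(tf_token.freq("the"))      # 0.666...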

Example 3: test

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
def test():
    global N, words, network

    print 'In testing.'

    gettysburg = """Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this. But, in a larger sense, we can not dedicate -- we can not consecrate -- we can not hallow -- this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us -- that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion -- that we here highly resolve that these dead shall not have died in vain -- that this nation, under God, shall have a new birth of freedom -- and that government of the people, by the people, for the people, shall not perish from the earth."""
    tokenizer = RegexpTokenizer('\w+')
    gettysburg_tokens = tokenizer.tokenize(gettysburg) 

    samples = []
    for token in gettysburg_tokens:
        word = token.lower()
        if word not in ENGLISH_STOP_WORDS and word not in punctuation:
            samples.append(word)

    dist = FreqDist(samples)
    V = Vol(1, 1, N, 0.0)
    for i, word in enumerate(words):
        V.w[i] = dist.freq(word)

    pred = network.forward(V).w
    topics = []
    while len(topics) != 5:
        max_act = max(pred)
        topic_idx = pred.index(max_act)
        topic = words[topic_idx]

        if topic in gettysburg_tokens:
            topics.append(topic)
    
        del pred[topic_idx]

    print 'Topics of the Gettysburg Address:'
    print topics
Developer: Aaronduino, Project: ConvNetPy, Lines of code: 36, Source: topics.py

Example 4: vectorize_string

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
def vectorize_string(doc):
	words = word_tokenize(doc)
	words = [word.lower() for word in words]
	words = [word for word in words if word not in stops]
	fdist = FreqDist(words)
		
	# to address sparsity issues: currently uses dictionaries 
	freqs = [(word, fdist.freq(word)) for word in set(words)]
	return dict(freqs)
Developer: zucxjo0415, Project: babywiki, Lines of code: 11, Source: commons.py

Example 5: parse

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
	def parse(self, response):
		"""
		The lines below is a spider contract. For more info see:
		http://doc.scrapy.org/en/latest/topics/contracts.html
		
		@url https://www.google.com/search?q=personal+nutrition
		@scrapes pages to depth<=3, using priority-score based BFS
		"""
		
		doc = clean_html(response.body_as_unicode())
		words = word_tokenize(doc)
		words = [word.lower() for word in words]
		words = [word for word in words if word not in self.stops]
		fdist = FreqDist(words)
		
		for word in set(words):
			if (fdist.freq(word) * fdist.N()) > 1:
				item = WordCount()
				item['word'] = word
				item['count'] = int(fdist.freq(word) * fdist.N())
				yield item 
		#for href in response.css("a::attr('href')"):
		#	url = response.urljoin(href.extract())
		#	yield scrapy.Request(url, callback=self.parse)
Developer: zucxjo0415, Project: babywiki, Lines of code: 26, Source: veblen.py
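A side note on the fdist.freq(word) * fdist.N() pattern used by this spider (a sketch, not part of veblen.py): since freq(word) is count(word) / N, multiplying back by N simply reconstructs the raw count, which FreqDist also exposes directly through indexing.

from nltk import FreqDist

fdist = FreqDist(["nutrition", "personal", "nutrition", "diet"])
count = int(fdist.freq("nutrition") * fdist.N())   # 0.5 * 4 -> 2, via relative frequency
assert count == fdist["nutrition"] == 2            # same value, read directly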

Example 6: compute_features

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
    def compute_features(self, s, count):

        # preprocess
        tok_sent = nltk.tokenize.word_tokenize(s)
        stop_tok_sent = [x for x in tok_sent if x not in cachedStopWords]

        # location features
        P = 1.0/count
        F5 = 1 if count <=5 else 0
        LEN = len(stop_tok_sent)/30.0

        # language modelling
        LM = LModel.score(s)

        # pos tagging features
        tag_fd = FreqDist(map_tag("en-ptb", "universal",tag) if map_tag("en-ptb", "universal",tag) not in cachedStopPOStags else "OTHER" for (word, tag) in pos_tagger(tok_sent))
        NN = tag_fd.freq("NOUN")
        VB = tag_fd.freq("VERB")

        # headline-sentence similarity
        VS1 = 1 - spatial.distance.cosine(self.hl_vsv_1.toarray(), self.father.cv.transform([s]).toarray())
        TFIDF = 1 - spatial.distance.cosine(self.hl_tfidf.toarray(), self.father.tv.transform([s]).toarray())

        # topic description-sentence similarity
        CT = 1 - spatial.distance.cosine(self.father.desc_vsv.toarray(), self.father.cv.transform([s]).toarray())
        Q = 1 - spatial.distance.cosine(self.father.title_vsv.toarray(), self.father.cv.transform([s]).toarray())

        # security checks
        if math.isnan(VS1):
            VS1 = 0
            print self.father.code, self.id
        if math.isnan(CT):
            CT = 0
            print self.father.code, self.id
        if math.isnan(Q):
            Q = 0
            print self.father.code, self.id

        # active features
        return np.asarray([P, F5, LEN, LM, VS1, TFIDF, VB, NN, CT, Q])
Developer: mtthss, Project: experiments-in-summarization, Lines of code: 42, Source: data_structures.py

Example 7: getdict

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
 def getdict(self, content):
   wnl = nltk.WordNetLemmatizer()
   begin = clock()
   print('begin')
   tokens = nltk.word_tokenize(content)
   wordlist = nltk.corpus.words.words()
   stopwords = nltk.corpus.stopwords.words('english')
   fdist = FreqDist(wnl.lemmatize(wnl.lemmatize(wnl.lemmatize(word.lower(),'a')), 'v') for word in tokens if word.isalpha() and word not in stopwords)
   print(clock() - begin)
   js = {'samples': fdist.B(), 'outcomes': fdist.N()}
   wdict = {}
   count = 1
   begin = clock()
   for w in fdist.most_common():
     d = {'index': count, 'word': w[0], 'count': w[1], 'freq': round(fdist.freq(w[0]), 4)}
     d['basic'] = self.getexp(w[0])
     wdict[w[0]] = d
     count = count + 1
   print(clock() - begin)
   wdict = sorted(wdict.items(),key=lambda t: t[1]['index'])
   js['words'] = wdict
   return js
Developer: myklory, Project: iEng, Lines of code: 24, Source: TextAnalyze.py

Example 8: NumTranslationsFeatureExtractor

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
class NumTranslationsFeatureExtractor(FeatureExtractor):

    # .f2e file
    def __init__(self, lex_prob_file, corpus_file):
        self.lex_prob = defaultdict(list)
        for line in open(lex_prob_file):
            chunks = line[:-1].split()
            self.lex_prob[chunks[1]].append(float(chunks[2]))
        corpus = TextCorpus(input=corpus_file)
        self.corpus_freq = FreqDist([word for line in corpus.get_texts() for word in line])
        self.thresholds = [0.01, 0.05, 0.1, 0.2, 0.5]

    def get_features(self, context_obj):
        if 'source_token' not in context_obj or len(context_obj['source_token']) == 0:
            return [0.0 for i in range(len(self.thresholds)*2)]

        translations, translations_weighted = [], []
        for thr in self.thresholds:
            all_words, all_words_weighted = [], []
            for word in context_obj['source_token']:
                trans = [fl for fl in self.lex_prob[word] if fl >= thr]
                all_words.append(len(trans))
                all_words_weighted.append(len(trans)*self.corpus_freq.freq(word))
            translations.append(np.average(all_words))
            translations_weighted.append(np.average(all_words_weighted))
        return translations + translations_weighted

    def get_feature_names(self):
        return ['source_translations_001_freq',
                'source_translations_005_freq',
                'source_translations_01_freq',
                'source_translations_02_freq',
                'source_translations_05_freq',
                'source_translations_001_freq_weighted',
                'source_translations_005_freq_weighted',
                'source_translations_01_freq_weighted',
                'source_translations_02_freq_weighted',
                'source_translations_05_freq_weighted']
Developer: kepler, Project: marmot, Lines of code: 40, Source: num_translations_feature_extractor.py

Example 9: list

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
#
# First
#
# Here we will determine the relative frequencies of English characters in the text
# Then we will calculate the entropy of the distribution

# here we use the expression list(var_name) to turn our string into a list
# this basically separates each character for us to make it so that it works
# directly in the freqdist function
english_unigram_fdist = FreqDist(list(english_model_content))

english_unigram_entropy = 0.0

# now loop and get the entropy for english unigrams
for unigram in english_unigram_fdist.samples():
    english_unigram_entropy += english_unigram_fdist.freq(unigram) * math.log(english_unigram_fdist.freq(unigram), 2)

english_unigram_entropy = -english_unigram_entropy

print "The English Unigram Entropy is: " + str(english_unigram_entropy)


#
# Second
#
# Here we will determine the relative frequencies of English bigrams in the text
# Then we will calculate the entropy of the bigram distribution

# create a list to store bigrams in
english_model_bigrams = []
Developer: skunath, Project: NLP_Examples, Lines of code: 32, Source: calc_info_measures.py
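The excerpt above is cut off before the bigram entropy is actually computed. By analogy with the unigram loop, that step could look like the sketch below (this is not the original continuation of calc_info_measures.py; the toy english_model_content string stands in for the model text loaded earlier in that script, and bigrams over a string yields character pairs).

import math
from nltk import FreqDist
from nltk.util import bigrams

# toy stand-in for the (elided) english_model_content string used above
english_model_content = "four score and seven years ago"

english_model_bigrams = list(bigrams(english_model_content))
english_bigram_fdist = FreqDist(english_model_bigrams)

english_bigram_entropy = 0.0
for bigram in english_bigram_fdist:
    p = english_bigram_fdist.freq(bigram)
    english_bigram_entropy -= p * math.log(p, 2)

print("The English Bigram Entropy is: " + str(english_bigram_entropy))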

Example 10: FreqDist

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
# Counting the number of characters in each word in a text
[len(w) for w in text1]

# Collocations are frequent bigrams from words that are not so common as unigrams. 
# This function returns nothing, just prints the collocations to screen
text1.collocations()

# Computing the frequency distribution of word lengths. Returns a dictionary.
fdistWordLength = FreqDist([len(w) for w in text1])

fdistWordLength.keys() # The different word lengths
fdistWordLength.values() # The frequency of each word length
fdistWordLength.items() # Shows both keys and values at the same time

fdist1['the']
fdist1.freq('the') # Frequency of the word ‘the’
fdist1.max()



#### MOVIE REVIEWS ####
import nltk
from nltk.corpus import movie_reviews

movie_reviews.categories()
movie_reviews.fileids('pos')
movie_reviews.fileids('neg')
movie_reviews.words('neg/cv729_10475.txt')
len(movie_reviews.words('neg/cv729_10475.txt'))

documents = [(list(movie_reviews.words(fileid)), category)
Developer: STIMALiU, Project: TextMiningCourse, Lines of code: 33, Source: Intro2NLTK.py

Example 11: test_freq_freqdist

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
 def test_freq_freqdist(self):
     """Probabilities are indentical to using FreqDist."""
     freqdist = FreqDist(TEST_TOKENS)
     for word_type in set(TEST_TOKENS):
         self.assertEqual(self.model.prob(word_type, None),
                          freqdist.freq(word_type))
Developer: lingtools, Project: lingtools, Lines of code: 8, Source: test_ngram.py

Example 12: load_book_features

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
def load_book_features(filename, smartStopWords={}, pronSet={}, conjSet={}):
    '''
    Load features for each book in the corpus. There are 4 + RANGE*4 features
    for each instance. These features are:
       ---------------------------------------------------------------------------------------------------------
       No. Feature Name                                                                         No. of features.
       ---------------------------------------------------------------------------------------------------------
       1.  number of hapax legomena divided by number of unique words                           1
       2.  number of dis legomena divided by number of unique words                             1
       3.  number of unique words divided by number of total words                              1
       4.  flesch readability score divided by 100                                              1

       5.  no. of sentences of length in the range [1, RANGE] divided by the                    RANGE
           number of total sentences
       6.  no. of words of length in the range [1, RANGE] divided by the                        RANGE
           number of total words
       7.  no. of nominative pronouns per sentence in the range [1, RANGE] divided by the       RANGE
           number of total sentences
       8.  no. of (coordinating + subordinating) conjunctions per sentence in the range         RANGE
           [1, RANGE] divided by the number of total sentences
    '''

    text = extract_book_contents(open(filename, 'r').read()).lower()

    contents = re.sub('\'s|(\r\n)|-+|["_]', ' ', text) # remove \r\n, apostrophes, and dashes
    sentenceList = sent_tokenize(contents.strip())

    cleanWords = []
    sentenceLenDist = []
    pronDist = []
    conjDist = []
    sentences = []
    totalWords = 0
    wordLenDist = []
    totalSyllables = 0
    for sentence in sentenceList:
        if sentence != ".":
            pronCount = 0
            conjCount = 0
            sentences.append(sentence)
            sentenceWords = re.findall(r"[\w']+", sentence)
            totalWords += len(sentenceWords) # record all words in sentence
            sentenceLenDist.append(len(sentenceWords)) # record length of sentence in words
            for word in sentenceWords:
                totalSyllables += count(word)
                wordLenDist.append(len(word)) # record length of word in chars
                if word in pronSet:
                    pronCount+=1 # record no. of pronouns in sentence
                if word in conjSet:
                    conjCount+=1 # record no. of conjunctions in sentence
                if word not in smartStopWords:
                    cleanWords.append(word)
            pronDist.append(pronCount)
            conjDist.append(conjCount)

    sentenceLengthFreqDist = FreqDist(sentenceLenDist)
    sentenceLengthDist = map(lambda x: sentenceLengthFreqDist.freq(x), range(1, RANGE))
    sentenceLengthDist.append(1-sum(sentenceLengthDist))

    pronounFreqDist = FreqDist(pronDist)
    pronounDist = map(lambda x: pronounFreqDist.freq(x), range(1, RANGE))
    pronounDist.append(1-sum(pronounDist))

    conjunctionFreqDist = FreqDist(conjDist)
    conjunctionDist = map(lambda x: conjunctionFreqDist.freq(x), range(1, RANGE))
    conjunctionDist.append(1-sum(conjunctionDist))

    wordLengthFreqDist= FreqDist(wordLenDist)
    wordLengthDist = map(lambda x: wordLengthFreqDist.freq(x), range(1, RANGE))
    wordLengthDist.append(1-sum(wordLengthDist))

    # calculate readability
    avgSentenceLength = np.mean(sentenceLenDist)
    avgSyllablesPerWord = float(totalSyllables)/totalWords
    readability = float(206.835 - (1.015 * avgSentenceLength) - (84.6 * avgSyllablesPerWord))/100

    wordsFreqDist = MyFreqDist(FreqDist(cleanWords))
    #sentenceDist = FreqDist(sentences)
    #print sentenceDist.keys()[:15] # most common sentences
    #print wordsFreqDist.keys()[:15] # most common words
    #print wordsFreqDist.keys()[-15:] # most UNcommon words

    numUniqueWords = len(wordsFreqDist.keys())
    numTotalWords = len(cleanWords)

    hapax = float(len(wordsFreqDist.hapaxes()))/numUniqueWords # no. words occurring once / total num. UNIQUE words
    dis = float(len(wordsFreqDist.dises()))/numUniqueWords # no. words occurring twice / total num. UNIQUE words
    richness = float(numUniqueWords)/numTotalWords # no. unique words / total num. words

    result = []
    result.append(hapax)
    result.append(dis)
    result.append(richness)
    result.append(readability)
    result.extend(sentenceLengthDist)
    result.extend(wordLengthDist)
    result.extend(pronounDist)
    result.extend(conjunctionDist)

    return result, numTotalWords
Developer: neerajrao, Project: hybrid-svm-author-attribution, Lines of code: 102, Source: svmAuthorRec.py

Example 13: DirichletWords

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]

#......... part of the code omitted here .........
    if num_tables > self.max_tables:
      number_to_forget += (num_tables - self.max_tables)
    
    # change this to weight lower probability
    tables_to_forget = random.sample(xrange(num_tables), number_to_forget)
    words = self._words.keys()

    self.initialize_index()

    word_id = -1
    for ii in words:
      word_id += 1

      if not word_id in tables_to_forget:
        self.index(ii)
        continue

      count = self._words[ii]
      for jj in self._topics:
        self._topics[jj][ii] = 0
        del self._topics[jj][ii]

      for jj in ii:
        self._chars[jj] -= count
      self._words[ii] = 0
      del self._words[ii]

  def seq_prob(self, word):
    val = 1.0

    # Weighted monkeys at typewriter
    for ii in word:
      # Add in a threshold to make sure we don't have zero probability sequences
      val *= max(self._alphabet.freq(ii), CHAR_SMOOTHING) 

    # Normalize
    val /= 2**(len(word))
    return val

  def merge(self, otherlambda, rhot):
    ''' fold the word counts in another DirichletWords object into this
        one, weighted by rhot. assumes self.num_topics is the same for both
        objects. '''
    
    all_words = self._words.keys() + otherlambda._words.keys()
    distinct_words = list(set(all_words))

    # combines the probabilities, with otherlambda weighted by rho, and
    # generates a new count by combining the number of words in the old
    # (current) lambda with the number in the new. here we essentially take
    # the same steps as update_count but do so explicitly so we can weight the
    # terms appropriately. 
    total_words = float(self._words.N() + otherlambda._words.N())

    self_scale = (1.0-rhot)*total_words/float(self._words.N())
    other_scale = rhot*total_words/float(otherlambda._words.N())

    for word in distinct_words:
      self.index(word)
        
      # update word counts
      new_val = (self_scale*self._words[word] 
                 + other_scale*otherlambda._words[word])
      if new_val >= 1.0:
          self._words[word] = new_val
      else:
Developer: Mondego, Project: pyreco, Lines of code: 70, Source: allPythonContent.py

Example 14: open

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
# dest = '/Users/asif/Sites/pmidx/journals.csv'
# f = open(dest, 'w+')
# f.write(journalsCSV)
# f.close()

# Tokenized titles
tokenized_titles = []
tokenized_titles = [word_tokenize(titles[x]) for x in xrange(0,len(titles))]
tkTitlesList = []
for n in xrange(0,len(tokenized_titles)):
	tkTitlesList = tkTitlesList + tokenized_titles[n]
stops=['a','the','had','.','(',')','and','of',':',',','in','[',']','for','by','--','?','an','\'','\'s','to','on','is','as','from','-','at','can','does','or','but','use','its','with','using','during']
tokenizedTitles = [token.lower() for token in tkTitlesList if token.lower() not in stops]
fdist = FreqDist(tokenizedTitles)
sortedTitleWords = fdist.keys()
sortedTitleProb = [fdist.freq(token) for token in sortedTitleWords]
sortedTitleN = fdist.N()
sortedTitleCounts = [int(prob*sortedTitleN) for prob in sortedTitleProb]
titlesCounter = {}
for x in xrange(0,60):
	titlesCounter[sortedTitleWords[x]] = sortedTitleCounts[x]

# Returns collaborators as a dictionary matrix
def collaborators_matrix(authors):
	coll = {}
	for x in xrange(0,len(authors)):
		if authors[x]:
			for y in xrange(0,len(authors[x])):
				for z in xrange(0,len(authors[x])):
					if authors[x][y] != authors[x][z]:
						if authors[x][y] in coll.keys(): # first author
Developer: ethanyishchan, Project: Research-analytics, Lines of code: 33, Source: pmanalyze_v2.py

Example 15: FreqDist

# Required import: from nltk import FreqDist [as alias]
# Or: from nltk.FreqDist import freq [as alias]
		return ' '.join( tl[tl.index('[') + 1 : tl.index(']')] )
	else:
		return ' '.join( tl[ 0 : 5 ] )
	return

# 3. Unigrams
from nltk import FreqDist
# a. Lowercase the tokens in emma and create a frequency distribution from them. 
# (Do not throw away punctuation.) Store the result in fd1.
fd1 = FreqDist( list( t.lower() for t in emma) )

# b. Set A3b to the count of the word 'town' in fd1.
A3b = fd1['town']

# c. Set A3c to the relative frequency (probability) of the word 'town' in ud.
A3c = fd1.freq('town')

# d. Set A3d to the number of hapaxes in the distribution fd1.
A3d = len( list( x for x in fd1 if fd1[x] == 1 ) )


# 4. When one formats floating-point numbers, one can specify the number of
# digits after the decimal point as follows:
# >>> '{:.4}'.format(1/7)
# >>> '0.1429'

# Write a function print_uni that takes a FreqDist as input and prints a table with 
# three columns: a word, its count, and its relative frequency. It should print the 
# words in alphabetic order. The first column should be 10 characters wide. If a word 
# is more than 10 characters long, truncate it to 10 characters. The second column 
# should be five characters wide, and the relative frequency should be printed with 
Developer: caiqizhe, Project: Archive, Lines of code: 33, Source: hw5.py
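The excerpt above ends before print_uni is actually written. The sketch below is one hypothetical way to satisfy the spec in the comments, not the original assignment solution; it assumes the truncated last sentence meant the '{:.4}' style of formatting shown in the hint.

def print_uni(fd):
    # Hypothetical print_uni: words in alphabetic order, word truncated to a
    # 10-character column, count in a 5-character column, relative frequency
    # formatted as in the '{:.4}' hint above.
    for word in sorted(fd):
        print('{:10.10} {:5} {:.4}'.format(word, fd[word], fd.freq(word)))

# e.g. print_uni(fd1) would list every token of emma with its count and
# relative frequency.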


Note: the nltk.FreqDist.freq examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets are selected from open-source projects contributed by their respective authors, and copyright of the source code remains with the original authors; consult the corresponding project's License before redistributing or using it. Do not reproduce without permission.