

Python FreqDist.hapaxes Method Code Examples

This article collects typical usage examples of the Python method nltk.probability.FreqDist.hapaxes. If you are wondering how FreqDist.hapaxes is used in practice, or what real code that calls it looks like, the curated examples below may help. You can also explore further usage examples of the class it belongs to, nltk.probability.FreqDist.


The following presents 3 code examples of the FreqDist.hapaxes method, listed in order of popularity.
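Before the project examples, here is a minimal sketch (with a made-up token list, not drawn from any of the projects below) of what FreqDist.hapaxes returns: the samples that occur exactly once in the distribution.

from nltk.probability import FreqDist

# Toy token list, purely for illustration
tokens = ["the", "cat", "sat", "on", "the", "mat"]
fdist = FreqDist(tokens)

# hapaxes() returns the tokens with frequency 1
print(fdist.hapaxes())  # ['cat', 'sat', 'on', 'mat'] (order may vary by version)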

Example 1: ProcessaArquivo

# Required import: from nltk.probability import FreqDist [as alias]
# Or: from nltk.probability.FreqDist import hapaxes [as alias]
def ProcessaArquivo(f):
    """Computes statistics for the given file."""
    print "Processing file %s..." % f
    corpus = CriaLeitorDeCorpus(arquivo=f)
    tokens = corpus.words()
    print "Number of tokens: %d." % len(tokens)
    alfabeticas = ExtraiAlfabeticas(tokens)
    print "Number of alphabetic tokens: %d." % len(alfabeticas)
    freq = FreqDist(alfabeticas)
    print "Lexical diversity: %.2f%%" % CalculaDiversidadeLexical(freq)
    print "Number of hapaxes: %d.\n\n\n" % len(freq.hapaxes())
Developer: CompLin, Project: Aelius, Lines: 13, Source: CalculaEstatisticasLexicais.py
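CriaLeitorDeCorpus, ExtraiAlfabeticas, and CalculaDiversidadeLexical are helpers from the Aelius project and are not shown in this excerpt. As a rough, hypothetical sketch of what the latter two might look like (assuming lexical diversity is the type/token ratio expressed as a percentage; Aelius's actual definitions may differ):

def ExtraiAlfabeticas(tokens):
    # Hypothetical: keep only purely alphabetic tokens
    return [t for t in tokens if t.isalpha()]

def CalculaDiversidadeLexical(freq):
    # Hypothetical: distinct types divided by total tokens, as a percentage
    return 100.0 * len(freq) / freq.N()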

Example 2: contains_digits

# Required import: from nltk.probability import FreqDist [as alias]
# Or: from nltk.probability.FreqDist import hapaxes [as alias]
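# Note: corpus_tokenized (the tokenized corpus) and contains_digits() are
# defined earlier in bot.py and are not included in this excerpt.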
ascii_tokens = []
for token in corpus_tokenized:
    try:
        token.decode('ascii')
        if not contains_digits(token):
            ascii_tokens.append(token)
    except UnicodeDecodeError:
        # Token contains non-ASCII characters; skip it
        continue

ascii_tokens_lowered = []
for token in ascii_tokens:
    ascii_tokens_lowered.append(token.lower())
fdist = FreqDist(ascii_tokens)
fdist_lowered = FreqDist(ascii_tokens_lowered)
hapaxes = fdist.hapaxes()
print('Number of hapaxes before trimming: ' + str(len(hapaxes)))
lowered_hapaxes = fdist_lowered.hapaxes()
lowered_hapax_dict = {}
for lowered_hapax in lowered_hapaxes:
    lowered_hapax_dict[lowered_hapax] = True
tmp_hapaxes = []  # build a new list; removing from hapaxes while iterating over it caused a subtle bug
for hapax in hapaxes:
    # Keep only tokens that are still hapaxes after lowercasing; this drops
    # tokens that are hapaxes merely because of their capitalization
    if hapax.lower() in lowered_hapax_dict:
        tmp_hapaxes.append(hapax)
hapaxes = tmp_hapaxes
print('Number of hapaxes after trimming: ' + str(len(hapaxes)))

# Tweet a random hapax
Developer: ihinsdale, Project: hitchens-lexicon, Lines: 31, Source: bot.py
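The same case-folding trim can be written more compactly with a set; a minimal sketch (variable names are mine, not from bot.py), assuming fdist and fdist_lowered are built as above:

lowered_hapax_set = set(fdist_lowered.hapaxes())
# Keep only hapaxes whose lowercased form is still a hapax after case-folding
trimmed_hapaxes = [h for h in fdist.hapaxes() if h.lower() in lowered_hapax_set]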

Example 3: len

# Required import: from nltk.probability import FreqDist [as alias]
# Or: from nltk.probability.FreqDist import hapaxes [as alias]
from nltk.probability import FreqDist
from nltk.corpus import brown
import matplotlib.pyplot as plot
import pylab
from math import log

# Get the case insensitive words from the brown corpus
case_inses_words = [word.lower() for word in brown.words()]
no_of_tokens = len(case_inses_words)
print("Total No of Tokens in Brown Corpus ", no_of_tokens)

# Pass it on to FreqDist to get Frequency Distributions
fdist = FreqDist(case_inses_words)
print(fdist)

# Compute the percentage of hapax legomena occurrences and find the longest ones
hapax_legomenas = fdist.hapaxes() # Get the list of words that appeared just once in corpus
hapax_legomena_counts = len(hapax_legomenas) # Get the count of them
percentage_of_hapax_legomena = (hapax_legomena_counts/no_of_tokens)*100 # Compute percentage
print("Percentage of Hapax Legomena Occurrences", percentage_of_hapax_legomena)
max_len_hapax_legomena = max([len(word) for word in hapax_legomenas])
print("Longest hapax legomena are", [word for word in hapax_legomenas if len(word) == max_len_hapax_legomena])

# Compute the percentage of dis legomena occurrences and find the longest ones
dis_legomenas = [key for key, value in fdist.items() if value == 2] # Get the words that occurred exactly twice
dis_legomena_counts = len(dis_legomenas) * 2 # Each occurs twice, so this is their total token count
percentage_of_dis_legomena = (dis_legomena_counts/no_of_tokens)*100 # Compute percentage
print("Percentage of Dis Legomena Occurrences", percentage_of_dis_legomena)
max_len_dis_legomena = max([len(word) for word in dis_legomenas])
print("Longest dis legomena are ", [word for word in dis_legomenas if len(word) == max_len_dis_legomena])

# Plot the r vs Nr graph
Developer: GaddipatiAsish, Project: Natural-Language-Processing, Lines: 33, Source: Ex3_part1.py
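The excerpt stops before the plotting code. A minimal sketch of how the r vs. Nr (frequency-of-frequencies) plot could be drawn from fdist using the imports above (the actual Ex3_part1.py may do this differently):

from collections import Counter

# Nr = number of distinct words that occur exactly r times
r_Nr = Counter(fdist.values())
rs = sorted(r_Nr)
Nrs = [r_Nr[r] for r in rs]

# Log-log plot, since both r and Nr span several orders of magnitude
plot.plot([log(r) for r in rs], [log(nr) for nr in Nrs])
plot.xlabel('log(r)')
plot.ylabel('log(Nr)')
plot.show()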


Note: The nltk.probability.FreqDist.hapaxes method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub/MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright in the source code belongs to the original authors. For distribution and use, please refer to the corresponding project's license; do not reproduce without permission.