This article collects typical usage examples of the nltk.probability.FreqDist.r_Nr method in Python. If you are unsure what FreqDist.r_Nr does, how to call it, or what real code that uses it looks like, the selected example below may help. You can also browse further usage examples of its containing class, nltk.probability.FreqDist.
One code example of the FreqDist.r_Nr method is shown below.
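For orientation, FreqDist.r_Nr() returns a mapping from each frequency r to Nr, the number of samples that occur exactly r times. A minimal sketch (assuming NLTK 3.x; the toy string is purely illustrative):

from nltk.probability import FreqDist

fdist = FreqDist("abracadabra")
# r_Nr() maps frequency r to Nr, the number of samples occurring exactly r times
print(fdist.r_Nr())  # roughly {5: 1, 2: 2, 1: 2}: 'a' occurs 5 times, 'b'/'r' twice, 'c'/'d' once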
Example 1: len
# Required module: from nltk.probability import FreqDist
# Method used: nltk.probability.FreqDist.r_Nr
from math import log
import matplotlib.pyplot as plot
from nltk.probability import FreqDist
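# The rest of the excerpt assumes `fdist` and `no_of_tokens` already exist. A
# minimal setup consistent with the plot title ("Brown Corpus") could look like
# this (an assumed context, not part of the original example):
from nltk.corpus import brown

tokens = brown.words()
no_of_tokens = len(tokens)  # total number of tokens in the corpus
fdist = FreqDist(tokens)    # frequency distribution over those tokens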
# Compute the percentage of tokens that are hapax legomena (words occurring
# exactly once) and find the longest of them
hapax_legomenas = fdist.hapaxes()  # words that appear exactly once in the corpus
hapax_legomena_counts = len(hapax_legomenas)  # how many such words there are
percentage_of_hapax_legomena = (hapax_legomena_counts / no_of_tokens) * 100
print("Percentage of hapax legomena occurrences:", percentage_of_hapax_legomena)
max_len_hapax_legomena = max(len(word) for word in hapax_legomenas)
print("Longest hapax legomena:", [word for word in hapax_legomenas if len(word) == max_len_hapax_legomena])
# Compute the percentage of tokens that are dis legomena (words occurring
# exactly twice) and find the longest of them
dis_legomenas = [key for key, value in fdist.items() if value == 2]  # words that occur exactly twice
dis_legomena_counts = len(dis_legomenas) * 2  # total tokens they account for (each occurs twice)
percentage_of_dis_legomena = (dis_legomena_counts / no_of_tokens) * 100
print("Percentage of dis legomena occurrences:", percentage_of_dis_legomena)
max_len_dis_legomena = max(len(word) for word in dis_legomenas)
print("Longest dis legomena:", [word for word in dis_legomenas if len(word) == max_len_dis_legomena])
# Plot the frequencies of the 50 most common tokens
fdist.plot(50)
# Compute the log-scaled version of r vs Nr using FreqDist.r_Nr()
log_rvsNr = {log(r): log(nr) for r, nr in fdist.r_Nr().items() if nr != 0}
# Plot log(r) against log(Nr)
plot.plot(list(log_rvsNr.keys()), list(log_rvsNr.values()), 'r.')
plot.axis([-1, 11, -1, 11])
plot.xlabel('log(r)')
plot.ylabel('log(Nr)')
plot.title('log(r) vs log(Nr) Brown Corpus')
plot.show()
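A possible follow-up (not part of the original example; numpy is an added dependency here): if the log(r)/log(Nr) points fall roughly on a straight line, its slope summarizes the Zipf-like behaviour of the corpus and can be estimated with a least-squares fit.

import numpy as np

xs = list(log_rvsNr.keys())
ys = list(log_rvsNr.values())
slope, intercept = np.polyfit(xs, ys, 1)  # least-squares straight-line fit
print("Estimated slope of log(Nr) vs log(r):", slope)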