This article collects typical usage examples of the Python method networkx.DiGraph.add_weighted_edges_from. If you have been wondering what DiGraph.add_weighted_edges_from does and how to use it, the selected examples below may help. You can also read further about the class it belongs to, networkx.DiGraph.
The following shows 2 code examples of the DiGraph.add_weighted_edges_from method, sorted by popularity by default.
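Before the full examples, a minimal sketch of the method itself (the graph, node names and weights below are illustrative only, not taken from the examples):

from networkx import DiGraph

g = DiGraph()
# Each tuple is (source, target, weight); the weight is stored in the
# edge attribute named 'weight' by default.
g.add_weighted_edges_from([('a', 'b', 2.0), ('b', 'c', 0.5)])
print(g['a']['b']['weight'])  # -> 2.0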
Example 1: test_write_pajek
# Required import: from networkx import DiGraph [as alias]
# Or: from networkx.DiGraph import add_weighted_edges_from [as alias]
import os
import re
import tempfile

import nose.tools as nt                # assumed: nt.assert_* calls match nose.tools
from networkx import DiGraph
from networkx.readwrite import pajek   # assumed source of write_pajek

def test_write_pajek():
    g = DiGraph()
    g.add_weighted_edges_from([(1, 2, 0.5), (3, 1, 0.75)])
    with tempfile.NamedTemporaryFile(delete=False) as f:
        pajek.write_pajek(g, f)
    with open(f.name) as fh:
        content = fh.read()
    os.unlink(f.name)
    nt.assert_true(re.search(r'\*vertices 3', content))
    nt.assert_true(re.search(r'\*arcs', content))
    # The infomap code barfs if the '*network' line is present, check for that
    nt.assert_false(re.search(r'\*network', content))
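For comparison, writing the same graph through the public networkx API looks roughly like this; the test above may rely on a project-specific pajek module, so this is only an assumed equivalent, and the output file name is illustrative:

import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([(1, 2, 0.5), (3, 1, 0.75)])
# Should emit a '*vertices 3' line and an '*arcs' section, as checked in the test above.
nx.write_pajek(g, 'graph.net')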
Example 2: Markov
# Required import: from networkx import DiGraph [as alias]
# Or: from networkx.DiGraph import add_weighted_edges_from [as alias]
from collections import Counter
from itertools import chain

from networkx import DiGraph
from numpy.random import choice   # assumed: choice(..., p=...) matches numpy's signature

# Sentinel strings marking sentence boundaries; the actual values are defined
# elsewhere in the original module, the ones below are stand-ins.
START = '__START__'
END = '__END__'


class Markov(object):

    def __init__(self, text):
        '''Build a graph representing a Markov chain from a training corpus.

        text: training text, an iterable of 'statements', where a statement is
        a meaningful grouping of words, e.g. a sentence, a tweet, a line of
        source code, etc.
        '''
        # This is a weighted directed graph, where each vertex v_i is a word,
        # and each edge (v_i, v_j) models v_j appearing in the text after v_i.
        # Edge weights denote frequency: w_ij > w_ik means that v_j follows
        # v_i more often than v_k does.
        def word_pairs():
            '''Generate pairs of adjacent words, which become the edges.

            The first and last words of each statement are paired with the
            START and END sentinel strings respectively.'''
            for line in text:
                words = line.split()
                yield zip([START] + words, words + [END])

        counted_pairs = Counter(chain(*word_pairs())).items()
        weighted_edges = ((u, v, w) for ((u, v), w) in counted_pairs)
        self.graph = DiGraph()
        self.graph.add_weighted_edges_from(weighted_edges)

    def sentence(self):
        '''Generate a 'sentence' from the training data, one word at a time.'''
        # Walk the graph from the START vertex, picking a random outgoing
        # edge to follow, biased by weight (i.e. frequency in the training
        # text), until we reach the END vertex.
        def normalize(xs):
            '''Fit a list of values into a probability distribution.'''
            total = float(sum(xs))
            return [x / total for x in xs]

        current = START
        while True:
            neighbours = self.graph[current]
            words = list(neighbours.keys())
            weights = [attr['weight'] for attr in neighbours.values()]
            current = choice(words, p=normalize(weights))
            if current == END:
                break
            else:
                yield current
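A hypothetical usage sketch of the class above (the corpus, and therefore the generated output, is purely illustrative):

corpus = [
    'the cat sat on the mat',
    'the dog sat on the rug',
]
m = Markov(corpus)
# sentence() is a generator of words, so join them back into a string.
print(' '.join(m.sentence()))  # e.g. 'the dog sat on the mat'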