This article collects typical usage examples of the Python method libdvid.DVIDNodeService.create_graph. If you are wondering how exactly to use DVIDNodeService.create_graph, how to call it, or what it looks like in real code, the curated examples below may help. You can also explore further usage examples of the containing class, libdvid.DVIDNodeService.
Below is 1 code example of the DVIDNodeService.create_graph method, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
Example 1: execute
# Required import: from libdvid import DVIDNodeService [as alias]
# Or: from libdvid.DVIDNodeService import create_graph [as alias]
def execute(self):
    from DVIDSparkServices.reconutils import SimpleGraph
    from libdvid import DVIDNodeService
    from pyspark import SparkContext
    from pyspark import StorageLevel

    if "chunk-size" in self.config_data["options"]:
        self.chunksize = self.config_data["options"]["chunk-size"]

    # grab ROI
    distrois = self.sparkdvid_context.parallelize_roi(self.config_data["dvid-info"]["roi"], self.chunksize)
    num_partitions = distrois.getNumPartitions()

    # map ROI to label volume (1 pixel overlap)
    label_chunks = self.sparkdvid_context.map_labels64(
        distrois, self.config_data["dvid-info"]["label-name"], 1, self.config_data["dvid-info"]["roi"]
    )

    # map labels to graph data -- external program (eventually convert neuroproof metrics and graph to a python library) ?!
    sg = SimpleGraph.SimpleGraph(self.config_data["options"])

    # extract graph
    graph_elements = label_chunks.flatMap(sg.build_graph)

    # group data for vertices and edges
    graph_elements_red = graph_elements.reduceByKey(lambda a, b: a + b)

    # repartition by first vertex to better group edges together
    graph_elements_red = graph_elements_red.partitionBy(num_partitions, lambda a: hash(a[0]))
    graph_elements_red.persist(StorageLevel.MEMORY_ONLY)  # ??

    graph_vertices = graph_elements_red.filter(sg.is_vertex)
    graph_edges = graph_elements_red.filter(sg.is_edge)

    # create graph
    node_service = DVIDNodeService(
        str(self.config_data["dvid-info"]["dvid-server"]), str(self.config_data["dvid-info"]["uuid"])
    )
    node_service.create_graph(str(self.config_data["dvid-info"]["graph-name"]))

    # dump graph -- should this be wrapped through utils or through sparkdvid ??
    # will this result in too many requests (should they be accumulated) ??
    # currently looking at one partition at a time to try to group requests
    self.sparkdvid_context.foreachPartition_graph_elements(
        graph_vertices, self.config_data["dvid-info"]["graph-name"]
    )
    self.sparkdvid_context.foreachPartition_graph_elements(graph_edges, self.config_data["dvid-info"]["graph-name"])

    if "debug" in self.config_data["options"] and self.config_data["options"]["debug"]:
        num_elements = graph_elements.count()
        print "DEBUG: ", num_elements

    graph_elements_red.unpersist()
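
For quick reference, the create_graph call used in the example above can also be exercised on its own, outside of the Spark workflow. The snippet below is only a minimal sketch: the server address, node UUID, and graph name are placeholder assumptions and do not come from the example above; only the DVIDNodeService constructor and create_graph call mirror the usage shown there.

from libdvid import DVIDNodeService

# Connect to a specific DVID node (server address and UUID here are placeholder assumptions).
node_service = DVIDNodeService("127.0.0.1:8000", "abc123")

# Create a graph data instance with the given (hypothetical) name on that node.
node_service.create_graph("example-graph")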