This article collects typical usage examples of the Python method dragnn.python.dragnn_ops.init_component_data. If you are wondering what dragnn_ops.init_component_data does or how to use it, the curated code examples below should help. You can also explore further usage examples from the module that contains this method, dragnn.python.dragnn_ops.
Two code examples of the dragnn_ops.init_component_data method are shown below, ordered by popularity by default.
Example 1: build_inference
# Required import: from dragnn.python import dragnn_ops [as alias]
# Or: from dragnn.python.dragnn_ops import init_component_data [as alias]
def build_inference(self, handle, use_moving_average=False):
  """Builds an inference pipeline.

  This always uses the whole pipeline.

  Args:
    handle: Handle tensor for the ComputeSession.
    use_moving_average: Whether or not to read from the moving
      average variables instead of the true parameters. Note: it is not
      possible to make gradient updates when this is True.

  Returns:
    handle: Handle after annotation.
  """
  self.read_from_avg = use_moving_average
  network_states = {}
  for comp in self.components:
    network_states[comp.name] = component.NetworkState()
    handle = dragnn_ops.init_component_data(
        handle, beam_size=comp.inference_beam_size, component=comp.name)
    master_state = component.MasterState(handle,
                                         dragnn_ops.batch_size(
                                             handle, component=comp.name))
    with tf.control_dependencies([handle]):
      handle = comp.build_greedy_inference(master_state, network_states)
    handle = dragnn_ops.write_annotations(handle, component=comp.name)

  self.read_from_avg = False
  return handle
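The example hinges on an ordering trick: the ComputeSession handle is re-bound after each stateful op, and tf.control_dependencies([handle]) forces init_component_data to finish before the component's build_greedy_inference is wired in. The standalone snippet below isolates just that control-dependency pattern with plain TensorFlow 1.x-style ops; the counter variable and the two assign ops are stand-ins of mine for illustration and are not part of DRAGNN (tensorflow.compat.v1 is used only so the TF1-style graph code also runs under TF 2.x).

import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

counter = tf.Variable(0, name="counter")
init_step = tf.assign_add(counter, 1)  # stand-in for init_component_data

# Like the handle re-binding above, this guarantees init_step runs first.
with tf.control_dependencies([init_step]):
  inference_step = tf.assign_add(counter, 10)  # stand-in for build_greedy_inference

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  print(sess.run(inference_step))  # 11: both ops ran, in the required order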
Example 2: build_inference
# Required import: from dragnn.python import dragnn_ops [as alias]
# Or: from dragnn.python.dragnn_ops import init_component_data [as alias]
def build_inference(self,
                    handle,
                    use_moving_average=False,
                    build_runtime_graph=False):
  """Builds an inference pipeline.

  This always uses the whole pipeline.

  Args:
    handle: Handle tensor for the ComputeSession.
    use_moving_average: Whether or not to read from the moving
      average variables instead of the true parameters. Note: it is not
      possible to make gradient updates when this is True.
    build_runtime_graph: Whether to build a graph for use by the runtime.

  Returns:
    handle: Handle after annotation.
  """
  self.read_from_avg = use_moving_average
  self.build_runtime_graph = build_runtime_graph
  network_states = {}
  for comp in self.components:
    network_states[comp.name] = component.NetworkState()
    handle = dragnn_ops.init_component_data(
        handle, beam_size=comp.inference_beam_size, component=comp.name)
    if build_runtime_graph:
      batch_size = 1  # runtime uses singleton batches
    else:
      batch_size = dragnn_ops.batch_size(handle, component=comp.name)
    master_state = component.MasterState(handle, batch_size)
    with tf.control_dependencies([handle]):
      handle = comp.build_greedy_inference(master_state, network_states)
    handle = dragnn_ops.write_annotations(handle, component=comp.name)

  self.read_from_avg = False
  self.build_runtime_graph = False
  return handle
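The only behavioural difference from Example 1 is the build_runtime_graph flag: when it is set, the batch size is hard-wired to 1 instead of being read back from the ComputeSession via dragnn_ops.batch_size, because the runtime processes singleton batches. The self-contained snippet below mirrors just that branch with an ordinary placeholder; the helper name and the placeholder are mine, for illustration only, and the dragnn op is replaced by a plain dynamic-shape read.

import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

def batch_size_for(inputs, build_runtime_graph=False):
  """Mirrors the branch in Example 2: constant 1 vs. a dynamic batch size."""
  if build_runtime_graph:
    return tf.constant(1)       # runtime uses singleton batches
  return tf.shape(inputs)[0]    # read the batch size from the live data

inputs = tf.placeholder(tf.float32, shape=[None, 4])
dynamic_size = batch_size_for(inputs)
runtime_size = batch_size_for(inputs, build_runtime_graph=True)

with tf.Session() as sess:
  print(sess.run(dynamic_size, feed_dict={inputs: [[0.0] * 4] * 3}))  # 3
  print(sess.run(runtime_size))                                       # 1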