This article collects typical usage examples of the Python method dragnn.python.dragnn_ops.bulk_advance_from_oracle. If you have been wondering what dragnn_ops.bulk_advance_from_oracle does, how to use it, or where to find working examples, the curated code samples here may help. You can also explore other usage examples from the containing module, dragnn.python.dragnn_ops.
Two code examples of the dragnn_ops.bulk_advance_from_oracle method are shown below, sorted by popularity by default.
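Both examples reduce to the same core call: dragnn_ops.bulk_advance_from_oracle takes a master state handle, advances every transition state in the batch along its oracle path, and returns the updated handle together with the gold actions taken. A minimal sketch of the call pattern, lifted from the examples below (`handle` and `component_name` are placeholder names):

# Advance the named component along its oracle path; `gold` holds the
# gold action ids consumed during the advance.
handle, gold = dragnn_ops.bulk_advance_from_oracle(
    handle, component=component_name)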
Example 1: build_greedy_training
# Required import: from dragnn.python import dragnn_ops [as alias]
# Or: from dragnn.python.dragnn_ops import bulk_advance_from_oracle [as alias]
def build_greedy_training(self, state, network_states):
  """Advances a batch using oracle paths, returning the overall CE cost.

  Args:
    state: MasterState from the 'AdvanceMaster' op that advances the
      underlying master to this component.
    network_states: dictionary of component NetworkState objects

  Returns:
    (state handle, cost, correct, total): TF ops corresponding to the final
    state after unrolling, the total cost, the total number of correctly
    predicted actions, and the total number of actions.

  Raises:
    RuntimeError: if fixed features are configured.
  """
  logging.info('Building component: %s', self.spec.name)
  if self.spec.fixed_feature:
    raise RuntimeError(
        'Fixed features are not compatible with bulk annotation. '
        'Use the "bulk-features" component instead.')

  linked_embeddings = [
      fetch_linked_embedding(self, network_states, spec)
      for spec in self.spec.linked_feature
  ]

  stride = state.current_batch_size * self.training_beam_size
  with tf.variable_scope(self.name, reuse=True):
    network_tensors = self.network.create([], linked_embeddings, None, None,
                                          True, stride)
  update_network_states(self, network_tensors, network_states, stride)
  logits = self.network.get_logits(network_tensors)
  state.handle, gold = dragnn_ops.bulk_advance_from_oracle(
      state.handle, component=self.name)
  cost, correct, total = build_cross_entropy_loss(logits, gold)
  cost = self.add_regularizer(cost)
  return state.handle, cost, correct, total
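build_cross_entropy_loss is a DRAGNN helper defined elsewhere and not shown on this page. A minimal sketch of what it plausibly computes, assuming gold is a flat vector of gold action ids aligned with the rows of logits (any padding or weighting handled by the real helper is omitted):

import tensorflow as tf

def build_cross_entropy_loss(logits, gold):
  """Sketch: summed softmax cross-entropy against gold action ids."""
  gold = tf.cast(gold, tf.int64)
  # Per-action cross-entropy, summed into a single scalar cost.
  cost = tf.reduce_sum(
      tf.nn.sparse_softmax_cross_entropy_with_logits(
          labels=gold, logits=logits))
  # Number of correctly predicted actions, and the total action count.
  predictions = tf.argmax(logits, axis=1)
  correct = tf.reduce_sum(tf.cast(tf.equal(predictions, gold), tf.int32))
  total = tf.size(gold)
  return cost, correct, total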
Example 2: build_greedy_training
# Required import: from dragnn.python import dragnn_ops [as alias]
# Or: from dragnn.python.dragnn_ops import bulk_advance_from_oracle [as alias]
def build_greedy_training(self, state, network_states):
  """Advances a batch using oracle paths, returning the overall CE cost.

  Args:
    state: MasterState from the 'AdvanceMaster' op that advances the
      underlying master to this component.
    network_states: dictionary of component NetworkState objects

  Returns:
    (state handle, cost, correct, total): TF ops corresponding to the final
    state after unrolling, the total cost, the total number of correctly
    predicted actions, and the total number of actions.

  Raises:
    RuntimeError: if fixed features are configured.
  """
  logging.info('Building component: %s', self.spec.name)
  if self.spec.fixed_feature:
    raise RuntimeError(
        'Fixed features are not compatible with bulk annotation. '
        'Use the "bulk-features" component instead.')

  linked_embeddings = [
      fetch_linked_embedding(self, network_states, spec)
      for spec in self.spec.linked_feature
  ]

  stride = state.current_batch_size * self.training_beam_size
  self.network.pre_create(stride)
  with tf.variable_scope(self.name, reuse=True):
    network_tensors = self.network.create([], linked_embeddings, None, None,
                                          True, stride)
  update_network_states(self, network_tensors, network_states, stride)
  state.handle, gold = dragnn_ops.bulk_advance_from_oracle(
      state.handle, component=self.name)
  cost, correct, total = self.network.compute_bulk_loss(
      stride, network_tensors, gold)
  if cost is None:
    # The network does not have a custom bulk loss; default to softmax.
    logits = self.network.get_logits(network_tensors)
    cost, correct, total = build_cross_entropy_loss(logits, gold)
  cost = self.add_regularizer(cost)
  return state.handle, cost, correct, total
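The only difference from Example 1 is the compute_bulk_loss hook: the network is first offered the chance to compute the loss directly from its own tensors, and the generic softmax path runs only when it declines. A hypothetical network illustrating the contract implied by the `if cost is None` check (the class and its behavior are illustrative; only the method name and return convention come from the example above):

class CustomLossNetwork(object):
  """Hypothetical network showing the compute_bulk_loss contract."""

  def compute_bulk_loss(self, stride, network_tensors, gold):
    # Return (cost, correct, total) tensors to supply a custom bulk loss,
    # or (None, None, None) to fall back to the softmax path above.
    return None, None, None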