This article collects typical usage examples of the Python method dragnn.python.dragnn_ops.bulk_advance_from_oracle. If you are wondering what exactly dragnn_ops.bulk_advance_from_oracle does, how to call it, or what working examples look like, the curated snippets below may help. You can also explore further usage examples from the module this method lives in, dragnn.python.dragnn_ops.
The following shows 2 code examples of the dragnn_ops.bulk_advance_from_oracle method, ordered by popularity by default.
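Both examples invoke the op the same way. Lifted out of the code below for quick reference (state, self.name, and gold come from the surrounding component code):

# Advance the component along the gold oracle path: the op takes the current
# master state handle plus the component name, and returns the updated handle
# together with the gold actions taken at each step, which then serve as
# training targets for the component's logits.
state.handle, gold = dragnn_ops.bulk_advance_from_oracle(
    state.handle, component=self.name)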
Example 1: build_greedy_training
# Required import: from dragnn.python import dragnn_ops [as alias]
# Or: from dragnn.python.dragnn_ops import bulk_advance_from_oracle [as alias]
def build_greedy_training(self, state, network_states):
  """Advances a batch using oracle paths, returning the overall CE cost.

  Args:
    state: MasterState from the 'AdvanceMaster' op that advances the
      underlying master to this component.
    network_states: dictionary of component NetworkState objects

  Returns:
    (state handle, cost, correct, total): TF ops corresponding to the final
    state after unrolling, the total cost, the total number of correctly
    predicted actions, and the total number of actions.

  Raises:
    RuntimeError: if fixed features are configured.
  """
  logging.info('Building component: %s', self.spec.name)
  if self.spec.fixed_feature:
    raise RuntimeError(
        'Fixed features are not compatible with bulk annotation. '
        'Use the "bulk-features" component instead.')
  linked_embeddings = [
      fetch_linked_embedding(self, network_states, spec)
      for spec in self.spec.linked_feature
  ]
  stride = state.current_batch_size * self.training_beam_size
  with tf.variable_scope(self.name, reuse=True):
    network_tensors = self.network.create([], linked_embeddings, None, None,
                                          True, stride)
    update_network_states(self, network_tensors, network_states, stride)
    logits = self.network.get_logits(network_tensors)
    state.handle, gold = dragnn_ops.bulk_advance_from_oracle(
        state.handle, component=self.name)
    cost, correct, total = build_cross_entropy_loss(logits, gold)
    cost = self.add_regularizer(cost)
    return state.handle, cost, correct, total
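The helper build_cross_entropy_loss is not shown on this page. Below is a minimal sketch of what it plausibly computes, assuming gold holds one integer oracle action per unrolled step and that padded steps are marked with negative ids; both are assumptions, not confirmed by the snippet above, so treat this as illustrative rather than the DRAGNN implementation:

import tensorflow as tf

def build_cross_entropy_loss_sketch(logits, gold):
  """Illustrative sketch only; not the actual DRAGNN helper."""
  gold = tf.reshape(gold, [-1])
  # Hypothetical padding convention: drop steps whose gold id is negative.
  valid = tf.reshape(tf.where(tf.greater_equal(gold, 0)), [-1])
  gold = tf.gather(gold, valid)
  logits = tf.gather(logits, valid)
  # Summed softmax cross-entropy against the oracle actions.
  cost = tf.reduce_sum(
      tf.nn.sparse_softmax_cross_entropy_with_logits(
          labels=gold, logits=logits))
  # Count how many argmax predictions match the oracle.
  predictions = tf.to_int32(tf.argmax(logits, 1))
  correct = tf.reduce_sum(
      tf.to_int32(tf.equal(predictions, tf.to_int32(gold))))
  total = tf.size(gold)
  return cost, correct, total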
Example 2: build_greedy_training
# Required import: from dragnn.python import dragnn_ops [as alias]
# Or: from dragnn.python.dragnn_ops import bulk_advance_from_oracle [as alias]
def build_greedy_training(self, state, network_states):
  """Advances a batch using oracle paths, returning the overall CE cost.

  Args:
    state: MasterState from the 'AdvanceMaster' op that advances the
      underlying master to this component.
    network_states: dictionary of component NetworkState objects

  Returns:
    (state handle, cost, correct, total): TF ops corresponding to the final
    state after unrolling, the total cost, the total number of correctly
    predicted actions, and the total number of actions.

  Raises:
    RuntimeError: if fixed features are configured.
  """
  logging.info('Building component: %s', self.spec.name)
  if self.spec.fixed_feature:
    raise RuntimeError(
        'Fixed features are not compatible with bulk annotation. '
        'Use the "bulk-features" component instead.')
  linked_embeddings = [
      fetch_linked_embedding(self, network_states, spec)
      for spec in self.spec.linked_feature
  ]
  stride = state.current_batch_size * self.training_beam_size
  self.network.pre_create(stride)
  with tf.variable_scope(self.name, reuse=True):
    network_tensors = self.network.create([], linked_embeddings, None, None,
                                          True, stride)
    update_network_states(self, network_tensors, network_states, stride)
    state.handle, gold = dragnn_ops.bulk_advance_from_oracle(
        state.handle, component=self.name)
    cost, correct, total = self.network.compute_bulk_loss(
        stride, network_tensors, gold)
    if cost is None:
      # The network does not have a custom bulk loss; default to softmax.
      logits = self.network.get_logits(network_tensors)
      cost, correct, total = build_cross_entropy_loss(logits, gold)
    cost = self.add_regularizer(cost)
    return state.handle, cost, correct, total
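Example 2 differs from Example 1 in two ways: it calls self.network.pre_create(stride) before building the network, and it first asks the network for a custom bulk loss via compute_bulk_loss, falling back to the default softmax cross-entropy only when the network returns None for the cost. A hypothetical sketch of that opt-out contract, where the class and everything in it are illustrative rather than taken from DRAGNN:

class SketchNetwork(object):
  """Hypothetical network illustrating the compute_bulk_loss contract."""

  def compute_bulk_loss(self, stride, network_tensors, gold):
    # Returning a None cost tells the component builder that this network
    # has no custom bulk loss, so it falls back to get_logits() plus the
    # standard softmax cross-entropy, as in the example above.
    return (None, None, None)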