This article collects typical usage examples of the Python method dragnn.python.dragnn_ops.bulk_advance_from_prediction. If you have been wondering what dragnn_ops.bulk_advance_from_prediction does, how to call it, or what real-world usage looks like, the code examples selected here may help. You can also explore further usage examples of the module that provides this method, dragnn.python.dragnn_ops.
Below are 2 code examples of the dragnn_ops.bulk_advance_from_prediction method, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code examples.
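Both examples call the op with the same signature: a master state handle, a dense tensor of prediction scores, and the name of the component being advanced. The minimal sketch below distills that call pattern; `state_handle`, `logits`, and `component_name` are hypothetical placeholders for values produced by the surrounding graph-construction code (see the full component methods in the examples), so treat it as an illustration of the call shape rather than a runnable pipeline.

# Illustrative sketch only: `state_handle`, `logits`, and `component_name`
# are hypothetical placeholders for values built elsewhere in the graph.
from dragnn.python import dragnn_ops

def advance_in_bulk(state_handle, logits, component_name):
  """Advances all transition states at once from a dense score tensor."""
  # `logits` is the score tensor produced by the component's network
  # (see self.network.get_logits(...) in the examples below).
  return dragnn_ops.bulk_advance_from_prediction(
      state_handle, logits, component=component_name)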
Example 1: build_greedy_inference
# Required import: from dragnn.python import dragnn_ops [as alias]
# Or: from dragnn.python.dragnn_ops import bulk_advance_from_prediction [as alias]
def build_greedy_inference(self, state, network_states,
                           during_training=False):
  """Annotates a batch of documents using network scores.

  Args:
    state: MasterState from the 'AdvanceMaster' op that advances the
      underlying master to this component.
    network_states: dictionary of component NetworkState objects
    during_training: whether the graph is being constructed during training

  Returns:
    Handle to the state once inference is complete for this Component.

  Raises:
    RuntimeError: if fixed features are configured
  """
  logging.info('Building component: %s', self.spec.name)

  # Bulk annotation only supports linked features; fixed features must be
  # handled by a dedicated "bulk-features" component.
  if self.spec.fixed_feature:
    raise RuntimeError(
        'Fixed features are not compatible with bulk annotation. '
        'Use the "bulk-features" component instead.')

  linked_embeddings = [
      fetch_linked_embedding(self, network_states, spec)
      for spec in self.spec.linked_feature
  ]

  # The network runs over every beam slot of every batch element, so the
  # tensor stride is the batch size times the beam size.
  if during_training:
    stride = state.current_batch_size * self.training_beam_size
  else:
    stride = state.current_batch_size * self.inference_beam_size

  with tf.variable_scope(self.name, reuse=True):
    network_tensors = self.network.create(
        [], linked_embeddings, None, None, during_training, stride)
    update_network_states(self, network_tensors, network_states, stride)
    logits = self.network.get_logits(network_tensors)

  # Advance all transition states in one op using the raw network scores.
  return dragnn_ops.bulk_advance_from_prediction(
      state.handle, logits, component=self.name)
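The stride computation above is plain arithmetic: the bulk op scores every beam slot of every batch element at once, so the network's leading dimension is the product of the two, and a beam size of 1 reduces it to the batch size. A tiny sketch with made-up numbers, purely for illustration:

# Purely illustrative numbers; DRAGNN reads the real values from the
# MasterState and the component spec at graph-construction time.
current_batch_size = 4     # documents in the batch
inference_beam_size = 8    # beam slots kept per document
stride = current_batch_size * inference_beam_size  # 32 rows of network state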
Example 2: build_greedy_inference
# Required import: from dragnn.python import dragnn_ops [as alias]
# Or: from dragnn.python.dragnn_ops import bulk_advance_from_prediction [as alias]
def build_greedy_inference(self, state, network_states,
                           during_training=False):
  """Annotates a batch of documents using network scores.

  Args:
    state: MasterState from the 'AdvanceMaster' op that advances the
      underlying master to this component.
    network_states: dictionary of component NetworkState objects
    during_training: whether the graph is being constructed during training

  Returns:
    Handle to the state once inference is complete for this Component.

  Raises:
    RuntimeError: if fixed features are configured
  """
  logging.info('Building component: %s', self.spec.name)

  # Bulk annotation only supports linked features; fixed features must be
  # handled by a dedicated "bulk-features" component.
  if self.spec.fixed_feature:
    raise RuntimeError(
        'Fixed features are not compatible with bulk annotation. '
        'Use the "bulk-features" component instead.')

  linked_embeddings = [
      fetch_linked_embedding(self, network_states, spec)
      for spec in self.spec.linked_feature
  ]

  # The network runs over every beam slot of every batch element, so the
  # tensor stride is the batch size times the beam size.
  if during_training:
    stride = state.current_batch_size * self.training_beam_size
  else:
    stride = state.current_batch_size * self.inference_beam_size

  self.network.pre_create(stride)
  with tf.variable_scope(self.name, reuse=True):
    network_tensors = self.network.create([], linked_embeddings, None, None,
                                          during_training, stride)
    update_network_states(self, network_tensors, network_states, stride)

    logits = self.network.get_bulk_predictions(stride, network_tensors)
    if logits is None:
      # The network does not produce custom bulk predictions; default to
      # its logits.
      logits = self.network.get_logits(network_tensors)

    # Optionally normalize the scores locally and/or convert them to
    # probabilities before advancing.
    logits = tf.cond(self.locally_normalize,
                     lambda: tf.nn.log_softmax(logits), lambda: logits)
    if self._output_as_probabilities:
      logits = tf.nn.softmax(logits)

  handle = dragnn_ops.bulk_advance_from_prediction(
      state.handle, logits, component=self.name)
  self._add_runtime_hooks()
  return handle
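Example 2 differs from Example 1 mainly in how it post-processes the scores before advancing: it prefers the network's custom bulk predictions when available, otherwise takes the logits, and then optionally applies log-softmax (behind a tf.cond switch) and softmax. The standalone sketch below reproduces just that post-processing pattern with standard TensorFlow ops, as used above; `locally_normalize` and `output_as_probabilities` are hypothetical stand-ins for the component's configuration.

# Minimal, self-contained sketch of the score post-processing in Example 2.
# `locally_normalize` is a boolean tf.Tensor and `output_as_probabilities`
# a Python bool; both are hypothetical stand-ins for the component's flags.
import tensorflow as tf

def postprocess_scores(logits, locally_normalize, output_as_probabilities):
  """Optionally log-normalizes and/or exponentiates a score matrix."""
  # At runtime, tf.cond returns the log-softmax branch only when the
  # predicate tensor evaluates to true; otherwise the scores pass through.
  logits = tf.cond(locally_normalize,
                   lambda: tf.nn.log_softmax(logits),
                   lambda: logits)
  if output_as_probabilities:
    logits = tf.nn.softmax(logits)
  return logits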