This article collects typical code examples of the torch.LongTensor.long method in Python. If you have been wondering how exactly the Python LongTensor.long method works, how to call it, or what example code for it looks like, the hand-picked examples below may help. You can also explore further usage examples of the class this method belongs to, torch.LongTensor.
The text below shows 1 code example of the LongTensor.long method; examples are sorted by popularity by default. You can upvote the examples you like or find useful, and your feedback helps the system recommend better Python code examples.
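Before diving into the full example, here is a minimal sketch (not taken from the example below) of what the method does: .long() casts a tensor to 64-bit integers (torch.int64), so on a tensor that is already a LongTensor it is effectively a no-op, while on a float tensor it truncates the values to integers.

import torch

# .long() casts a tensor to torch.int64, the dtype of a LongTensor.
indices = torch.LongTensor([0, 1, 1, 0])
print(indices.long().dtype)   # torch.int64 -- already a LongTensor, so this is a no-op

# Called on a float tensor, it truncates the values to integers.
indicator = torch.tensor([0.0, 1.0, 1.0, 0.0])
print(indicator.long())       # tensor([0, 1, 1, 0])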
Example 1: forward
# Module to import: from torch import LongTensor [as alias]
# Or: from torch.LongTensor import long [as alias]
def forward(self, # type: ignore
tokens: Dict[str, torch.LongTensor],
verb_indicator: torch.LongTensor,
tags: torch.LongTensor = None,
metadata: List[Dict[str, Any]] = None) -> Dict[str, torch.Tensor]:
# pylint: disable=arguments-differ
"""
Parameters
----------
tokens : Dict[str, torch.LongTensor], required
The output of ``TextField.as_array()``, which should typically be passed directly to a
``TextFieldEmbedder``. This output is a dictionary mapping keys to ``TokenIndexer``
tensors. At its most basic, using a ``SingleIdTokenIndexer`` this is: ``{"tokens":
Tensor(batch_size, num_tokens)}``. This dictionary will have the same keys as were used
for the ``TokenIndexers`` when you created the ``TextField`` representing your
sequence. The dictionary is designed to be passed directly to a ``TextFieldEmbedder``,
which knows how to combine different word representations into a single vector per
token in your input.
    verb_indicator : torch.LongTensor, required
An integer ``SequenceFeatureField`` representation of the position of the verb
in the sentence. This should have shape (batch_size, num_tokens) and importantly, can be
all zeros, in the case that the sentence has no verbal predicate.
tags : torch.LongTensor, optional (default = None)
A torch tensor representing the sequence of integer gold class labels
of shape ``(batch_size, num_tokens)``
    metadata : ``List[Dict[str, Any]]``, optional (default = None)
        Metadata containing the original words in the sentence and the verb to compute the
        frame for, under 'words' and 'verb' keys, respectively.
Returns
-------
An output dictionary consisting of:
logits : torch.FloatTensor
A tensor of shape ``(batch_size, num_tokens, tag_vocab_size)`` representing
unnormalised log probabilities of the tag classes.
class_probabilities : torch.FloatTensor
A tensor of shape ``(batch_size, num_tokens, tag_vocab_size)`` representing
a distribution of the tag classes per word.
loss : torch.FloatTensor, optional
A scalar loss to be optimised.
"""
embedded_text_input = self.embedding_dropout(self.text_field_embedder(tokens))
mask = get_text_field_mask(tokens)
embedded_verb_indicator = self.binary_feature_embedding(verb_indicator.long())
# Concatenate the verb feature onto the embedded text. This now
# has shape (batch_size, sequence_length, embedding_dim + binary_feature_dim).
embedded_text_with_verb_indicator = torch.cat([embedded_text_input, embedded_verb_indicator], -1)
batch_size, sequence_length, _ = embedded_text_with_verb_indicator.size()
encoded_text = self.encoder(embedded_text_with_verb_indicator, mask)
logits = self.tag_projection_layer(encoded_text)
reshaped_log_probs = logits.view(-1, self.num_classes)
class_probabilities = F.softmax(reshaped_log_probs, dim=-1).view([batch_size,
sequence_length,
self.num_classes])
output_dict = {"logits": logits, "class_probabilities": class_probabilities}
if tags is not None:
loss = sequence_cross_entropy_with_logits(logits,
tags,
mask,
label_smoothing=self._label_smoothing)
if not self.ignore_span_metric:
self.span_metric(class_probabilities, tags, mask)
output_dict["loss"] = loss
# We need to retain the mask in the output dictionary
# so that we can crop the sequences to remove padding
# when we do viterbi inference in self.decode.
output_dict["mask"] = mask
    if metadata is not None:
        # Only unpack the metadata when it is provided (it defaults to None).
        words, verbs = zip(*[(x["words"], x["verb"]) for x in metadata])
        output_dict["words"] = list(words)
        output_dict["verb"] = list(verbs)
return output_dict
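The call this page is about appears in the line embedded_verb_indicator = self.binary_feature_embedding(verb_indicator.long()): the 0/1 verb indicator is cast to int64 with .long() before being fed to an nn.Embedding and concatenated onto the token embeddings. The standalone sketch below reproduces just that step; the sizes (batch_size, seq_len, embedding_dim, binary_feature_dim) are made up for illustration and are not taken from the model above.

import torch
import torch.nn as nn

batch_size, seq_len = 2, 5
embedding_dim, binary_feature_dim = 16, 4

# Stand-ins for the embedded tokens and the 0/1 verb indicator.
embedded_text = torch.randn(batch_size, seq_len, embedding_dim)
verb_indicator = torch.zeros(batch_size, seq_len)
verb_indicator[:, 2] = 1  # pretend the predicate is the third token

# .long() casts the indicator to int64 so it can index the embedding table.
binary_feature_embedding = nn.Embedding(2, binary_feature_dim)
embedded_verb = binary_feature_embedding(verb_indicator.long())

# Same concatenation as in the example:
# shape (batch_size, seq_len, embedding_dim + binary_feature_dim).
combined = torch.cat([embedded_text, embedded_verb], -1)
print(combined.shape)  # torch.Size([2, 5, 20])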