This article collects and summarizes typical usage examples of the torch.LongTensor.new_zeros method in Python. If you have been wondering what exactly the Python LongTensor.new_zeros method does, how to call LongTensor.new_zeros, or where to find examples of it, the hand-picked code samples here may help. You can also explore further usage examples of the containing class, torch.LongTensor.
Below, a total of 1 code example of the LongTensor.new_zeros method is shown; by default, examples are sorted by popularity. You can upvote the examples you like or find useful, and your votes help the system recommend better Python code examples.
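Before the full example, here is a minimal, self-contained sketch (not taken from the example below; the variable names are purely illustrative) of what new_zeros does: it returns a zero-filled tensor of the requested shape that inherits the dtype and device of the tensor it is called on, which makes it convenient for prepending or padding columns onto an existing LongTensor.

import torch

head_indices = torch.LongTensor([[2, 0, 2], [0, 1, 3]])  # shape (batch_size=2, sequence_length=3)
# Same dtype (torch.int64) and device as head_indices, but filled with zeros.
root_column = head_indices.new_zeros(2, 1)
# Prepend the zero column, e.g. as a dummy head index for a sentinel root token.
padded = torch.cat([root_column, head_indices], dim=1)
print(padded.dtype, padded.shape)  # torch.int64 torch.Size([2, 4])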
Example 1: forward
# Module to import: from torch import LongTensor [as alias]
# Or: from torch.LongTensor import new_zeros [as alias]
def forward(self, # type: ignore
words: Dict[str, torch.LongTensor],
pos_tags: torch.LongTensor,
metadata: List[Dict[str, Any]],
head_tags: torch.LongTensor = None,
head_indices: torch.LongTensor = None) -> Dict[str, torch.Tensor]:
# pylint: disable=arguments-differ
"""
Parameters
----------
words : Dict[str, torch.LongTensor], required
The output of ``TextField.as_array()``, which should typically be passed directly to a
``TextFieldEmbedder``. This output is a dictionary mapping keys to ``TokenIndexer``
tensors. At its most basic, using a ``SingleIdTokenIndexer`` this is: ``{"tokens":
Tensor(batch_size, sequence_length)}``. This dictionary will have the same keys as were used
for the ``TokenIndexers`` when you created the ``TextField`` representing your
sequence. The dictionary is designed to be passed directly to a ``TextFieldEmbedder``,
which knows how to combine different word representations into a single vector per
token in your input.
pos_tags : ``torch.LongTensor``, required.
The output of a ``SequenceLabelField`` containing POS tags.
POS tags are required regardless of whether they are used in the model,
because they are used to filter the evaluation metric to only consider
heads of words which are not punctuation.
head_tags : torch.LongTensor, optional (default = None)
A torch tensor representing the sequence of integer gold class labels for the arcs
in the dependency parse. Has shape ``(batch_size, sequence_length)``.
head_indices : torch.LongTensor, optional (default = None)
A torch tensor representing the sequence of integer indices denoting the parent of every
word in the dependency parse. Has shape ``(batch_size, sequence_length)``.
Returns
-------
An output dictionary consisting of:
loss : ``torch.FloatTensor``, optional
A scalar loss to be optimised.
arc_loss : ``torch.FloatTensor``
The loss contribution from the unlabeled arcs.
tag_loss : ``torch.FloatTensor``, optional
The loss contribution from predicting the dependency
tags for the gold arcs.
heads : ``torch.FloatTensor``
The predicted head indices for each word. A tensor
of shape (batch_size, sequence_length).
head_tags : ``torch.FloatTensor``
The predicted head tags (dependency labels) for each arc. A tensor
of shape (batch_size, sequence_length).
mask : ``torch.LongTensor``
A mask denoting the padded elements in the batch.
"""
embedded_text_input = self.text_field_embedder(words)
if pos_tags is not None and self._pos_tag_embedding is not None:
embedded_pos_tags = self._pos_tag_embedding(pos_tags)
embedded_text_input = torch.cat([embedded_text_input, embedded_pos_tags], -1)
elif self._pos_tag_embedding is not None:
raise ConfigurationError("Model uses a POS embedding, but no POS tags were passed.")
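# Shape (batch_size, sequence_length): 1 for real tokens, 0 for padding.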
mask = get_text_field_mask(words)
embedded_text_input = self._input_dropout(embedded_text_input)
encoded_text = self.encoder(embedded_text_input, mask)
batch_size, _, encoding_dim = encoded_text.size()
head_sentinel = self._head_sentinel.expand(batch_size, 1, encoding_dim)
# Concatenate the head sentinel onto the sentence representation.
encoded_text = torch.cat([head_sentinel, encoded_text], 1)
mask = torch.cat([mask.new_ones(batch_size, 1), mask], 1)
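# new_zeros produces a (batch_size, 1) column of zeros with the same dtype and device
# as the gold tensors; prepending it keeps the gold head indices and head tags aligned
# with the sentinel position that was just added to the encoded text.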
if head_indices is not None:
head_indices = torch.cat([head_indices.new_zeros(batch_size, 1), head_indices], 1)
if head_tags is not None:
head_tags = torch.cat([head_tags.new_zeros(batch_size, 1), head_tags], 1)
float_mask = mask.float()
encoded_text = self._dropout(encoded_text)
# shape (batch_size, sequence_length, arc_representation_dim)
head_arc_representation = self._dropout(self.head_arc_feedforward(encoded_text))
child_arc_representation = self._dropout(self.child_arc_feedforward(encoded_text))
# shape (batch_size, sequence_length, tag_representation_dim)
head_tag_representation = self._dropout(self.head_tag_feedforward(encoded_text))
child_tag_representation = self._dropout(self.child_tag_feedforward(encoded_text))
# shape (batch_size, sequence_length, sequence_length)
attended_arcs = self.arc_attention(head_arc_representation,
child_arc_representation)
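# Add a large negative value to the arc scores along both dimensions for padded
# positions, so padded tokens are effectively ignored during decoding.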
minus_inf = -1e8
minus_mask = (1 - float_mask) * minus_inf
attended_arcs = attended_arcs + minus_mask.unsqueeze(2) + minus_mask.unsqueeze(1)
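# During training (or when MST decoding is disabled for validation), heads and head tags
# are predicted greedily from the arc scores; otherwise a maximum spanning tree decoder
# is used so that the predicted arcs form a valid tree.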
if self.training or not self.use_mst_decoding_for_validation:
predicted_heads, predicted_head_tags = self._greedy_decode(head_tag_representation,
child_tag_representation,
attended_arcs,
mask)
else:
predicted_heads, predicted_head_tags = self._mst_decode(head_tag_representation,
child_tag_representation,
attended_arcs,
mask)
if head_indices is not None and head_tags is not None:
#......... remainder of the code omitted here .........