This article collects typical usage examples of the torch.LongTensor.new_full method in Python. If you are wondering how LongTensor.new_full works, what it is used for, or what calling it looks like in practice, the curated code examples here may help. You can also browse further usage examples for torch.LongTensor, the class this method belongs to.
One code example of the LongTensor.new_full method is shown below; examples are sorted by popularity by default. You can upvote the ones you like or find useful, and your feedback helps the system recommend better Python code examples.
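Before the full example, a minimal sketch of what new_full does on its own may be useful: it allocates a new tensor of the requested shape, filled with a single value, and inherits the dtype and device of the source tensor unless they are overridden. The shapes and fill values below are arbitrary.

import torch

indices = torch.LongTensor([1, 2, 3])                  # source tensor: dtype=torch.int64 on CPU
starts = indices.new_full((4,), fill_value=7)          # shape (4,), filled with 7, same dtype/device as indices
scores = torch.randn(2, 3)                             # a float tensor
zeros = scores.new_full((2, 3), 0.0)                   # inherits the float dtype from scores
labels = scores.new_full((5,), 9, dtype=torch.long)    # dtype override, as in the example below
print(starts.dtype, zeros.dtype, labels.dtype)         # torch.int64 torch.float32 torch.int64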
Example 1: greedy_predict
# Required import: from torch import LongTensor [as alias]
# Or: from torch.LongTensor import new_full [as alias]
import torch
import torch.nn.functional as F
from torch.nn import GRUCell, Linear
from allennlp.modules.token_embedders import Embedding


def greedy_predict(self,
                   final_encoder_output: torch.LongTensor,
                   target_embedder: Embedding,
                   decoder_cell: GRUCell,
                   output_projection_layer: Linear) -> torch.Tensor:
    """
    Greedily produces a sequence using the provided ``decoder_cell``.
    Returns the predicted sequence.

    Parameters
    ----------
    final_encoder_output : ``torch.LongTensor``, required
        Vector produced by ``self._encoder``.
    target_embedder : ``Embedding``, required
        Used to embed the target tokens.
    decoder_cell : ``GRUCell``, required
        The recurrent cell used at each time step.
    output_projection_layer : ``Linear``, required
        Linear layer mapping to the desired number of classes.
    """
    num_decoding_steps = self._max_decoding_steps
    decoder_hidden = final_encoder_output
    batch_size = final_encoder_output.size()[0]
    # Seed the decoder with a (batch_size,) tensor of start-symbol indices; new_full
    # keeps the device of ``final_encoder_output`` while forcing an integer dtype.
    predictions = [final_encoder_output.new_full(
        (batch_size,), fill_value=self._start_index, dtype=torch.long
    )]
    for _ in range(num_decoding_steps):
        input_choices = predictions[-1]
        decoder_input = target_embedder(input_choices)
        decoder_hidden = decoder_cell(decoder_input, decoder_hidden)
        # (batch_size, num_classes)
        output_projections = output_projection_layer(decoder_hidden)
        class_probabilities = F.softmax(output_projections, dim=-1)
        _, predicted_classes = torch.max(class_probabilities, 1)
        predictions.append(predicted_classes)
    all_predictions = torch.cat([ps.unsqueeze(1) for ps in predictions], 1)
    # Drop start symbol and return.
    return all_predictions[:, 1:]
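For completeness, here is a minimal, self-contained sketch of how the method above could be exercised outside its original class. It assumes greedy_predict is defined at module level exactly as shown; the _ToyDecoder class, the use of torch.nn.Embedding in place of AllenNLP's Embedding wrapper, and all dimensions and hyperparameters are illustrative assumptions, not part of the original code.

import torch
from torch.nn import Embedding, GRUCell, Linear

class _ToyDecoder:
    """Hypothetical stand-in exposing only the attributes greedy_predict reads."""
    def __init__(self, max_decoding_steps: int, start_index: int):
        self._max_decoding_steps = max_decoding_steps
        self._start_index = start_index

# Reuse the function shown above as a method of the toy class.
_ToyDecoder.greedy_predict = greedy_predict

batch_size, hidden_size, num_classes = 4, 16, 10
decoder = _ToyDecoder(max_decoding_steps=5, start_index=0)
encoder_output = torch.randn(batch_size, hidden_size)   # stand-in for the final encoder state

predicted = decoder.greedy_predict(
    encoder_output,
    Embedding(num_classes, hidden_size),    # torch.nn.Embedding instead of allennlp's Embedding
    GRUCell(hidden_size, hidden_size),
    Linear(hidden_size, num_classes),
)
print(predicted.shape)   # torch.Size([4, 5]): one predicted class index per decoding step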