This article collects typical usage examples of the torch.LongTensor.new_full method in Python. If you are unsure what LongTensor.new_full does or how to call it, the curated code examples below should help; you can also read more about the containing class, torch.LongTensor.

One code example of LongTensor.new_full is shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the site recommend better Python code examples.
Example 1: greedy_predict
# Required import: from torch import LongTensor
# Method demonstrated: torch.LongTensor.new_full
import torch
import torch.nn.functional as F
# ``Embedding`` in the original project (likely AllenNLP) is a thin wrapper around
# torch.nn.Embedding; the torch.nn classes work for this snippet.
from torch.nn import Embedding, GRUCell, Linear
def greedy_predict(self,
                   final_encoder_output: torch.LongTensor,
                   target_embedder: Embedding,
                   decoder_cell: GRUCell,
                   output_projection_layer: Linear) -> torch.Tensor:
    """
    Greedily produces a sequence using the provided ``decoder_cell``.
    Returns the predicted sequence.

    Parameters
    ----------
    final_encoder_output : ``torch.LongTensor``, required
        Vector produced by ``self._encoder``.
    target_embedder : ``Embedding``, required
        Used to embed the target tokens.
    decoder_cell : ``GRUCell``, required
        The recurrent cell used at each time step.
    output_projection_layer : ``Linear``, required
        Linear layer mapping to the desired number of classes.
    """
    num_decoding_steps = self._max_decoding_steps
    decoder_hidden = final_encoder_output
    batch_size = final_encoder_output.size()[0]
    # Seed the decoder with a (batch_size,) tensor of start-symbol indices.
    # ``new_full`` places it on the same device as ``final_encoder_output``.
    predictions = [final_encoder_output.new_full(
            (batch_size,), fill_value=self._start_index, dtype=torch.long
    )]
    for _ in range(num_decoding_steps):
        # Feed the previous step's predictions back in as the next input.
        input_choices = predictions[-1]
        decoder_input = target_embedder(input_choices)
        decoder_hidden = decoder_cell(decoder_input, decoder_hidden)
        # (batch_size, num_classes)
        output_projections = output_projection_layer(decoder_hidden)
        class_probabilities = F.softmax(output_projections, dim=-1)
        # Greedy choice: take the highest-probability class at each step.
        _, predicted_classes = torch.max(class_probabilities, 1)
        predictions.append(predicted_classes)
    all_predictions = torch.cat([ps.unsqueeze(1) for ps in predictions], 1)
    # Drop start symbol and return.
    return all_predictions[:, 1:]
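
For readers focused on new_full itself: Tensor.new_full(size, fill_value) returns a new tensor of the given shape filled with fill_value, inheriting the dtype and device of the tensor it is called on unless overridden (above, the dtype is overridden to torch.long). Below is a minimal standalone sketch of the same pattern, with made-up values for the batch size and the start-symbol index:

import torch

encoder_output = torch.randn(4, 16)   # stand-in for final_encoder_output, batch_size = 4
start_index = 2                       # hypothetical start-symbol id

# Same pattern as in greedy_predict: a (batch_size,) LongTensor of start symbols,
# created on the same device as encoder_output.
start_predictions = encoder_output.new_full(
    (encoder_output.size(0),), fill_value=start_index, dtype=torch.long
)
print(start_predictions)              # tensor([2, 2, 2, 2])

Compared with torch.full, new_full removes the need to pass the device explicitly, so the start-symbol tensor automatically ends up on the same CPU/GPU as the encoder output.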