

Python util.get_final_encoder_states Method Code Examples

This article collects typical code examples of the allennlp.nn.util.get_final_encoder_states method in Python. If you are wondering what util.get_final_encoder_states does, how to call it, or what real-world usage looks like, the curated examples below should help. You can also explore further usage examples from the containing module, allennlp.nn.util.


The following presents 13 code examples of the util.get_final_encoder_states method, sorted by popularity by default.
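For orientation, here is a minimal sketch of the call itself; the shapes and values are illustrative only, and the behaviour mirrors the unit tests in Examples 9 and 11 below:

import torch
from allennlp.nn import util

# (batch_size=2, sequence_length=3, encoder_output_dim=4)
encoder_outputs = torch.randn(2, 3, 4)
# The second sequence has one padded (masked-out) position at the end.
mask = torch.tensor([[True, True, True], [True, True, False]])

# Unidirectional case: take the output at the last unpadded timestep of each
# sequence -> shape (batch_size, encoder_output_dim) = (2, 4).
final_states = util.get_final_encoder_states(encoder_outputs, mask, bidirectional=False)

# Bidirectional case: the first half of the hidden dimension comes from the last
# unpadded timestep, the second half from the first timestep -> shape (2, 4).
final_states_bi = util.get_final_encoder_states(encoder_outputs, mask, bidirectional=True)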

Example 1: embedd_encode_and_aggregate_text_field

# Required import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import get_final_encoder_states [as alias]
def embedd_encode_and_aggregate_text_field(question: Dict[str, torch.LongTensor],
                                           text_field_embedder,
                                           embeddings_dropout,
                                           encoder,
                                           aggregation_type,
                                           get_last_states=False):
    embedded_question = text_field_embedder(question)
    question_mask = get_text_field_mask(question).float()
    embedded_question = embeddings_dropout(embedded_question)

    encoded_question = encoder(embedded_question, question_mask)

    # aggregate sequences to a single item
    encoded_question_aggregated = seq2vec_seq_aggregate(encoded_question, question_mask, aggregation_type,
                                                        encoder.is_bidirectional(), 1)  # bs X d

    last_hidden_states = None
    if get_last_states:
        last_hidden_states = get_final_encoder_states(encoded_question, question_mask, encoder.is_bidirectional())

    return encoded_question_aggregated, last_hidden_states 
Developer: allenai, Project: OpenBookQA, Lines of code: 23, Source file: knowledge.py

Example 2: embed_encode_and_aggregate_text_field_with_feats

# Required import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import get_final_encoder_states [as alias]
def embed_encode_and_aggregate_text_field_with_feats(question: Dict[str, torch.LongTensor],
                                           text_field_embedder,
                                           embeddings_dropout,
                                           encoder,
                                           aggregation_type,
                                           token_features=None,
                                           get_last_states=False):
    embedded_question = text_field_embedder(question)
    question_mask = get_text_field_mask(question).float()
    embedded_question = embeddings_dropout(embedded_question)
    if token_features is not None:
        embedded_question = torch.cat([embedded_question, token_features], dim=-1)

    encoded_question = encoder(embedded_question, question_mask)

    # aggregate sequences to a single item
    encoded_question_aggregated = seq2vec_seq_aggregate(encoded_question, question_mask, aggregation_type,
                                                        encoder.is_bidirectional(), 1)  # bs X d

    last_hidden_states = None
    if get_last_states:
        last_hidden_states = get_final_encoder_states(encoded_question, question_mask, encoder.is_bidirectional())

    return encoded_question_aggregated, last_hidden_states 
Developer: allenai, Project: OpenBookQA, Lines of code: 26, Source file: knowledge.py

Example 3: embed_encode_and_aggregate_text_field_with_feats_only

# Required import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import get_final_encoder_states [as alias]
def embed_encode_and_aggregate_text_field_with_feats_only(question: Dict[str, torch.LongTensor],
                                           text_field_embedder,
                                           embeddings_dropout,
                                           encoder,
                                           aggregation_type,
                                           token_features=None,
                                           get_last_states=False):
    embedded_question = text_field_embedder(question)
    question_mask = get_text_field_mask(question).float()
    embedded_question = embeddings_dropout(embedded_question)
    if token_features is not None:
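        # Only the token features are kept here (torch.cat over a single tensor);
        # the embedded question computed above is discarded, per the *_feats_only name.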
        embedded_question = torch.cat([token_features], dim=-1)

    encoded_question = encoder(embedded_question, question_mask)

    # aggregate sequences to a single item
    encoded_question_aggregated = seq2vec_seq_aggregate(encoded_question, question_mask, aggregation_type,
                                                        encoder.is_bidirectional(), 1)  # bs X d

    last_hidden_states = None
    if get_last_states:
        last_hidden_states = get_final_encoder_states(encoded_question, question_mask, encoder.is_bidirectional())

    return encoded_question_aggregated, last_hidden_states 
Developer: allenai, Project: OpenBookQA, Lines of code: 26, Source file: knowledge.py

Example 4: _init_decoder_state

# Required import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import get_final_encoder_states [as alias]
def _init_decoder_state(
        self, state: Dict[str, torch.Tensor]
    ) -> Dict[str, torch.Tensor]:
        """
        Initialize the encoded state to be passed to the first decoding time step.
        """
        batch_size, _ = state["source_mask"].size()

        # Initialize the decoder hidden state with the final output of the encoder,
        # and the decoder context with zeros.
        # shape: (batch_size, encoder_output_dim)
        final_encoder_output = util.get_final_encoder_states(
            state["encoder_outputs"],
            state["source_mask"],
            self._encoder.is_bidirectional(),
        )
        # shape: (batch_size, decoder_output_dim)
        state["decoder_hidden"] = final_encoder_output
        # shape: (batch_size, decoder_output_dim)
        state["decoder_context"] = state["encoder_outputs"].new_zeros(
            batch_size, self.decoder_output_dim
        )

        return state 
Developer: epwalsh, Project: nlp-models, Lines of code: 26, Source file: copynet.py

Example 5: _init_decoder_state

# Required import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import get_final_encoder_states [as alias]
def _init_decoder_state(self, state: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
        batch_size = state["source_mask"].size(0)
        # shape: (batch_size, encoder_output_dim)
        final_encoder_output = util.get_final_encoder_states(
                state["encoder_outputs"],
                state["source_mask"],
                self._encoder.is_bidirectional())
        # Initialize the decoder hidden state with the final output of the encoder.
        # shape: (batch_size, decoder_output_dim)
        state["decoder_hidden"] = final_encoder_output

        encoder_outputs = state["encoder_outputs"]
        state["decoder_context"] = encoder_outputs.new_zeros(batch_size, self._decoder_output_dim)
        if self._embed_attn_to_output:
            state["attn_context"] = encoder_outputs.new_zeros(encoder_outputs.size(0), encoder_outputs.size(2))
        if self._use_coverage:
            state["coverage"] = encoder_outputs.new_zeros(batch_size, encoder_outputs.size(1))
        return state 
Developer: IlyaGusev, Project: summarus, Lines of code: 20, Source file: pgn.py

Example 6: __init__

# Required import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import get_final_encoder_states [as alias]
def __init__(self, vocab: Vocabulary,
                 text_field_embedder: TextFieldEmbedder,
                 text_encoder: Seq2SeqEncoder,
                 classifier_feedforward: FeedForward,
                 verbose_metrics: bool = False,
                 initializer: InitializerApplicator = InitializerApplicator(),
                 regularizer: Optional[RegularizerApplicator] = None,
                 ) -> None:
        super(TextClassifier, self).__init__(vocab, regularizer)

        self.text_field_embedder = text_field_embedder
        self.num_classes = self.vocab.get_vocab_size("labels")
        self.text_encoder = text_encoder
        self.classifier_feedforward = classifier_feedforward
        self.prediction_layer = torch.nn.Linear(self.classifier_feedforward.get_output_dim(), self.num_classes)

        self.label_accuracy = CategoricalAccuracy()
        self.label_f1_metrics = {}

        self.verbose_metrics = verbose_metrics

        for i in range(self.num_classes):
            self.label_f1_metrics[vocab.get_token_from_index(index=i, namespace="labels")] = F1Measure(positive_label=i)
        self.loss = torch.nn.CrossEntropyLoss()

        self.pool = lambda text, mask: util.get_final_encoder_states(text, mask, bidirectional=True)

        initializer(self) 
Developer: allenai, Project: scibert, Lines of code: 30, Source file: text_classifier.py

Example 7: seq2vec_seq_aggregate

# Required import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import get_final_encoder_states [as alias]
def seq2vec_seq_aggregate(seq_tensor, mask, aggregate, bidirectional, dim=1):
    """
        Takes the aggregation of sequence tensor

        :param seq_tensor: Batched sequence requires [batch, seq, hs]
        :param mask: binary mask with shape batch, seq_len, 1
        :param aggregate: max, avg, sum
        :param dim: The dimension to take the max. for batch, seq, hs it is 1
        :return:
    """

    seq_tensor_masked = seq_tensor * mask.unsqueeze(-1)
    aggr_func = None
    if aggregate == "last":
        seq = get_final_encoder_states(seq_tensor, mask, bidirectional)
    elif aggregate == "max":
        aggr_func = torch.max
        seq, _ = aggr_func(seq_tensor_masked, dim=dim)
    elif aggregate == "min":
        aggr_func = torch.min
        seq, _ = aggr_func(seq_tensor_masked, dim=dim)
    elif aggregate == "sum":
        aggr_func = torch.sum
        seq = aggr_func(seq_tensor_masked, dim=dim)
    elif aggregate == "avg":
        aggr_func = torch.sum
        seq = aggr_func(seq_tensor_masked, dim=dim)
        seq_lens = torch.sum(mask, dim=dim)  # shape: (batch_size,)
        seq = seq / seq_lens.view([-1, 1])

    return seq 
Developer: allenai, Project: OpenBookQA, Lines of code: 33, Source file: util.py
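As a hypothetical standalone illustration (not part of the OpenBookQA source), the helper above could be called on a padded batch as follows, assuming seq2vec_seq_aggregate is in scope:

import torch

seq_tensor = torch.randn(2, 3, 4)                         # [batch, seq, hidden]
mask = torch.tensor([[1.0, 1.0, 1.0], [1.0, 1.0, 0.0]])   # [batch, seq], 0 marks padding

# Max- and average-pool over the unmasked timesteps -> shape (2, 4) each.
max_pooled = seq2vec_seq_aggregate(seq_tensor, mask, "max", False, dim=1)
avg_pooled = seq2vec_seq_aggregate(seq_tensor, mask, "avg", False, dim=1)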

Example 8: forward

# Required import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import get_final_encoder_states [as alias]
def forward(self, tokens: torch.Tensor, mask: torch.BoolTensor = None):
        # tokens is assumed to have shape (batch_size, sequence_length, embedding_dim).
        # mask is assumed to have shape (batch_size, sequence_length) with all 1s preceding all 0s.
        if not self._cls_is_last_token:
            return tokens[:, 0, :]
        else:  # [CLS] at the end
            if mask is None:
                raise ValueError("Must provide mask for transformer models with [CLS] at the end.")
            return get_final_encoder_states(tokens, mask) 
Developer: allenai, Project: allennlp, Lines of code: 11, Source file: cls_pooler.py

Example 9: test_get_final_encoder_states

# Required import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import get_final_encoder_states [as alias]
def test_get_final_encoder_states(self):
        encoder_outputs = torch.Tensor(
            [
                [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]],
                [[13, 14, 15, 16], [17, 18, 19, 20], [21, 22, 23, 24]],
            ]
        )
        mask = torch.tensor([[True, True, True], [True, True, False]])
        final_states = util.get_final_encoder_states(encoder_outputs, mask, bidirectional=False)
        assert_almost_equal(final_states.data.numpy(), [[9, 10, 11, 12], [17, 18, 19, 20]])
        final_states = util.get_final_encoder_states(encoder_outputs, mask, bidirectional=True)
        assert_almost_equal(final_states.data.numpy(), [[9, 10, 3, 4], [17, 18, 15, 16]]) 
Developer: allenai, Project: allennlp, Lines of code: 14, Source file: util_test.py

Example 10: _get_initial_rnn_state

# Required import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import get_final_encoder_states [as alias]
def _get_initial_rnn_state(self, sentence):
        embedded_input = self._sentence_embedder(sentence)
        # (batch_size, sentence_length)
        sentence_mask = util.get_text_field_mask(sentence).float()

        batch_size = embedded_input.size(0)

        # (batch_size, sentence_length, encoder_output_dim)
        encoder_outputs = self._dropout(self._encoder(embedded_input, sentence_mask))

        final_encoder_output = util.get_final_encoder_states(encoder_outputs,
                                                             sentence_mask,
                                                             self._encoder.is_bidirectional())
        memory_cell = encoder_outputs.new_zeros(batch_size, self._encoder.get_output_dim())
        attended_sentence = self._decoder_step.attend_on_sentence(final_encoder_output,
                                                                  encoder_outputs, sentence_mask)
        encoder_outputs_list = [encoder_outputs[i] for i in range(batch_size)]
        sentence_mask_list = [sentence_mask[i] for i in range(batch_size)]
        initial_rnn_state = []
        for i in range(batch_size):
            initial_rnn_state.append(RnnState(final_encoder_output[i],
                                              memory_cell[i],
                                              self._first_action_embedding,
                                              attended_sentence[i],
                                              encoder_outputs_list,
                                              sentence_mask_list))
        return initial_rnn_state 
Developer: plasticityai, Project: magnitude, Lines of code: 29, Source file: nlvr_semantic_parser.py

Example 11: test_get_final_encoder_states

# Required import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import get_final_encoder_states [as alias]
def test_get_final_encoder_states(self):
        encoder_outputs = torch.Tensor([[[1, 2, 3, 4],
                                         [5, 6, 7, 8],
                                         [9, 10, 11, 12]],
                                        [[13, 14, 15, 16],
                                         [17, 18, 19, 20],
                                         [21, 22, 23, 24]]])
        mask = torch.Tensor([[1, 1, 1], [1, 1, 0]])
        final_states = util.get_final_encoder_states(encoder_outputs, mask, bidirectional=False)
        assert_almost_equal(final_states.data.numpy(), [[9, 10, 11, 12], [17, 18, 19, 20]])
        final_states = util.get_final_encoder_states(encoder_outputs, mask, bidirectional=True)
        assert_almost_equal(final_states.data.numpy(), [[9, 10, 3, 4], [17, 18, 15, 16]]) 
Developer: plasticityai, Project: magnitude, Lines of code: 14, Source file: util_test.py

Example 12: _get_initial_rnn_state

# Required import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import get_final_encoder_states [as alias]
def _get_initial_rnn_state(self, sentence: Dict[str, torch.LongTensor]):
        embedded_input = self._sentence_embedder(sentence)
        # (batch_size, sentence_length)
        sentence_mask = util.get_text_field_mask(sentence)

        batch_size = embedded_input.size(0)

        # (batch_size, sentence_length, encoder_output_dim)
        encoder_outputs = self._dropout(self._encoder(embedded_input, sentence_mask))

        final_encoder_output = util.get_final_encoder_states(
            encoder_outputs, sentence_mask, self._encoder.is_bidirectional()
        )
        memory_cell = encoder_outputs.new_zeros(batch_size, self._encoder.get_output_dim())
        attended_sentence, _ = self._decoder_step.attend_on_question(
            final_encoder_output, encoder_outputs, sentence_mask
        )
        encoder_outputs_list = [encoder_outputs[i] for i in range(batch_size)]
        sentence_mask_list = [sentence_mask[i] for i in range(batch_size)]
        initial_rnn_state = []
        for i in range(batch_size):
            initial_rnn_state.append(
                RnnStatelet(
                    final_encoder_output[i],
                    memory_cell[i],
                    self._first_action_embedding,
                    attended_sentence[i],
                    encoder_outputs_list,
                    sentence_mask_list,
                )
            )
        return initial_rnn_state 
Developer: allenai, Project: allennlp-semparse, Lines of code: 34, Source file: nlvr_semantic_parser.py

Example 13: _get_initial_state

# Required import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import get_final_encoder_states [as alias]
def _get_initial_state(
        self, encoder_outputs: torch.Tensor, mask: torch.Tensor, actions: List[List[ProductionRule]]
    ) -> GrammarBasedState:

        batch_size = encoder_outputs.size(0)
        # This will be our initial hidden state and memory cell for the decoder LSTM.
        final_encoder_output = util.get_final_encoder_states(
            encoder_outputs, mask, self._encoder.is_bidirectional()
        )
        memory_cell = encoder_outputs.new_zeros(batch_size, self._encoder.get_output_dim())
        initial_score = encoder_outputs.data.new_zeros(batch_size)

        # To make grouping states together in the decoder easier, we convert the batch dimension in
        # all of our tensors into an outer list.  For instance, the encoder outputs have shape
        # `(batch_size, utterance_length, encoder_output_dim)`.  We need to convert this into a list
        # of `batch_size` tensors, each of shape `(utterance_length, encoder_output_dim)`.  Then we
        # won't have to do any index selects, or anything, we'll just do some `torch.cat()`s.
        initial_score_list = [initial_score[i] for i in range(batch_size)]
        encoder_output_list = [encoder_outputs[i] for i in range(batch_size)]
        utterance_mask_list = [mask[i] for i in range(batch_size)]
        initial_rnn_state = []
        for i in range(batch_size):
            initial_rnn_state.append(
                RnnStatelet(
                    final_encoder_output[i],
                    memory_cell[i],
                    self._first_action_embedding,
                    self._first_attended_utterance,
                    encoder_output_list,
                    utterance_mask_list,
                )
            )

        initial_grammar_state = [self._create_grammar_state(actions[i]) for i in range(batch_size)]

        initial_state = GrammarBasedState(
            batch_indices=list(range(batch_size)),
            action_history=[[] for _ in range(batch_size)],
            score=initial_score_list,
            rnn_state=initial_rnn_state,
            grammar_state=initial_grammar_state,
            possible_actions=actions,
            debug_info=None,
        )
        return initial_state 
Developer: allenai, Project: allennlp-semparse, Lines of code: 47, Source file: text2sql_parser.py


Note: The allennlp.nn.util.get_final_encoder_states examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are excerpted from open-source projects contributed by their respective developers, and copyright of the source code remains with the original authors. Please refer to the corresponding project's license before distributing or using the code; do not reproduce without permission.