

Python token_embedders.ElmoTokenEmbedder Method Code Examples

This article collects typical usage examples of the Python method allennlp.modules.token_embedders.ElmoTokenEmbedder. If you are wondering what token_embedders.ElmoTokenEmbedder does, how to call it, or what it looks like in real code, the curated examples below should help. You can also explore the other members of allennlp.modules.token_embedders.


The sections below present 3 code examples of token_embedders.ElmoTokenEmbedder, ordered by popularity by default.
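Before the project examples, here is a minimal standalone sketch of constructing the embedder directly. This is a sketch under assumptions, not code from any of the projects below: the options_file and weight_file paths are placeholders for a downloaded ELMo model, and it assumes the 0.x/1.x-era constructor signature. batch_to_ids (from allennlp.modules.elmo) converts tokenised sentences into the character-id tensor the embedder expects.

from allennlp.modules.elmo import batch_to_ids
from allennlp.modules.token_embedders import ElmoTokenEmbedder

# Placeholder paths: point these at a downloaded ELMo options/weights pair.
options_file = "elmo_options.json"
weight_file = "elmo_weights.hdf5"

embedder = ElmoTokenEmbedder(options_file, weight_file)

# batch_to_ids produces character ids of shape (batch_size, max_seq_len, 50).
char_ids = batch_to_ids([["I", "like", "ELMo", "."], ["Short", "sentence"]])
embeddings = embedder(char_ids)  # (batch_size, max_seq_len, output_dim)
print(embedder.get_output_dim())  # e.g. 1024 for the original ELMo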

Example 1: _run_test

# Required import: from allennlp.modules import token_embedders [as alias]
# Or: from allennlp.modules.token_embedders import ElmoTokenEmbedder [as alias]
import numpy
import torch

from allennlp.modules.token_embedders import ElmoTokenEmbedder

def _run_test(self, requires_grad):
    embedder = ElmoTokenEmbedder(
        self.options_file, self.weight_file, requires_grad=requires_grad
    )
    batch_size = 3
    seq_len = 4
    # Random ELMo character ids: 262 possible ids, at most 50 characters per token.
    char_ids = torch.from_numpy(numpy.random.randint(0, 262, (batch_size, seq_len, 50)))
    embeddings = embedder(char_ids)
    loss = embeddings.sum()
    loss.backward()

    elmo_grads = [
        param.grad for name, param in embedder.named_parameters() if "_elmo_lstm" in name
    ]
    if requires_grad:
        # None of the ELMo grads should be None.
        assert all(grad is not None for grad in elmo_grads)
    else:
        # All of the ELMo grads should be None.
        assert all(grad is None for grad in elmo_grads)
Developer: allenai | Project: allennlp | Lines: 22 | Source: elmo_test.py
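A note on the test inputs: the random character ids are drawn from range(0, 262) with a trailing dimension of 50 because, to the best of my knowledge, ELMo's character embedding table has 262 entries (the 256 byte values plus a handful of special begin/end/padding symbols) and each token is padded or truncated to 50 characters. The requires_grad flag controls whether the pretrained ELMo weights are fine-tuned, which is exactly what the gradient assertions verify.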

Example 2: _run_test

# Required import: from allennlp.modules import token_embedders [as alias]
# Or: from allennlp.modules.token_embedders import ElmoTokenEmbedder [as alias]
import numpy
import torch

from allennlp.modules.token_embedders import ElmoTokenEmbedder

def _run_test(self, requires_grad):
    embedder = ElmoTokenEmbedder(self.options_file, self.weight_file, requires_grad=requires_grad)
    batch_size = 3
    seq_len = 4
    char_ids = torch.from_numpy(numpy.random.randint(0, 262, (batch_size, seq_len, 50)))
    embeddings = embedder(char_ids)
    loss = embeddings.sum()
    loss.backward()

    # The u'' prefix is a Python 2 compatibility leftover from the source project.
    elmo_grads = [param.grad for name, param in embedder.named_parameters() if u'_elmo_lstm' in name]
    if requires_grad:
        # None of the ELMo grads should be None.
        assert all([grad is not None for grad in elmo_grads])
    else:
        # All of the ELMo grads should be None.
        assert all([grad is None for grad in elmo_grads])
Developer: plasticityai | Project: magnitude | Lines: 18 | Source: elmo_test.py

Example 3: forward

# Required import: from allennlp.modules import token_embedders [as alias]
# Or: from allennlp.modules.token_embedders import ElmoTokenEmbedder [as alias]
from typing import Dict

import torch
import torch.nn.functional as F

from allennlp.modules.token_embedders import ElmoTokenEmbedder
from allennlp.nn.util import get_text_field_mask

def forward(self,  # type: ignore
            hypothesis0: Dict[str, torch.LongTensor],
            hypothesis1: Dict[str, torch.LongTensor],
            hypothesis2: Dict[str, torch.LongTensor],
            hypothesis3: Dict[str, torch.LongTensor],
            label: torch.IntTensor = None,
            ) -> Dict[str, torch.Tensor]:
    # pylint: disable=arguments-differ
    """
    Parameters
    ----------
    hypothesis0, hypothesis1, hypothesis2, hypothesis3 : Dict[str, torch.LongTensor]
        The four candidate endings, each produced by a ``TextField``.
    label : torch.IntTensor, optional
        The index of the correct ending.

    Returns
    -------
    An output dictionary consisting of:
    label_logits : torch.FloatTensor
        A tensor of shape ``(batch_size, 4)`` containing an unnormalised
        score for each candidate ending.
    label_probs : torch.FloatTensor
        A tensor of shape ``(batch_size, 4)`` containing a softmax
        distribution over the four endings.
    loss : torch.FloatTensor, optional
        A scalar loss to be optimised.
    """
    logits = []
    for tokens in [hypothesis0, hypothesis1, hypothesis2, hypothesis3]:
        # ELMo's internal LSTM is stateful, so reset it before each
        # hypothesis to keep the four encodings independent.
        if isinstance(self.text_field_embedder, ElmoTokenEmbedder):
            self.text_field_embedder._elmo._elmo_lstm._elmo_lstm.reset_states()

        embedded_text_input = self.embedding_dropout(self.text_field_embedder(tokens))
        mask = get_text_field_mask(tokens)

        batch_size, sequence_length, _ = embedded_text_input.size()

        encoded_text = self.encoder(embedded_text_input, mask)

        # Max-pool over time, then score the pooled encoding.
        logits.append(self.output_prediction(encoded_text.max(1)[0]))

    logits = torch.cat(logits, -1)
    class_probabilities = F.softmax(logits, dim=-1).view([batch_size, 4])
    output_dict = {"label_logits": logits, "label_probs": class_probabilities}

    if label is not None:
        loss = self._loss(logits, label.long().view(-1))
        self._accuracy(logits, label.squeeze(-1))
        output_dict["loss"] = loss

    return output_dict
Developer: rowanz | Project: swagaf | Lines: 50 | Source: lstm_swag.py
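A note on the design: this model passes an ElmoTokenEmbedder directly as its text_field_embedder, which is why it can reach into the private _elmo attributes to reset the stateful LSTM between hypotheses. In AllenNLP it is more common to wrap token embedders in a BasicTextFieldEmbedder keyed by token-indexer name. The sketch below shows that pattern under assumptions: the file paths are placeholders, and the "elmo" key must match the TokenIndexer name used by the dataset reader.

from allennlp.modules.text_field_embedders import BasicTextFieldEmbedder
from allennlp.modules.token_embedders import ElmoTokenEmbedder

# Placeholder paths; the "elmo" key is hypothetical and must match the
# dataset reader's token indexers.
elmo = ElmoTokenEmbedder("elmo_options.json", "elmo_weights.hdf5")
text_field_embedder = BasicTextFieldEmbedder({"elmo": elmo})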


Note: The allennlp.modules.token_embedders.ElmoTokenEmbedder examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright in the source code remains with the original authors, and distribution and use are governed by each project's license. Do not reproduce without permission.