

Python token_embedders.ElmoTokenEmbedder Code Examples

This article collects typical usage examples of the Python class allennlp.modules.token_embedders.ElmoTokenEmbedder. If you are unsure what ElmoTokenEmbedder is for or how to use it, the selected code examples below should help. You can also explore further usage examples from the allennlp.modules.token_embedders module.


Three code examples of token_embedders.ElmoTokenEmbedder are shown below, sorted by popularity by default.
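Before the collected examples, here is a minimal, self-contained sketch (not taken from any of the projects below) of how an ElmoTokenEmbedder is typically constructed and applied to the character ids produced by batch_to_ids. The options_file and weight_file paths are placeholders and must point to real pretrained ELMo files.

# Minimal sketch: embed a small batch of sentences with ElmoTokenEmbedder.
# The two file paths are placeholders for pretrained ELMo options/weights.
from allennlp.modules.elmo import batch_to_ids
from allennlp.modules.token_embedders import ElmoTokenEmbedder

options_file = "elmo_options.json"  # placeholder path
weight_file = "elmo_weights.hdf5"   # placeholder path
embedder = ElmoTokenEmbedder(options_file, weight_file)

# batch_to_ids turns tokenised sentences into ELMo character ids of shape
# (batch_size, num_tokens, 50), the same shape the test examples below build randomly.
sentences = [["A", "short", "sentence", "."], ["Another", "one", "."]]
char_ids = batch_to_ids(sentences)

embeddings = embedder(char_ids)  # (batch_size, num_tokens, embedding_dim)
print(embeddings.shape, embedder.get_output_dim())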

Example 1: _run_test

# Required import: from allennlp.modules import token_embedders [as alias]
# Or: from allennlp.modules.token_embedders import ElmoTokenEmbedder [as alias]
import numpy
import torch

from allennlp.modules.token_embedders import ElmoTokenEmbedder
def _run_test(self, requires_grad):
        embedder = ElmoTokenEmbedder(
            self.options_file, self.weight_file, requires_grad=requires_grad
        )
        batch_size = 3
        seq_len = 4
        char_ids = torch.from_numpy(numpy.random.randint(0, 262, (batch_size, seq_len, 50)))
        embeddings = embedder(char_ids)
        loss = embeddings.sum()
        loss.backward()

        elmo_grads = [
            param.grad for name, param in embedder.named_parameters() if "_elmo_lstm" in name
        ]
        if requires_grad:
            # None of the elmo grads should be None.
            assert all(grad is not None for grad in elmo_grads)
        else:
            # All of the elmo grads should be None.
            assert all(grad is None for grad in elmo_grads) 
Author: allenai, Project: allennlp, Lines of code: 22, Source: elmo_test.py

Example 2: _run_test

# Required import: from allennlp.modules import token_embedders [as alias]
# Or: from allennlp.modules.token_embedders import ElmoTokenEmbedder [as alias]
import numpy
import torch

from allennlp.modules.token_embedders import ElmoTokenEmbedder
def _run_test(self, requires_grad):
        embedder = ElmoTokenEmbedder(self.options_file, self.weight_file, requires_grad=requires_grad)
        batch_size = 3
        seq_len = 4
        char_ids = torch.from_numpy(numpy.random.randint(0, 262, (batch_size, seq_len, 50)))
        embeddings = embedder(char_ids)
        loss = embeddings.sum()
        loss.backward()

        elmo_grads = [param.grad for name, param in embedder.named_parameters() if u'_elmo_lstm' in name]
        if requires_grad:
            # None of the elmo grads should be None.
            assert all([grad is not None for grad in elmo_grads])
        else:
            # All of the elmo grads should be None.
            assert all([grad is None for grad in elmo_grads]) 
Author: plasticityai, Project: magnitude, Lines of code: 18, Source: elmo_test.py

Example 3: forward

# Required import: from allennlp.modules import token_embedders [as alias]
# Or: from allennlp.modules.token_embedders import ElmoTokenEmbedder [as alias]
from typing import Dict

import torch
import torch.nn.functional as F

from allennlp.modules.token_embedders import ElmoTokenEmbedder
from allennlp.nn.util import get_text_field_mask
def forward(self,  # type: ignore
                hypothesis0: Dict[str, torch.LongTensor],
                hypothesis1: Dict[str, torch.LongTensor],
                hypothesis2: Dict[str, torch.LongTensor],
                hypothesis3: Dict[str, torch.LongTensor],
                label: torch.IntTensor = None,
                ) -> Dict[str, torch.Tensor]:
        # pylint: disable=arguments-differ
        """
        Parameters
        ----------
        Returns
        -------
        An output dictionary consisting of:
        logits : torch.FloatTensor
            A tensor of shape ``(batch_size, num_tokens, tag_vocab_size)`` representing
            unnormalised log probabilities of the tag classes.
        class_probabilities : torch.FloatTensor
            A tensor of shape ``(batch_size, num_tokens, tag_vocab_size)`` representing
            a distribution of the tag classes per word.
        loss : torch.FloatTensor, optional
            A scalar loss to be optimised.

        """
        logits = []
        for tokens in [hypothesis0, hypothesis1, hypothesis2, hypothesis3]:
            if isinstance(self.text_field_embedder, ElmoTokenEmbedder):
                # Reset the stateful ELMo LSTM before embedding each hypothesis.
                self.text_field_embedder._elmo._elmo_lstm._elmo_lstm.reset_states()

            embedded_text_input = self.embedding_dropout(self.text_field_embedder(tokens))
            mask = get_text_field_mask(tokens)

            batch_size, sequence_length, _ = embedded_text_input.size()

            encoded_text = self.encoder(embedded_text_input, mask)

            logits.append(self.output_prediction(encoded_text.max(1)[0]))

        logits = torch.cat(logits, -1)
        class_probabilities = F.softmax(logits, dim=-1).view([batch_size, 4])
        output_dict = {"label_logits": logits, "label_probs": class_probabilities}

        if label is not None:
            loss = self._loss(logits, label.long().view(-1))
            self._accuracy(logits, label.squeeze(-1))
            output_dict["loss"] = loss

        return output_dict 
Author: rowanz, Project: swagaf, Lines of code: 50, Source: lstm_swag.py
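As a follow-up, here is a minimal sketch of consuming the dictionary returned by this forward method; `model` and `batch` are assumed placeholders for an instance of the model above and a batch containing its four hypothesis fields.

# Sketch only: `model` and `batch` are assumed to be the model above and a
# batch of the four hypothesis fields (plus an optional label).
output_dict = model(**batch)
predicted_choice = output_dict["label_probs"].argmax(dim=-1)  # shape: (batch_size,)
if "loss" in output_dict:
    output_dict["loss"].backward()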


Note: the allennlp.modules.token_embedders.ElmoTokenEmbedder examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective developers, and copyright remains with the original authors; for distribution and use, please follow the license of the corresponding project. Do not reproduce without permission.