

Python transformers.BertModel code examples

This article compiles typical usage examples of transformers.BertModel in Python. If you are wondering how exactly transformers.BertModel is used, how to call it, or what working examples look like, the hand-picked code examples below may help. You can also explore further usage examples from the transformers package in which this class is defined.


Six code examples of transformers.BertModel are shown below, sorted by popularity by default.
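Before diving into the examples, a minimal usage sketch may help set context. The snippet below is illustrative only (it is not taken from the examples that follow) and assumes the transformers and torch packages are installed and the bert-base-uncased checkpoint can be downloaded.

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("Hello, BERT!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The first element holds the per-token hidden states,
# with shape (batch_size, sequence_length, hidden_size).
last_hidden_state = outputs[0]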

Example 1: __init__

# Required import: import transformers [as an alias]
# Or: from transformers import BertModel [as an alias]
def __init__(self, config):
        super().__init__(config, num_labels=config.num_labels)
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.init_weights() 
Developer ID: castorini, Project: hedwig, Lines of code: 7, Source file: sentence_encoder.py
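The snippet above shows only the constructor. A matching forward pass would typically run the batch through BERT and apply dropout to the pooled [CLS] representation; the sketch below is a hypothetical illustration, not the hedwig project's actual code.

def forward(self, input_ids, attention_mask=None, token_type_ids=None):
        # Hypothetical companion forward pass (illustrative only)
        outputs = self.bert(input_ids,
                            attention_mask=attention_mask,
                            token_type_ids=token_type_ids)
        pooled_output = outputs[1]          # pooled [CLS] representation
        return self.dropout(pooled_output)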

Example 2: __init__

# Required import: import transformers [as an alias]
# Or: from transformers import BertModel [as an alias]
def __init__(self):
        super().__init__()
        config = BertConfig.from_pretrained("bert-base-uncased")
        self.model = BertModel(config) 
Developer ID: bhoov, Project: exbert, Lines of code: 6, Source file: modeling_bertabs.py

Example 3: __init__

# Required import: import transformers [as an alias]
# Or: from transformers import BertModel [as an alias]
def __init__(self):
        super(Bert, self).__init__()
        config = BertConfig.from_pretrained("bert-base-uncased")
        self.model = BertModel(config) 
Developer ID: kaushaltrivedi, Project: fast-bert, Lines of code: 6, Source file: modeling_bertabs.py
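Note that in Examples 2 and 3, BertModel(config) only reproduces the architecture described by the config; the weights are randomly initialized. Loading the published pretrained weights requires from_pretrained instead, as in this illustrative sketch:

from transformers import BertConfig, BertModel

# Architecture of bert-base-uncased, but randomly initialized weights:
config = BertConfig.from_pretrained("bert-base-uncased")
random_model = BertModel(config)

# Same architecture plus the published pretrained weights:
pretrained_model = BertModel.from_pretrained("bert-base-uncased")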

Example 4: make_model

# Required import: import transformers [as an alias]
# Or: from transformers import BertModel [as an alias]
def make_model(self, src_vocab, tgt_vocab, N_enc=6, N_dec=6, 
               d_model=512, d_ff=2048, h=8, dropout=0.1):
        "Helper: Construct a model from hyperparameters."
        enc_config = BertConfig(vocab_size=1,
                                hidden_size=d_model,
                                num_hidden_layers=N_enc,
                                num_attention_heads=h,
                                intermediate_size=d_ff,
                                hidden_dropout_prob=dropout,
                                attention_probs_dropout_prob=dropout,
                                max_position_embeddings=1,
                                type_vocab_size=1)
        dec_config = BertConfig(vocab_size=tgt_vocab,
                                hidden_size=d_model,
                                num_hidden_layers=N_dec,
                                num_attention_heads=h,
                                intermediate_size=d_ff,
                                hidden_dropout_prob=dropout,
                                attention_probs_dropout_prob=dropout,
                                max_position_embeddings=17,
                                type_vocab_size=1,
                                is_decoder=True)
        encoder = BertModel(enc_config)
        # Replace the encoder's embedding layer with a pass-through so that
        # pre-computed feature vectors can be fed directly via inputs_embeds.
        def return_embeds(*args, **kwargs):
            return kwargs['inputs_embeds']
        del encoder.embeddings
        encoder.embeddings = return_embeds
        decoder = BertModel(dec_config)
        model = EncoderDecoder(
            encoder,
            decoder,
            Generator(d_model, tgt_vocab))
        return model 
Developer ID: ruotianluo, Project: self-critical.pytorch, Lines of code: 34, Source file: BertCapModel.py
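In Example 4 the encoder's embedding layer is deleted and replaced with a pass-through function, so pre-computed feature vectors can be fed straight in via inputs_embeds. A hypothetical call might look like the following (illustrative only; the features and att_masks tensors are assumptions, not part of the original code):

# Illustrative only: feeding pre-computed features of shape
# (batch_size, seq_len, d_model) to the patched encoder
memory = encoder(inputs_embeds=features, attention_mask=att_masks)[0]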

Example 5: load

# Required import: import transformers [as an alias]
# Or: from transformers import BertModel [as an alias]
def load(self):
        self.model = transformers.BertModel.from_pretrained(self.load_path, config=self.config).eval().to(self.device)
        self.dim = self.model.config.hidden_size 
Developer ID: deepmipt, Project: DeepPavlov, Lines of code: 5, Source file: transformers_embedder.py
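After load() runs, the embedder holds an eval-mode BertModel on self.device. A hypothetical way to obtain token embeddings from it is sketched below (illustrative only, not DeepPavlov's actual code; torch is assumed to be imported, and input_ids and attention_mask are assumed to be prepared tensors):

with torch.no_grad():
    outputs = self.model(input_ids=input_ids.to(self.device),
                         attention_mask=attention_mask.to(self.device))
token_embeddings = outputs[0]   # shape: (batch_size, seq_len, self.dim)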

Example 6: __init__

# Required import: import transformers [as an alias]
# Or: from transformers import BertModel [as an alias]
def __init__(self, bert_config):
        """

        :param bert_config: configuration for bert model
        """
        super(BertABSATagger, self).__init__(bert_config)
        self.num_labels = bert_config.num_labels
        self.tagger_config = TaggerConfig()
        self.tagger_config.absa_type = bert_config.absa_type.lower()
        if bert_config.tfm_mode == 'finetune':
            # initialize with the pre-trained BERT weights and perform fine-tuning
            # print("Fine-tuning the pre-trained BERT...")
            self.bert = BertModel(bert_config)
        else:
            raise Exception("Invalid transformer mode %s!!!" % bert_config.tfm_mode)
        self.bert_dropout = nn.Dropout(bert_config.hidden_dropout_prob)
        # freeze the BERT parameters and treat the model as a feature extractor
        if bert_config.fix_tfm:
            # freeze the parameters of the (pre-trained or randomly initialized) transformer during fine-tuning
            for p in self.bert.parameters():
                p.requires_grad = False

        self.tagger = None
        if self.tagger_config.absa_type == 'linear':
            # hidden size at the penultimate layer
            penultimate_hidden_size = bert_config.hidden_size
        else:
            self.tagger_dropout = nn.Dropout(self.tagger_config.hidden_dropout_prob)
            if self.tagger_config.absa_type == 'lstm':
                self.tagger = LSTM(input_size=bert_config.hidden_size,
                                   hidden_size=self.tagger_config.hidden_size,
                                   bidirectional=self.tagger_config.bidirectional)
            elif self.tagger_config.absa_type == 'gru':
                self.tagger = GRU(input_size=bert_config.hidden_size,
                                  hidden_size=self.tagger_config.hidden_size,
                                  bidirectional=self.tagger_config.bidirectional)
            elif self.tagger_config.absa_type == 'tfm':
                # transformer encoder layer
                self.tagger = nn.TransformerEncoderLayer(d_model=bert_config.hidden_size,
                                                         nhead=12,
                                                         dim_feedforward=4*bert_config.hidden_size,
                                                         dropout=0.1)
            elif self.tagger_config.absa_type == 'san':
                # vanilla self attention networks
                self.tagger = SAN(d_model=bert_config.hidden_size, nhead=12, dropout=0.1)
            elif self.tagger_config.absa_type == 'crf':
                self.tagger = CRF(num_tags=self.num_labels)
            else:
                raise Exception('Unimplemented downstream tagger %s...' % self.tagger_config.absa_type)
            penultimate_hidden_size = self.tagger_config.hidden_size
        self.classifier = nn.Linear(penultimate_hidden_size, bert_config.num_labels) 
Developer ID: lixin4ever, Project: BERT-E2E-ABSA, Lines of code: 53, Source file: absa_layer.py
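For the simplest setting, absa_type == 'linear', the matching forward pass would just project each BERT token representation onto the label space. The sketch below is a hypothetical illustration, not the BERT-E2E-ABSA project's actual code:

def forward(self, input_ids, attention_mask=None, token_type_ids=None):
        # Illustrative only: forward pass for the 'linear' tagger configuration
        outputs = self.bert(input_ids,
                            attention_mask=attention_mask,
                            token_type_ids=token_type_ids)
        sequence_output = self.bert_dropout(outputs[0])
        logits = self.classifier(sequence_output)   # (batch, seq_len, num_labels)
        return logits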


Note: The transformers.BertModel examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright in the source code remains with the original authors, and distribution or use should follow the corresponding project's license. Do not reproduce without permission.