This article collects typical usage examples of the Python method modeling.BertConfig. If you are unsure how to use modeling.BertConfig in practice, the curated examples below may help; you can also explore the other members of the modeling module.
Four code examples of modeling.BertConfig are shown below, sorted by popularity by default.
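For context: in Google's BERT `modeling.py`, `BertConfig` is essentially a container of model hyperparameters with JSON (de)serialization helpers. The following is a minimal, self-contained stand-in (not the real class; `MiniBertConfig` is a hypothetical name) sketching the `to_json_string` behavior that the tests below exercise:

```python
import copy
import json

class MiniBertConfig:
    """Minimal stand-in mimicking how modeling.BertConfig stores
    hyperparameters and serializes them to JSON (illustrative only)."""

    def __init__(self, vocab_size, hidden_size=768):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size

    def to_dict(self):
        # Copy so callers cannot mutate the config through the dict.
        return copy.deepcopy(self.__dict__)

    def to_json_string(self):
        return json.dumps(self.to_dict(), indent=2, sort_keys=True) + "\n"

config = MiniBertConfig(vocab_size=99, hidden_size=37)
obj = json.loads(config.to_json_string())
print(obj["vocab_size"], obj["hidden_size"])  # -> 99 37
```

This round-trip (construct, serialize, parse back) is exactly what the `test_config_to_json_string` examples below assert on.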
Example 1: test_config_to_json_string
# Required module: import modeling [as alias]
# Or: from modeling import BertConfig [as alias]
def test_config_to_json_string(self):
    config = modeling.BertConfig(vocab_size=99, hidden_size=37)
    obj = json.loads(config.to_json_string())
    self.assertEqual(obj["vocab_size"], 99)
    self.assertEqual(obj["hidden_size"], 37)
Example 2: test_config_to_json_string
# Required module: import modeling [as alias]
# Or: from modeling import BertConfig [as alias]
def test_config_to_json_string(self):
    config = modeling.BertConfig(vocab_size=99, hidden_size=37)
    obj = json.loads(config.to_json_string())
    self.assertEqual(obj["vocab_size"], 99)
    self.assertEqual(obj["hidden_size"], 37)
Author: Nagakiran1; Project: Extending-Google-BERT-as-Question-and-Answering-model-and-Chatbot; Lines: 7; Source: modeling_test.py
Example 3: test_config_to_json_string
# Required module: import modeling [as alias]
# Or: from modeling import BertConfig [as alias]
def test_config_to_json_string(self):
    config = modeling.BertConfig(vocab_size=99, hidden_size=32)
    obj = json.loads(config.to_json_string())
    self.assertEqual(obj["vocab_size"], 99)
    self.assertEqual(obj["hidden_size"], 32)
Example 4: bert_train_fn
# Required module: import modeling [as alias]
# Or: from modeling import BertConfig [as alias]
def bert_train_fn():
    is_training = True
    hidden_size = 768
    num_labels = 10
    batch_size = 128  # was commented out in the original, but is used below
    max_seq_length = 512
    use_one_hot_embeddings = False
    bert_config = modeling.BertConfig(vocab_size=21128, hidden_size=hidden_size, num_hidden_layers=12,
                                      num_attention_heads=12, intermediate_size=3072)
    input_ids = tf.placeholder(tf.int32, [batch_size, max_seq_length], name="input_ids")
    input_mask = tf.placeholder(tf.int32, [batch_size, max_seq_length], name="input_mask")
    segment_ids = tf.placeholder(tf.int32, [batch_size, max_seq_length], name="segment_ids")
    label_ids = tf.placeholder(tf.float32, [batch_size, num_labels], name="label_ids")
    loss, per_example_loss, logits, probabilities, model = create_model(bert_config, is_training, input_ids, input_mask,
                                                                       segment_ids, label_ids, num_labels,
                                                                       use_one_hot_embeddings)
    # 1. Generate or load training/validation/test data, e.g. train: (X, y), where X is input_ids and y is labels.
    # 2. Train the model by calling create_model and fetching the returned loss.
    gpu_config = tf.ConfigProto()
    gpu_config.gpu_options.allow_growth = True
    sess = tf.Session(config=gpu_config)
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        input_ids_ = np.ones((batch_size, max_seq_length), dtype=np.int32)
        input_mask_ = np.ones((batch_size, max_seq_length), dtype=np.int32)
        segment_ids_ = np.ones((batch_size, max_seq_length), dtype=np.int32)
        label_ids_ = np.ones((batch_size, num_labels), dtype=np.float32)
        feed_dict = {input_ids: input_ids_, input_mask: input_mask_, segment_ids: segment_ids_, label_ids: label_ids_}
        loss_ = sess.run([loss], feed_dict)
        print("loss:", loss_)
    # 3. Evaluate the model from time to time.
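The shape of Example 4's loop (build a graph once, then repeatedly fabricate a dummy batch, run a step, and print the loss) can be sketched without TensorFlow. Below, a toy NumPy least-squares step stands in for the BERT graph; `toy_train_fn` and its parameters are hypothetical names, not part of the original code:

```python
import numpy as np

def toy_train_fn(batch_size=8, num_features=4, steps=5, lr=0.1):
    """Feed-dict-style loop in miniature: fabricate a dummy batch each
    step, compute a loss, and take a gradient step (NumPy stand-in for
    the sess.run loop in Example 4)."""
    rng = np.random.default_rng(0)
    w = np.zeros(num_features)
    losses = []
    for _ in range(steps):
        x = rng.normal(size=(batch_size, num_features))  # dummy inputs
        y = x @ np.ones(num_features)                    # dummy labels
        err = x @ w - y
        losses.append(float(np.mean(err ** 2)))          # analog of sess.run([loss])
        w -= lr * (2.0 / batch_size) * (x.T @ err)       # gradient step
    return losses

losses = toy_train_fn()
print(losses)  # the loss shrinks from step to step
```

The design point carried over from Example 4 is the separation between graph/model construction (done once, before the loop) and repeated execution on fresh batches inside the loop.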