

Python optimization.AdamWeightDecayOptimizer Code Examples

This article collects typical usage examples of the Python method optimization.AdamWeightDecayOptimizer. If you are unsure what optimization.AdamWeightDecayOptimizer does or how to call it, the curated code examples below may help; you can also explore further usage examples from the optimization module.


The following presents 3 code examples of optimization.AdamWeightDecayOptimizer, sorted by popularity by default.
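Before the examples, a quick look at construction. The sketch below assumes the constructor signature from BERT's reference optimization.py (learning_rate plus decoupled weight-decay and Adam hyperparameters); the forks cited in the examples may differ, so treat the parameter names and defaults as assumptions rather than a definitive API.

# A minimal construction sketch, assuming BERT's reference optimization.py;
# parameter names and defaults are assumptions and may differ across forks.
import optimization

optimizer = optimization.AdamWeightDecayOptimizer(
    learning_rate=0.2,       # step size, matching the tests below
    weight_decay_rate=0.01,  # decoupled weight decay, applied outside the Adam update
    beta_1=0.9,              # first-moment (momentum) decay rate
    beta_2=0.999,            # second-moment (RMS) decay rate
    epsilon=1e-6,            # numerical-stability constant
    exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"])  # no decay for these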

Example 1: test_adam — fits a 3-element variable w to the constant target x by minimizing mean squared error; after 100 Adam steps, w should match x within the 1% tolerance of the final assertion.

# Required import: import optimization [as alias]
# Or: from optimization import AdamWeightDecayOptimizer [as alias]
# Also requires: import tensorflow as tf
def test_adam(self):
        with self.test_session() as sess:
            w = tf.get_variable(
                "w",
                shape=[3],
                initializer=tf.constant_initializer([0.1, -0.2, -0.1]))
            x = tf.constant([0.4, 0.2, -0.5])
            loss = tf.reduce_mean(tf.square(x - w))
            tvars = tf.trainable_variables()
            grads = tf.gradients(loss, tvars)
            global_step = tf.train.get_or_create_global_step()
            optimizer = optimization.AdamWeightDecayOptimizer(learning_rate=0.2)
            train_op = optimizer.apply_gradients(zip(grads, tvars), global_step)
            init_op = tf.group(tf.global_variables_initializer(),
                               tf.local_variables_initializer())
            sess.run(init_op)
            for _ in range(100):
                sess.run(train_op)
            w_np = sess.run(w)
            self.assertAllClose(w_np.flat, [0.4, 0.2, -0.5], rtol=1e-2, atol=1e-2) 
Developer: Socialbird-AILab | Project: BERT-Classification-Tutorial | Lines: 22 | Source: optimization_test.py

Example 2: test_adam

# Required import: import optimization [as alias]
# Or: from optimization import AdamWeightDecayOptimizer [as alias]
# Also requires: import tensorflow as tf
def test_adam(self):
    with self.test_session() as sess:
      w = tf.get_variable(
          "w",
          shape=[3],
          initializer=tf.constant_initializer([0.1, -0.2, -0.1]))
      x = tf.constant([0.4, 0.2, -0.5])
      loss = tf.reduce_mean(tf.square(x - w))
      tvars = tf.trainable_variables()
      grads = tf.gradients(loss, tvars)
      global_step = tf.train.get_or_create_global_step()
      optimizer = optimization.AdamWeightDecayOptimizer(learning_rate=0.2)
      train_op = optimizer.apply_gradients(zip(grads, tvars), global_step)
      init_op = tf.group(tf.global_variables_initializer(),
                         tf.local_variables_initializer())
      sess.run(init_op)
      for _ in range(100):
        sess.run(train_op)
      w_np = sess.run(w)
      self.assertAllClose(w_np.flat, [0.4, 0.2, -0.5], rtol=1e-2, atol=1e-2) 
Developer: Nagakiran1 | Project: Extending-Google-BERT-as-Question-and-Answering-model-and-Chatbot | Lines: 22 | Source: optimization_test.py

Example 3: test_adam (same test written against the tf.compat.v1 API)

# Required import: import optimization [as alias]
# Or: from optimization import AdamWeightDecayOptimizer [as alias]
# Also requires: import tensorflow as tf (used via the tf.compat.v1 API)
def test_adam(self):
    with self.test_session() as sess:
      w = tf.compat.v1.get_variable(
          "w",
          shape=[3],
          initializer=tf.compat.v1.constant_initializer([0.1, -0.2, -0.1]))
      x = tf.constant([0.4, 0.2, -0.5])
      loss = tf.reduce_mean(input_tensor=tf.square(x - w))
      tvars = tf.compat.v1.trainable_variables()
      grads = tf.gradients(ys=loss, xs=tvars)
      global_step = tf.compat.v1.train.get_or_create_global_step()
      optimizer = optimization.AdamWeightDecayOptimizer(learning_rate=0.2)
      train_op = optimizer.apply_gradients(zip(grads, tvars), global_step)
      init_op = tf.group(tf.compat.v1.global_variables_initializer(),
                         tf.compat.v1.local_variables_initializer())
      sess.run(init_op)
      for _ in range(100):
        sess.run(train_op)
      w_np = sess.run(w)
      self.assertAllClose(w_np.flat, [0.4, 0.2, -0.5], rtol=1e-2, atol=1e-2) 
Developer: IntelAI | Project: models | Lines: 22 | Source: optimization_test.py
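All three tests exercise apply_gradients in isolation. In real fine-tuning code modeled on BERT's create_optimizer, the optimizer is usually combined with gradient clipping and an explicit global-step increment, since the reference apply_gradients does not bump global_step itself. The sketch below is a hedged reconstruction under those assumptions, not the verbatim helper from any of the projects above.

import tensorflow as tf
import optimization

def build_train_op(loss, learning_rate=2e-5):
    # Collect gradients for all trainable variables.
    tvars = tf.trainable_variables()
    grads = tf.gradients(loss, tvars)
    # Clip by global norm before applying, as BERT's create_optimizer does.
    (grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0)
    global_step = tf.train.get_or_create_global_step()
    optimizer = optimization.AdamWeightDecayOptimizer(
        learning_rate=learning_rate,
        weight_decay_rate=0.01,
        exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"])
    train_op = optimizer.apply_gradients(zip(grads, tvars), global_step=global_step)
    # Assumption carried over from the reference implementation: apply_gradients
    # does not increment global_step, so the counter is advanced explicitly.
    new_global_step = global_step + 1
    train_op = tf.group(train_op, [global_step.assign(new_global_step)])
    return train_op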


Note: the optimization.AdamWeightDecayOptimizer examples above were collected from open-source projects hosted on GitHub and similar platforms. Copyright of the source code remains with the original authors; consult each project's license before redistributing or reusing the snippets.