This article collects typical usage examples of the tensorflow.contrib.tpu.CrossShardOptimizer method in Python. If you are wondering how tpu.CrossShardOptimizer is used in practice, or what real examples of it look like, the curated code samples below may help. You can also explore further usage examples from the containing module, tensorflow.contrib.tpu.
Two code examples of tpu.CrossShardOptimizer are shown below, ordered by popularity by default.
Example 1: get_cross_shard_optimizer
# Required import: from tensorflow.contrib import tpu as contrib_tpu
# Or: from tensorflow.contrib.tpu import CrossShardOptimizer
def get_cross_shard_optimizer(optimizer, disable_for_cpu_debugging=False):
  """Wraps `optimizer` in a CrossShardOptimizer for multi-core TPU training."""
  if disable_for_cpu_debugging:
    return optimizer
  return contrib_tpu.CrossShardOptimizer(optimizer)
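For context, here is a minimal sketch of how a helper like get_cross_shard_optimizer might be used inside a TPUEstimator-style model_fn. The toy dense model, the params['use_tpu'] flag, and the learning rate are illustrative assumptions, not part of the original example:

import tensorflow as tf
from tensorflow.contrib import tpu as contrib_tpu

def model_fn(features, labels, mode, params):
  # Hypothetical single-layer model, just to produce a loss.
  logits = tf.layers.dense(features, 10)
  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
  # On TPU, wrap the optimizer so gradients are aggregated across cores;
  # when debugging on CPU, the raw optimizer is returned unchanged.
  optimizer = get_cross_shard_optimizer(
      optimizer, disable_for_cpu_debugging=not params['use_tpu'])
  train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
  return contrib_tpu.TPUEstimatorSpec(mode=mode, loss=loss, train_op=train_op)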
Example 2: get_train_op
# Required import: from tensorflow.contrib import tpu
# Or: from tensorflow.contrib.tpu import CrossShardOptimizer
def get_train_op(self, loss,  # pylint: disable=missing-docstring
                 var_list=None,
                 add_reg_loss=True,
                 use_tpu=False):
  if add_reg_loss:
    l2_loss = tf.reduce_sum(tf.losses.get_regularization_losses())
    loss += l2_loss
  optimizer = FLAGS.optimizer
  if optimizer == 'sgd':
    optimizer = tf.train.MomentumOptimizer(learning_rate=self.lr,
                                           momentum=0.9)
  elif optimizer == 'adam':
    optimizer = tf.train.AdamOptimizer(learning_rate=self.lr)
  else:
    raise ValueError('Unknown optimizer: %s' % optimizer)

  if use_tpu:
    # Wrap optimizer in CrossShardOptimizer which takes care of
    # synchronizing the weight updates between TPU cores.
    optimizer = CrossShardOptimizer(optimizer)

  opt_step = optimizer.minimize(loss, var_list=var_list,
                                colocate_gradients_with_ops=True)

  if self.update_batchnorm_params:
    opt_step = tf.group([opt_step] +
                        tf.get_collection(tf.GraphKeys.UPDATE_OPS))

  opt_step = tf.group([opt_step, self.global_step_inc])
  return opt_step
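To clarify what the use_tpu branch adds, the following is a rough, hand-written equivalent of the gradient aggregation that CrossShardOptimizer performs inside minimize() on TPU. It is only a sketch under assumptions, not the library source; the real wrapper also handles loss scaling depending on its configured reduction:

import tensorflow as tf

def cross_shard_minimize(base_optimizer, loss, var_list=None):
  # Compute per-replica gradients with the wrapped (base) optimizer.
  grads_and_vars = base_optimizer.compute_gradients(loss, var_list=var_list)
  # Sum each gradient across all TPU replicas so that every core applies
  # the same aggregated update.
  aggregated = [(tf.contrib.tpu.cross_replica_sum(g), v)
                for g, v in grads_and_vars if g is not None]
  return base_optimizer.apply_gradients(aggregated)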