This article collects typical usage examples of the Python method tensorflow.enable_resource_variables. If you are wondering what tensorflow.enable_resource_variables does, how to call it, or what it looks like in real code, the curated examples below may help. You can also explore further usage examples from the tensorflow module, where this method is defined.
Two code examples of the tensorflow.enable_resource_variables method are shown below.
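Before the examples, here is a minimal, self-contained sketch (not taken from either example below) of where tf.enable_resource_variables is typically called under the TensorFlow 1.x API: it must run before any variables are created, so that subsequent variables are backed by resource handles. The variable name v and the use of tf.get_variable here are illustrative assumptions, not part of the examples.

import tensorflow as tf  # TensorFlow 1.x API

# Must be called before any variables are created in the current graph.
tf.enable_resource_variables()

# Variables created afterwards are resource variables, which XLA and some
# libraries require.
v = tf.get_variable("v", shape=[], initializer=tf.zeros_initializer())

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  print(sess.run(v))  # 0.0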
Example 1: __init__
# Required import: import tensorflow [as alias]
# Or: from tensorflow import enable_resource_variables [as alias]
def __init__(self, ncf_dataset, params, num_train_steps, num_eval_steps,
             use_while_loop):
  self._num_train_steps = num_train_steps
  self._num_eval_steps = num_eval_steps
  self._use_while_loop = use_while_loop

  with tf.Graph().as_default() as self._graph:
    if params["use_xla_for_gpu"]:
      # The XLA functions we use require resource variables.
      tf.enable_resource_variables()
    self._ncf_dataset = ncf_dataset
    self._global_step = tf.train.create_global_step()
    self._train_model_properties = self._build_model(params, num_train_steps,
                                                     is_training=True)
    self._eval_model_properties = self._build_model(params, num_eval_steps,
                                                    is_training=False)

    initializer = tf.global_variables_initializer()

  self._graph.finalize()
  self._session = tf.Session(graph=self._graph)
  self._session.run(initializer)
Example 2: main
# Required import: import tensorflow [as alias]
# Or: from tensorflow import enable_resource_variables [as alias]
def main(argv):
  del argv  # Unused.

  # If using update_damping_immediately, resource variables must be enabled.
  # (Although they probably will be by default on TPUs.)
  if FLAGS.update_damping_immediately:
    tf.enable_resource_variables()

  tf.set_random_seed(FLAGS.seed)

  # Invert using Cholesky decomposition + triangular solve. This is the only
  # code path for matrix inversion supported on TPU right now.
  kfac.utils.set_global_constants(posdef_inv_method='cholesky')
  kfac.fisher_factors.set_global_constants(
      eigenvalue_decomposition_threshold=10000)

  if not FLAGS.use_sua_approx:
    if FLAGS.use_custom_patches_op:
      kfac.fisher_factors.set_global_constants(
          use_patches_second_moment_op=True)
    else:
      # Temporary measure to save memory with giant batches:
      kfac.fisher_factors.set_global_constants(
          sub_sample_inputs=True,
          inputs_to_extract_patches_factor=0.1)

  config = make_tpu_run_config(
      FLAGS.master, FLAGS.seed, FLAGS.model_dir, FLAGS.iterations_per_loop,
      FLAGS.save_checkpoints_steps)

  estimator = contrib_tpu.TPUEstimator(
      use_tpu=True,
      model_fn=_model_fn,
      config=config,
      train_batch_size=FLAGS.batch_size,
      eval_batch_size=1024)

  estimator.train(
      input_fn=mnist_input_fn,
      max_steps=FLAGS.train_steps,
      hooks=[])
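A closing note, not part of either example above: in TensorFlow 2.x resource variables are the default, and the 1.x call used in these examples remains reachable through the compat module when running 1.x-style graph code. The snippet below is a minimal sketch of that path.

import tensorflow as tf

# TF 2.x: resource variables are already the default; the 1.x-style call is
# exposed under tf.compat.v1 for graph-mode code migrated from TF 1.x.
tf.compat.v1.enable_resource_variables()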