This article collects typical usage examples of the Python utils.center method. If you are wondering what exactly utils.center does and how to use it, the curated code examples below may help. You can also browse the utils module in which the method is defined for further usage examples.
Two code examples of the utils.center method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
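Note that the examples below only show utils.center being called, not its definition. Judging from the call in Example 2, where it centers a REINFORCE learning signal before tf.stop_gradient, a minimal sketch of such a helper could look like the following. This is an assumption for illustration only; the real utils.center (and utils.rms) may be implemented differently, for example with a running mean.

import tensorflow as tf

def center(x):
  # Illustrative sketch only: subtract the mean so the signal is zero-centered.
  return x - tf.reduce_mean(x)

def rms(x):
  # Illustrative sketch only: root-mean-square magnitude of the signal.
  return tf.sqrt(tf.reduce_mean(tf.square(x)))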
Example 1: _create_baseline
# Required module: import utils [as an alias]
# Or: from utils import center [as an alias]
def _create_baseline(self, n_output=1, n_hidden=100,
                     is_zero_init=False,
                     collection='BASELINE'):
  # center input
  h = self._x
  if self.mean_xs is not None:
    h -= self.mean_xs

  if is_zero_init:
    initializer = init_ops.zeros_initializer()
  else:
    initializer = slim.variance_scaling_initializer()

  with slim.arg_scope([slim.fully_connected],
                      variables_collections=[collection, Q_COLLECTION],
                      trainable=False,
                      weights_initializer=initializer):
    h = slim.fully_connected(h, n_hidden, activation_fn=tf.nn.tanh)
    baseline = slim.fully_connected(h, n_output, activation_fn=None)

    if n_output == 1:
      baseline = tf.reshape(baseline, [-1])  # very important to reshape
  return baseline
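This baseline centers its input by subtracting a precomputed data mean (self.mean_xs) and marks its weights trainable=False, placing them in separate variable collections so they can be updated by their own optimizer. A self-contained sketch of the same pattern using plain TensorFlow 1.x tf.layers instead of slim is shown below; the function and argument names are illustrative assumptions, not part of the original code.

import tensorflow as tf

def make_baseline(x, mean_x=None, n_hidden=100, n_output=1, zero_init=False):
  # Center the input, mirroring the `h -= self.mean_xs` step above.
  h = x if mean_x is None else x - mean_x
  init = tf.zeros_initializer() if zero_init else tf.variance_scaling_initializer()
  h = tf.layers.dense(h, n_hidden, activation=tf.nn.tanh, kernel_initializer=init)
  baseline = tf.layers.dense(h, n_output, kernel_initializer=init)
  if n_output == 1:
    baseline = tf.reshape(baseline, [-1])  # one scalar baseline per example
  return baseline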
Example 2: _create_loss
# Required module: import utils [as an alias]
# Or: from utils import center [as an alias]
def _create_loss(self):
  # Hard loss
  logQHard, samples = self._recognition_network()
  reinforce_learning_signal, reinforce_model_grad = self._generator_network(samples, logQHard)
  logQHard = tf.add_n(logQHard)

  # REINFORCE
  learning_signal = tf.stop_gradient(U.center(reinforce_learning_signal))
  self.optimizerLoss = -(learning_signal*logQHard +
                         reinforce_model_grad)
  self.lHat = map(tf.reduce_mean, [
      reinforce_learning_signal,
      U.rms(learning_signal),
  ])

  return reinforce_learning_signal
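Here U.center subtracts a baseline from the learning signal before tf.stop_gradient, which reduces the variance of the REINFORCE gradient estimate without changing its expectation. A compact sketch of that surrogate loss is shown below; the function name and the assumption that centering means subtracting the batch mean are illustrative, not taken from the original code. Note also that under Python 3 map returns an iterator, so the self.lHat line would usually be wrapped in list(...).

import tensorflow as tf

def reinforce_surrogate(learning_signal, log_q):
  # Center the signal (a simple baseline) and block gradients through it,
  # so only log_q receives gradient from this term.
  centered = tf.stop_gradient(learning_signal - tf.reduce_mean(learning_signal))
  # Minimizing -E[centered * log_q] ascends the REINFORCE gradient estimate.
  return -tf.reduce_mean(centered * log_q)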