

Python layers.apply_regularization Method Code Examples

This article collects typical usage examples of the Python method tensorflow.contrib.layers.apply_regularization. If you are wondering what layers.apply_regularization does, how to call it, or what real-world usage looks like, the curated examples below should help. You can also explore other usage examples from the tensorflow.contrib.layers module.


Four code examples of the layers.apply_regularization method are shown below, sorted by popularity.
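Before diving into the examples, here is a minimal, self-contained sketch of the typical call pattern (TF 1.x; the variable names and the 0.01 scale are illustrative, not drawn from the projects below). apply_regularization maps a regularizer function over a list of weight tensors and returns their summed penalty as a scalar tensor, which is then added to the task loss:

import tensorflow as tf
from tensorflow.contrib import layers

# Two illustrative weight variables to regularize.
w1 = tf.get_variable("w1", shape=[10, 5])
w2 = tf.get_variable("w2", shape=[5, 1])

# l2_regularizer(scale) returns a function that computes
# scale * tf.nn.l2_loss(w) = 0.5 * scale * sum(w ** 2) for one tensor.
regularizer = layers.l2_regularizer(scale=0.01)

# apply_regularization sums the per-tensor penalties into a single scalar.
penalty = layers.apply_regularization(regularizer, [w1, w2])

# The penalty is then added to the task loss before optimizing:
# total_loss = task_loss + penalty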

Example 1: loss

# Required import: from tensorflow.contrib import layers [as alias]
# Or: from tensorflow.contrib.layers import apply_regularization [as alias]
# This snippet also relies on: from tensorflow.python.ops import array_ops, math_ops, nn_ops, variables
def loss(self, data, labels):
    """The loss to minimize while training."""

    if self.is_regression:
      diff = self.training_inference_graph(data) - math_ops.to_float(labels)
      mean_squared_error = math_ops.reduce_mean(diff * diff)
      root_mean_squared_error = math_ops.sqrt(mean_squared_error, name="loss")
      loss = root_mean_squared_error
    else:
      loss = math_ops.reduce_mean(
          nn_ops.sparse_softmax_cross_entropy_with_logits(
              labels=array_ops.squeeze(math_ops.to_int32(labels)),
              logits=self.training_inference_graph(data)),
          name="loss")
    if self.regularizer:
      loss += layers.apply_regularization(self.regularizer,
                                          variables.trainable_variables())
    return loss 
Developer: ryfeus | Project: lambda-packs | Lines: 20 | Source: hybrid_model.py
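This loss method assumes self.regularizer was set elsewhere on the model, either to None (no penalty) or to a callable regularizer. A hedged sketch of such a setup, with the class name and the 1e-4 scale chosen purely for illustration:

from tensorflow.contrib import layers

class HybridModel(object):
    def __init__(self):
        # Any callable regularizer works here; None disables the penalty.
        self.regularizer = layers.l2_regularizer(1e-4)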

Example 2: loss

# Required import: from tensorflow.contrib import layers [as alias]
# Or: from tensorflow.contrib.layers import apply_regularization [as alias]
# This snippet also relies on: from tensorflow.python.ops import array_ops, math_ops, nn_ops, variables
def loss(self, data, labels):
    """The loss to minimize while training."""

    if self.is_regression:
      diff = self.training_inference_graph(data) - math_ops.to_float(labels)
      mean_squared_error = math_ops.reduce_mean(diff * diff)
      root_mean_squared_error = math_ops.sqrt(mean_squared_error, name="loss")
      loss = root_mean_squared_error
    else:
      # Note: the original (pre-TF-1.0) code passed these arguments
      # positionally as (logits, labels); TF 1.x requires named arguments.
      loss = math_ops.reduce_mean(
          nn_ops.sparse_softmax_cross_entropy_with_logits(
              logits=self.training_inference_graph(data),
              labels=array_ops.squeeze(math_ops.to_int32(labels))),
          name="loss")
    if self.regularizer:
      loss += layers.apply_regularization(self.regularizer,
                                          variables.trainable_variables())
    return loss 
Developer: tobegit3hub | Project: deep_image_model | Lines: 20 | Source: hybrid_model.py

Example 3: build_graph

# Required import: from tensorflow.contrib import layers [as alias]
# Or: from tensorflow.contrib.layers import apply_regularization [as alias]
# This snippet also relies on: import tensorflow as tf and from tensorflow.contrib.layers import l2_regularizer
def build_graph(self):

    self.construct_weights()

    saver, logits = self.forward_pass()
    log_softmax_var = tf.nn.log_softmax(logits)

    # per-user average negative log-likelihood
    neg_ll = -tf.reduce_mean(tf.reduce_sum(
        log_softmax_var * self.input_ph, axis=1))
    # apply regularization to weights
    reg = l2_regularizer(self.lam)
    reg_var = apply_regularization(reg, self.weights)
    # TensorFlow's L2 regularizer multiplies the L2 norm by 0.5;
    # multiply by 2 so the penalty is back on the conventional scale
    loss = neg_ll + 2 * reg_var

    train_op = tf.train.AdamOptimizer(self.lr).minimize(loss)

    # add summary statistics
    tf.summary.scalar('negative_multi_ll', neg_ll)
    tf.summary.scalar('loss', loss)
    merged = tf.summary.merge_all()
    return saver, logits, loss, train_op, merged
Developer: MaurizioFD | Project: RecSys2019_DeepLearning_Evaluation | Lines: 26 | Source: MultiVae_Dae.py
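The "multiply by 2" step above follows from how the built-in regularizer is defined: l2_regularizer(scale) computes scale * tf.nn.l2_loss(w), and tf.nn.l2_loss already includes the factor 1/2. A quick illustrative check (the constant values are arbitrary, not from the project):

import tensorflow as tf
from tensorflow.contrib.layers import l2_regularizer

lam = 0.01
w = tf.constant([1.0, 2.0, 3.0])

# penalty == 0.5 * lam * sum(w ** 2), i.e. 0.5 * 0.01 * 14 = 0.07
penalty = l2_regularizer(lam)(w)

# Doubling recovers the conventional lam * ||w||^2 scale used in the loss.
conventional = 2 * penalty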

Example 4: add_loss

# Required import: from tensorflow.contrib import layers [as alias]
# Or: from tensorflow.contrib.layers import apply_regularization [as alias]
# This snippet also relies on: import tensorflow as tf and from tensorflow.contrib import layers as tf_layers
def add_loss(self):
    """
    Add loss computation to the graph.

    Uses:
      self.logits_start: shape (batch_size, context_len)
        IMPORTANT: Assumes that self.logits_start is masked (i.e. has -large in masked locations).
        That's because the tf.nn.softmax_cross_entropy_with_logits_v2
        function applies softmax and then computes cross-entropy loss.
        So you need to apply masking to the logits (by subtracting a large
        number in the padding locations) BEFORE you pass them to the
        softmax_cross_entropy_with_logits_v2 function.

      self.ans_start: shape (batch_size, context_len). One-hot with true answer start.
      self.ans_end: shape (batch_size, context_len). One-hot with true answer end.

    Defines:
      self.loss_start, self.loss_end, self.loss: all scalar tensors
    """
    with tf.variable_scope("loss"):

        # Calculate loss for prediction of start position
        loss_start = tf.nn.softmax_cross_entropy_with_logits_v2(logits=self.logits_start,
                                                                labels=self.ans_start)
        self.loss_start = tf.reduce_mean(loss_start)      # scalar. avg across batch
        tf.summary.scalar('loss_start', self.loss_start)  # log to tensorboard

        # Calculate loss for prediction of end position
        loss_end = tf.nn.softmax_cross_entropy_with_logits_v2(logits=self.logits_end,
                                                              labels=self.ans_end)
        self.loss_end = tf.reduce_mean(loss_end)
        tf.summary.scalar('loss_end', self.loss_end)

        # Calculate the L2 regularization loss
        regularization_loss_vars = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
        regularizer = tf_layers.l2_regularizer(scale=self.flags.l2_lambda)
        self.l2_loss = tf_layers.apply_regularization(regularizer, regularization_loss_vars)

        # Add the loss components
        self.loss = self.loss_start + self.loss_end + self.l2_loss
        tf.summary.scalar('loss', self.loss)

        # Apply EMA decay (https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage)
        ema_op = self.ema.apply(tf.trainable_variables())

        with tf.control_dependencies([ema_op]):
            self.loss = tf.identity(self.loss)
Developer: chrischute | Project: squad-transformer | Lines: 49 | Source: model.py
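The docstring in this example insists that the start/end logits arrive already masked. A minimal sketch of that masking step, assuming a 0/1 padding mask (the helper name mask_logits and the -1e30 constant are illustrative, not taken from the project):

import tensorflow as tf

def mask_logits(logits, mask):
    """Push padded positions toward -inf so softmax assigns them ~0 probability.

    logits: float tensor of shape (batch_size, context_len).
    mask:   same shape; 1 for real tokens, 0 for padding.
    """
    return logits + (1.0 - tf.cast(mask, tf.float32)) * (-1e30)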


Note: The tensorflow.contrib.layers.apply_regularization examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors, who retain copyright; for distribution and use, refer to each project's license. Do not reproduce without permission.