

Python tensorflow.log Function Code Examples

This article collects typical usage examples of the tensorflow.log function in Python. If you are wondering what tf.log does, how to call it, or what real-world uses look like, the hand-picked examples below should help.


Four code examples of the log function are shown below, sorted by popularity by default.
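Before the project examples, here is a minimal sketch of tf.log itself. It assumes TensorFlow 1.x: tf.log computes the element-wise natural logarithm, and in TensorFlow 2.x the same op is available as tf.math.log (the tf.log alias was removed).

import tensorflow as tf

x = tf.constant([1.0, 2.718281828, 10.0])
y = tf.log(x)  # element-wise natural logarithm (tf.math.log in TF 2.x)

with tf.Session() as sess:
    print(sess.run(y))  # approximately [0.0, 1.0, 2.3026]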

Example 1: cross_entropy

def cross_entropy(output, target):
    """Returns the cross-entropy cost of two distributions.

    Note that this implementation applies tf.log to `output` directly, so
    `output` should already be a probability distribution (no softmax is
    applied here).

    Parameters
    ----------
    output : TensorFlow tensor
        A distribution with shape: [None, n_feature].
    target : TensorFlow tensor
        A distribution with shape: [None, n_feature].

    Examples
    --------
    >>> ce = tl.cost.cross_entropy(y_output, y_target)

    Notes
    -----
    About cross-entropy: `wiki <https://en.wikipedia.org/wiki/Cross_entropy>`_.\n
    The code is borrowed from: `here <https://en.wikipedia.org/wiki/Cross_entropy>`_.
    """
    with tf.name_scope("cross_entropy_loss"):
        net_output_tf = output
        target_tf = target
        # Binary cross-entropy: target * log(p) + (1 - target) * log(1 - p),
        # summed over features and averaged over the batch.
        # tf.mul was renamed tf.multiply in TensorFlow 1.0.
        cross_entropy = tf.add(tf.multiply(tf.log(net_output_tf), target_tf),
                               tf.multiply(tf.log(1 - net_output_tf), (1 - target_tf)))
        return -1 * tf.reduce_mean(tf.reduce_sum(cross_entropy, 1), name='cross_entropy_mean')
Contributor: shorxp, Project: tensorlayer, Lines of code: 26, Source file: cost.py
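A minimal usage sketch for cross_entropy above (the placeholder names and shapes are hypothetical; it assumes TensorFlow 1.x and a sigmoid output layer so that `output` holds probabilities, since the function takes tf.log of it directly):

import tensorflow as tf

# Hypothetical names and shapes, for illustration only.
y_target = tf.placeholder(tf.float32, [None, 10])
logits = tf.placeholder(tf.float32, [None, 10])
y_output = tf.sigmoid(logits)            # probabilities in (0, 1)

ce = cross_entropy(y_output, y_target)   # scalar mean cross-entropy

# When starting from raw logits, TensorFlow's built-in op is usually preferred
# because it is numerically stable:
ce_stable = tf.reduce_mean(tf.reduce_sum(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y_target, logits=logits), 1))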

Example 2: _compute_log_moment

  def _compute_log_moment(self, sigma, q, moment_order):
    """Compute a high-order moment of the privacy loss.

    Args:
      sigma: the noise sigma, in multiples of the sensitivity.
      q: the sampling ratio.
      moment_order: the order of the moment.
    Returns:
      log E[exp(moment_order * X)]
    """
    assert moment_order <= self._max_moment_order, (
        "The moment order %d exceeds the upper bound %d."
        % (moment_order, self._max_moment_order))
    binomial_table = tf.slice(self._binomial_table, [moment_order, 0],
                              [1, moment_order + 1])
    # qs = [1 q q^2 ... q^L] = exp([0 1 2 ... L] * log(q))
    qs = tf.exp(tf.constant([i * 1.0 for i in range(moment_order + 1)],
                            dtype=tf.float64) * tf.cast(
                                tf.log(q), dtype=tf.float64))
    moments0 = self._differential_moments(sigma, 0.0, moment_order)
    term0 = tf.reduce_sum(binomial_table * qs * moments0)
    moments1 = self._differential_moments(sigma, 1.0, moment_order)
    term1 = tf.reduce_sum(binomial_table * qs * moments1)
    return tf.squeeze(tf.log(tf.cast(q * term0 + (1.0 - q) * term1,
                                     tf.float64)))
Contributor: ZhangShiyue, Project: models, Lines of code: 26, Source file: accountant.py
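The comment qs = [1 q q^2 ... q^L] = exp([0 1 2 ... L] * log(q)) is the key use of tf.log here: the powers of the sampling ratio are built in one vectorized expression. A standalone sketch of just that step (TensorFlow 1.x assumed, values chosen arbitrarily):

import tensorflow as tf

q = tf.constant(0.01, dtype=tf.float64)  # sampling ratio
L = 4                                    # moment order

orders = tf.constant([float(i) for i in range(L + 1)], dtype=tf.float64)
qs = tf.exp(orders * tf.log(q))          # == [1, q, q**2, q**3, q**4]

with tf.Session() as sess:
    print(sess.run(qs))  # [1.e+00 1.e-02 1.e-04 1.e-06 1.e-08]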

Example 3: inverse_transform_box

def inverse_transform_box(bbox, height, width):
    """Transform the bounding box format.

        Args:
            bbox: [N x 4] input N bboxes,
                  format = [left, top, right, bottom]
            height: height of the original image
            width: width of the original image

        Returns:
            bbox: [N x 4] output N transformed bboxes,
                  format = [cx, cy, log(h/H), log(w/W)], with the centers
                  normalized to [-1, 1]
    """
    # tf.split/tf.concat take (value, num, axis) / (values, axis) from
    # TensorFlow 1.0 onwards; older releases expected the axis first.
    x1, y1, x2, y2 = tf.split(bbox, 4, axis=1)

    w = x2 - x1
    h = y2 - y1
    x = x1 + w / 2
    y = y1 + h / 2

    # Normalize the box center to [-1, 1].
    x /= width / 2
    y /= height / 2
    x -= 1
    y -= 1
    # Log-scale the box size relative to the image size.
    w = tf.log(w / width)
    h = tf.log(h / height)

    bbox_out = tf.concat([x, y, h, w], axis=1)

    return bbox_out
Contributor: renmengye, Project: deep-tracker, Lines of code: 29, Source file: build_deep_tracker.py
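A small usage sketch for inverse_transform_box (hypothetical box and image size; assumes TensorFlow 1.x and the tf.split/tf.concat call order shown above):

import tensorflow as tf

# One hypothetical box [left, top, right, bottom] in a 200 x 100 (H x W) image.
boxes = tf.constant([[10.0, 20.0, 60.0, 120.0]])
encoded = inverse_transform_box(boxes, height=200, width=100)

with tf.Session() as sess:
    # Expected roughly [cx, cy, log(h/H), log(w/W)]:
    # cx = 35/50 - 1 = -0.3, cy = 70/100 - 1 = -0.3,
    # log(100/200) = log(50/100) = -0.693
    print(sess.run(encoded))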

Example 4: _create_loss_optimizer

    def _create_loss_optimizer(self):
        # The loss is composed of two terms:
        # 1.) The reconstruction loss (the negative log probability
        #     of the input under the reconstructed Bernoulli distribution
        #     induced by the decoder in the data space).
        #     This can be interpreted as the number of "nats" required
        #     to reconstruct the input when the latent activation is given.
        # Add 1e-10 to avoid evaluating log(0.0).
        reconstr_loss = \
            -tf.reduce_sum(self.x * tf.log(1e-10 + self.x_reconstr_mean)
                           + (1 - self.x) * tf.log(1e-10 + 1 - self.x_reconstr_mean),
                           1)
        # 2.) The latent loss, which is defined as the Kullback-Leibler divergence
        #     between the distribution in latent space induced by the encoder on
        #     the data and some prior. This acts as a kind of regularizer.
        #     This can be interpreted as the number of "nats" required
        #     to transmit the latent space distribution given the prior.
        latent_loss = -0.5 * tf.reduce_sum(1 + self.z_log_sigma_sq
                                           - tf.square(self.z_mean)
                                           - tf.exp(self.z_log_sigma_sq), 1)
        self.cost = tf.reduce_mean(reconstr_loss + latent_loss)  # average over batch
        # Use the Adam optimizer.
        self.optimizer = \
            tf.train.AdamOptimizer(learning_rate=self.learning_rate).minimize(self.cost)
Contributor: johnzon, Project: kboc, Lines of code: 26, Source file: anomaly.py
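The 1e-10 offset above is one way of keeping tf.log away from zero. A common alternative (a sketch, not part of the original project) is to clip the decoder output before taking the log:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])                 # input batch
x_reconstr_mean = tf.placeholder(tf.float32, [None, 784])   # decoder output in (0, 1)

# Clip so that tf.log never sees exactly 0 or 1.
p = tf.clip_by_value(x_reconstr_mean, 1e-10, 1.0 - 1e-10)
reconstr_loss = -tf.reduce_sum(x * tf.log(p) + (1 - x) * tf.log(1 - p), 1)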


Note: The tensorflow.log examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective authors; copyright of the source code belongs to the original authors, and any distribution or use should follow each project's license. Please do not repost without permission.