

Python layers.classification_loss Method Code Examples

This article collects typical usage examples of the layers.classification_loss method in Python. If you are looking for how to call layers.classification_loss, or for real-world examples of it in use, the hand-picked code examples below may help. You can also explore other usage examples from the layers module that provides this method.


Four code examples of the layers.classification_loss method are shown below, ordered by popularity.
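
All four examples call layers_lib.classification_loss(logits, labels, weights) without showing its definition. As a point of reference, here is a minimal, hypothetical sketch of such a function, assuming it computes a weighted sparse softmax cross-entropy over per-timestep logits; the names and shapes are assumptions for illustration, not the repository's actual implementation:

import tensorflow as tf

def classification_loss(logits, labels, weights):
  """Hypothetical sketch of a weighted classification loss.

  Args:
    logits: float Tensor [batch_size, num_timesteps, num_classes].
    labels: int Tensor [batch_size, num_timesteps].
    weights: float Tensor [batch_size, num_timesteps]; per-timestep
      weights (e.g. 0.0 for padding positions).

  Returns:
    Scalar float: weighted average cross-entropy.
  """
  # Per-position cross-entropy, then a weighted mean over all positions.
  ce = tf.nn.sparse_softmax_cross_entropy_with_logits(
      labels=labels, logits=logits)
  weights = tf.cast(weights, ce.dtype)
  return tf.reduce_sum(ce * weights) / tf.maximum(
      tf.reduce_sum(weights), 1e-12)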

Example 1: classifier_graph

# Required import: import layers [as alias]
# Or: from layers import classification_loss [as alias]
def classifier_graph(self):
    """Constructs classifier graph from inputs to classifier loss.

    * Caches the VatxtInput object in `self.cl_inputs`
    * Caches tensors: `cl_embedded`, `cl_logits`, `cl_loss`

    Returns:
      loss: scalar float.
    """
    inputs = _inputs('train', pretrain=False)
    self.cl_inputs = inputs
    embedded = self.layers['embedding'](inputs.tokens)
    self.tensors['cl_embedded'] = embedded

    _, next_state, logits, loss = self.cl_loss_from_embedding(
        embedded, return_intermediates=True)
    tf.summary.scalar('classification_loss', loss)
    self.tensors['cl_logits'] = logits
    self.tensors['cl_loss'] = loss

    acc = layers_lib.accuracy(logits, inputs.labels, inputs.weights)
    tf.summary.scalar('accuracy', acc)

    adv_loss = (self.adversarial_loss() * tf.constant(
        FLAGS.adv_reg_coeff, name='adv_reg_coeff'))
    tf.summary.scalar('adversarial_loss', adv_loss)

    total_loss = loss + adv_loss
    tf.summary.scalar('total_classification_loss', total_loss)

    with tf.control_dependencies([inputs.save_state(next_state)]):
      total_loss = tf.identity(total_loss)

    return total_loss 
Developer: ringringyi, Project: DOTA_models, Lines: 36, Source: graphs.py

Example 2: cl_loss_from_embedding

# Required import: import layers [as alias]
# Or: from layers import classification_loss [as alias]
def cl_loss_from_embedding(self,
                             embedded,
                             inputs=None,
                             return_intermediates=False):
    """Compute classification loss from embedding.

    Args:
      embedded: 3-D float Tensor [batch_size, num_timesteps, embedding_dim]
      inputs: VatxtInput, defaults to self.cl_inputs.
      return_intermediates: bool, whether to return intermediate tensors or only
        the final loss.

    Returns:
      If return_intermediates is True:
        lstm_out, next_state, logits, loss
      Else:
        loss
    """
    if inputs is None:
      inputs = self.cl_inputs

    lstm_out, next_state = self.layers['lstm'](embedded, inputs.state,
                                               inputs.length)
    logits = self.layers['cl_logits'](lstm_out)
    loss = layers_lib.classification_loss(logits, inputs.labels, inputs.weights)

    if return_intermediates:
      return lstm_out, next_state, logits, loss
    else:
      return loss 
Developer: ringringyi, Project: DOTA_models, Lines: 32, Source: graphs.py
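
For reference, a hypothetical call site for this method; `model` and `embedded` are placeholder names for illustration, assuming `model` is an instance of the class above and `embedded` is a [batch_size, num_timesteps, embedding_dim] Tensor:

# Loss only (the default), e.g. when only the scalar is needed.
loss = model.cl_loss_from_embedding(embedded)

# Loss plus intermediates, as in classifier_graph above.
lstm_out, next_state, logits, loss = model.cl_loss_from_embedding(
    embedded, return_intermediates=True)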

Example 3: classifier_graph

# Required import: import layers [as alias]
# Or: from layers import classification_loss [as alias]
def classifier_graph(self):
    """Constructs classifier graph from inputs to classifier loss.

    * Caches the VatxtInput object in `self.cl_inputs`
    * Caches tensors: `cl_embedded`, `cl_logits`, `cl_loss`

    Returns:
      loss: scalar float.
    """
    inputs = _inputs('train', pretrain=False)
    self.cl_inputs = inputs
    embedded = self.layers['embedding'](inputs.tokens)
    self.tensors['cl_embedded'] = embedded

    _, next_state, logits, loss = self.cl_loss_from_embedding(
        embedded, return_intermediates=True)
    tf.summary.scalar('classification_loss', loss)
    self.tensors['cl_logits'] = logits
    self.tensors['cl_loss'] = loss

    if FLAGS.single_label:
      indices = tf.stack([tf.range(FLAGS.batch_size), inputs.length - 1], 1)
      labels = tf.expand_dims(tf.gather_nd(inputs.labels, indices), 1)
      weights = tf.expand_dims(tf.gather_nd(inputs.weights, indices), 1)
    else:
      labels = inputs.labels
      weights = inputs.weights
    acc = layers_lib.accuracy(logits, labels, weights)
    tf.summary.scalar('accuracy', acc)

    adv_loss = (self.adversarial_loss() * tf.constant(
        FLAGS.adv_reg_coeff, name='adv_reg_coeff'))
    tf.summary.scalar('adversarial_loss', adv_loss)

    total_loss = loss + adv_loss

    with tf.control_dependencies([inputs.save_state(next_state)]):
      total_loss = tf.identity(total_loss)
      tf.summary.scalar('total_classification_loss', total_loss)
    return total_loss 
Developer: itsamitgoel, Project: Gun-Detector, Lines: 42, Source: graphs.py

Example 4: cl_loss_from_embedding

# Required import: import layers [as alias]
# Or: from layers import classification_loss [as alias]
def cl_loss_from_embedding(self,
                             embedded,
                             inputs=None,
                             return_intermediates=False):
    """Compute classification loss from embedding.

    Args:
      embedded: 3-D float Tensor [batch_size, num_timesteps, embedding_dim]
      inputs: VatxtInput, defaults to self.cl_inputs.
      return_intermediates: bool, whether to return intermediate tensors or only
        the final loss.

    Returns:
      If return_intermediates is True:
        lstm_out, next_state, logits, loss
      Else:
        loss
    """
    if inputs is None:
      inputs = self.cl_inputs

    lstm_out, next_state = self.layers['lstm'](embedded, inputs.state,
                                               inputs.length)
    if FLAGS.single_label:
      indices = tf.stack([tf.range(FLAGS.batch_size), inputs.length - 1], 1)
      lstm_out = tf.expand_dims(tf.gather_nd(lstm_out, indices), 1)
      labels = tf.expand_dims(tf.gather_nd(inputs.labels, indices), 1)
      weights = tf.expand_dims(tf.gather_nd(inputs.weights, indices), 1)
    else:
      labels = inputs.labels
      weights = inputs.weights
    logits = self.layers['cl_logits'](lstm_out)
    loss = layers_lib.classification_loss(logits, labels, weights)

    if return_intermediates:
      return lstm_out, next_state, logits, loss
    else:
      return loss 
Developer: itsamitgoel, Project: Gun-Detector, Lines: 40, Source: graphs.py
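
Examples 3 and 4 differ from examples 1 and 2 only in the FLAGS.single_label branch, which scores each sequence by its final valid timestep rather than by every timestep. Below is a small self-contained sketch of that indexing pattern; the tensor values are made up for illustration:

import tensorflow as tf

# Toy batch: 2 sequences, 3 timesteps, hidden size 2.
lstm_out = tf.constant([[[1., 1.], [2., 2.], [3., 3.]],
                        [[4., 4.], [5., 5.], [6., 6.]]])
length = tf.constant([3, 2])  # valid length of each sequence

# (row, column) index of the last valid timestep per sequence: [[0, 2], [1, 1]].
indices = tf.stack([tf.range(2), length - 1], 1)

# Gather that timestep and keep a singleton time dimension, matching the
# shape expected by the logits layer and the loss.
last_out = tf.expand_dims(tf.gather_nd(lstm_out, indices), 1)
# last_out shape: [2, 1, 2]; values [[[3., 3.]], [[5., 5.]]]

with tf.Session() as sess:
  print(sess.run(last_out))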


Note: The layers.classification_loss examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by their respective developers, and copyright in the source code remains with the original authors. Please consult the corresponding project's license before distributing or using the code; do not republish without permission.