

Python v1.size Method Code Examples

This article collects typical usage examples of the tensorflow.compat.v1.size method in Python. If you are wondering how the Python v1.size method works, how to use it, or what it looks like in practice, the hand-picked code examples below may help. You can also explore further usage examples from the containing module, tensorflow.compat.v1.


The following presents 15 code examples of the v1.size method, sorted by popularity by default.

Example 1: softmax_cross_entropy_one_hot

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def softmax_cross_entropy_one_hot(logits, labels, weights_fn=None):
  """Calculate softmax cross entropy given one-hot labels and logits.

  Args:
    logits: Tensor of size [batch-size, o=1, p=1, num-classes]
    labels: Tensor of size [batch-size, o=1, p=1, num-classes]
    weights_fn: Function that takes in labels and weighs examples (unused)
  Returns:
    cross-entropy (scalar), weights
  """
  with tf.variable_scope("softmax_cross_entropy_one_hot",
                         values=[logits, labels]):
    del weights_fn
    cross_entropy = tf.losses.softmax_cross_entropy(
        onehot_labels=labels, logits=logits)
    return cross_entropy, tf.constant(1.0) 
Developer: tensorflow, Project: tensor2tensor, Lines: 18, Source: metrics.py
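
The sketch below (not part of tensor2tensor) shows one way to call the function above; it assumes `import tensorflow.compat.v1 as tf` and dummy tensors shaped [batch, 1, 1, num_classes] as described in the docstring.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Dummy logits and one-hot labels with shape [4, 1, 1, 3].
logits = tf.random.normal([4, 1, 1, 3])
labels = tf.one_hot([[[0]], [[2]], [[1]], [[0]]], depth=3)
loss, weights = softmax_cross_entropy_one_hot(logits, labels)

with tf.Session() as sess:
  print(sess.run([loss, weights]))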

Example 2: sigmoid_accuracy_one_hot

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def sigmoid_accuracy_one_hot(logits, labels, weights_fn=None):
  """Calculate accuracy for a set, given one-hot labels and logits.

  Args:
    logits: Tensor of size [batch-size, o=1, p=1, num-classes]
    labels: Tensor of size [batch-size, o=1, p=1, num-classes]
    weights_fn: Function that takes in labels and weighs examples (unused)
  Returns:
    accuracy (scalar), weights
  """
  with tf.variable_scope("sigmoid_accuracy_one_hot", values=[logits, labels]):
    del weights_fn
    predictions = tf.nn.sigmoid(logits)
    labels = tf.argmax(labels, -1)
    predictions = tf.argmax(predictions, -1)
    _, accuracy = tf.metrics.accuracy(labels=labels, predictions=predictions)
    return accuracy, tf.constant(1.0) 
Developer: tensorflow, Project: tensor2tensor, Lines: 19, Source: metrics.py
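
A hedged usage sketch (not from the original project): tf.metrics.accuracy creates local counter variables under the hood, so they need to be initialized before the metric is evaluated.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

logits = tf.constant([[[[2.0, -1.0, 0.5]]], [[[0.1, 3.0, -2.0]]]])  # [2, 1, 1, 3]
labels = tf.constant([[[[1.0, 0.0, 0.0]]], [[[0.0, 1.0, 0.0]]]])    # one-hot
accuracy, weights = sigmoid_accuracy_one_hot(logits, labels)

with tf.Session() as sess:
  sess.run(tf.local_variables_initializer())  # metric counters are local variables
  print(sess.run(accuracy))  # 1.0 for this toy batch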

Example 3: sigmoid_accuracy

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def sigmoid_accuracy(logits, labels, weights_fn=None):
  """Calculate accuracy for a set, given integer labels and logits.

  Args:
    logits: Tensor of size [batch-size, o=1, p=1, num-classes]
    labels: Tensor of size [batch-size, o=1, p=1]
    weights_fn: Function that takes in labels and weighs examples (unused)
  Returns:
    accuracy (scalar), weights
  """
  with tf.variable_scope("sigmoid_accuracy", values=[logits, labels]):
    del weights_fn
    predictions = tf.nn.sigmoid(logits)
    predictions = tf.argmax(predictions, -1)
    _, accuracy = tf.metrics.accuracy(labels=labels, predictions=predictions)
    return accuracy, tf.constant(1.0) 
Developer: tensorflow, Project: tensor2tensor, Lines: 18, Source: metrics.py

Example 4: sigmoid_precision_one_hot

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def sigmoid_precision_one_hot(logits, labels, weights_fn=None):
  """Calculate precision for a set, given one-hot labels and logits.

  Predictions are converted to one-hot,
  as predictions[example][arg-max(example)] = 1

  Args:
    logits: Tensor of size [batch-size, o=1, p=1, num-classes]
    labels: Tensor of size [batch-size, o=1, p=1, num-classes]
    weights_fn: Function that takes in labels and weighs examples (unused)
  Returns:
    precision (scalar), weights
  """
  with tf.variable_scope("sigmoid_precision_one_hot", values=[logits, labels]):
    del weights_fn
    num_classes = logits.shape[-1]
    predictions = tf.nn.sigmoid(logits)
    predictions = tf.argmax(predictions, -1)
    predictions = tf.one_hot(predictions, num_classes)
    _, precision = tf.metrics.precision(labels=labels, predictions=predictions)
    return precision, tf.constant(1.0) 
Developer: tensorflow, Project: tensor2tensor, Lines: 23, Source: metrics.py

Example 5: sigmoid_recall_one_hot

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def sigmoid_recall_one_hot(logits, labels, weights_fn=None):
  """Calculate recall for a set, given one-hot labels and logits.

  Predictions are converted to one-hot,
  as predictions[example][arg-max(example)] = 1

  Args:
    logits: Tensor of size [batch-size, o=1, p=1, num-classes]
    labels: Tensor of size [batch-size, o=1, p=1, num-classes]
    weights_fn: Function that takes in labels and weighs examples (unused)
  Returns:
    recall (scalar), weights
  """
  with tf.variable_scope("sigmoid_recall_one_hot", values=[logits, labels]):
    del weights_fn
    num_classes = logits.shape[-1]
    predictions = tf.nn.sigmoid(logits)
    predictions = tf.argmax(predictions, -1)
    predictions = tf.one_hot(predictions, num_classes)
    _, recall = tf.metrics.recall(labels=labels, predictions=predictions)
    return recall, tf.constant(1.0) 
Developer: tensorflow, Project: tensor2tensor, Lines: 23, Source: metrics.py

Example 6: roc_auc

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def roc_auc(logits, labels, weights_fn=None):
  """Calculate ROC AUC.

  Requires binary classes.

  Args:
    logits: Tensor of size [batch_size, 1, 1, num_classes]
    labels: Tensor of size [batch_size, 1, 1, num_classes]
    weights_fn: Function that takes in labels and weighs examples (unused)
  Returns:
    ROC AUC (scalar), weights
  """
  del weights_fn
  with tf.variable_scope("roc_auc", values=[logits, labels]):
    predictions = tf.argmax(logits, axis=-1)
    _, auc = tf.metrics.auc(labels, predictions, curve="ROC")
    return auc, tf.constant(1.0) 
Developer: tensorflow, Project: tensor2tensor, Lines: 19, Source: metrics.py

Example 7: _grad_sparsity

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def _grad_sparsity(self):
    """Gradient sparsity."""
    # If the sparse minibatch gradient has 10 percent of its entries
    # non-zero, its sparsity is 0.1.
    # The norm of the dense gradient averaged over the full dataset is
    # roughly estimated as the minibatch sparse gradient norm * sqrt(sparsity).
    # A possible extension is to only correct the sparse blob.
    non_zero_cnt = tf.add_n([tf.count_nonzero(g) for g in self._grad])
    all_entry_cnt = tf.add_n([tf.size(g) for g in self._grad])
    self._sparsity = tf.cast(non_zero_cnt, self._grad[0].dtype)
    self._sparsity /= tf.cast(all_entry_cnt, self._grad[0].dtype)
    avg_op = self._moving_averager.apply([self._sparsity,])
    with tf.control_dependencies([avg_op]):
      self._sparsity_avg = self._moving_averager.average(self._sparsity)
    return avg_op 
Developer: tensorflow, Project: tensor2tensor, Lines: 18, Source: yellowfin.py
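
For reference, a self-contained sketch (not from yellowfin.py) of the same sparsity computation on a plain list of tensors: tf.size counts all entries of each gradient, while tf.count_nonzero counts only the non-zero ones.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

grads = [tf.constant([[0.0, 1.5], [0.0, 0.0]]),
         tf.constant([2.0, 0.0, -3.0])]
non_zero_cnt = tf.add_n([tf.count_nonzero(g) for g in grads])  # 3
all_entry_cnt = tf.add_n([tf.size(g) for g in grads])          # 4 + 3 = 7
sparsity = tf.cast(non_zero_cnt, tf.float32) / tf.cast(all_entry_cnt, tf.float32)

with tf.Session() as sess:
  print(sess.run(sparsity))  # ~0.4286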

Example 8: categorical_sample

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def categorical_sample(logits, dtype=tf.int32,
                       sample_shape=(), seed=None):
  """Samples from categorical distribution."""
  logits = tf.convert_to_tensor(logits, name="logits")
  event_size = tf.shape(logits)[-1]
  batch_shape_tensor = tf.shape(logits)[:-1]
  def _sample_n(n):
    """Sample vector of categoricals."""
    if logits.shape.ndims == 2:
      logits_2d = logits
    else:
      logits_2d = tf.reshape(logits, [-1, event_size])
    sample_dtype = tf.int64 if logits.dtype.size > 4 else tf.int32
    draws = tf.multinomial(
        logits_2d, n, seed=seed, output_dtype=sample_dtype)
    draws = tf.reshape(
        tf.transpose(draws),
        tf.concat([[n], batch_shape_tensor], 0))
    return tf.cast(draws, dtype)
  return _call_sampler(_sample_n, sample_shape) 
Developer: magenta, Project: magenta, Lines: 22, Source: seq2seq.py
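
Note that `_call_sampler` is a helper defined elsewhere in seq2seq.py and is not shown here. As a rough stand-in, the sketch below draws samples directly with the same v1 op, tf.multinomial; the probabilities are made up.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

logits = tf.log([[0.7, 0.2, 0.1],
                 [0.1, 0.1, 0.8]])  # [batch=2, num_classes=3]
draws = tf.multinomial(logits, num_samples=1, seed=42)  # [2, 1] int64 class ids

with tf.Session() as sess:
  print(sess.run(draws))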

Example 9: _rnn_output_size

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def _rnn_output_size(self):
    """Compute output size of RNN."""
    size = self._cell.output_size
    if self._output_layer is None:
      return size
    else:
      # To use layer's compute_output_shape, we need to convert the
      # RNNCell's output_size entries into shapes with an unknown
      # batch size.  We then pass this through the layer's
      # compute_output_shape and read off all but the first (batch)
      # dimensions to get the output size of the rnn with the layer
      # applied to the top.
      output_shape_with_unknown_batch = tf.nest.map_structure(
          lambda s: tf.TensorShape([None]).concatenate(s), size)
      layer_output_shape = self._output_layer.compute_output_shape(
          output_shape_with_unknown_batch)
      return tf.nest.map_structure(lambda s: s[1:], layer_output_shape) 
Developer: magenta, Project: magenta, Lines: 19, Source: seq2seq.py

Example 10: trim_and_pad_dataset

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def trim_and_pad_dataset(dataset, length, feature_keys=None):
  """Trim and pad first dimension of features to size `length`.

  Args:
    dataset: tf.data.Dataset, the dataset to trim/pad examples in.
    length: int, or a dict from feature-key to int
    feature_keys: (optional) list of strings, the feature names to limit
      trimming/padding to. Defaults to all features.
  Returns:
    Trimmed/padded tf.data.Dataset.
  """
  def _trim_and_pad(k, t):
    """Trim/pad to the first axis of `t` to be of size `length`."""
    if feature_keys and k not in feature_keys:
      return t
    length_k = length if isinstance(length, int) else length[k]
    t = t[:length_k]
    pad_amt = length_k - tf.shape(t)[0]
    padded_t = tf.pad(t, [(0, pad_amt)] + [(0, 0)] * (len(t.shape) - 1))
    padded_t.set_shape([length_k] + t.shape.as_list()[1:])
    return padded_t

  return dataset.map(
      lambda x: {k: _trim_and_pad(k, t) for k, t in x.items()},
      num_parallel_calls=tf.data.experimental.AUTOTUNE) 
Developer: tensorflow, Project: mesh, Lines: 27, Source: dataset.py
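
A minimal usage sketch (the feature name "inputs" and the toy values are made up): two variable-length examples are padded or trimmed to a fixed length of 4.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

def gen():
  yield {"inputs": [1, 2]}
  yield {"inputs": [3, 4, 5, 6, 7]}

ds = tf.data.Dataset.from_generator(
    gen,
    output_types={"inputs": tf.int32},
    output_shapes={"inputs": tf.TensorShape([None])})
ds = trim_and_pad_dataset(ds, length=4)

nxt = tf.data.make_one_shot_iterator(ds).get_next()
with tf.Session() as sess:
  print(sess.run(nxt))  # {'inputs': array([1, 2, 0, 0])}
  print(sess.run(nxt))  # {'inputs': array([3, 4, 5, 6])}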

Example 11: select_slate_topk

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def select_slate_topk(slate_size, s_no_click, s, q):
  """Selects the slate using the top-K algorithm.

  This algorithm corresponds to the method "TS" in
  Ie et al. https://arxiv.org/abs/1905.12767.

  Args:
    slate_size: int, the size of the recommendation slate.
    s_no_click: float tensor, the score for not clicking any document.
    s: [num_of_documents] tensor, the scores for clicking documents.
    q: [num_of_documents] tensor, the predicted q values for documents.

  Returns:
    [slate_size] tensor, the selected slate.
  """
  del s_no_click  # Unused argument.
  _, output_slate = tf.math.top_k(s * q, k=slate_size)
  return output_slate 
Developer: google-research, Project: recsim, Lines: 20, Source: slate_decomp_q_agent.py
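
A toy usage sketch (the scores and q-values are invented): the two documents with the largest s * q product are selected.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

s = tf.constant([0.1, 0.4, 0.3, 0.2])  # click scores per document
q = tf.constant([1.0, 0.5, 2.0, 1.5])  # predicted q-values per document
slate = select_slate_topk(slate_size=2, s_no_click=tf.constant(0.0), s=s, q=q)

with tf.Session() as sess:
  print(sess.run(slate))  # [2 3], since s * q = [0.1, 0.2, 0.6, 0.3]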

Example 12: permute_noise_tokens

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def permute_noise_tokens(tokens, noise_mask, unused_vocabulary):
  """Permute the noise tokens, keeping the non-noise tokens where they are.

  Args:
    tokens: a 1d integer Tensor
    noise_mask: a boolean Tensor with the same shape as tokens
    unused_vocabulary: a vocabulary.Vocabulary
  Returns:
    a Tensor with the same shape and dtype as tokens
  """
  masked_only = tf.boolean_mask(tokens, noise_mask)
  permuted = tf.random.shuffle(masked_only)
  # pad to avoid errors when it has size 0
  permuted = tf.pad(permuted, [[0, 1]])
  indices = tf.cumsum(tf.cast(noise_mask, tf.int32), exclusive=True)
  return tf.where_v2(noise_mask,
                     tf.gather(permuted, indices),
                     tokens) 
Developer: google-research, Project: text-to-text-transfer-transformer, Lines: 20, Source: preprocessors.py
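
A small sketch (the token values are arbitrary): only positions where noise_mask is True are shuffled among themselves, while positions 1 and 3 keep their original tokens.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

tokens = tf.constant([10, 11, 12, 13, 14])
noise_mask = tf.constant([True, False, True, False, True])
permuted = permute_noise_tokens(tokens, noise_mask, unused_vocabulary=None)

with tf.Session() as sess:
  print(sess.run(permuted))  # e.g. [14 11 10 13 12]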

Example 13: get_anchors

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def get_anchors(self, image_shape):
    """Returns anchor pyramid for the given image size."""
    backbone_shapes = compute_backbone_shapes(self.config, image_shape)
    # Cache anchors and reuse if image shape is the same
    if not hasattr(self, "_anchor_cache"):
        self._anchor_cache = {}
    if tuple(image_shape) not in self._anchor_cache:
        # Generate Anchors
        a = utils.generate_pyramid_anchors(
            self.config.RPN_ANCHOR_SCALES,
            self.config.RPN_ANCHOR_RATIOS,
            backbone_shapes,
            self.config.BACKBONE_STRIDES,
            self.config.RPN_ANCHOR_STRIDE)
        # Keep a copy of the latest anchors in pixel coordinates because
        # it's used in inspect_model notebooks.
        # TODO: Remove this after the notebooks are refactored to not use it
        self.anchors = a
        # Normalize coordinates
        self._anchor_cache[tuple(image_shape)] = utils.norm_boxes(
            a, image_shape[:2])
    return self._anchor_cache[tuple(image_shape)] 
Developer: OCR-D, Project: ocrd_anybaseocr, Lines: 23, Source: model.py

Example 14: expanded_shape

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def expanded_shape(orig_shape, start_dim, num_dims):
  """Inserts multiple ones into a shape vector.

  Inserts an all-1 vector of length num_dims at position start_dim into a shape.
  Can be combined with tf.reshape to generalize tf.expand_dims.

  Args:
    orig_shape: the shape into which the all-1 vector is added (int32 vector)
    start_dim: insertion position (int scalar)
    num_dims: length of the inserted all-1 vector (int scalar)
  Returns:
    An int32 vector of length tf.size(orig_shape) + num_dims.
  """
  with tf.name_scope('ExpandedShape'):
    start_dim = tf.expand_dims(start_dim, 0)  # scalar to rank-1
    before = tf.slice(orig_shape, [0], start_dim)
    add_shape = tf.ones(tf.reshape(num_dims, [1]), dtype=tf.int32)
    after = tf.slice(orig_shape, start_dim, [-1])
    new_shape = tf.concat([before, add_shape, after], 0)
    return new_shape 
Developer: tensorflow, Project: models, Lines: 22, Source: ops.py
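
A quick sketch of the behavior: inserting two 1s at position 1 of the shape [2, 3] gives [2, 1, 1, 3], which can then be passed to tf.reshape to generalize tf.expand_dims.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

orig_shape = tf.constant([2, 3], dtype=tf.int32)
new_shape = expanded_shape(orig_shape, start_dim=1, num_dims=2)

with tf.Session() as sess:
  print(sess.run(new_shape))  # [2 1 1 3]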

Example 15: fixed_padding

# Required import: from tensorflow.compat import v1 [as alias]
# Alternatively: from tensorflow.compat.v1 import size [as alias]
def fixed_padding(inputs, kernel_size, rate=1):
  """Pads the input along the spatial dimensions independently of input size.

  Args:
    inputs: A tensor of size [batch, height_in, width_in, channels].
    kernel_size: The kernel to be used in the conv2d or max_pool2d operation.
                 Should be a positive integer.
    rate: An integer, rate for atrous convolution.

  Returns:
    output: A tensor of size [batch, height_out, width_out, channels] with the
      input, either intact (if kernel_size == 1) or padded (if kernel_size > 1).
  """
  kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)
  pad_total = kernel_size_effective - 1
  pad_beg = pad_total // 2
  pad_end = pad_total - pad_beg
  padded_inputs = tf.pad(inputs, [[0, 0], [pad_beg, pad_end],
                                  [pad_beg, pad_end], [0, 0]])
  return padded_inputs 
Developer: tensorflow, Project: models, Lines: 22, Source: ops.py
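
A minimal sketch with a dummy NHWC tensor: a 3x3 kernel needs one pixel of padding on each spatial side, so a 5x5 input becomes 7x7.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

inputs = tf.zeros([1, 5, 5, 3])                # [batch, height, width, channels]
padded = fixed_padding(inputs, kernel_size=3)  # rate defaults to 1

with tf.Session() as sess:
  print(sess.run(tf.shape(padded)))  # [1 7 7 3]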


Note: The tensorflow.compat.v1.size method examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code belongs to the original authors. For distribution and use, please refer to the corresponding project's license. Do not reproduce without permission.