

Python tensor_shape.vector Method Code Examples

This article collects typical usage examples of the Python method tensorflow.python.framework.tensor_shape.vector. If you are wondering what tensor_shape.vector does, how to call it, or where it is used in practice, the curated code examples below should help. You can also explore further usage examples from the containing module, tensorflow.python.framework.tensor_shape.


The following presents 15 code examples of the tensor_shape.vector method, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.

Example 1: insert_many

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def insert_many(self, component_index, keys, values, name=None):
    """For each key, assigns the respective value to the specified component.

    This operation updates each element at component_index.

    Args:
      component_index: The component of the value that is being assigned.
      keys: A vector of keys, with length n.
      values: An any-dimensional tensor of values, which are associated with the
        respective keys. The first dimension must have length n.
      name: Optional name for the op.

    Returns:
      The operation that performs the insertion.
    Raises:
      InvalidArgumentsError: If inserting keys and values without elements.
    """
    if name is None:
      name = "%s_BarrierInsertMany" % self._name
    return gen_data_flow_ops._barrier_insert_many(
        self._barrier_ref, keys, values, component_index, name=name) 
Author: ryfeus | Project: lambda-packs | Lines: 23 | Source: data_flow_ops.py
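
A rough usage sketch (not from the original project): a Barrier ties several components to string keys, and insert_many fills one component for many keys at once. The constructor arguments below are assumptions based on the TF 1.x internal data_flow_ops API.

import tensorflow as tf
from tensorflow.python.ops import data_flow_ops

# Two components per key: a float32 scalar and an int32 scalar (assumed shapes).
b = data_flow_ops.Barrier(types=[tf.float32, tf.int32], shapes=[(), ()])
# Fill component 0 for keys "a" and "b"; the keys stay incomplete until
# component 1 is also inserted.
insert_op = b.insert_many(0, keys=["a", "b"], values=[1.0, 2.0])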

Example 2: top_k

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def top_k(input, k=1, sorted=True, name=None):
  """Finds values and indices of the `k` largest entries for the last dimension.

  If the input is a vector (rank-1), finds the `k` largest entries in the vector
  and outputs their values and indices as vectors.  Thus `values[j]` is the
  `j`-th largest entry in `input`, and its index is `indices[j]`.

  For matrices (resp. higher rank input), computes the top `k` entries in each
  row (resp. vector along the last dimension).  Thus,

      values.shape = indices.shape = input.shape[:-1] + [k]

  If two elements are equal, the lower-index element appears first.

  Args:
    input: 1-D or higher `Tensor` with last dimension at least `k`.
    k: 0-D `int32` `Tensor`.  Number of top elements to look for along the last
      dimension (along each row for matrices).
    sorted: If true the resulting `k` elements will be sorted by the values in
      descending order.
    name: Optional name for the operation.

  Returns:
    values: The `k` largest elements along each last dimensional slice.
    indices: The indices of `values` within the last dimension of `input`.
  """
  return gen_nn_ops._top_kv2(input, k=k, sorted=sorted, name=name) 
Author: ryfeus | Project: lambda-packs | Lines: 29 | Source: nn_ops.py
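
For comparison, the public wrapper tf.nn.top_k calls the same op; a minimal TF 1.x graph-mode example:

import tensorflow as tf

values, indices = tf.nn.top_k([1., 3., 2., 5.], k=2)
with tf.Session() as sess:
    # values -> [5., 3.], indices -> [3, 1]
    print(sess.run([values, indices]))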

Example 3: padded_batch

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def padded_batch(self, batch_size, padded_shapes, padding_values=None):
    """Combines consecutive elements of this dataset into padded batches.

    Like `Dataset.dense_to_sparse_batch()`, this method combines
    multiple consecutive elements of this dataset, which might have
    different shapes, into a single element. The tensors in the
    resulting element have an additional outer dimension, and are
    padded to the respective shape in `padded_shapes`.

    Args:
      batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
        consecutive elements of this dataset to combine in a single batch.
      padded_shapes: A nested structure of `tf.TensorShape` or
        `tf.int64` vector tensor-like objects representing the shape
        to which the respective component of each input element should
        be padded prior to batching. Any unknown dimensions
        (e.g. `tf.Dimension(None)` in a `tf.TensorShape` or `-1` in a
        tensor-like object) will be padded to the maximum size of that
        dimension in each batch.
      padding_values: (Optional.) A nested structure of scalar-shaped
        `tf.Tensor`, representing the padding values to use for the
        respective components.  Defaults are `0` for numeric types and
        the empty string for string types.

    Returns:
      A `Dataset`.
    """
    return PaddedBatchDataset(self, batch_size, padded_shapes, padding_values) 
Author: ryfeus | Project: lambda-packs | Lines: 30 | Source: dataset_ops.py
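
A minimal sketch of padded_batch on variable-length rows, using the TF 1.x tf.data API (the dataset contents here are made up for illustration):

import tensorflow as tf

ds = tf.data.Dataset.range(1, 4)                           # 1, 2, 3
ds = ds.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))  # [1], [2, 2], [3, 3, 3]
ds = ds.padded_batch(batch_size=2, padded_shapes=[None])
next_batch = ds.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    print(sess.run(next_batch))  # [[1 0] [2 2]]
    print(sess.run(next_batch))  # [[3 3 3]]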

Example 4: dense_to_sparse_batch

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def dense_to_sparse_batch(self, batch_size, row_shape):
    """Batches ragged elements of this dataset into `tf.SparseTensor`s.

    Like `Dataset.padded_batch()`, this method combines multiple
    consecutive elements of this dataset, which might have different
    shapes, into a single element. The resulting element has three
    components (`indices`, `values`, and `dense_shape`), which
    comprise a `tf.SparseTensor` that represents the same data. The
    `row_shape` represents the dense shape of each row in the
    resulting `tf.SparseTensor`, to which the effective batch size is
    prepended. For example:

    ```python
    # NOTE: The following examples use `{ ... }` to represent the
    # contents of a dataset.
    a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] }

    a.dense_to_sparse_batch(batch_size=2, row_shape=[6]) == {
        ([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1]],  # indices
         ['a', 'b', 'c', 'a', 'b'],                 # values
         [2, 6]),                                   # dense_shape
        ([[2, 0], [2, 1], [2, 2], [2, 3]],
         ['a', 'b', 'c', 'd'],
         [1, 6])
    }
    ```

    Args:
      batch_size: A `tf.int64` scalar `tf.Tensor`, representing the
        number of consecutive elements of this dataset to combine in a
        single batch.
      row_shape: A `tf.TensorShape` or `tf.int64` vector tensor-like
        object representing the equivalent dense shape of a row in the
        resulting `tf.SparseTensor`. Each element of this dataset must
        have the same rank as `row_shape`, and must have size less
        than or equal to `row_shape` in each dimension.

    Returns:
      A `Dataset`.
    """
    return DenseToSparseBatchDataset(self, batch_size, row_shape) 
Author: ryfeus | Project: lambda-packs | Lines: 43 | Source: dataset_ops.py
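
The method above comes from an early tf.data API; in later releases the same transformation is exposed as tf.data.experimental.dense_to_sparse_batch and applied with Dataset.apply(). A sketch under that assumption:

import tensorflow as tf

ds = tf.data.Dataset.range(1, 4)
ds = ds.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))  # [1], [2, 2], [3, 3, 3]
ds = ds.apply(tf.data.experimental.dense_to_sparse_batch(batch_size=2, row_shape=[6]))
next_batch = ds.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    # First batch (approximately): indices [[0, 0], [1, 0], [1, 1]],
    # values [1, 2, 2], dense_shape [2, 6]
    print(sess.run(next_batch))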

Example 5: output_shapes

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def output_shapes(self):
    input_shapes = self._input_dataset.output_shapes
    return nest.pack_sequence_as(input_shapes, [
        tensor_shape.vector(None).concatenate(s)
        for s in nest.flatten(self._input_dataset.output_shapes)
    ]) 
Author: ryfeus | Project: lambda-packs | Lines: 8 | Source: dataset_ops.py
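
The key idiom here is tensor_shape.vector(None).concatenate(s), which prepends one unknown batch dimension to each per-component shape:

from tensorflow.python.framework import tensor_shape

s = tensor_shape.TensorShape([3, 4])
print(tensor_shape.vector(None).concatenate(s))  # (?, 3, 4)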

Example 6: _from_tensor_list

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def _from_tensor_list(self, flat_value):
    if (len(flat_value) != 1 or flat_value[0].dtype != dtypes.variant or
        not flat_value[0].shape.is_compatible_with(tensor_shape.vector(3))):
      raise ValueError("SparseTensorStructure corresponds to a single "
                       "tf.variant vector of length 3.")
    return self._from_compatible_tensor_list(flat_value) 
Author: yyht | Project: BERT | Lines: 8 | Source: strcuture.py
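
The check boils down to shape compatibility with a rank-1 shape of length 3:

from tensorflow.python.framework import tensor_shape

print(tensor_shape.TensorShape([3]).is_compatible_with(tensor_shape.vector(3)))     # True
print(tensor_shape.TensorShape([None]).is_compatible_with(tensor_shape.vector(3)))  # True
print(tensor_shape.TensorShape([2]).is_compatible_with(tensor_shape.vector(3)))     # False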

Example 7: _AccumulateNInitializedWithMerge

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def _AccumulateNInitializedWithMerge(self, inputs):
    return self._AccumulateNTemplate(
        inputs,
        init=tf.zeros_like(gen_control_flow_ops._merge(inputs)[0]),
        shape=tensor_shape.vector(0),
        validate_shape=False) 
Author: tobegit3hub | Project: deep_image_model | Lines: 8 | Source: accumulate_n_benchmark.py
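
Here tensor_shape.vector(0) is simply an empty rank-1 shape, paired with validate_shape=False so the declared shape does not constrain the accumulated value. A quick check:

from tensorflow.python.framework import tensor_shape

empty = tensor_shape.vector(0)
print(empty)        # (0,)
print(empty.ndims)  # 1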

Example 8: testStr

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def testStr(self):
    self.assertEqual("<unknown>", str(tensor_shape.unknown_shape()))
    self.assertEqual("(?,)", str(tensor_shape.unknown_shape(ndims=1)))
    self.assertEqual("(?, ?)", str(tensor_shape.unknown_shape(ndims=2)))
    self.assertEqual("(?, ?, ?)", str(tensor_shape.unknown_shape(ndims=3)))

    self.assertEqual("()", str(tensor_shape.scalar()))
    self.assertEqual("(7,)", str(tensor_shape.vector(7)))
    self.assertEqual("(3, 8)", str(tensor_shape.matrix(3, 8)))
    self.assertEqual("(4, 5, 2)", str(tensor_shape.TensorShape([4, 5, 2])))

    self.assertEqual("(32, ?, 1, 9)",
                     str(tensor_shape.TensorShape([32, None, 1, 9]))) 
Author: tobegit3hub | Project: deep_image_model | Lines: 15 | Source: tensor_shape_test.py
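
Outside the test harness, the same string forms can be checked interactively:

from tensorflow.python.framework import tensor_shape

print(str(tensor_shape.scalar()))      # ()
print(str(tensor_shape.vector(7)))     # (7,)
print(str(tensor_shape.matrix(3, 8)))  # (3, 8)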

Example 9: testBroadcast_one_dimension

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def testBroadcast_one_dimension(self):
    s1 = tensor_shape.vector(5)
    s2 = tensor_shape.vector(7)

    unknown = tensor_shape.unknown_shape()
    scalar = tensor_shape.scalar()
    expanded_scalar = tensor_shape.TensorShape([1])

    # Tensors with same shape should have the same broadcast result.
    self.assertEqual(s1, common_shapes.broadcast_shape(s1, s1))
    self.assertEqual(s2, common_shapes.broadcast_shape(s2, s2))
    self.assertEqual(unknown, common_shapes.broadcast_shape(unknown, unknown))
    self.assertEqual(scalar, common_shapes.broadcast_shape(scalar, scalar))
    self.assertEqual(expanded_scalar, common_shapes.broadcast_shape(
        expanded_scalar, expanded_scalar))

    # [] acts like an identity.
    self.assertEqual(s1, common_shapes.broadcast_shape(s1, scalar))
    self.assertEqual(s2, common_shapes.broadcast_shape(s2, scalar))

    self.assertEqual(s1, common_shapes.broadcast_shape(s1, expanded_scalar))
    self.assertEqual(s2, common_shapes.broadcast_shape(s2, expanded_scalar))

    self.assertEqual(unknown, common_shapes.broadcast_shape(s1, unknown))
    self.assertEqual(unknown, common_shapes.broadcast_shape(s2, unknown))

    self.assertEqual(expanded_scalar, common_shapes.broadcast_shape(
        scalar, expanded_scalar))

    with self.assertRaises(ValueError):
      common_shapes.broadcast_shape(s1, s2)
      common_shapes.broadcast_shape(s2, s1) 
Author: tobegit3hub | Project: deep_image_model | Lines: 34 | Source: common_shapes_test.py
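
Stripped of the test scaffolding, the broadcasting rules exercised above look like this:

from tensorflow.python.framework import common_shapes, tensor_shape

s = tensor_shape.vector(5)
print(common_shapes.broadcast_shape(s, tensor_shape.scalar()))          # (5,)
print(common_shapes.broadcast_shape(s, tensor_shape.TensorShape([1])))  # (5,)
# Mismatched lengths raise ValueError:
# common_shapes.broadcast_shape(s, tensor_shape.vector(7))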

Example 10: _WarpCtcShape

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def _WarpCtcShape(op):

    inputs_shape = op.inputs[0].get_shape().with_rank(3)
    batch_size = inputs_shape[1]
    # loss, gradient
    return [tensor_shape.vector(batch_size), inputs_shape] 
Author: mindorii | Project: kws | Lines: 8 | Source: warp_ctc_ops.py
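
Concretely, for the rank-3 input layout this shape function expects (a hypothetical [max_time, batch_size, num_classes] example), the op reports one loss per batch element plus a gradient shaped like the inputs:

from tensorflow.python.framework import tensor_shape

inputs_shape = tensor_shape.TensorShape([50, 16, 29])  # made-up dimensions
loss_shape = tensor_shape.vector(inputs_shape[1])
print(loss_shape)    # (16,) -- one CTC loss value per batch element
print(inputs_shape)  # (50, 16, 29) -- gradient has the same shape as the inputs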

Example 11: in_top_k

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def in_top_k(predictions, targets, k, name=None):
  r"""Says whether the targets are in the top `K` predictions.

  This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the
  prediction for the target class is among the top `k` predictions among
  all predictions for example `i`. Note that the behavior of `InTopK` differs
  from the `TopK` op in its handling of ties; if multiple classes have the
  same prediction value and straddle the top-`k` boundary, all of those
  classes are considered to be in the top `k`.

  More formally, let

    \\(predictions_i\\) be the predictions for all classes for example `i`,
    \\(targets_i\\) be the target class for example `i`,
    \\(out_i\\) be the output for example `i`,

  $$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$

  Args:
    predictions: A `Tensor` of type `float32`.
      A `batch_size` x `classes` tensor.
    targets: A `Tensor`. Must be one of the following types: `int32`, `int64`.
      A `batch_size` vector of class ids.
    k: An `int`. Number of top elements to look at for computing precision.
    name: A name for the operation (optional).

  Returns:
    A `Tensor` of type `bool`. Computed Precision at `k` as a `bool Tensor`.
  """
  with ops.name_scope(name, 'in_top_k'):
    return gen_nn_ops._in_top_kv2(predictions, targets, k, name=name) 
Author: PacktPublishing | Project: Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda | Lines: 33 | Source: nn_ops.py
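
The public wrapper tf.nn.in_top_k behaves the same way; a minimal TF 1.x example:

import tensorflow as tf

predictions = [[0.1, 0.8, 0.1],
               [0.3, 0.2, 0.5]]
targets = [1, 0]
correct = tf.nn.in_top_k(predictions, targets, k=1)
with tf.Session() as sess:
    print(sess.run(correct))  # [ True False]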

Example 12: _variable_shape

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def _variable_shape(self):
    if not hasattr(self, '_shape'):
      self._shape = tensor_shape.vector(self.dimension)
    return self._shape 
Author: PacktPublishing | Project: Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda | Lines: 6 | Source: feature_column.py
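
The pattern is just lazy caching of a rank-1 shape; a standalone sketch (the class and dimension value are made up for illustration):

from tensorflow.python.framework import tensor_shape

class FakeColumn(object):
    dimension = 10  # hypothetical column width

    @property
    def _variable_shape(self):
        if not hasattr(self, '_shape'):
            self._shape = tensor_shape.vector(self.dimension)
        return self._shape

print(FakeColumn()._variable_shape)  # (10,)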

Example 13: __init__

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def __init__(self, capacity, types, shapes=None, names=None, shared_name=None,
               name="priority_queue"):
    """Creates a queue that dequeues elements in a first-in first-out order.

    A `PriorityQueue` has bounded capacity; supports multiple concurrent
    producers and consumers; and provides exactly-once delivery.

    A `PriorityQueue` holds a list of up to `capacity` elements. Each
    element is a fixed-length tuple of tensors whose dtypes are
    described by `types`, and whose shapes are optionally described
    by the `shapes` argument.

    If the `shapes` argument is specified, each component of a queue
    element must have the respective fixed shape. If it is
    unspecified, different queue elements may have different shapes,
    but the use of `dequeue_many` is disallowed.

    Enqueues and Dequeues to the `PriorityQueue` must include an additional
    tuple entry at the beginning: the `priority`.  The priority must be
    an int64 scalar (for `enqueue`) or an int64 vector (for `enqueue_many`).

    Args:
      capacity: An integer. The upper bound on the number of elements
        that may be stored in this queue.
      types:  A list of `DType` objects. The length of `types` must equal
        the number of tensors in each queue element, except the first priority
        element.  The first tensor in each element is the priority,
        which must be type int64.
      shapes: (Optional.) A list of fully-defined `TensorShape` objects,
        with the same length as `types`, or `None`.
      names: (Optional.) A list of strings naming the components in the queue
        with the same length as `dtypes`, or `None`.  If specified, the dequeue
        methods return a dictionary with the names as keys.
      shared_name: (Optional.) If non-empty, this queue will be shared under
        the given name across multiple sessions.
      name: Optional name for the queue operation.
    """
    types = _as_type_list(types)
    shapes = _as_shape_list(shapes, types)

    queue_ref = gen_data_flow_ops._priority_queue_v2(
        component_types=types, shapes=shapes, capacity=capacity,
        shared_name=shared_name, name=name)

    priority_dtypes = [_dtypes.int64] + types
    priority_shapes = [()] + shapes if shapes else shapes

    super(PriorityQueue, self).__init__(
        priority_dtypes, priority_shapes, names, queue_ref)


# TODO(josh11b): class BatchQueue(QueueBase): 
Author: ryfeus | Project: lambda-packs | Lines: 54 | Source: data_flow_ops.py
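
A hedged usage sketch, assuming the TF 1.x export tf.PriorityQueue: every enqueue tuple starts with an int64 priority, and dequeue returns elements with the smallest priority value first:

import tensorflow as tf

q = tf.PriorityQueue(capacity=10, types=[tf.string], shapes=[()])
enq_low = q.enqueue((tf.constant(5, dtype=tf.int64), "low"))
enq_high = q.enqueue((tf.constant(1, dtype=tf.int64), "high"))
deq = q.dequeue()  # (priority, element)
with tf.Session() as sess:
    sess.run([enq_low, enq_high])
    print(sess.run(deq))  # [1, b'high']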

Example 14: apply_grad

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def apply_grad(self,
                 grad_indices,
                 grad_values,
                 grad_shape=None,
                 local_step=0,
                 name=None):
    """Attempts to apply a sparse gradient to the accumulator.

    The attempt is silently dropped if the gradient is stale, i.e., local_step
    is less than the accumulator's global time step.

    A sparse gradient is represented by its indices, values and possibly empty
    or None shape. Indices must be a vector representing the locations of
    non-zero entries in the tensor. Values are the non-zero slices of the
    gradient, and must have the same first dimension as indices, i.e., the nnz
    represented by indices and values must be consistent. Shape, if not empty or
    None, must be consistent with the accumulator's shape (if also provided).

    Example:
      A tensor [[0, 0], [0, 1], [2, 3]] can be represented
        indices: [1,2]
        values: [[0,1],[2,3]]
        shape: [3, 2]

    Args:
      grad_indices: Indices of the sparse gradient to be applied.
      grad_values: Values of the sparse gradient to be applied.
      grad_shape: Shape of the sparse gradient to be applied.
      local_step: Time step at which the gradient was computed.
      name: Optional name for the operation.

    Returns:
      The operation that (conditionally) applies a gradient to the accumulator.

    Raises:
      InvalidArgumentError: If grad is of the wrong shape
    """
    local_step = math_ops.to_int64(ops.convert_to_tensor(local_step))
    return gen_data_flow_ops.sparse_accumulator_apply_gradient(
        self._accumulator_ref,
        local_step=local_step,
        gradient_indices=math_ops.to_int64(grad_indices),
        gradient_values=grad_values,
        gradient_shape=math_ops.to_int64([] if grad_shape is None else
                                         grad_shape),
        has_known_shape=(grad_shape is not None),
        name=name) 
Author: ryfeus | Project: lambda-packs | Lines: 49 | Source: data_flow_ops.py
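
A hedged sketch with the public tf.SparseConditionalAccumulator wrapper, reusing the sparse representation from the docstring example above:

import tensorflow as tf

acc = tf.SparseConditionalAccumulator(dtype=tf.float32, shape=[3, 2])
apply_op = acc.apply_grad(grad_indices=[1, 2],
                          grad_values=[[0., 1.], [2., 3.]],
                          grad_shape=[3, 2],
                          local_step=0)
with tf.Session() as sess:
    sess.run(apply_op)
    print(sess.run(acc.num_accumulated()))  # 1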

Example 15: bincount

# Required import: from tensorflow.python.framework import tensor_shape [as alias]
# Alternatively: from tensorflow.python.framework.tensor_shape import vector [as alias]
def bincount(arr,
             weights=None,
             minlength=None,
             maxlength=None,
             dtype=dtypes.int32):
  """Counts the number of occurrences of each value in an integer array.

  If `minlength` and `maxlength` are not given, returns a vector with length
  `tf.reduce_max(arr) + 1` if `arr` is non-empty, and length 0 otherwise.
  If `weights` are non-None, then index `i` of the output stores the sum of the
  value in `weights` at each index where the corresponding value in `arr` is
  `i`.

  Args:
    arr: An int32 tensor of non-negative values.
    weights: If non-None, must be the same shape as arr. For each value in
        `arr`, the bin will be incremented by the corresponding weight instead
        of 1.
    minlength: If given, ensures the output has length at least `minlength`,
        padding with zeros at the end if necessary.
    maxlength: If given, skips values in `arr` that are equal or greater than
        `maxlength`, ensuring that the output has length at most `maxlength`.
    dtype: If `weights` is None, determines the type of the output bins.

  Returns:
    A vector with the same dtype as `weights` or the given `dtype`. The bin
    values.
  """
  arr = ops.convert_to_tensor(arr, name="arr", dtype=dtypes.int32)
  array_is_nonempty = reduce_prod(array_ops.shape(arr)) > 0
  output_size = cast(array_is_nonempty, dtypes.int32) * (reduce_max(arr) + 1)
  if minlength is not None:
    minlength = ops.convert_to_tensor(
        minlength, name="minlength", dtype=dtypes.int32)
    output_size = gen_math_ops.maximum(minlength, output_size)
  if maxlength is not None:
    maxlength = ops.convert_to_tensor(
        maxlength, name="maxlength", dtype=dtypes.int32)
    output_size = gen_math_ops.minimum(maxlength, output_size)
  weights = (ops.convert_to_tensor(weights, name="weights")
             if weights is not None else constant_op.constant([], dtype))
  return gen_math_ops.bincount(arr, output_size, weights) 
Author: ryfeus | Project: lambda-packs | Lines: 44 | Source: math_ops.py
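
A couple of quick calls through the public tf.bincount wrapper (TF 1.x):

import tensorflow as tf

counts = tf.bincount([1, 1, 2, 3, 3, 3])
weighted = tf.bincount([1, 1, 2], weights=[0.5, 0.5, 2.0])
with tf.Session() as sess:
    print(sess.run(counts))    # [0 2 1 3]
    print(sess.run(weighted))  # [0. 1. 2.]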


Note: The tensorflow.python.framework.tensor_shape.vector examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by the community; copyright of the source code remains with the original authors. Please follow each project's license when distributing or using the code, and do not republish without permission.