

Python dtypes.int16 Code Examples

This article compiles typical usage examples of tensorflow.python.framework.dtypes.int16 in Python. If you are wondering what dtypes.int16 is, how to use it, or want concrete examples, the curated code samples below may help. You can also explore further usage examples from its module, tensorflow.python.framework.dtypes.


The 15 code examples of dtypes.int16 below are ordered by popularity by default.
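
A minimal sketch (not taken from any of the projects below) of how dtypes.int16 is typically imported and attached to a tensor:

# Minimal usage sketch: dtypes.int16 is a module-level DType constant.
import tensorflow as tf
from tensorflow.python.framework import dtypes

x = tf.constant([1, 2, 3], dtype=dtypes.int16)
print(x.dtype)  # <dtype: 'int16'>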

Example 1: _convert_string_dtype

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def _convert_string_dtype(dtype):
  if dtype == 'float16':
    return dtypes_module.float16
  elif dtype == 'float32':
    return dtypes_module.float32
  elif dtype == 'float64':
    return dtypes_module.float64
  elif dtype == 'int16':
    return dtypes_module.int16
  elif dtype == 'int32':
    return dtypes_module.int32
  elif dtype == 'int64':
    return dtypes_module.int64
  elif dtype == 'uint8':
    return dtypes_module.uint8
  elif dtype == 'uint16':
    return dtypes_module.uint16
  else:
    raise ValueError('Unsupported dtype:', dtype) 
Author: ryfeus | Project: lambda-packs | Lines: 21 | Source: backend.py
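
A hypothetical usage sketch of the helper above (the calls are not part of the quoted project; they assume `_convert_string_dtype` and the `dtypes_module` alias are in scope):

from tensorflow.python.framework import dtypes as dtypes_module

# Map dtype names to DType objects via the helper defined above.
assert _convert_string_dtype('int16') == dtypes_module.int16
assert _convert_string_dtype('float64') == dtypes_module.float64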

Example 2: testConvertBetweenInt16AndInt8

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def testConvertBetweenInt16AndInt8(self):
    with self.test_session(use_gpu=True):
      # uint8, uint16
      self._convert([0, 255 * 256], dtypes.uint16, dtypes.uint8,
                    [0, 255])
      self._convert([0, 255], dtypes.uint8, dtypes.uint16,
                    [0, 255 * 256])
      # int8, uint16
      self._convert([0, 127 * 2 * 256], dtypes.uint16, dtypes.int8,
                    [0, 127])
      self._convert([0, 127], dtypes.int8, dtypes.uint16,
                    [0, 127 * 2 * 256])
      # int16, uint16
      self._convert([0, 255 * 256], dtypes.uint16, dtypes.int16,
                    [0, 255 * 128])
      self._convert([0, 255 * 128], dtypes.int16, dtypes.uint16,
                    [0, 255 * 256]) 
Author: tobegit3hub | Project: deep_image_model | Lines: 19 | Source: image_ops_test.py
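
The test's `_convert` helper wraps `tf.image.convert_image_dtype`; a standalone sketch of the same int16/uint16 rescaling, using only public TensorFlow APIs (my reading of the behavior checked above):

import tensorflow as tf
from tensorflow.python.framework import dtypes

# Integer-to-integer conversion rescales to the target type's range,
# so uint16 value 255 * 256 becomes int16 value 255 * 128.
img = tf.constant([0, 255 * 256], dtype=dtypes.uint16)
as_int16 = tf.image.convert_image_dtype(img, dtypes.int16)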

Example 3: truediv

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def truediv(x, y, name=None):
  """Divides x / y elementwise (using Python 3 division operator semantics).

  NOTE: Prefer using the Tensor operator or tf.divide which obey Python
  division operator semantics.

  This function forces Python 3 division operator semantics where all integer
  arguments are cast to floating types first. This op is generated by normal
  `x / y` division in Python 3 and in Python 2.7 with
  `from __future__ import division`.  If you want integer division that rounds
  down, use `x // y` or `tf.floordiv`.

  `x` and `y` must have the same numeric type.  If the inputs are floating
  point, the output will have the same type.  If the inputs are integral, the
  inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32`
  and `int64` (matching the behavior of Numpy).

  Args:
    x: `Tensor` numerator of numeric type.
    y: `Tensor` denominator of numeric type.
    name: A name for the operation (optional).

  Returns:
    `x / y` evaluated in floating point.

  Raises:
    TypeError: If `x` and `y` have different dtypes.
  """
  return _truediv_python3(x, y, name) 
Author: ryfeus | Project: lambda-packs | Lines: 31 | Source: math_ops.py
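
A short sketch of the casting rule described in the docstring: int16 operands are promoted to float32 before the division (assuming eager execution or a session to evaluate the result):

import tensorflow as tf
from tensorflow.python.framework import dtypes

a = tf.constant([1, 2, 3], dtype=dtypes.int16)
b = tf.constant([2, 2, 2], dtype=dtypes.int16)
c = tf.truediv(a, b)  # dtype float32, values [0.5, 1.0, 1.5]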

Example 4: test_int16_to_sparse_ids_2d

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def test_int16_to_sparse_ids_2d(self):
    indicators = (
        (0, 0, 1, 0),
        (1, 0, 0, 1),
    )
    sparse_ids = sparse_ops.indicators_to_sparse_ids(
        indicators, dtype=dtypes.int16)
    with self.cached_session():
      _assert_sparse_tensor_value(self, sparse_tensor.SparseTensorValue(
          indices=((0, 0), (1, 0), (1, 1)),
          values=np.array((2, 0, 3), dtype=np.int16),
          dense_shape=(2, 2),
      ), sparse_ids.eval()) 
Author: google-research | Project: tf-slim | Lines: 15 | Source: sparse_ops_test.py
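
The expected sparse values can be reproduced without tf-slim; the following is a hypothetical NumPy-only helper (not the library function) that extracts per-row ids from an indicator matrix as int16:

import numpy as np

def indicator_rows_to_ids(indicators, dtype=np.int16):
  # For each row, collect the column indices whose indicator is nonzero.
  return [np.nonzero(row)[0].astype(dtype) for row in np.asarray(indicators)]

ids = indicator_rows_to_ids([(0, 0, 1, 0), (1, 0, 0, 1)])
# ids == [array([2], dtype=int16), array([0, 3], dtype=int16)]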

Example 5: testConvertBetweenInteger

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def testConvertBetweenInteger(self):
    # Make sure converting between integer types scales appropriately
    with self.test_session(use_gpu=True):
      self._convert([0, 255], dtypes.uint8, dtypes.int16, [0, 255 * 128])
      self._convert([0, 32767], dtypes.int16, dtypes.uint8, [0, 255]) 
Author: tobegit3hub | Project: deep_image_model | Lines: 7 | Source: image_ops_test.py
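
The factor of 128 in the expected values follows from the ratio of representable ranges; a quick arithmetic check in plain Python (my reading of the rescaling, not quoted from TensorFlow):

# int16 has 2**15 non-negative values versus 2**8 for uint8, so an upward
# conversion multiplies by 2**15 // 2**8 == 128; hence 255 -> 255 * 128,
# and 32767 scales back down to 255.
scale = (2 ** 15) // (2 ** 8)
assert scale == 128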

Example 6: testFromCSVWithFeatureSpec

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def testFromCSVWithFeatureSpec(self):
    if not HAS_PANDAS:
      return
    num_batches = 100
    batch_size = 8

    data_path = _make_test_csv_sparse()
    feature_spec = {
        "int": tf.FixedLenFeature(None, dtypes.int16, np.nan),
        "float": tf.VarLenFeature(dtypes.float16),
        "bool": tf.VarLenFeature(dtypes.bool),
        "string": tf.FixedLenFeature(None, dtypes.string, "")
    }

    pandas_df = pd.read_csv(data_path, dtype={"string": object})
    # Pandas insanely uses NaN for empty cells in a string column.
    # And, we can't use Pandas replace() to fix them because nan != nan
    s = pandas_df["string"]
    for i in range(0, len(s)):
      if isinstance(s[i], float) and math.isnan(s[i]):
        pandas_df.set_value(i, "string", "")
    tensorflow_df = df.TensorFlowDataFrame.from_csv_with_feature_spec(
        [data_path],
        batch_size=batch_size,
        shuffle=False,
        feature_spec=feature_spec)

    # These columns were sparse; re-densify them for comparison
    tensorflow_df["float"] = densify.Densify(np.nan)(tensorflow_df["float"])
    tensorflow_df["bool"] = densify.Densify(np.nan)(tensorflow_df["bool"])

    self._assert_pandas_equals_tensorflow(pandas_df,
                                          tensorflow_df,
                                          num_batches=num_batches,
                                          batch_size=batch_size) 
Author: tobegit3hub | Project: deep_image_model | Lines: 37 | Source: tensorflow_dataframe_test.py

Example 7: _convert_string_dtype

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def _convert_string_dtype(dtype):
  """Get the type from a string.

  Arguments:
      dtype: A string representation of a type.

  Returns:
      The type requested.

  Raises:
      ValueError: if `dtype` is not supported.
  """
  if dtype == 'float16':
    return dtypes_module.float16
  elif dtype == 'float32':
    return dtypes_module.float32
  elif dtype == 'float64':
    return dtypes_module.float64
  elif dtype == 'int16':
    return dtypes_module.int16
  elif dtype == 'int32':
    return dtypes_module.int32
  elif dtype == 'int64':
    return dtypes_module.int64
  elif dtype == 'uint8':
    return dtypes_module.uint8
  elif dtype == 'uint16':
    return dtypes_module.uint16
  else:
    raise ValueError('Unsupported dtype:', dtype) 
Author: PacktPublishing | Project: Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda | Lines: 32 | Source: backend.py

Example 8: assert_integer_form

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def assert_integer_form(
    x, data=None, summarize=None, message=None,
    int_dtype=None, name="assert_integer_form"):
  """Assert that x has integer components (or floats equal to integers).

  Args:
    x: Floating-point `Tensor`
    data: The tensors to print out if the condition is `False`. Defaults to
      error message and first few entries of `x` and `y`.
    summarize: Print this many entries of each tensor.
    message: A string to prefix to the default message.
    int_dtype: A `tf.dtype` used to cast the float to. The default (`None`)
      implies the smallest possible signed int will be used for casting.
    name: A name for this operation (optional).

  Returns:
    Op raising `InvalidArgumentError` if `cast(x, int_dtype) != x`.
  """
  with ops.name_scope(name, values=[x, data]):
    x = ops.convert_to_tensor(x, name="x")
    if x.dtype.is_integer:
      return control_flow_ops.no_op()
    message = message or "{} has non-integer components".format(x.op.name)
    if int_dtype is None:
      try:
        int_dtype = {
            dtypes.float16: dtypes.int16,
            dtypes.float32: dtypes.int32,
            dtypes.float64: dtypes.int64,
        }[x.dtype.base_dtype]
      except KeyError:
        raise TypeError("Unrecognized type {}".format(x.dtype.name))
    return check_ops.assert_equal(
        x, math_ops.cast(math_ops.cast(x, int_dtype), x.dtype),
        data=data, summarize=summarize, message=message, name=name) 
Author: PacktPublishing | Project: Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda | Lines: 37 | Source: util.py
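
A sketch of the round-trip check the assertion performs for a float16 input, which the table above pairs with int16 (public APIs only; assumes eager execution or a session):

import tensorflow as tf
from tensorflow.python.framework import dtypes

x = tf.constant([1.0, 2.0, 3.0], dtype=dtypes.float16)
# Integer-valued floats survive the cast to int16 and back unchanged.
round_trip = tf.cast(tf.cast(x, dtypes.int16), dtypes.float16)
all_integer = tf.reduce_all(tf.equal(x, round_trip))  # True here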

Example 9: cumsum

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def cumsum(x, axis=0, exclusive=False, reverse=False, name=None):
  """Compute the cumulative sum of the tensor `x` along `axis`.

  By default, this op performs an inclusive cumsum, which means that the first
  element of the input is identical to the first element of the output:
  ```prettyprint
  tf.cumsum([a, b, c]) ==> [a, a + b, a + b + c]
  ```

  By setting the `exclusive` kwarg to `True`, an exclusive cumsum is performed
  instead:
  ```prettyprint
  tf.cumsum([a, b, c], exclusive=True) ==> [0, a, a + b]
  ```

  By setting the `reverse` kwarg to `True`, the cumsum is performed in the
  opposite direction:
  ```prettyprint
  tf.cumsum([a, b, c], reverse=True) ==> [a + b + c, b + c, c]
  ```
  This is more efficient than using separate `tf.reverse` ops.

  The `reverse` and `exclusive` kwargs can also be combined:
  ```prettyprint
  tf.cumsum([a, b, c], exclusive=True, reverse=True) ==> [b + c, c, 0]
  ```

  Args:
    x: A `Tensor`. Must be one of the following types: `float32`, `float64`,
       `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,
       `complex128`, `qint8`, `quint8`, `qint32`, `half`.
    axis: A `Tensor` of type `int32` (default: 0).
    exclusive: If `True`, perform exclusive cumsum.
    reverse: A `bool` (default: False).
    name: A name for the operation (optional).

  Returns:
    A `Tensor`. Has the same type as `x`.
  """
  with ops.name_scope(name, "Cumsum", [x]) as name:
    x = ops.convert_to_tensor(x, name="x")
    return gen_math_ops.cumsum(
        x, axis, exclusive=exclusive, reverse=reverse, name=name) 
Author: ryfeus | Project: lambda-packs | Lines: 45 | Source: math_ops.py
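
A brief usage sketch with an int16 input; the public `tf.cumsum` exposes the same op and preserves the input dtype:

import tensorflow as tf
from tensorflow.python.framework import dtypes

x = tf.constant([1, 2, 3, 4], dtype=dtypes.int16)
inclusive = tf.cumsum(x)                                    # [1, 3, 6, 10]
exclusive_rev = tf.cumsum(x, exclusive=True, reverse=True)  # [9, 7, 4, 0]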

Example 10: cumprod

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def cumprod(x, axis=0, exclusive=False, reverse=False, name=None):
  """Compute the cumulative product of the tensor `x` along `axis`.

  By default, this op performs an inclusive cumprod, which means that the first
  element of the input is identical to the first element of the output:
  ```prettyprint
  tf.cumprod([a, b, c]) ==> [a, a * b, a * b * c]
  ```

  By setting the `exclusive` kwarg to `True`, an exclusive cumprod is performed
  instead:
  ```prettyprint
  tf.cumprod([a, b, c], exclusive=True) ==> [1, a, a * b]
  ```

  By setting the `reverse` kwarg to `True`, the cumprod is performed in the
  opposite direction:
  ```prettyprint
  tf.cumprod([a, b, c], reverse=True) ==> [a * b * c, b * c, c]
  ```
  This is more efficient than using separate `tf.reverse` ops.

  The `reverse` and `exclusive` kwargs can also be combined:
  ```prettyprint
  tf.cumprod([a, b, c], exclusive=True, reverse=True) ==> [b * c, c, 1]
  ```

  Args:
    x: A `Tensor`. Must be one of the following types: `float32`, `float64`,
       `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,
       `complex128`, `qint8`, `quint8`, `qint32`, `half`.
    axis: A `Tensor` of type `int32` (default: 0).
    exclusive: If `True`, perform exclusive cumprod.
    reverse: A `bool` (default: False).
    name: A name for the operation (optional).

  Returns:
    A `Tensor`. Has the same type as `x`.
  """
  with ops.name_scope(name, "Cumprod", [x]) as name:
    x = ops.convert_to_tensor(x, name="x")
    return gen_math_ops.cumprod(
        x, axis, exclusive=exclusive, reverse=reverse, name=name) 
Author: ryfeus | Project: lambda-packs | Lines: 47 | Source: math_ops.py

Example 11: cumsum

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def cumsum(x, axis=0, exclusive=False, reverse=False, name=None):
  """Compute the cumulative sum of the tensor `x` along `axis`.

  By default, this op performs an inclusive cumsum, which means that the first
  element of the input is identical to the first element of the output:
  ```prettyprint
  tf.cumsum([a, b, c]) ==> [a, a + b, a + b + c]
  ```

  By setting the `exclusive` kwarg to `True`, an exclusive cumsum is performed
  instead:
  ```prettyprint
  tf.cumsum([a, b, c], exclusive=True) ==> [0, a, a + b]
  ```

  By setting the `reverse` kwarg to `True`, the cumsum is performed in the
  opposite direction:
  ```prettyprint
  tf.cumsum([a, b, c], reverse=True) ==> [a + b + c, b + c, c]
  ```
  This is more efficient than using separate `tf.reverse` ops.

  The `reverse` and `exclusive` kwargs can also be combined:
  ```prettyprint
  tf.cumsum([a, b, c], exclusive=True, reverse=True) ==> [b + c, c, 0]
  ```

  Args:
    x: A `Tensor`. Must be one of the following types: `float32`, `float64`,
       `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,
       `complex128`, `qint8`, `quint8`, `qint32`, `half`.
    axis: A `Tensor` of type `int32` (default: 0).
    exclusive: If `True`, perform exclusive cumsum.
    reverse: A `bool` (default: False).
    name: A name for the operation (optional).

  Returns:
    A `Tensor`. Has the same type as `x`.
  """
  with ops.name_scope(name, "Cumsum", [x]) as name:
    x = ops.convert_to_tensor(x, name="x")
    return gen_math_ops.cumsum(
        x, axis, exclusive=exclusive, reverse=reverse, name=name) 
Author: abhisuri97 | Project: auto-alt-text-lambda-api | Lines: 44 | Source: math_ops.py

Example 12: cumprod

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def cumprod(x, axis=0, exclusive=False, reverse=False, name=None):
  """Compute the cumulative product of the tensor `x` along `axis`.

  By default, this op performs an inclusive cumprod, which means that the first
  element of the input is identical to the first element of the output:
  ```prettyprint
  tf.cumprod([a, b, c]) ==> [a, a * b, a * b * c]
  ```

  By setting the `exclusive` kwarg to `True`, an exclusive cumprod is performed
  instead:
  ```prettyprint
  tf.cumprod([a, b, c], exclusive=True) ==> [1, a, a * b]
  ```

  By setting the `reverse` kwarg to `True`, the cumprod is performed in the
  opposite direction:
  ```prettyprint
  tf.cumprod([a, b, c], reverse=True) ==> [a * b * c, b * c, c]
  ```
  This is more efficient than using separate `tf.reverse` ops.

  The `reverse` and `exclusive` kwargs can also be combined:
  ```prettyprint
  tf.cumprod([a, b, c], exclusive=True, reverse=True) ==> [b * c, c, 1]
  ```

  Args:
    x: A `Tensor`. Must be one of the following types: `float32`, `float64`,
       `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,
       `complex128`, `qint8`, `quint8`, `qint32`, `half`.
    axis: A `Tensor` of type `int32` (default: 0).
    exclusive: If `True`, perform exclusive cumprod.
    reverse: A `bool` (default: False).
    name: A name for the operation (optional).

  Returns:
    A `Tensor`. Has the same type as `x`.
  """
  with ops.name_scope(name, "Cumprod", [x]) as name:
    x = ops.convert_to_tensor(x, name="x")
    return gen_math_ops.cumprod(
        x, axis, exclusive=exclusive, reverse=reverse, name=name) 
Author: abhisuri97 | Project: auto-alt-text-lambda-api | Lines: 46 | Source: math_ops.py

Example 13: truediv

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def truediv(x, y, name=None):
  """Divides x / y elementwise, always producing floating point results.

  The same as `tf.div` for floating point arguments, but casts integer arguments
  to floating point before dividing so that the result is always floating point.
  This op is generated by normal `x / y` division in Python 3 and in Python 2.7
  with `from __future__ import division`.  If you want integer division that
  rounds down, use `x // y` or `tf.floordiv`.

  `x` and `y` must have the same numeric type.  If the inputs are floating
  point, the output will have the same type.  If the inputs are integral, the
  inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32`
  and `int64` (matching the behavior of Numpy).

  Args:
    x: `Tensor` numerator of numeric type.
    y: `Tensor` denominator of numeric type.
    name: A name for the operation (optional).

  Returns:
    `x / y` evaluated in floating point.

  Raises:
    TypeError: If `x` and `y` have different dtypes.
  """
  with ops.name_scope(name, "truediv", [x, y]) as name:
    x = ops.convert_to_tensor(x, name="x")
    y = ops.convert_to_tensor(y, name="y")
    x_dtype = x.dtype.base_dtype
    y_dtype = y.dtype.base_dtype
    if x_dtype != y_dtype:
      raise TypeError("x and y must have the same dtype, got %r != %r" %
                      (x_dtype, y_dtype))
    try:
      dtype = _TRUEDIV_TABLE[x_dtype]
    except KeyError:
      raise TypeError("Invalid dtype %r in __truediv__" % x_dtype)
    if dtype is not None:
      x = cast(x, dtype)
      y = cast(y, dtype)
    return gen_math_ops.div(x, y, name=name)


# TODO(aselle): Deprecate this once all internal functionality uses
# tf.truncatediv 
Author: tobegit3hub | Project: deep_image_model | Lines: 47 | Source: math_ops.py

Example 14: compress

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def compress(self, inputs):
    """Compress inputs and store their binary representations into strings.

    Args:
      inputs: `Tensor` with values to be compressed.

    Returns:
      String `Tensor` vector containing the compressed representation of each
      batch element of `inputs`.
    """
    with ops.name_scope(self._name_scope()):
      inputs = ops.convert_to_tensor(inputs)
      if not self.built:
        # Check input assumptions set before layer building, e.g. input rank.
        input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
        if self.dtype is None:
          self._dtype = inputs.dtype.base_dtype.name
        self.build(inputs.shape)

      # Check input assumptions set after layer building, e.g. input shape.
      if not context.executing_eagerly():
        input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)

      ndim = self.input_spec.ndim
      channel_axis = self._channel_axis(ndim)
      # Tuple of slices for expanding dimensions of tensors below.
      slices = ndim * [None] + [slice(None)]
      slices[channel_axis] = slice(None)
      slices = tuple(slices)

      # Expand dimensions of CDF to input dimensions, keeping the channels along
      # the right dimension.
      cdf = self._quantized_cdf[slices[1:]]
      num_levels = array_ops.shape(cdf)[-1] - 1

      # Bring inputs to the right range by centering the range on the medians.
      half = constant_op.constant(.5, dtype=self.dtype)
      medians = array_ops.squeeze(self._medians, [1, 2])
      offsets = (math_ops.cast(num_levels // 2, self.dtype) + half) - medians
      # Expand offsets to input dimensions and add to inputs.
      values = inputs + offsets[slices[:-1]]

      # Clip to range and cast to integers. Because we have added .5 above, and
      # all values are positive, the cast effectively implements rounding.
      values = math_ops.maximum(values, half)
      values = math_ops.minimum(
          values, math_ops.cast(num_levels, self.dtype) - half)
      values = math_ops.cast(values, dtypes.int16)

      def loop_body(tensor):
        return coder_ops.range_encode(
            tensor, cdf, precision=self.range_coder_precision)
      strings = functional_ops.map_fn(
          loop_body, values, dtype=dtypes.string, back_prop=False)

      if not context.executing_eagerly():
        strings.set_shape(inputs.shape[:1])

      return strings 
Author: mauriceqch | Project: pcc_geo_cnn | Lines: 61 | Source: entropy_models.py
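
The int16-specific step above (shift, clip, and cast so that the cast acts as rounding) can be sketched in isolation with hypothetical values; the real layer derives `offsets` and `num_levels` from its learned CDF:

import tensorflow as tf
from tensorflow.python.framework import dtypes

values = tf.constant([0.2, 3.7, 9.9], dtype=tf.float32)
num_levels = 10
half = tf.constant(0.5, dtype=tf.float32)

# Add 0.5, clip to [0.5, num_levels - 0.5], then cast; because all values are
# positive, truncation in the cast rounds to the nearest integer symbol.
shifted = values + half
clipped = tf.minimum(tf.maximum(shifted, half),
                     tf.cast(num_levels, tf.float32) - half)
symbols = tf.cast(clipped, dtypes.int16)  # [0, 4, 9]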

Example 15: cumsum

# Required import: from tensorflow.python.framework import dtypes [as alias]
# Or: from tensorflow.python.framework.dtypes import int16 [as alias]
def cumsum(x, axis=0, exclusive=False, reverse=False, name=None):
  """Compute the cumulative sum of the tensor `x` along `axis`.

  By default, this op performs an inclusive cumsum, which means that the first
  element of the input is identical to the first element of the output:

  ```python
  tf.cumsum([a, b, c])  # [a, a + b, a + b + c]
  ```

  By setting the `exclusive` kwarg to `True`, an exclusive cumsum is performed
  instead:

  ```python
  tf.cumsum([a, b, c], exclusive=True)  # [0, a, a + b]
  ```

  By setting the `reverse` kwarg to `True`, the cumsum is performed in the
  opposite direction:

  ```python
  tf.cumsum([a, b, c], reverse=True)  # [a + b + c, b + c, c]
  ```

  This is more efficient than using separate `tf.reverse` ops.

  The `reverse` and `exclusive` kwargs can also be combined:

  ```python
  tf.cumsum([a, b, c], exclusive=True, reverse=True)  # [b + c, c, 0]
  ```

  Args:
    x: A `Tensor`. Must be one of the following types: `float32`, `float64`,
       `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,
       `complex128`, `qint8`, `quint8`, `qint32`, `half`.
    axis: A `Tensor` of type `int32` (default: 0). Must be in the range
      `[-rank(x), rank(x))`.
    exclusive: If `True`, perform exclusive cumsum.
    reverse: A `bool` (default: False).
    name: A name for the operation (optional).

  Returns:
    A `Tensor`. Has the same type as `x`.
  """
  with ops.name_scope(name, "Cumsum", [x]) as name:
    x = ops.convert_to_tensor(x, name="x")
    return gen_math_ops.cumsum(
        x, axis, exclusive=exclusive, reverse=reverse, name=name) 
Author: PacktPublishing | Project: Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda | Lines: 51 | Source: math_ops.py


Note: The tensorflow.python.framework.dtypes.int16 examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers; copyright of the source code remains with the original authors. Please follow the corresponding project's license when using or redistributing the code, and do not reproduce this article without permission.