

Python tensor_util.TensorShapeProtoToList Method Code Examples

This article collects typical usage examples of the Python method tensorflow.python.framework.tensor_util.TensorShapeProtoToList. If you are wondering how to call tensor_util.TensorShapeProtoToList, or what it is typically used for, the curated code examples below should help. You can also explore other usage examples from the tensorflow.python.framework.tensor_util module.


The following shows 6 code examples of the tensor_util.TensorShapeProtoToList method, sorted by popularity by default.
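
Before walking through the examples, here is a minimal sketch (not taken from any of the projects below) of what TensorShapeProtoToList does: it flattens a TensorShapeProto message into a plain Python list of dimension sizes.

# Minimal illustrative sketch, assuming TensorFlow is installed; the proto
# built here is purely hypothetical and not from the examples below.
from tensorflow.core.framework import tensor_shape_pb2
from tensorflow.python.framework import tensor_util

# Build a 2x3 TensorShapeProto by hand.
shape_proto = tensor_shape_pb2.TensorShapeProto(dim=[
    tensor_shape_pb2.TensorShapeProto.Dim(size=2),
    tensor_shape_pb2.TensorShapeProto.Dim(size=3),
])

# TensorShapeProtoToList converts the proto into a Python list of ints.
print(tensor_util.TensorShapeProtoToList(shape_proto))  # [2, 3]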

Example 1: quantize_weight_rounded

# Required import: from tensorflow.python.framework import tensor_util [as alias]
# Or: from tensorflow.python.framework.tensor_util import TensorShapeProtoToList [as alias]
def quantize_weight_rounded(input_node):
  """Returns a replacement node for input_node containing bucketed floats."""
  input_tensor = input_node.attr["value"].tensor
  tensor_value = tensor_util.MakeNdarray(input_tensor)
  shape = input_tensor.tensor_shape
  # Currently, the parameter FLAGS.bitdepth is used to compute the
  # number of buckets as 1 << FLAGS.bitdepth, meaning the number of
  # buckets can only be a power of 2.
  # This could be fixed by introducing a new parameter, num_buckets,
  # which would allow for more flexibility in choosing the right model
  # size/accuracy tradeoff. But I didn't want to add more parameters
  # to this script than absolutely necessary.
  num_buckets = 1 << FLAGS.bitdepth
  tensor_value_rounded = quantize_array(tensor_value, num_buckets)
  tensor_shape_list = tensor_util.TensorShapeProtoToList(shape)
  return [create_constant_node(input_node.name, tensor_value_rounded,
                               tf.float32, shape=tensor_shape_list)] 
Developer: tobegit3hub, Project: deep_image_model, Lines of code: 19, Source file: quantize_graph.py

Example 2: quantize_weight_rounded

# Required import: from tensorflow.python.framework import tensor_util [as alias]
# Or: from tensorflow.python.framework.tensor_util import TensorShapeProtoToList [as alias]
def quantize_weight_rounded(input_node):
  """Returns a replacement node for input_node containing bucketed floats."""
  input_tensor = input_node.attr["value"].tensor
  tensor_value = tensor_util.MakeNdarray(input_tensor)
  shape = input_tensor.tensor_shape
  # Currently, the parameter FLAGS.bitdepth is used to compute the
  # number of buckets as 1 << FLAGS.bitdepth, meaning the number of
  # buckets can only be a power of 2.
  # This could be fixed by introducing a new parameter, num_buckets,
  # which would allow for more flexibility in choosing the right model
  # size/accuracy tradeoff. But I didn't want to add more parameters
  # to this script than absolutely necessary.
  num_buckets = 1 << FLAGS.bitdepth
  tensor_value_rounded = quantize_array(tensor_value, num_buckets)
  tensor_shape_list = tensor_util.TensorShapeProtoToList(shape)
  return [
      create_constant_node(
          input_node.name,
          tensor_value_rounded,
          dtypes.float32,
          shape=tensor_shape_list)
  ] 
Developer: googlecodelabs, Project: tensorflow-for-poets-2, Lines of code: 24, Source file: quantize_graph.py
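
Examples 1 and 2 read the shape proto from the value attribute of a Const NodeDef; the helpers quantize_array and create_constant_node are local to quantize_graph.py and are not shown on this page. The following minimal sketch, assuming TF 1.x-style graph APIs (tensorflow.compat.v1), shows where that shape proto actually lives:

# Hedged sketch: a Const node stores its value as a TensorProto under
# attr["value"], and the shape proto is its tensor_shape field.
import numpy as np
import tensorflow.compat.v1 as tf
from tensorflow.python.framework import tensor_util

graph = tf.Graph()
with graph.as_default():
    tf.constant(np.zeros((4, 5), np.float32), name="weights")

const_node = graph.as_graph_def().node[0]                 # the Const NodeDef
shape_proto = const_node.attr["value"].tensor.tensor_shape
print(tensor_util.TensorShapeProtoToList(shape_proto))    # [4, 5]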

Example 3: _parse_param

# Required import: from tensorflow.python.framework import tensor_util [as alias]
# Or: from tensorflow.python.framework.tensor_util import TensorShapeProtoToList [as alias]
def _parse_param(self, key, value, name, shape):
        try:
            from tensorflow.python.framework import tensor_util
        except ImportError as e:
            raise ImportError(
                "Unable to import tensorflow which is required {}".format(e))

        if key == 'value':
            np_array = tensor_util.MakeNdarray(value.tensor)

            if np_array.dtype == np.dtype(object):
                # Object types are generally tensorflow DT_STRING (DecodeJpeg op).
                # Just leave it as placeholder.
                if shape and name in shape:
                    var_shape = shape[name]
                else:
                    var_shape = tensor_util.TensorShapeProtoToList(value.tensor.tensor_shape)
                self._nodes[name] = [_expr.var(name, shape=var_shape, dtype='uint8')]
                return

            array_ndim = len(np_array.shape)
            if array_ndim == 0:
                self._nodes[name] = [tvm.relay.const(np_array)]
            else:
                self._params[name] = tvm.nd.array(np_array)
                self._nodes[name] = [_expr.var(name,
                                               shape=self._params[name].shape,
                                               dtype=self._params[name].dtype)]
        else:
            if key not in ('dtype', '_output_shapes', '_class'):
                raise NotImplementedError(
                    "Unsupported attribute '{}' for a Const (param) node.".format(key)) 
Developer: apache, Project: incubator-tvm, Lines of code: 34, Source file: tensorflow.py
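
The DT_STRING branch in example 3 works because MakeNdarray returns an object-dtype array for string tensors, so the variable shape has to be recovered from the proto itself. A small illustrative sketch of that behaviour (not part of the TVM frontend):

# Hedged sketch: for string tensors, MakeNdarray yields dtype=object, and
# TensorShapeProtoToList recovers the shape from the proto instead.
import numpy as np
from tensorflow.python.framework import tensor_util

str_proto = tensor_util.make_tensor_proto([b"cat.jpg", b"dog.jpg"])
np_array = tensor_util.MakeNdarray(str_proto)

print(np_array.dtype == np.dtype(object))                          # True
print(tensor_util.TensorShapeProtoToList(str_proto.tensor_shape))  # [2]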

Example 4: load_tensor_from_event

# Required import: from tensorflow.python.framework import tensor_util [as alias]
# Or: from tensorflow.python.framework.tensor_util import TensorShapeProtoToList [as alias]
def load_tensor_from_event(event):
  """Load a tensor from an Event proto.

  Args:
    event: The Event proto, assumed to hold a tensor value in its
        summary.value[0] field.

  Returns:
    The tensor value loaded from the event file, as a `numpy.ndarray`, if
    representation of the tensor value by a `numpy.ndarray` is possible.
    For uninitialized Tensors, returns `None`. For Tensors of data types that
    cannot be represented as `numpy.ndarray` (e.g., `tf.resource`), returns
    the `TensorProto` protobuf object without converting it to a
    `numpy.ndarray`.
  """

  tensor_proto = event.summary.value[0].tensor
  shape = tensor_util.TensorShapeProtoToList(tensor_proto.tensor_shape)
  num_elements = 1
  for shape_dim in shape:
    num_elements *= shape_dim

  if tensor_proto.tensor_content or tensor_proto.string_val or not num_elements:
    # Initialized tensor or empty tensor.
    if tensor_proto.dtype == types_pb2.DT_RESOURCE:
      tensor_value = InconvertibleTensorProto(tensor_proto)
    else:
      try:
        tensor_value = tensor_util.MakeNdarray(tensor_proto)
      except KeyError:
        tensor_value = InconvertibleTensorProto(tensor_proto)
  else:
    # Uninitialized tensor or tensor of unconvertible data type.
    tensor_value = InconvertibleTensorProto(tensor_proto, False)

  return tensor_value 
Developer: PacktPublishing, Project: Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda, Lines of code: 38, Source file: debug_data.py
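
Example 4 assumes the tensor sits in summary.value[0] of an Event proto. Below is a minimal sketch of constructing such an Event and reading the shape back; the tag name "debug_tensor" is a made-up placeholder.

# Hedged sketch: build an Event proto like the one load_tensor_from_event
# expects, with a tensor stored in summary.value[0].tensor.
import numpy as np
from tensorflow.core.util import event_pb2
from tensorflow.python.framework import tensor_util

event = event_pb2.Event()
value = event.summary.value.add(tag="debug_tensor")
value.tensor.CopyFrom(tensor_util.make_tensor_proto(np.arange(6.0).reshape(2, 3)))

tensor_proto = event.summary.value[0].tensor
print(tensor_util.TensorShapeProtoToList(tensor_proto.tensor_shape))  # [2, 3]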

Example 5: quantize_weight_eightbit

# Required import: from tensorflow.python.framework import tensor_util [as alias]
# Or: from tensorflow.python.framework.tensor_util import TensorShapeProtoToList [as alias]
def quantize_weight_eightbit(input_node, quantization_mode):
  """Returns replacement nodes for input_node using the Dequantize op."""
  base_name = input_node.name + "_"
  quint8_const_name = base_name + "quint8_const"
  min_name = base_name + "min"
  max_name = base_name + "max"
  float_tensor = tensor_util.MakeNdarray(
      input_node.attr["value"].tensor)
  min_value = np.min(float_tensor.flatten())
  max_value = np.max(float_tensor.flatten())
  # min_value == max_value is a tricky case. It can occur for general
  # tensors, and of course for scalars. The quantized ops cannot deal
  # with this case, so we set max_value to something else.
  # It's a tricky question what is the numerically best solution to
  # deal with this degeneracy.
  # TODO(petewarden): Better use a tolerance than a hard comparison?
  if min_value == max_value:
    if abs(min_value) < 0.000001:
      max_value = min_value + 1.0
    elif min_value > 0:
      max_value = 2 * min_value
    else:
      max_value = min_value / 2.0

  sess = tf.Session()
  with sess.as_default():
    quantize_op = tf.contrib.quantization.python.quantize_v2(
        float_tensor,
        min_value,
        max_value,
        tf.quint8,
        mode=quantization_mode)
    quint8_tensor = quantize_op[0].eval()
  shape = tensor_util.TensorShapeProtoToList(input_node.attr[
      "value"].tensor.tensor_shape)
  quint8_const_node = create_constant_node(quint8_const_name,
                                           quint8_tensor,
                                           tf.quint8,
                                           shape=shape)
  min_node = create_constant_node(min_name, min_value, tf.float32)
  max_node = create_constant_node(max_name, max_value, tf.float32)
  dequantize_node = create_node("Dequantize", input_node.name,
                                [quint8_const_name, min_name, max_name])
  set_attr_dtype(dequantize_node, "T", tf.quint8)
  set_attr_string(dequantize_node, "mode", quantization_mode)
  return [quint8_const_node, min_node, max_node, dequantize_node] 
Developer: tobegit3hub, Project: deep_image_model, Lines of code: 48, Source file: quantize_graph.py

Example 6: quantize_weight_eightbit

# Required import: from tensorflow.python.framework import tensor_util [as alias]
# Or: from tensorflow.python.framework.tensor_util import TensorShapeProtoToList [as alias]
def quantize_weight_eightbit(input_node, quantization_mode):
  """Returns replacement nodes for input_node using the Dequantize op."""
  base_name = input_node.name + "_"
  quint8_const_name = base_name + "quint8_const"
  min_name = base_name + "min"
  max_name = base_name + "max"
  float_tensor = tensor_util.MakeNdarray(input_node.attr["value"].tensor)
  min_value = np.min(float_tensor.flatten())
  max_value = np.max(float_tensor.flatten())
  # Make sure that the range includes zero.
  if min_value > 0.0:
    min_value = 0.0
  # min_value == max_value is a tricky case. It can occur for general
  # tensors, and of course for scalars. The quantized ops cannot deal
  # with this case, so we set max_value to something else.
  # It's a tricky question what is the numerically best solution to
  # deal with this degeneracy.
  # TODO(petewarden): Better use a tolerance than a hard comparison?
  if min_value == max_value:
    if abs(min_value) < 0.000001:
      max_value = min_value + 1.0
    elif min_value > 0:
      max_value = 2 * min_value
    else:
      max_value = min_value / 2.0

  sess = session.Session()
  with sess.as_default():
    quantize_op = array_ops.quantize_v2(
        float_tensor,
        min_value,
        max_value,
        dtypes.quint8,
        mode=quantization_mode)
    quint8_tensor = quantize_op[0].eval()
  shape = tensor_util.TensorShapeProtoToList(input_node.attr["value"]
                                             .tensor.tensor_shape)
  quint8_const_node = create_constant_node(
      quint8_const_name, quint8_tensor, dtypes.quint8, shape=shape)
  min_node = create_constant_node(min_name, min_value, dtypes.float32)
  max_node = create_constant_node(max_name, max_value, dtypes.float32)
  dequantize_node = create_node("Dequantize", input_node.name,
                                [quint8_const_name, min_name, max_name])
  set_attr_dtype(dequantize_node, "T", dtypes.quint8)
  set_attr_string(dequantize_node, "mode", quantization_mode)
  return [quint8_const_node, min_node, max_node, dequantize_node] 
Developer: googlecodelabs, Project: tensorflow-for-poets-2, Lines of code: 48, Source file: quantize_graph.py
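
In examples 5 and 6, the shape list recovered with TensorShapeProtoToList is simply the original weight shape, reused so the rebuilt quint8 constant keeps the same dimensions as the float original. A short sanity-check sketch (independent of the quantize_graph.py helpers):

# Hedged sketch: the shape list from the proto matches the ndarray shape
# that MakeNdarray returns, which is why it can be passed straight through
# when rebuilding the constant node.
import numpy as np
from tensorflow.python.framework import tensor_util

weights = np.random.rand(3, 3, 16, 32).astype(np.float32)
proto = tensor_util.make_tensor_proto(weights)

shape_list = tensor_util.TensorShapeProtoToList(proto.tensor_shape)
print(shape_list)                                                # [3, 3, 16, 32]
print(shape_list == list(tensor_util.MakeNdarray(proto).shape))  # True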


Note: The tensorflow.python.framework.tensor_util.TensorShapeProtoToList method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers, and the copyright of the source code belongs to the original authors. Please consult the corresponding project's license before redistributing or reusing the code, and do not reproduce this article without permission.