Python v1.floor Method Code Examples

This article collects typical usage examples of the Python method tensorflow.compat.v1.floor. If you are wondering what v1.floor does, how to call it, or want to see it used in context, the curated code examples below should help. You can also explore other usage examples from the tensorflow.compat.v1 module.


Fifteen code examples of v1.floor are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.

Example 1: _build

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def _build(self, x, state):
    prev_keep_mask = state
    shape = tf.shape(x)
    noise = tf.random_uniform(shape, dtype=x.dtype)
    other_mask = tf.floor(self._keep_prob + noise)
    choice_noise = tf.random_uniform(shape, dtype=x.dtype)
    choice = tf.less(choice_noise, self._flip_prob)
    # KLUDGE(melisgl): The client has to pass the last keep_mask from
    # a batch to the next so the mask may end up next to some
    # recurrent cell state. This state is often zero at the beginning
    # and may be periodically zeroed (per example) during training.
    # While zeroing LSTM state is okay, zeroing the dropout mask is
    # not. So instead of forcing every client to deal with this common
    # (?) case, if an all zero mask is detected, then regenerate a
    # fresh mask. This is of course a major hack and won't help with
    # learnt initial states, for example.
    sum_ = tf.reduce_sum(prev_keep_mask, 1, keepdims=True)
    is_initializing = tf.equal(sum_, 0.0)

    self._keep_mask = tf.where(tf.logical_or(choice, is_initializing),
                               other_mask,
                               prev_keep_mask)
    self._time_step += 1
    return x * self._keep_mask / self._keep_prob * self._scaler 
Developer: deepmind, Project: lamb, Lines: 26, Source file: dropout.py
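
Example 1 builds its dropout mask with tf.floor(self._keep_prob + noise): adding uniform noise in [0, 1) to keep_prob and flooring yields 1 with probability keep_prob and 0 otherwise, i.e., a Bernoulli(keep_prob) sample. A minimal standalone NumPy sketch of that trick (illustrative only, not part of dropout.py):

import numpy as np

rng = np.random.default_rng(0)
keep_prob = 0.7
noise = rng.uniform(size=100000)
mask = np.floor(keep_prob + noise)   # each entry is 0.0 or 1.0
print(mask.mean())                   # approximately 0.7 == keep_prob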

Example 2: _quantize

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def _quantize(x, params, randomize=True):
  """Quantize x according to params, optionally randomizing the rounding."""
  if not params.quantize:
    return x

  if not randomize:
    return tf.bitcast(
        tf.cast(x / params.quantization_scale, tf.int16), tf.float16)

  abs_x = tf.abs(x)
  sign_x = tf.sign(x)
  y = abs_x / params.quantization_scale
  y = tf.floor(y + tf.random_uniform(common_layers.shape_list(x)))
  y = tf.minimum(y, tf.int16.max) * sign_x
  q = tf.bitcast(tf.cast(y, tf.int16), tf.float16)
  return q 
Developer: tensorflow, Project: tensor2tensor, Lines: 18, Source file: diet.py
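
The line tf.floor(y + tf.random_uniform(...)) performs unbiased stochastic rounding: y is rounded up with probability equal to its fractional part, so the expected quantized value equals y exactly. A quick standalone NumPy check of this property (illustrative only, not part of diet.py):

import numpy as np

rng = np.random.default_rng(0)
y = 2.3
samples = np.floor(y + rng.uniform(size=100000))
print(samples.mean())   # approximately 2.3; deterministic rounding would give 2.0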

Example 3: preprocess

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def preprocess(self, x):
    """Normalize x.

    Args:
      x: 4-D Tensor.

    Returns:
      x: Scaled such that x lies between -0.5 and 0.5.
    """
    n_bits_x = self.hparams.n_bits_x
    n_bins = 2**n_bits_x
    x = tf.cast(x, dtype=tf.float32)
    if n_bits_x < 8:
      x = tf.floor(x / 2 ** (8 - n_bits_x))
    x = x / n_bins - 0.5
    return x 
Developer: tensorflow, Project: tensor2tensor, Lines: 18, Source file: glow.py
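
For example, with n_bits_x = 5 the 256 possible 8-bit pixel values are floored into 32 bins and then scaled into [-0.5, 0.5). The same arithmetic for a single pixel, in plain Python (assumed values, for illustration):

n_bits_x = 5
n_bins = 2 ** n_bits_x                       # 32
pixel = 200.0
if n_bits_x < 8:
    pixel = pixel // 2 ** (8 - n_bits_x)     # floor(200 / 8) = 25.0
print(pixel / n_bins - 0.5)                  # 25/32 - 0.5 = 0.28125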

Example 4: mu_law

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def mu_law(x, mu=255, int8=False):
  """A TF implementation of Mu-Law encoding.

  Args:
    x: The audio samples to encode.
    mu: The Mu to use in our Mu-Law.
    int8: Use int8 encoding.

  Returns:
    out: The Mu-Law encoded data, cast to int8 if `int8` is True.
  """
  out = tf.sign(x) * tf.log(1 + mu * tf.abs(x)) / np.log(1 + mu)
  out = tf.floor(out * 128)
  if int8:
    out = tf.cast(out, tf.int8)
  return out 
Developer: magenta, Project: magenta, Lines: 18, Source file: utils.py
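
To play audio back, the companding has to be inverted. The sketch below is an assumed inverse (written here for illustration; it is not taken from magenta's utils.py) that maps encoded values back to roughly the original [-1, 1) range:

import tensorflow.compat.v1 as tf

def inv_mu_law(out, mu=255):
  # Hypothetical inverse of mu_law above: undo the * 128, then the companding.
  out = tf.cast(out, tf.float32) / 128.0
  return tf.sign(out) / mu * (tf.pow(1.0 + mu, tf.abs(out)) - 1.0)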

Example 5: drop_connect

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def drop_connect(inputs, is_training, survival_prob):
  """Drop the entire conv with given survival probability."""
  # "Deep Networks with Stochastic Depth", https://arxiv.org/pdf/1603.09382.pdf
  if not is_training:
    return inputs

  # Compute tensor.
  batch_size = tf.shape(inputs)[0]
  random_tensor = survival_prob
  random_tensor += tf.random_uniform([batch_size, 1, 1, 1], dtype=inputs.dtype)
  binary_tensor = tf.floor(random_tensor)
  # Unlike the conventional approach of multiplying by survival_prob at test
  # time, here we divide by survival_prob at training time, so that no
  # additional compute is needed at test time.
  output = tf.div(inputs, survival_prob) * binary_tensor
  return output 
Developer: JunweiLiang, Project: Object_Detection_Tracking, Lines: 18, Source file: utils.py
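
Because floor(survival_prob + U[0,1)) equals 1 with probability survival_prob, dividing the kept activations by survival_prob leaves the expected output equal to the input, which is exactly why no rescaling is needed at test time. A standalone NumPy check of that expectation (illustrative only):

import numpy as np

rng = np.random.default_rng(42)
survival_prob = 0.8
binary = np.floor(survival_prob + rng.uniform(size=100000))
out = 1.0 / survival_prob * binary   # drop_connect applied to inputs x = 1.0
print(out.mean())                    # approximately 1.0 == E[output]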

Example 6: _apply_func_with_prob

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def _apply_func_with_prob(func, image, args, prob, bboxes):
  """Apply `func` to image w/ `args` as input with probability `prob`."""
  assert isinstance(args, tuple)
  if six.PY2:
    # pylint: disable=deprecated-method
    arg_spec = inspect.getargspec(func)
    # pylint: enable=deprecated-method
  else:
    arg_spec = inspect.getfullargspec(func)
  assert 'bboxes' == arg_spec[0][1]

  # If prob is a function argument, then this randomness is being handled
  # inside the function, so make sure it is always called.
  if 'prob' in arg_spec[0]:
    prob = 1.0

  # Apply the function with probability `prob`.
  should_apply_op = tf.cast(
      tf.floor(tf.random_uniform([], dtype=tf.float32) + prob), tf.bool)
  augmented_image, augmented_bboxes = tf.cond(
      should_apply_op,
      lambda: func(image, bboxes, *args),
      lambda: (image, bboxes))
  return augmented_image, augmented_bboxes 
Developer: tensorflow, Project: models, Lines: 26, Source file: autoaugment_utils.py

Example 7: drop_path

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def drop_path(net, keep_prob, is_training=True):
  """Drops out a whole example hiddenstate with the specified probability."""
  if is_training:
    batch_size = tf.shape(net)[0]
    noise_shape = [batch_size, 1, 1, 1]
    keep_prob = tf.cast(keep_prob, dtype=net.dtype)
    random_tensor = keep_prob
    random_tensor += tf.random_uniform(noise_shape, dtype=net.dtype)
    binary_tensor = tf.floor(random_tensor)
    net = tf.div(net, keep_prob) * binary_tensor
  return net 
Developer: tensorflow, Project: benchmarks, Lines: 13, Source file: nasnet_utils.py

Example 8: _ensure_keep_mask

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def _ensure_keep_mask(self, x):
    if self._keep_mask is None or not self._share_mask:
      shape = tf.shape(x)
      noise = tf.random_uniform(shape, dtype=x.dtype)
      self._keep_mask = (tf.floor(self._keep_prob + noise)
                         * (self._scaler / self._keep_prob))
      self._keep_mask.set_shape(x.get_shape())
    return self._keep_mask 
Developer: deepmind, Project: lamb, Lines: 10, Source file: dropout.py

Example 9: __call__

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def __call__(self, inputs, residual_inputs):
    """Apply SwitchLayer to inputs.

    Args:
      inputs: Input tensor
      residual_inputs: Residual connections from previous block

    Returns:
      tf.Tensor: New candidate value
    """
    input_shape = tf.shape(inputs)
    self.batch_size = input_shape[0]
    self.length = input_shape[1]
    self.num_units = inputs.shape.as_list()[2]

    self.n_bits = tf.log(tf.cast(self.length - 1, tf.float32)) / tf.log(2.0)
    self.n_bits = tf.floor(self.n_bits) + 1

    initializer = tf.constant_initializer(0.5)
    residual_scale = tf.get_variable(
        self.prefix + "/residual_scale", [self.num_units],
        initializer=initializer)

    shuffled_input = self.swap_halves(inputs)
    mem_all = inputs + residual_inputs * residual_scale

    # calculate the new value
    candidate = self.gated_linear_map(mem_all, "c", 0.5, self.num_units,
                                      self.num_units)
    gate = tf.nn.sigmoid(
        self.linear_map(mem_all, "g", 0.5, self.num_units, self.num_units))

    candidate = gate * shuffled_input + (1 - gate) * candidate

    if self.dropout > 0:
      candidate = tf.nn.dropout(candidate, rate=self.dropout / self.n_bits)
    if self.dropout != 0.0 and self.mode == tf.estimator.ModeKeys.TRAIN:
      noise = tf.random_normal(tf.shape(candidate), mean=1.0, stddev=0.001)
      candidate = candidate * noise

    return candidate 
Developer: tensorflow, Project: tensor2tensor, Lines: 43, Source file: shuffle_network.py
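
The pair of lines using tf.log and tf.floor computes ceil(log2(length)), the number of shuffle stages needed for the sequence, via the identity floor(log2(n - 1)) + 1 == ceil(log2(n)) for n >= 2. A plain-Python check of that identity (illustrative only):

import math

for length in [2, 3, 8, 9, 1024]:
    n_bits = math.floor(math.log2(length - 1)) + 1
    assert n_bits == math.ceil(math.log2(length))
    print(length, n_bits)   # (2, 1), (3, 2), (8, 3), (9, 4), (1024, 10)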

Example 10: feature_grid_coordinate_vectors

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def feature_grid_coordinate_vectors(box_grid_y, box_grid_x):
  """Returns feature grid point coordinate vectors for bilinear interpolation.

  Box grid is specified in absolute coordinate system with origin at left top
  (0, 0). The returned coordinate vectors contain 0-based feature point indices.

  This function snaps each point in the box grid to nearest 4 points on the
  feature map.

  In this function we also follow the convention of treating feature pixels as
  point objects with no spatial extent.

  Args:
    box_grid_y: A float tensor of shape [batch, num_boxes, size] containing y
      coordinate vector of the box grid.
    box_grid_x: A float tensor of shape [batch, num_boxes, size] containing x
      coordinate vector of the box grid.

  Returns:
    feature_grid_y0: An int32 tensor of shape [batch, num_boxes, size]
      containing y coordinate vector for the top neighbors.
    feature_grid_x0: An int32 tensor of shape [batch, num_boxes, size]
      containing x coordinate vector for the left neighbors.
    feature_grid_y1: An int32 tensor of shape [batch, num_boxes, size]
      containing y coordinate vector for the bottom neighbors.
    feature_grid_x1: An int32 tensor of shape [batch, num_boxes, size]
      containing x coordinate vector for the right neighbors.
  """
  feature_grid_y0 = tf.floor(box_grid_y)
  feature_grid_x0 = tf.floor(box_grid_x)
  feature_grid_y1 = tf.floor(box_grid_y + 1)
  feature_grid_x1 = tf.floor(box_grid_x + 1)
  feature_grid_y0 = tf.cast(feature_grid_y0, dtype=tf.int32)
  feature_grid_y1 = tf.cast(feature_grid_y1, dtype=tf.int32)
  feature_grid_x0 = tf.cast(feature_grid_x0, dtype=tf.int32)
  feature_grid_x1 = tf.cast(feature_grid_x1, dtype=tf.int32)
  return (feature_grid_y0, feature_grid_x0, feature_grid_y1, feature_grid_x1) 
Developer: tensorflow, Project: models, Lines: 39, Source file: spatial_transform_ops.py
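
For a grid coordinate of, say, 2.7, the snapped neighbors are simply floor(2.7) = 2 and floor(2.7 + 1) = 3; note that an exact integer coordinate such as 5.0 still gets the pair (5, 6). A tiny NumPy illustration of the same snapping (illustrative only):

import numpy as np

box_grid_y = np.array([2.7, 5.0])
y0 = np.floor(box_grid_y).astype(np.int32)        # [2, 5] top neighbors
y1 = np.floor(box_grid_y + 1).astype(np.int32)    # [3, 6] bottom neighbors
print(y0, y1)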

Example 11: _randomly_negate_tensor

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def _randomly_negate_tensor(tensor):
  """With 50% prob turn the tensor negative."""
  should_flip = tf.cast(tf.floor(tf.random_uniform([]) + 0.5), tf.bool)
  final_tensor = tf.cond(should_flip, lambda: tensor, lambda: -tensor)
  return final_tensor 
Developer: tensorflow, Project: models, Lines: 7, Source file: autoaugment_utils.py

Example 12: test_forward_floor

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def test_forward_floor():
    ishape = (1, 3, 10, 10)
    inp_array = np.random.uniform(size=ishape).astype(np.float32)
    with tf.Graph().as_default():
        in1 = tf.placeholder(shape=inp_array.shape, dtype=inp_array.dtype)
        tf.floor(in1)
        compare_tf_with_tvm(inp_array, 'Placeholder:0', 'Floor:0') 
Developer: apache, Project: incubator-tvm, Lines: 9, Source file: test_forward.py

Example 13: learning_rate_factor

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def learning_rate_factor(name, step_num, hparams):
  """Compute the designated learning rate factor from hparams."""
  if name == "constant":
    tf.logging.info("Base learning rate: %f", hparams.learning_rate_constant)
    return hparams.learning_rate_constant
  elif name == "linear_warmup":
    return tf.minimum(1.0, step_num / hparams.learning_rate_warmup_steps)
  elif name == "linear_decay":
    ret = (hparams.train_steps - step_num) / hparams.learning_rate_decay_steps
    return tf.minimum(1.0, tf.maximum(0.0, ret))
  elif name == "cosdecay":  # openai gpt
    in_warmup = tf.cast(step_num <= hparams.learning_rate_warmup_steps,
                        dtype=tf.float32)
    ret = 0.5 * (1 + tf.cos(
        np.pi * step_num / hparams.learning_rate_decay_steps))
    # if in warmup stage return 1 else return the decayed value
    return in_warmup * 1 + (1 - in_warmup) * ret
  elif name == "single_cycle_cos_decay":
    # Cosine decay to zero with a single cycle. This is different from
    # "cosdecay" because it starts at 1 when the warmup steps end.
    x = tf.maximum(step_num, hparams.learning_rate_warmup_steps)
    step = x - hparams.learning_rate_warmup_steps
    if hparams.train_steps <= hparams.learning_rate_warmup_steps:
      raise ValueError("single_cycle_cos_decay cannot be used unless "
                       "hparams.train_steps > "
                       "hparams.learning_rate_warmup_steps")
    return tf.math.cos(
        step * np.pi /
        (hparams.train_steps - hparams.learning_rate_warmup_steps)) / 2.0 + 0.5
  elif name == "multi_cycle_cos_decay":
    # Cosine decay with a variable number of cycles. This is different from
    # "cosdecay" because it starts at 1 when the warmup steps end. Use
    # hparams.learning_rate_decay_steps to determine the number of cycles.
    x = tf.maximum(step_num, hparams.learning_rate_warmup_steps)
    step = x - hparams.learning_rate_warmup_steps
    return tf.math.cos(
        step * np.pi / hparams.learning_rate_decay_steps) / 2.0 + 0.5
  elif name == "rsqrt_decay":
    return tf.rsqrt(tf.maximum(step_num, hparams.learning_rate_warmup_steps))
  elif name == "rsqrt_normalized_decay":
    scale = tf.sqrt(tf.to_float(hparams.learning_rate_warmup_steps))
    return scale * tf.rsqrt(tf.maximum(
        step_num, hparams.learning_rate_warmup_steps))
  elif name == "exp_decay":
    decay_steps = hparams.learning_rate_decay_steps
    warmup_steps = hparams.learning_rate_warmup_steps
    p = (step_num - warmup_steps) / decay_steps
    p = tf.maximum(p, 0.)
    if hparams.learning_rate_decay_staircase:
      p = tf.floor(p)
    return tf.pow(hparams.learning_rate_decay_rate, p)
  elif name == "rsqrt_hidden_size":
    return hparams.hidden_size ** -0.5
  elif name == "legacy":
    return legacy_learning_rate_schedule(hparams)
  else:
    raise ValueError("unknown learning rate factor %s" % name) 
Developer: tensorflow, Project: tensor2tensor, Lines: 59, Source file: learning_rate.py
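
With learning_rate_decay_staircase enabled, tf.floor(p) turns the smooth exponential decay into discrete steps: the factor stays constant within each decay interval and drops by decay_rate at the boundaries. A standalone sketch of the "exp_decay" branch with assumed hyperparameter values (illustrative only):

import math

warmup_steps, decay_steps, decay_rate = 1000, 5000, 0.5   # assumed values
for step_num in [1000, 3500, 6000, 11000, 16000]:
    p = max((step_num - warmup_steps) / decay_steps, 0.0)
    p = math.floor(p)                    # staircase: 0, 0, 1, 2, 3
    print(step_num, decay_rate ** p)     # 1.0, 1.0, 0.5, 0.25, 0.125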

Example 14: _learning_rate_decay

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def _learning_rate_decay(hparams, warmup_steps=0):
  """Learning rate decay multiplier."""
  scheme = hparams.learning_rate_decay_scheme
  warmup_steps = tf.to_float(warmup_steps)
  global_step = _global_step(hparams)

  if not scheme or scheme == "none":
    return tf.constant(1.)

  tf.logging.info("Applying learning rate decay: %s.", scheme)

  if scheme == "exp":
    decay_steps = hparams.learning_rate_decay_steps
    p = (global_step - warmup_steps) / decay_steps
    if hparams.learning_rate_decay_staircase:
      p = tf.floor(p)
    return tf.pow(hparams.learning_rate_decay_rate, p)

  if scheme == "piecewise":
    return _piecewise_learning_rate(global_step,
                                    hparams.learning_rate_boundaries,
                                    hparams.learning_rate_multiples)

  if scheme == "cosine":
    cycle_steps = hparams.learning_rate_cosine_cycle_steps
    cycle_position = global_step % (2 * cycle_steps)
    cycle_position = cycle_steps - tf.abs(cycle_steps - cycle_position)
    return 0.5 * (1 + tf.cos(np.pi * cycle_position / cycle_steps))

  if scheme == "cyclelinear10x":
    # Cycle the rate linearly by 10x every warmup_steps, up and down.
    cycle_steps = warmup_steps
    cycle_position = global_step % (2 * cycle_steps)
    cycle_position = tf.to_float(  # Normalize to the interval [-1, 1].
        cycle_position - cycle_steps) / float(cycle_steps)
    cycle_position = 1.0 - tf.abs(cycle_position)  # 0 to 1 and back to 0.
    return (cycle_position + 0.1) * 3.0  # 10x difference each cycle (0.3-3).

  if scheme == "sqrt":
    return _legacy_sqrt_decay(global_step - warmup_steps)

  raise ValueError("Unrecognized learning rate decay scheme: %s" %
                   hparams.learning_rate_decay_scheme) 
Developer: tensorflow, Project: tensor2tensor, Lines: 45, Source file: learning_rate.py

Example 15: simulated_quantize

# Required module: from tensorflow.compat import v1 [as alias]
# Or alternatively: from tensorflow.compat.v1 import floor [as alias]
def simulated_quantize(x, num_bits, noise):
  """Simulate quantization to num_bits bits, with externally-stored scale.

  num_bits is the number of bits used to store each value.
  noise is a float32 Tensor containing values in [0, 1).
  Each value in noise should take different values across
  different steps, approximating a uniform distribution over [0, 1).
  In the case of replicated TPU training, noise should be identical
  across replicas in order to keep the parameters identical across replicas.

  The natural choice for noise would be tf.random_uniform(),
  but this is not possible for TPU, since there is currently no way to seed
  the different cores to produce identical values across replicas.  Instead we
  use noise_from_step_num() (see below).

  The quantization scheme is as follows:

  Compute the maximum absolute value by row (call this max_abs).
  Store this either in an auxiliary variable or in an extra column.

  Divide the parameters by (max_abs / (2^(num_bits-1)-1)).  This gives a
  float32 value in the range [-(2^(num_bits-1)-1), 2^(num_bits-1)-1].

  Unbiased randomized roundoff by adding noise and rounding down.

  This produces a signed integer with num_bits bits which can then be stored.

  Args:
    x: a float32 Tensor
    num_bits: an integer between 1 and 22
    noise: a float Tensor broadcastable to the shape of x.

  Returns:
    a float32 Tensor
  """
  shape = x.get_shape().as_list()
  if not (len(shape) >= 2 and shape[-1] > 1):
    return x
  max_abs = tf.reduce_max(tf.abs(x), -1, keepdims=True) + 1e-9
  max_int = 2 ** (num_bits - 1) - 1
  scale = max_abs / max_int
  x /= scale
  x = tf.floor(x + noise)
  # dequantize before storing (since this is a simulation)
  x *= scale
  return x 
Developer: tensorflow, Project: tensor2tensor, Lines: 48, Source file: quantization.py
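
A worked numeric round trip of the scheme described in the docstring, using NumPy and a fixed stand-in value for the noise (illustrative only; the real code feeds in noise_from_step_num()):

import numpy as np

num_bits = 8
x = np.array([[0.5, -1.2, 3.4]], dtype=np.float32)
max_abs = np.max(np.abs(x), -1, keepdims=True) + 1e-9   # 3.4 per row
max_int = 2 ** (num_bits - 1) - 1                       # 127
scale = max_abs / max_int
q = np.floor(x / scale + 0.5)          # signed integers in [-127, 127]
print(q * scale)                       # dequantized: within one scale step of x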


Note: The tensorflow.compat.v1.floor method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers; copyright in the source code remains with the original authors. For distribution and use, please refer to the corresponding project's license; do not republish without permission.