

Python v1.floor Method Code Examples

This article collects typical usage examples of the tensorflow.compat.v1.floor method in Python. If you are wondering what v1.floor does, how to call it, or what real code that uses it looks like, the curated examples below should help. You can also explore further usage examples from the tensorflow.compat.v1 module that this method belongs to.


Below are 15 code examples of v1.floor, ordered by popularity by default.

Example 1: _build

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def _build(self, x, state):
    prev_keep_mask = state
    shape = tf.shape(x)
    noise = tf.random_uniform(shape, dtype=x.dtype)
    other_mask = tf.floor(self._keep_prob + noise)
    choice_noise = tf.random_uniform(shape, dtype=x.dtype)
    choice = tf.less(choice_noise, self._flip_prob)
    # KLUDGE(melisgl): The client has to pass the last keep_mask from
    # a batch to the next so the mask may end up next to some
    # recurrent cell state. This state is often zero at the beginning
    # and may be periodically zeroed (per example) during training.
    # While zeroing LSTM state is okay, zeroing the dropout mask is
    # not. So instead of forcing every client to deal with this common
    # (?) case, if an all zero mask is detected, then regenerate a
    # fresh mask. This is of course a major hack and won't help with
    # learnt initial states, for example.
    sum_ = tf.reduce_sum(prev_keep_mask, 1, keepdims=True)
    is_initializing = tf.equal(sum_, 0.0)

    self._keep_mask = tf.where(tf.logical_or(choice, is_initializing),
                               other_mask,
                               prev_keep_mask)
    self._time_step += 1
    return x * self._keep_mask / self._keep_prob * self._scaler 
Developer ID: deepmind, Project: lamb, Lines of code: 26, Source: dropout.py
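
Why floor(keep_prob + noise) produces a dropout mask: with noise drawn uniformly from [0, 1), the sum reaches 1 exactly when the noise exceeds 1 - keep_prob, so the floor is 1 (keep) with probability keep_prob and 0 (drop) otherwise. A minimal NumPy sketch of this property (illustrative values, not part of the lamb source):

import numpy as np

# floor(keep_prob + U) with U ~ Uniform[0, 1) is a Bernoulli(keep_prob) draw:
# it equals 1 when U >= 1 - keep_prob, which happens with probability keep_prob.
keep_prob = 0.8
noise = np.random.uniform(size=1000000)
mask = np.floor(keep_prob + noise)
print(mask.mean())  # ~0.8: the fraction of kept units matches keep_prob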

Example 2: _quantize

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def _quantize(x, params, randomize=True):
  """Quantize x according to params, optionally randomizing the rounding."""
  if not params.quantize:
    return x

  if not randomize:
    return tf.bitcast(
        tf.cast(x / params.quantization_scale, tf.int16), tf.float16)

  abs_x = tf.abs(x)
  sign_x = tf.sign(x)
  y = abs_x / params.quantization_scale
  y = tf.floor(y + tf.random_uniform(common_layers.shape_list(x)))
  y = tf.minimum(y, tf.int16.max) * sign_x
  q = tf.bitcast(tf.cast(y, tf.int16), tf.float16)
  return q 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 18, Source: diet.py
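
Reading the quantized value back requires undoing the bitcast and rescaling. Below is a hypothetical _dequantize counterpart, inferred from the bitcast/cast pair above rather than copied from diet.py; it assumes the same params.quantization_scale is available at decode time:

import tensorflow.compat.v1 as tf

def _dequantize(q, params):
  """Sketch of the inverse of _quantize (an assumption, not the original).

  Recovers a float32 tensor from the int16 bit pattern that _quantize
  stored inside a float16.
  """
  if not params.quantize:
    return q
  return tf.cast(tf.bitcast(q, tf.int16), tf.float32) * params.quantization_scale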

Example 3: preprocess

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def preprocess(self, x):
    """Normalize x.

    Args:
      x: 4-D Tensor.

    Returns:
      x: Scaled such that x lies in [-0.5, 0.5)
    """
    n_bits_x = self.hparams.n_bits_x
    n_bins = 2**n_bits_x
    x = tf.cast(x, dtype=tf.float32)
    if n_bits_x < 8:
      x = tf.floor(x / 2 ** (8 - n_bits_x))
    x = x / n_bins - 0.5
    return x 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 18, Source: glow.py
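
A worked example of the scaling, assuming n_bits_x = 5 (an illustrative value): an 8-bit pixel is first reduced to 5 bits of precision by the floor division, then mapped into [-0.5, 0.5):

import numpy as np

n_bits_x = 5
n_bins = 2 ** n_bits_x                   # 32 bins
x = np.array([0., 200., 255.])
x = np.floor(x / 2 ** (8 - n_bits_x))    # [0., 25., 31.]: drop the 3 low bits
print(x / n_bins - 0.5)                  # [-0.5, 0.28125, 0.46875]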

Example 4: mu_law

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def mu_law(x, mu=255, int8=False):
  """A TF implementation of Mu-Law encoding.

  Args:
    x: The audio samples to encode.
    mu: The Mu to use in our Mu-Law.
    int8: Use int8 encoding.

  Returns:
    out: The Mu-Law encoded int8 data.
  """
  out = tf.sign(x) * tf.log(1 + mu * tf.abs(x)) / np.log(1 + mu)
  out = tf.floor(out * 128)
  if int8:
    out = tf.cast(out, tf.int8)
  return out 
Developer ID: magenta, Project: magenta, Lines of code: 18, Source: utils.py
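
For completeness, a sketch of the inverse transform, reconstructed from the encoder above (it is not part of the excerpt shown; treat the exact scaling as an assumption):

import tensorflow.compat.v1 as tf

def inv_mu_law(x, mu=255):
  """Sketch of an inverse Mu-Law decoder for the encoder above (assumed,
  not part of the excerpt). Maps int8-range codes back to roughly [-1, 1]."""
  x = tf.cast(x, tf.float32)
  out = (x + 0.5) * 2. / (mu + 1)  # undo the *128 scaling, centering each bin
  out = tf.sign(out) / mu * ((1 + mu) ** tf.abs(out) - 1)  # invert companding
  out = tf.where(tf.equal(x, 0), x, out)  # map code 0 exactly to 0
  return out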

Example 5: drop_connect

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def drop_connect(inputs, is_training, survival_prob):
  """Drop the entire conv with given survival probability."""
  # "Deep Networks with Stochastic Depth", https://arxiv.org/pdf/1603.09382.pdf
  if not is_training:
    return inputs

  # Compute tensor.
  batch_size = tf.shape(inputs)[0]
  random_tensor = survival_prob
  random_tensor += tf.random_uniform([batch_size, 1, 1, 1], dtype=inputs.dtype)
  binary_tensor = tf.floor(random_tensor)
  # Unlike the conventional approach of multiplying by survival_prob at test
  # time, here we divide by survival_prob at training time, so that no
  # additional compute is needed at test time.
  output = tf.div(inputs, survival_prob) * binary_tensor
  return output 
Developer ID: JunweiLiang, Project: Object_Detection_Tracking, Lines of code: 18, Source: utils.py
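
Note the expectation argument behind dividing by survival_prob: each example survives with probability survival_prob and is scaled up by 1/survival_prob, so the expected output equals the input and test-time inference needs no rescaling. A NumPy check with illustrative values:

import numpy as np

survival_prob = 0.9
inputs = np.ones((100000, 1, 1, 1), dtype=np.float32)
random_tensor = survival_prob + np.random.uniform(size=inputs.shape)
binary_tensor = np.floor(random_tensor)     # 1 w.p. survival_prob, else 0
output = inputs / survival_prob * binary_tensor
print(output.mean())                        # ~1.0: expectation is preserved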

Example 6: _apply_func_with_prob

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def _apply_func_with_prob(func, image, args, prob, bboxes):
  """Apply `func` to image w/ `args` as input with probability `prob`."""
  assert isinstance(args, tuple)
  if six.PY2:
    # pylint: disable=deprecated-method
    arg_spec = inspect.getargspec(func)
    # pylint: enable=deprecated-method
  else:
    arg_spec = inspect.getfullargspec(func)
  assert 'bboxes' == arg_spec[0][1]

  # If prob is a function argument, then this randomness is being handled
  # inside the function, so make sure it is always called.
  if 'prob' in arg_spec[0]:
    prob = 1.0

  # Apply the function with probability `prob`.
  should_apply_op = tf.cast(
      tf.floor(tf.random_uniform([], dtype=tf.float32) + prob), tf.bool)
  augmented_image, augmented_bboxes = tf.cond(
      should_apply_op,
      lambda: func(image, bboxes, *args),
      lambda: (image, bboxes))
  return augmented_image, augmented_bboxes 
Developer ID: tensorflow, Project: models, Lines of code: 26, Source: autoaugment_utils.py
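
A hypothetical usage, assuming the function above is in scope (my_flip, image, and bboxes are illustrative placeholders, not from autoaugment_utils.py). Note that the wrapped function's second positional argument must be named bboxes, per the assert above:

import tensorflow.compat.v1 as tf

def my_flip(image, bboxes):
  # Illustrative augmentation: flip the image, pass bboxes through unchanged.
  return tf.image.flip_left_right(image), bboxes

image = tf.zeros([64, 64, 3])
bboxes = tf.zeros([0, 4])
# Apply my_flip with probability 0.3; otherwise return the inputs untouched.
augmented_image, augmented_bboxes = _apply_func_with_prob(
    my_flip, image, args=(), prob=0.3, bboxes=bboxes)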

Example 7: drop_path

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def drop_path(net, keep_prob, is_training=True):
  """Drops out a whole example hiddenstate with the specified probability."""
  if is_training:
    batch_size = tf.shape(net)[0]
    noise_shape = [batch_size, 1, 1, 1]
    keep_prob = tf.cast(keep_prob, dtype=net.dtype)
    random_tensor = keep_prob
    random_tensor += tf.random_uniform(noise_shape, dtype=net.dtype)
    binary_tensor = tf.floor(random_tensor)
    net = tf.div(net, keep_prob) * binary_tensor
  return net 
Developer ID: tensorflow, Project: benchmarks, Lines of code: 13, Source: nasnet_utils.py

Example 8: _ensure_keep_mask

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def _ensure_keep_mask(self, x):
    if self._keep_mask is None or not self._share_mask:
      shape = tf.shape(x)
      noise = tf.random_uniform(shape, dtype=x.dtype)
      self._keep_mask = (tf.floor(self._keep_prob + noise)
                         * (self._scaler / self._keep_prob))
      self._keep_mask.set_shape(x.get_shape())
    return self._keep_mask 
Developer ID: deepmind, Project: lamb, Lines of code: 10, Source: dropout.py

Example 9: __call__

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def __call__(self, inputs, residual_inputs):
    """Apply SwitchLayer to inputs.

    Args:
      inputs: Input tensor
      residual_inputs: Residual connections from previous block

    Returns:
      tf.Tensor: New candidate value
    """
    input_shape = tf.shape(inputs)
    self.batch_size = input_shape[0]
    self.length = input_shape[1]
    self.num_units = inputs.shape.as_list()[2]

    self.n_bits = tf.log(tf.cast(self.length - 1, tf.float32)) / tf.log(2.0)
    self.n_bits = tf.floor(self.n_bits) + 1

    initializer = tf.constant_initializer(0.5)
    residual_scale = tf.get_variable(
        self.prefix + "/residual_scale", [self.num_units],
        initializer=initializer)

    shuffled_input = self.swap_halves(inputs)
    mem_all = inputs + residual_inputs * residual_scale

    # calculate the new value
    candidate = self.gated_linear_map(mem_all, "c", 0.5, self.num_units,
                                      self.num_units)
    gate = tf.nn.sigmoid(
        self.linear_map(mem_all, "g", 0.5, self.num_units, self.num_units))

    candidate = gate * shuffled_input + (1 - gate) * candidate

    if self.dropout > 0:
      candidate = tf.nn.dropout(candidate, rate=self.dropout / self.n_bits)
    if self.dropout != 0.0 and self.mode == tf.estimator.ModeKeys.TRAIN:
      noise = tf.random_normal(tf.shape(candidate), mean=1.0, stddev=0.001)
      candidate = candidate * noise

    return candidate 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 43, Source: shuffle_network.py
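
The floor(log2(length - 1)) + 1 expression computes the number of bits needed to index length positions, which the layer then uses to scale its dropout rate. A quick check of the formula (illustrative, NumPy in place of TF):

import numpy as np

for length in [2, 10, 256, 257]:
    n_bits = np.floor(np.log(length - 1) / np.log(2.0)) + 1
    print(length, int(n_bits))   # 2 -> 1, 10 -> 4, 256 -> 8, 257 -> 9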

Example 10: feature_grid_coordinate_vectors

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def feature_grid_coordinate_vectors(box_grid_y, box_grid_x):
  """Returns feature grid point coordinate vectors for bilinear interpolation.

  Box grid is specified in absolute coordinate system with origin at left top
  (0, 0). The returned coordinate vectors contain 0-based feature point indices.

  This function snaps each point in the box grid to nearest 4 points on the
  feature map.

  In this function we also follow the convention of treating feature pixels as
  point objects with no spatial extent.

  Args:
    box_grid_y: A float tensor of shape [batch, num_boxes, size] containing y
      coordinate vector of the box grid.
    box_grid_x: A float tensor of shape [batch, num_boxes, size] containing x
      coordinate vector of the box grid.

  Returns:
    feature_grid_y0: An int32 tensor of shape [batch, num_boxes, size]
      containing y coordinate vector for the top neighbors.
    feature_grid_x0: A int32 tensor of shape [batch, num_boxes, size]
      containing x coordinate vector for the left neighbors.
    feature_grid_y1: A int32 tensor of shape [batch, num_boxes, size]
      containing y coordinate vector for the bottom neighbors.
    feature_grid_x1: A int32 tensor of shape [batch, num_boxes, size]
      containing x coordinate vector for the right neighbors.
  """
  feature_grid_y0 = tf.floor(box_grid_y)
  feature_grid_x0 = tf.floor(box_grid_x)
  feature_grid_y1 = tf.floor(box_grid_y + 1)
  feature_grid_x1 = tf.floor(box_grid_x + 1)
  feature_grid_y0 = tf.cast(feature_grid_y0, dtype=tf.int32)
  feature_grid_y1 = tf.cast(feature_grid_y1, dtype=tf.int32)
  feature_grid_x0 = tf.cast(feature_grid_x0, dtype=tf.int32)
  feature_grid_x1 = tf.cast(feature_grid_x1, dtype=tf.int32)
  return (feature_grid_y0, feature_grid_x0, feature_grid_y1, feature_grid_x1) 
Developer ID: tensorflow, Project: models, Lines of code: 39, Source: spatial_transform_ops.py
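
Why floor(x + 1) rather than ceil(x): for a grid point with an integral coordinate, the two interpolation neighbors should be x and x + 1, whereas ceil(x) would collapse both neighbors onto x. A small worked example with illustrative values:

import numpy as np

box_grid_y = np.array([2.7, 4.0])
print(np.floor(box_grid_y))      # [2., 4.] -> top neighbors (feature_grid_y0)
print(np.floor(box_grid_y + 1))  # [3., 5.] -> bottom neighbors (feature_grid_y1)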

Example 11: _randomly_negate_tensor

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def _randomly_negate_tensor(tensor):
  """With 50% prob turn the tensor negative."""
  should_flip = tf.cast(tf.floor(tf.random_uniform([]) + 0.5), tf.bool)
  final_tensor = tf.cond(should_flip, lambda: tensor, lambda: -tensor)
  return final_tensor 
Developer ID: tensorflow, Project: models, Lines of code: 7, Source: autoaugment_utils.py

Example 12: test_forward_floor

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def test_forward_floor():
    ishape = (1, 3, 10, 10)
    inp_array = np.random.uniform(size=ishape).astype(np.float32)
    with tf.Graph().as_default():
        in1 = tf.placeholder(shape=inp_array.shape, dtype=inp_array.dtype)
        tf.floor(in1)
        compare_tf_with_tvm(inp_array, 'Placeholder:0', 'Floor:0') 
Developer ID: apache, Project: incubator-tvm, Lines of code: 9, Source: test_forward.py

Example 13: learning_rate_factor

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def learning_rate_factor(name, step_num, hparams):
  """Compute the designated learning rate factor from hparams."""
  if name == "constant":
    tf.logging.info("Base learning rate: %f", hparams.learning_rate_constant)
    return hparams.learning_rate_constant
  elif name == "linear_warmup":
    return tf.minimum(1.0, step_num / hparams.learning_rate_warmup_steps)
  elif name == "linear_decay":
    ret = (hparams.train_steps - step_num) / hparams.learning_rate_decay_steps
    return tf.minimum(1.0, tf.maximum(0.0, ret))
  elif name == "cosdecay":  # openai gpt
    in_warmup = tf.cast(step_num <= hparams.learning_rate_warmup_steps,
                        dtype=tf.float32)
    ret = 0.5 * (1 + tf.cos(
        np.pi * step_num / hparams.learning_rate_decay_steps))
    # if in warmup stage return 1 else return the decayed value
    return in_warmup * 1 + (1 - in_warmup) * ret
  elif name == "single_cycle_cos_decay":
    # Cosine decay to zero with a single cycle. This is different from
    # "cosdecay" because it starts at 1 when the warmup steps end.
    x = tf.maximum(step_num, hparams.learning_rate_warmup_steps)
    step = x - hparams.learning_rate_warmup_steps
    if hparams.train_steps <= hparams.learning_rate_warmup_steps:
      raise ValueError("single_cycle_cos_decay cannot be used unless "
                       "hparams.train_steps > "
                       "hparams.learning_rate_warmup_steps")
    return tf.math.cos(
        step * np.pi /
        (hparams.train_steps - hparams.learning_rate_warmup_steps)) / 2.0 + 0.5
  elif name == "multi_cycle_cos_decay":
    # Cosine decay with a variable number of cycles. This is different from
    # "cosdecay" because it starts at 1 when the warmup steps end. Use
    # hparams.learning_rate_decay_steps to determine the number of cycles.
    x = tf.maximum(step_num, hparams.learning_rate_warmup_steps)
    step = x - hparams.learning_rate_warmup_steps
    return tf.math.cos(
        step * np.pi / hparams.learning_rate_decay_steps) / 2.0 + 0.5
  elif name == "rsqrt_decay":
    return tf.rsqrt(tf.maximum(step_num, hparams.learning_rate_warmup_steps))
  elif name == "rsqrt_normalized_decay":
    scale = tf.sqrt(tf.to_float(hparams.learning_rate_warmup_steps))
    return scale * tf.rsqrt(tf.maximum(
        step_num, hparams.learning_rate_warmup_steps))
  elif name == "exp_decay":
    decay_steps = hparams.learning_rate_decay_steps
    warmup_steps = hparams.learning_rate_warmup_steps
    p = (step_num - warmup_steps) / decay_steps
    p = tf.maximum(p, 0.)
    if hparams.learning_rate_decay_staircase:
      p = tf.floor(p)
    return tf.pow(hparams.learning_rate_decay_rate, p)
  elif name == "rsqrt_hidden_size":
    return hparams.hidden_size ** -0.5
  elif name == "legacy":
    return legacy_learning_rate_schedule(hparams)
  else:
    raise ValueError("unknown learning rate factor %s" % name) 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 59, Source: learning_rate.py
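
In the exp_decay branch, flooring p turns the smooth exponential decay into a staircase: the factor stays constant within each window of decay_steps and drops by decay_rate at the boundary. A worked example with assumed hparams values:

import numpy as np

decay_rate, decay_steps, warmup_steps = 0.5, 1000, 0   # assumed values
for step_num in [500, 999, 1000, 1999, 2000]:
    p = max((step_num - warmup_steps) / decay_steps, 0.0)
    print(step_num, decay_rate ** np.floor(p))   # 1.0, 1.0, 0.5, 0.5, 0.25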

Example 14: _learning_rate_decay

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def _learning_rate_decay(hparams, warmup_steps=0):
  """Learning rate decay multiplier."""
  scheme = hparams.learning_rate_decay_scheme
  warmup_steps = tf.to_float(warmup_steps)
  global_step = _global_step(hparams)

  if not scheme or scheme == "none":
    return tf.constant(1.)

  tf.logging.info("Applying learning rate decay: %s.", scheme)

  if scheme == "exp":
    decay_steps = hparams.learning_rate_decay_steps
    p = (global_step - warmup_steps) / decay_steps
    if hparams.learning_rate_decay_staircase:
      p = tf.floor(p)
    return tf.pow(hparams.learning_rate_decay_rate, p)

  if scheme == "piecewise":
    return _piecewise_learning_rate(global_step,
                                    hparams.learning_rate_boundaries,
                                    hparams.learning_rate_multiples)

  if scheme == "cosine":
    cycle_steps = hparams.learning_rate_cosine_cycle_steps
    cycle_position = global_step % (2 * cycle_steps)
    cycle_position = cycle_steps - tf.abs(cycle_steps - cycle_position)
    return 0.5 * (1 + tf.cos(np.pi * cycle_position / cycle_steps))

  if scheme == "cyclelinear10x":
    # Cycle the rate linearly by 10x every warmup_steps, up and down.
    cycle_steps = warmup_steps
    cycle_position = global_step % (2 * cycle_steps)
    cycle_position = tf.to_float(  # Normalize to the interval [-1, 1].
        cycle_position - cycle_steps) / float(cycle_steps)
    cycle_position = 1.0 - tf.abs(cycle_position)  # 0 to 1 and back to 0.
    return (cycle_position + 0.1) * 3.0  # 10x difference each cycle (0.3-3).

  if scheme == "sqrt":
    return _legacy_sqrt_decay(global_step - warmup_steps)

  raise ValueError("Unrecognized learning rate decay scheme: %s" %
                   hparams.learning_rate_decay_scheme) 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 45, Source: learning_rate.py

Example 15: simulated_quantize

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import floor [as alias]
def simulated_quantize(x, num_bits, noise):
  """Simulate quantization to num_bits bits, with externally-stored scale.

  num_bits is the number of bits used to store each value.
  noise is a float32 Tensor containing values in [0, 1).
  Each value in noise should take different values across
  different steps, approximating a uniform distribution over [0, 1).
  In the case of replicated TPU training, noise should be identical
  across replicas in order to keep the parameters identical across replicas.

  The natural choice for noise would be tf.random_uniform(),
  but this is not possible for TPU, since there is currently no way to seed
  the different cores to produce identical values across replicas.  Instead we
  use noise_from_step_num() (see below).

  The quantization scheme is as follows:

  Compute the maximum absolute value by row (call this max_abs).
  Store this either in an auxiliary variable or in an extra column.

  Divide the parameters by (max_abs / (2^(num_bits-1)-1)).  This gives a
  float32 value in the range [-2^(num_bits-1)-1, 2^(num_bits-1)-1]

  Unbiased randomized roundoff by adding noise and rounding down.

  This produces a signed integer with num_bits bits which can then be stored.

  Args:
    x: a float32 Tensor
    num_bits: an integer between 1 and 22
    noise: a float Tensor broadcastable to the shape of x.

  Returns:
    a float32 Tensor
  """
  shape = x.get_shape().as_list()
  if not (len(shape) >= 2 and shape[-1] > 1):
    return x
  max_abs = tf.reduce_max(tf.abs(x), -1, keepdims=True) + 1e-9
  max_int = 2 ** (num_bits - 1) - 1
  scale = max_abs / max_int
  x /= scale
  x = tf.floor(x + noise)
  # dequantize before storing (since this is a simulation)
  x *= scale
  return x 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 48, Source: quantization.py
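
The unbiasedness of the randomized roundoff above follows from E[floor(x + U)] = x for U uniform on [0, 1). A NumPy sketch of this property (illustrative, not part of quantization.py):

import numpy as np

x = 2.3
noise = np.random.uniform(size=1000000)
rounded = np.floor(x + noise)   # 2 with probability 0.7, 3 with probability 0.3
print(rounded.mean())           # ~2.3: the rounding is unbiased in expectation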


Note: The tensorflow.compat.v1.floor examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are drawn from open-source projects contributed by their developers, and copyright remains with the original authors. Consult the corresponding project's license before redistributing or using the code; do not reproduce without permission.