

Python v1.pad Method Code Examples

This article collects typical code examples of the tensorflow.compat.v1.pad method in Python. If you are wondering what v1.pad does, how to call it, or what it looks like in real code, the curated examples below may help. You can also browse further usage examples from the tensorflow.compat.v1 module.


Below are 15 code examples of the v1.pad method, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Python code examples.

Example 1: _fixed_padding

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def _fixed_padding(inputs, kernel_size, rate=1):
  """Pads the input along the spatial dimensions independently of input size.

  Pads the input such that if it was used in a convolution with 'VALID' padding,
  the output would have the same dimensions as if the unpadded input was used
  in a convolution with 'SAME' padding.

  Args:
    inputs: A tensor of size [batch, height_in, width_in, channels].
    kernel_size: The kernel to be used in the conv2d or max_pool2d operation.
    rate: An integer, rate for atrous convolution.

  Returns:
    output: A tensor of size [batch, height_out, width_out, channels] with the
      input, either intact (if kernel_size == 1) or padded (if kernel_size > 1).
  """
  kernel_size_effective = [kernel_size[0] + (kernel_size[0] - 1) * (rate - 1),
                           kernel_size[1] + (kernel_size[1] - 1) * (rate - 1)]
  pad_total = [kernel_size_effective[0] - 1, kernel_size_effective[1] - 1]
  pad_beg = [pad_total[0] // 2, pad_total[1] // 2]
  pad_end = [pad_total[0] - pad_beg[0], pad_total[1] - pad_beg[1]]
  padded_inputs = tf.pad(inputs, [[0, 0], [pad_beg[0], pad_end[0]],
                                  [pad_beg[1], pad_end[1]], [0, 0]])
  return padded_inputs 
Developer: tensorflow, Project: benchmarks, Lines: 26, Source: mobilenet.py
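
For reference, a minimal usage sketch (assuming import tensorflow.compat.v1 as tf; the shapes are illustrative):

import tensorflow.compat.v1 as tf

# Minimal usage sketch: a 3x3 kernel at rate 1 pads height and width by 1 on
# each side, so a subsequent 'VALID' convolution matches 'SAME' output size.
inputs = tf.ones([8, 224, 224, 3])  # [batch, height, width, channels]
padded = _fixed_padding(inputs, kernel_size=[3, 3])
print(padded.shape)  # (8, 226, 226, 3)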

Example 2: pad_batch

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def pad_batch(features, batch_multiple):
  """Pad batch dim of features to nearest multiple of batch_multiple."""
  feature = list(features.items())[0][1]
  batch_size = tf.shape(feature)[0]
  mod = batch_size % batch_multiple
  has_mod = tf.cast(tf.cast(mod, tf.bool), tf.int32)
  batch_padding = batch_multiple * has_mod - mod

  padded_features = {}
  for k, feature in features.items():
    rank = len(feature.shape)
    paddings = [[0, 0] for _ in range(rank)]
    paddings[0][1] = batch_padding
    padded_feature = tf.pad(feature, paddings)
    padded_features[k] = padded_feature
  return padded_features


# TODO(lukaszkaiser): refactor the API to not be just a list of self params
#   but make sense for other uses too. 
Developer: tensorflow, Project: tensor2tensor, Lines: 22, Source: data_reader.py
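
A minimal usage sketch of pad_batch (assuming import tensorflow.compat.v1 as tf; the printed shape is what eager execution reports):

import tensorflow.compat.v1 as tf

# Minimal usage sketch: a batch of 10 is padded up to the next multiple of 8.
features = {"inputs": tf.ones([10, 16]), "targets": tf.ones([10, 16])}
padded = pad_batch(features, batch_multiple=8)
print(padded["inputs"].shape)  # (16, 16) under eager execution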

Example 3: bottom

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def bottom(self, features):
    """We add padding to the input and output so they are the same.

    Length of input and output should be power of 2.

    Args:
      features: Dictionary of inputs and targets

    Returns:
      dictionary: Inputs and targets padded with 0 to the length of power of 2.
      Both are same length.
    """
    pad_len = self.max_pad_length(features)
    features["inputs"] = self.pad(features["inputs"], pad_len)

    if features.get("targets") is not None:
      features["targets"] = self.pad(features["targets"], pad_len)

    return super(ShuffleNetwork, self).bottom(features) 
Developer: tensorflow, Project: tensor2tensor, Lines: 21, Source: shuffle_network.py
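
The power-of-2 target length comes from max_pad_length, which is not shown here. As a rough sketch of the idea only (a hypothetical helper, not the actual ShuffleNetwork code), rounding a length up to the next power of 2 could be computed like this:

import tensorflow.compat.v1 as tf

# Hypothetical helper: round a length up to the next power of 2. The real
# max_pad_length is defined on ShuffleNetwork and may differ in detail.
def next_power_of_two(length):
  log2 = tf.ceil(tf.log(tf.to_float(length)) / tf.log(2.0))
  return tf.to_int32(tf.pow(2.0, log2))

print(next_power_of_two(tf.constant(100)))  # 128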

Example 4: pad

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def pad(tensor, pad_len):
    """Pad tensor on first dimension to pad_len.

    Args:
      tensor: Input tensor of rank >= 2, shape [batch, length, ...].
      pad_len: Length to pad the second dimension to.

    Returns:
      tf.Tensor: Padded input tensor.
    """

    assert len(tensor.shape) >= 2  # tensor of shape [batch, length, ...]
    length = tf.shape(tensor)[1]

    padding = [[0, 0], [0, pad_len - length]]
    padding += [[0, 0]] * (len(tensor.shape) - 2)
    return tf.pad(tensor, padding) 
Developer: tensorflow, Project: tensor2tensor, Lines: 19, Source: shuffle_network.py
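
A minimal usage sketch, treating pad as the standalone function shown above (assuming import tensorflow.compat.v1 as tf):

import tensorflow.compat.v1 as tf

# Minimal usage sketch: pad the length dimension of a [batch, length] tensor.
tensor = tf.ones([4, 10])
padded = pad(tensor, pad_len=16)
print(padded.shape)  # (4, 16) under eager execution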

Example 5: add_edge_bias

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def add_edge_bias(x, filter_size):
  """Pad x and concatenates an edge bias across the depth of x.

  The edge bias can be thought of as a binary feature which is unity when
  the filter is being convolved over an edge and zero otherwise.

  Args:
    x: Input tensor, shape (NHWC)
    filter_size: filter_size to determine padding.
  Returns:
    x_pad: Input tensor, shape (NHW(c+1))
  """
  x_shape = common_layers.shape_list(x)
  if filter_size[0] == 1 and filter_size[1] == 1:
    return x
  a = (filter_size[0] - 1) // 2  # vertical padding size
  b = (filter_size[1] - 1) // 2  # horizontal padding size
  padding = [[0, 0], [a, a], [b, b], [0, 0]]
  x_bias = tf.zeros(x_shape[:-1] + [1])

  x = tf.pad(x, padding)
  x_pad = tf.pad(x_bias, padding, constant_values=1)
  return tf.concat([x, x_pad], axis=3) 
Developer: tensorflow, Project: tensor2tensor, Lines: 25, Source: glow_ops.py
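
A usage sketch, assuming tensor2tensor's common_layers is importable as in the source file:

import tensorflow.compat.v1 as tf

# Usage sketch: a 3x3 filter pads height and width by 1 on each side and
# appends the all-ones edge-bias channel, so depth grows from 16 to 17.
x = tf.zeros([2, 8, 8, 16])
y = add_edge_bias(x, filter_size=[3, 3])
print(y.shape)  # (2, 10, 10, 17)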

Example 6: shake_shake_skip_connection

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def shake_shake_skip_connection(x, output_filters, stride, is_training):
  """Adds a residual connection to the filter x for the shake-shake model."""
  curr_filters = common_layers.shape_list(x)[-1]
  if curr_filters == output_filters:
    return x
  stride_spec = [1, stride, stride, 1]
  # Skip path 1.
  path1 = tf.nn.avg_pool(x, [1, 1, 1, 1], stride_spec, "VALID")
  path1 = tf.layers.conv2d(
      path1, int(output_filters / 2), (1, 1), padding="SAME", name="path1_conv")

  # Skip path 2.
  pad_arr = [[0, 0], [0, 1], [0, 1], [0, 0]]  # First pad with 0's then crop.
  path2 = tf.pad(x, pad_arr)[:, 1:, 1:, :]
  path2 = tf.nn.avg_pool(path2, [1, 1, 1, 1], stride_spec, "VALID")
  path2 = tf.layers.conv2d(
      path2, int(output_filters / 2), (1, 1), padding="SAME", name="path2_conv")

  # Concat and apply BN.
  final_path = tf.concat(values=[path1, path2], axis=-1)
  final_path = tf.layers.batch_normalization(
      final_path, training=is_training, name="final_path_bn")
  return final_path 
Developer: tensorflow, Project: tensor2tensor, Lines: 25, Source: shake_shake.py
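
A usage sketch, assuming tensor2tensor's common_layers is importable as in the source file; graph mode is used because tf.layers creates variables:

import tensorflow.compat.v1 as tf

# Usage sketch: doubling the filters with stride 2 halves the spatial dims.
tf.disable_eager_execution()
x = tf.random_uniform([2, 32, 32, 16])
y = shake_shake_skip_connection(x, output_filters=32, stride=2,
                                is_training=True)
print(y.shape)  # (2, 16, 16, 32)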

Example 7: _apply_logic

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def _apply_logic(self, input_tensor, output_depth, hparams, var_scope_suffix,
                   nonpadding, mask_future, **unused_kwargs):
    """Applies conv logic to `input_tensor`."""
    with tf.variable_scope("%s_conv_%s" % (self._conv_type, var_scope_suffix)):
      if mask_future:
        # Shift the inputs with left padding so that temporal information does
        # not leak. This must be used in tandem with VALID padding.
        pad_amount = int(self._conv_width - 1) * self._dilation_rate
        logic_output = tf.pad(
            input_tensor, paddings=[[0, 0], [pad_amount, 0], [0, 0]])
        padding = "VALID"
      else:
        logic_output = input_tensor
        padding = "SAME"

      logic_output = tf.expand_dims(logic_output, 2)
      logic_output = self._conv_function(logic_output, output_depth, padding)

      logic_output = tf.squeeze(logic_output, 2)
    return logic_output 
Developer: tensorflow, Project: tensor2tensor, Lines: 22, Source: nas_layers.py

Example 8: bytenet_internal

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def bytenet_internal(inputs, targets, hparams):
  """ByteNet, main step used for training."""
  with tf.variable_scope("bytenet"):
    # Flatten inputs and extend length by 50%.
    inputs = tf.expand_dims(common_layers.flatten4d3d(inputs), axis=2)
    extend_length = tf.to_int32(0.5 * tf.to_float(tf.shape(inputs)[1]))
    inputs_shape = inputs.shape.as_list()
    inputs = tf.pad(inputs, [[0, 0], [0, extend_length], [0, 0], [0, 0]])
    inputs_shape[1] = None
    inputs.set_shape(inputs_shape)  # Don't lose the other shapes when padding.
    # Pad inputs and targets to be the same length, divisible by 50.
    inputs, targets = common_layers.pad_to_same_length(
        inputs, targets, final_length_divisible_by=50)
    final_encoder = residual_dilated_conv(inputs, hparams.num_block_repeat,
                                          "SAME", "encoder", hparams)

    shifted_targets = common_layers.shift_right(targets)
    kernel = (hparams.kernel_height, hparams.kernel_width)
    decoder_start = common_layers.conv_block(
        tf.concat([final_encoder, shifted_targets], axis=3),
        hparams.hidden_size, [((1, 1), kernel)],
        padding="LEFT")

    return residual_dilated_conv(decoder_start, hparams.num_block_repeat,
                                 "LEFT", "decoder", hparams) 
Developer: tensorflow, Project: tensor2tensor, Lines: 27, Source: bytenet.py

Example 9: _import_feature

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def _import_feature(self, features, mesh, key):
    """Import a feature from the features dictionary into a mtf.Tensor.

    Args:
      features: a features dictionary
      mesh: a Mesh
      key: a string

    Returns:
      a mtf.Tensor with dtype int32 and shape self.batch_dims + self.length_dim
    """
    if key not in features:
      return None
    x = tf.to_int32(features[key])
    x = common_layers.expand_squeeze_to_nd(x, 2)
    batch_size = mtf.Shape(self.batch_dims).size
    x = x[:, :self.length_dim.size]
    extra_length = self.length_dim.size - tf.shape(x)[1]
    extra_batch = batch_size - tf.shape(x)[0]
    x = tf.pad(x, [[0, extra_batch], [0, extra_length]])
    mtf_shape = mtf.Shape(self.batch_dims + [self.length_dim])
    x = tf.reshape(x, mtf_shape.to_integer_list)
    return mtf.import_fully_replicated(mesh, x, mtf_shape, name=key) 
Developer: tensorflow, Project: tensor2tensor, Lines: 25, Source: mtf_transformer2.py

Example 10: update_internal_states_early

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def update_internal_states_early(self, internal_states, frames):
    """Update the internal states early in the network in GRU-like way."""
    batch_size = common_layers.shape_list(frames[0])[0]
    internal_state = internal_states[0][0][:batch_size, :, :, :]
    state_activation = tf.concat([internal_state, frames[0]], axis=-1)
    state_gate_candidate = tf.layers.conv2d(
        state_activation, 2 * self.hparams.recurrent_state_size,
        (3, 3), padding="SAME", name="state_conv")
    state_gate, state_candidate = tf.split(state_gate_candidate, 2, axis=-1)
    state_gate = tf.nn.sigmoid(state_gate)
    state_candidate = tf.tanh(state_candidate)
    internal_state = internal_state * state_gate
    internal_state += state_candidate * (1.0 - state_gate)
    max_batch_size = max(_MAX_BATCH, self.hparams.batch_size)
    diff_batch_size = max_batch_size - batch_size
    internal_state = tf.pad(
        internal_state, [[0, diff_batch_size], [0, 0], [0, 0], [0, 0]])
    return [[internal_state]] 
Developer: tensorflow, Project: tensor2tensor, Lines: 20, Source: basic_stochastic.py

Example 11: categorical_case

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def categorical_case(pmf, fns, rand=None):
  """Returns the outputs of fns[i] with probability pmf[i].

  Args:
    pmf: A 1-D tensor of probabilities, the probability mass function.
    fns: A list of callables that return tensors, same length as pmf.
    rand: An optional scalar between 0.0 and 1.0, the output of an RNG.

  Returns:
    A tensor, the output of fns[i] with probability pmf[i].
  """
  rand = tf.random_uniform([]) if rand is None else rand
  cmf = tf.pad(tf.cumsum(pmf), [(1, 0)])
  cmf = [cmf[i] for i in range(len(fns) + 1)]
  preds = [(rand >= a) & (rand < b) for a, b in zip(cmf[:-1], cmf[1:])]
  return tf.case(list(zip(preds, fns)), exclusive=True) 
Developer: tensorflow, Project: tensor2tensor, Lines: 18, Source: multi_problem_v2.py
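
A minimal usage sketch (assuming import tensorflow.compat.v1 as tf):

import tensorflow.compat.v1 as tf

# Minimal usage sketch: run one of three branches with probability 0.5/0.3/0.2.
pmf = tf.constant([0.5, 0.3, 0.2])
fns = [lambda: tf.constant(0), lambda: tf.constant(1), lambda: tf.constant(2)]
sample = categorical_case(pmf, fns)  # scalar int32 tensor: 0, 1, or 2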

Example 12: waves_to_stfts

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def waves_to_stfts(self, waves):
    """Convert from waves to complex stfts.

    Args:
      waves: Tensor of the waveform, shape [batch, time, 1].

    Returns:
      stfts: Complex64 tensor of stft, shape [batch, time, freq, 1].
    """
    waves_padded = tf.pad(waves, [[0, 0], [self._pad_l, self._pad_r], [0, 0]])
    stfts = tf.signal.stft(
        waves_padded[:, :, 0],
        frame_length=self._nfft,
        frame_step=self._nhop,
        fft_length=self._nfft,
        pad_end=False)[:, :, :, tf.newaxis]
    stfts = stfts[:, :, 1:] if self._discard_dc else stfts[:, :, :-1]
    stft_shape = stfts.get_shape().as_list()[1:3]
    if tuple(stft_shape) != tuple(self._spec_shape):
      raise ValueError(
          'Spectrogram returned the wrong shape {}, is not the same as the '
          'constructor spec_shape {}.'.format(stft_shape, self._spec_shape))
    return stfts 
Developer: magenta, Project: magenta, Lines: 25, Source: specgrams_helper.py

Example 13: _call_sampler

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def _call_sampler(sample_n_fn, sample_shape, name=None):
  """Reshapes vector of samples."""
  with tf.name_scope(name, "call_sampler", values=[sample_shape]):
    sample_shape = tf.convert_to_tensor(
        sample_shape, dtype=tf.int32, name="sample_shape")
    # Ensure sample_shape is a vector (vs just a scalar).
    pad = tf.cast(tf.equal(tf.rank(sample_shape), 0), tf.int32)
    sample_shape = tf.reshape(
        sample_shape,
        tf.pad(tf.shape(sample_shape),
               paddings=[[pad, 0]],
               constant_values=1))
    samples = sample_n_fn(tf.reduce_prod(sample_shape))
    batch_event_shape = tf.shape(samples)[1:]
    final_shape = tf.concat([sample_shape, batch_event_shape], 0)
    return tf.reshape(samples, final_shape) 
Developer: magenta, Project: magenta, Lines: 18, Source: seq2seq.py
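
A minimal usage sketch (assuming import tensorflow.compat.v1 as tf; the printed shape is what eager execution reports):

import tensorflow.compat.v1 as tf

# Minimal usage sketch: sample_n_fn returns n i.i.d. draws of shape [n, 4];
# _call_sampler reshapes them so the leading dimensions follow sample_shape.
sample_n = lambda n: tf.random_normal([n, 4])
samples = _call_sampler(sample_n, sample_shape=[2, 3])
print(samples.shape)  # (2, 3, 4) under eager execution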

Example 14: fixed_padding

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def fixed_padding(inputs, kernel_size, data_format):
  """Pads the input along the spatial dimensions independently of input size.

  Args:
    inputs: A tensor of size [batch, channels, height_in, width_in] or
      [batch, height_in, width_in, channels] depending on data_format.
    kernel_size: The kernel to be used in the conv2d or max_pool2d operation.
                 Should be a positive integer.
    data_format: The input format ('channels_last' or 'channels_first').
  Returns:
    A tensor with the same format as the input with the data either intact
    (if kernel_size == 1) or padded (if kernel_size > 1).
  """
  pad_total = kernel_size - 1
  pad_beg = pad_total // 2
  pad_end = pad_total - pad_beg

  if data_format == 'channels_first':
    padded_inputs = tf.pad(inputs, [[0, 0], [0, 0],
                                    [pad_beg, pad_end], [pad_beg, pad_end]])
  else:
    padded_inputs = tf.pad(inputs, [[0, 0], [pad_beg, pad_end],
                                    [pad_beg, pad_end], [0, 0]])
  return padded_inputs 
Developer: google-research, Project: tensor2robot, Lines: 26, Source: resnet.py
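
A minimal usage sketch (assuming import tensorflow.compat.v1 as tf):

import tensorflow.compat.v1 as tf

# Minimal usage sketch: a 7x7 kernel needs 3 pixels of padding on each side.
inputs = tf.ones([8, 3, 224, 224])  # NCHW
padded = fixed_padding(inputs, kernel_size=7, data_format='channels_first')
print(padded.shape)  # (8, 3, 230, 230)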

Example 15: CausalConv

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import pad [as alias]
def CausalConv(x, dilation_rate, filters, kernel_size=2, scope=""):
  """Performs causal dilated 1D convolutions.

  Args:
    x: Tensor of shape (batch_size, steps, input_dim).
    dilation_rate: Dilation rate of convolution.
    filters: Number of convolution filters.
    kernel_size: Width of convolution kernel. SNAIL paper uses 2 for all
      experiments.
    scope: Variable scope for this layer.
  Returns:
    y: Tensor of shape (batch_size, new_steps, D).
  """
  with tf.variable_scope(scope):
    causal_pad_size = (kernel_size - 1) * dilation_rate
    # Pad sequence dimension.
    x = tf.pad(x, [[0, 0], [causal_pad_size, 0], [0, 0]])
    return layers.conv1d(
        x,
        filters,
        kernel_size=kernel_size,
        padding="VALID",
        rate=dilation_rate) 
Developer: google-research, Project: tensor2robot, Lines: 25, Source: snail.py
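
A usage sketch, assuming TF 1.x with from tensorflow.contrib import layers as in the source file (tf.contrib does not exist in TF 2.x); causal padding keeps the number of steps unchanged under VALID convolution:

import tensorflow.compat.v1 as tf

# Usage sketch: with kernel_size=2 and dilation_rate=2, two steps of left
# padding preserve the sequence length after the VALID convolution.
tf.disable_eager_execution()
x = tf.random_uniform([4, 20, 8])  # (batch_size, steps, input_dim)
y = CausalConv(x, dilation_rate=2, filters=16, scope="causal_conv")
print(y.shape)  # (4, 20, 16)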


Note: The tensorflow.compat.v1.pad method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers. The source code copyright belongs to the original authors; for distribution and use, please refer to the corresponding project's license. Do not reproduce without permission.