Python v1.case Method Code Examples

This article collects and summarizes typical code examples of the tensorflow.compat.v1.case method in Python. If you are wrestling with questions such as: what exactly does v1.case do? How is it used? What does real example code look like? Then the curated method examples below may help. You can also explore further usage examples from the enclosing module, tensorflow.compat.v1.


The following presents 15 code examples of the v1.case method, sorted by popularity by default.
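Before diving into the project-specific examples, here is a minimal, self-contained sketch of tf.compat.v1.case semantics (our own illustration, not taken from any of the projects below): branches are (predicate, callable) pairs, the first predicate that evaluates to True selects its branch, and default runs when none are True.

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.placeholder(tf.int32, shape=[])
# The first true predicate wins; `default` handles the remaining cases.
result = tf.case(
    [(tf.less(x, 0), lambda: tf.constant(-1)),
     (tf.equal(x, 0), lambda: tf.constant(0))],
    default=lambda: tf.constant(1))

with tf.Session() as sess:
  print(sess.run(result, feed_dict={x: 5}))  # -> 1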

Example 1: video_features

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def video_features(
      self, all_frames, all_actions, all_rewards, all_raw_frames):
    """Optional video wide features.

      If the model requires access to all of the video frames
      (e.g. in case of approximating one latent for the whole video)
      override this function to add them. They will be accessible
      as video_features in next_frame function.

    Args:
      all_frames: list of all frames including input and target frames.
      all_actions: list of all actions including input and target actions.
      all_rewards: list of all rewards including input and target rewards.
      all_raw_frames: list of all raw frames (before modalities).

    Returns:
      video_features: a dictionary containing video-wide features.
    """
    del all_frames, all_actions, all_rewards, all_raw_frames
    return None 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 22, Source file: base.py

Example 2: video_extra_loss

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def video_extra_loss(self, frames_predicted, frames_target,
                       internal_states, video_features):
    """Optional video wide extra loss.

      If the model needs to calculate some extra loss across all predicted
      frames (e.g. in case of video GANS loss) override this function.

    Args:
      frames_predicted: list of all predicted frames.
      frames_target: list of all target frames.
      internal_states: internal states of the video.
      video_features: video wide features coming from video_features function.

    Returns:
      extra_loss: extra video side loss.
    """
    del frames_predicted, frames_target, internal_states, video_features
    return 0.0 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 20, Source file: base.py

Example 3: finish

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def finish(self):
    """Finishes transconding and returns the video.

    Returns:
      bytes

    Raises:
      IOError: in case of transcoding error.
    """
    if self.proc is None:
      return None
    self.proc.stdin.close()
    for thread in (self._out_thread, self._err_thread):
      thread.join()
    (out, err) = [
        b"".join(chunks) for chunks in (self._out_chunks, self._err_chunks)
    ]
    self.proc.stdout.close()
    self.proc.stderr.close()
    if self.proc.returncode:
      err = "\n".join([" ".join(self.cmd), err.decode("utf8")])
      raise IOError(err)
    del self.proc
    self.proc = None
    return out 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 27, Source file: common_video.py

Example 4: unsupervised

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def unsupervised(dataset, preprocessors=None, **kwargs):
  """Configure this to point at unsupervised preprocessors.

  This function creates an extra level of indirection in case we want
  different unsupervised pretraining functions in the future which do not
  fit into the denoise() framework.

  Args:
    dataset: A tf.data.Dataset to process.
    preprocessors: a list of token-preprocessor functions
    **kwargs: passthrough keyword arguments for token preprocessors

  Returns:
    A preprocessed tf.data.Dataset.
  """
  if preprocessors is None:
    tf.logging.warn(
        'unsupervised preprocessor got preprocessors=None; no preprocessing '
        'will be applied.'
    )
    return dataset
  for p in preprocessors:
    dataset = p(dataset, **kwargs)
  return dataset 
Developer ID: google-research, Project: text-to-text-transfer-transformer, Lines of code: 26, Source file: preprocessors.py

Example 5: next_frame

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def next_frame(self,
                 frames, actions, rewards,
                 target_frame, internal_states, video_features):
    """The main prediction function of next frame models.

    This is the main function that should be overridden to implement models.

    Args:
      frames: The list of input frames.
              Only previous frame in case of recurrent models.
      actions: The list of input actions.
              Only previous action in case of recurrent models.
      rewards: The list of input rewards.
              Only previous reward in case of recurrent models.
      target_frame: The target frame.
              Usually required for approximating the posterior.
      internal_states: Internal model states. Only useful for recurrent models
              to keep the state from the previous time index.
              internal_states is None at the first frame and should be
              initialized properly.
      video_features: video wide features. None by default.
              Please refer to video_features function for description.

    Returns:
      pred_frame: predicted frame BSxWxHxC
              where C is 3 for L1/L2 modality and 3*256 for Softmax.
      pred_reward: the same size as input reward.
              None if the model does not detect rewards.
      pred_action: predicted action logits
      pred_value: predicted value
      extra_loss: any extra loss other than predicted frame and reward.
              e.g. KL loss in case of VAE models.
      internal_states: updated internal models states.
    """
    raise NotImplementedError("Base video model.") 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 37, Source file: base.py

Example 6: get_scheduled_sample_inputs

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def get_scheduled_sample_inputs(self,
                                  done_warm_start,
                                  groundtruth_items,
                                  generated_items,
                                  scheduled_sampling_func):
    """Scheduled sampling.

    Args:
      done_warm_start: whether we are done with warm start or not.
      groundtruth_items: list of ground truth items.
      generated_items: list of generated items.
      scheduled_sampling_func: scheduled sampling function to choose between
        groundtruth items and generated items.

    Returns:
      A mixed list of ground truth and generated items.
    """
    def sample():
      """Calculate the scheduled sampling params based on iteration number."""
      with tf.variable_scope("scheduled_sampling", reuse=tf.AUTO_REUSE):
        return [
            scheduled_sampling_func(item_gt, item_gen)
            for item_gt, item_gen in zip(groundtruth_items, generated_items)]

    cases = [
        (tf.logical_not(done_warm_start), lambda: groundtruth_items),
        (tf.logical_not(self.is_training), lambda: generated_items),
    ]
    output_items = tf.case(cases, default=sample, strict=True)

    return output_items 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 33, Source file: base.py
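A hedged usage sketch of the pattern above, with invented placeholder values: because tf.case checks predicates in order, the warm-start branch takes precedence, and strict=True keeps each branch's single-element list as a list instead of silently unpacking it.

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

done_warm_start = tf.placeholder(tf.bool, shape=[])
is_training = tf.placeholder(tf.bool, shape=[])
groundtruth_items = [tf.constant([1.0])]
generated_items = [tf.constant([2.0])]

cases = [
    (tf.logical_not(done_warm_start), lambda: groundtruth_items),
    (tf.logical_not(is_training), lambda: generated_items),
]
mixed = tf.case(cases, default=lambda: generated_items, strict=True)

with tf.Session() as sess:
  # Still warming up, so the ground-truth branch is selected.
  print(sess.run(mixed, {done_warm_start: False, is_training: True}))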

Example 7: next_frame_base

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def next_frame_base():
  """Common HParams for next_frame models."""
  hparams = common_hparams.basic_params1()
  # Loss cutoff.
  hparams.add_hparam("video_modality_loss_cutoff", 0.01)
  # Optional extra resizing of the frames before feeding them to the model.
  hparams.add_hparam("preprocess_resize_frames", None)
  # How many data points to shuffle. Ideally this should be part of the problem, not the model!
  hparams.add_hparam("shuffle_buffer_size", 128)
  # Tiny mode. For faster tests.
  hparams.add_hparam("tiny_mode", False)
  # In case a model supports smaller/faster version.
  hparams.add_hparam("small_mode", False)
  # In case a model has stochastic version.
  hparams.add_hparam("stochastic_model", False)
  # Internal loss for recurrent models.
  hparams.add_hparam("internal_loss", True)
  # Choose from: concat, multiplicative, multi_additive.
  hparams.add_hparam("action_injection", "multi_additive")
  # Scheduled sampling method. Choose between
  # ground_truth_only, prediction_only, prob, count, prob_inverse_exp.
  hparams.add_hparam("scheduled_sampling_mode", "prediction_only")
  hparams.add_hparam("scheduled_sampling_decay_steps", 10000)
  hparams.add_hparam("scheduled_sampling_max_prob", 1.0)
  hparams.add_hparam("scheduled_sampling_k", 900.0)
  return hparams 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 28, Source file: base.py

Example 8: inject_additional_input

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def inject_additional_input(layer, inputs, name, mode="concat"):
  """Injects the additional input into the layer.

  Args:
    layer: layer that the input should be injected into.
    inputs: inputs to be injected.
    name: TF scope name.
    mode: how the info should be added to the layer:
      "concat" concatenates the inputs as additional channels.
      "multiplicative" broadcasts the inputs and multiplies them with the
        channels.
      "multi_additive" broadcasts the inputs, then multiplies and adds them
        to the channels.

  Returns:
    updated layer.

  Raises:
    ValueError: in case of unknown mode.
  """
  layer_shape = common_layers.shape_list(layer)
  input_shape = common_layers.shape_list(inputs)
  zeros_mask = tf.zeros(layer_shape, dtype=tf.float32)
  if mode == "concat":
    emb = encode_to_shape(inputs, layer_shape, name)
    layer = tf.concat(values=[layer, emb], axis=-1)
  elif mode == "multiplicative":
    filters = layer_shape[-1]
    input_reshaped = tf.reshape(inputs, [-1, 1, 1, input_shape[-1]])
    input_mask = tf.layers.dense(input_reshaped, filters, name=name)
    input_broad = input_mask + zeros_mask
    layer *= input_broad
  elif mode == "multi_additive":
    filters = layer_shape[-1]
    input_reshaped = tf.reshape(inputs, [-1, 1, 1, input_shape[-1]])
    input_mul = tf.layers.dense(input_reshaped, filters, name=name + "_mul")
    layer *= tf.nn.sigmoid(input_mul)
    input_add = tf.layers.dense(input_reshaped, filters, name=name + "_add")
    layer += input_add
  else:
    raise ValueError("Unknown injection mode: %s" % mode)

  return layer 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 43, Source file: common_video.py
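A hedged usage sketch for the function above (shapes invented; it assumes the tensor2tensor helpers it depends on, such as common_layers.shape_list and encode_to_shape, are importable in the enclosing module):

# Hypothetical call: broadcast a per-batch action vector into a conv feature map.
layer = tf.zeros([8, 16, 16, 64])   # BxHxWxC feature map.
actions = tf.zeros([8, 4])          # One 4-dimensional action per batch element.
layer = inject_additional_input(layer, actions, name="action_inj",
                                mode="multi_additive")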

Example 9: beta_schedule

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def beta_schedule(schedule, global_step, final_beta, decay_start, decay_end):
  """Get KL multiplier (beta) based on the schedule."""
  if decay_start > decay_end:
    raise ValueError("decay_end is smaller than decay_end.")

  # Since some of the TF schedules do not support incrementing a value,
  # in all of the schedules, we anneal the beta from final_beta to zero
  # and then reverse it at the bottom.
  if schedule == "constant":
    decayed_value = 0.0
  elif schedule == "linear":
    decayed_value = tf.train.polynomial_decay(
        learning_rate=final_beta,
        global_step=global_step - decay_start,
        decay_steps=decay_end - decay_start,
        end_learning_rate=0.0)
  elif schedule == "noisy_linear_cosine_decay":
    decayed_value = tf.train.noisy_linear_cosine_decay(
        learning_rate=final_beta,
        global_step=global_step - decay_start,
        decay_steps=decay_end - decay_start)
  # TODO(mechcoder): Add log_annealing schedule.
  else:
    raise ValueError("Unknown beta schedule.")

  increased_value = final_beta - decayed_value
  increased_value = tf.maximum(0.0, increased_value)

  beta = tf.case(
      pred_fn_pairs={
          tf.less(global_step, decay_start): lambda: 0.0,
          tf.greater(global_step, decay_end): lambda: final_beta},
      default=lambda: increased_value)
  return beta 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 36, Source file: common_video.py
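A hedged sketch of wiring beta_schedule to the global step (the hparam values here are invented): before decay_start the tf.case branch pins beta at 0.0, after decay_end it pins beta at final_beta, and in between the reversed polynomial decay ramps it up.

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

global_step = tf.train.get_or_create_global_step()
beta = beta_schedule(
    schedule="linear",
    global_step=global_step,
    final_beta=1e-3,
    decay_start=1000,
    decay_end=50000)
kl_loss = tf.constant(0.5)        # Stand-in for a real KL divergence term.
weighted_kl = beta * kl_loss      # Anneals from 0 to final_beta * kl_loss.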

Example 10: get_schedule_distribution

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def get_schedule_distribution(schedule, global_step=None):
  """Computes the pmf of a schedule given the global_step.

  Args:
    schedule: A schedule tuple, see encode_schedule for details.
    global_step: A scalar tensor, the step to query the schedule.

  Returns:
    A 1-D tensor of probs, the sampling distribution of the global_step.
  """
  interpolation, steps, pmfs = schedule
  if len(pmfs) == 1:
    # py_func doesn't seem to work on TPU - at least get the constant case to
    # run.
    # TODO(noam): get the general case working.
    return pmfs[0]
  if global_step is None:
    global_step = tf.train.get_or_create_global_step()
  if interpolation == 'step':
    interpolation_fn = step_interpolation
  elif interpolation == 'linear':
    interpolation_fn = linear_interpolation
  else:
    raise ValueError('Invalid interpolation strategy: %s' % interpolation)
  return tf.reshape(
      tf.py_func(
          func=lambda x: interpolation_fn(x, np.array(steps), np.array(pmfs)),
          inp=[global_step], Tout=tf.float32), [len(pmfs[0])]) 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 30, Source file: multi_problem_v2.py
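A hedged illustration of the schedule tuple this function unpacks, with invented numbers (the real format is produced by encode_schedule, which is not shown here): an interpolation strategy, step boundaries, and one pmf per boundary.

# Hypothetical two-problem schedule: sample 50/50 at step 0,
# linearly shifting to 90/10 by step 100000.
schedule = ('linear', (0, 100000), ((0.5, 0.5), (0.9, 0.1)))
probs = get_schedule_distribution(schedule)  # 1-D tensor of 2 probabilities.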

Example 11: squad_span_space_tokenized

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def squad_span_space_tokenized(dataset):
  """Convert SQuAD examples to a text2text pair with span output.

  SQuAD produces examples with this form:
    {'context': <article>, 'question': <question>,
     'answers': { 'text': [<all answers>] }}

  This function returns examples with the format
    {'inputs': 'context: <article> question: <question>',
     'targets': 'start: <start_index> end: <end_index>'}
  where <start_index> and <end_index> specify the space-tokenized span
  start/end indices. Both <start_index> and <end_index> are included in
  the answer. In the case where the tokenized answer is
  not found in the tokenized context, the example is skipped.

  Args:
    dataset: a tf.data.Dataset to process.

  Returns:
    A preprocessed tf.data.Dataset with the format listed above.
  """
  def my_fn(x):
    """Create squad example as in squad_span_char, but tokenized on spaces."""
    res = dict(x)
    res['targets'] = _span_answer(x['context'], x['targets'])
    return res

  dataset = squad(dataset)
  dataset = dataset.map(my_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)
  return dataset.filter(lambda x: tf.strings.length(x['targets']) > 0) 
Developer ID: google-research, Project: text-to-text-transfer-transformer, Lines of code: 31, Source file: preprocessors.py

Example 12: get_inception_crop

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def get_inception_crop(is_training, **kw):
  # kw of interest are: aspect_ratio_range, area_range.
  # Note that image is not resized yet here.
  def _inception_crop_pp(data):
    if is_training:
      data["image"] = inception_crop(data["image"], **kw)
    else:
      # TODO(lbeyer): Maybe do 87.5%-crop in test-mode by default?
      tf.logging.warn("inception_crop pre-processing keeps the full image in "
                      "eval mode for now. Contact lbeyer@ with your use-case "
                      "and propose a reasonable default behaviour.")
    return data
  return _inception_crop_pp 
Developer ID: google-research, Project: s4l, Lines of code: 15, Source file: preprocess.py

Example 13: select_and_apply_random_policy

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def select_and_apply_random_policy(policies, image, bboxes):
  """Select a random policy from `policies` and apply it to `image`."""
  policy_to_select = tf.random_uniform([], maxval=len(policies), dtype=tf.int32)
  # Note that using tf.case instead of tf.conds would result in significantly
  # larger graphs and would even break export for some larger policies.
  for (i, policy) in enumerate(policies):
    image, bboxes = tf.cond(
        tf.equal(i, policy_to_select),
        lambda selected_policy=policy: selected_policy(image, bboxes),
        lambda: (image, bboxes))
  return (image, bboxes) 
Developer ID: tensorflow, Project: models, Lines of code: 13, Source file: autoaugment_utils.py
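For contrast, here is a hypothetical tf.case version of the same selection (our sketch, not from the models repo). The comment above explains why the original avoids this form: tf.case inlines every branch into one op, which can inflate the graph for large policy sets.

def select_with_case(policies, image, bboxes):
  """Equivalent selection via tf.case; avoided upstream for graph-size reasons."""
  policy_to_select = tf.random_uniform([], maxval=len(policies), dtype=tf.int32)
  pred_fn_pairs = [
      (tf.equal(policy_to_select, i),
       lambda selected_policy=policy: selected_policy(image, bboxes))
      for i, policy in enumerate(policies)
  ]
  # exclusive=True adds a runtime assertion that at most one predicate is True.
  return tf.case(pred_fn_pairs, default=lambda: (image, bboxes), exclusive=True)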

Example 14: get_extra_internal_loss

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def get_extra_internal_loss(self, extra_raw_gts, extra_gts, extra_pds):
    """Hacky code the get the loss on predicted frames from input frames.

       Recurrent models consume the frames one-by-one. Therefore
       if there is more than one input frame they also get predicted.
       T2T only calculates loss on the predicted target frames which
       means the loss is not being applied on the predicted input frames.
       This code is to fix this issue. Since the model is not aware of the
       modality it has to match the pre-porocessing happening in bottom
       function and therefore this becomes a very hacky code. This code
       should match the bottom and top and loss of modalities otherwise
       it will calculate the wrong loss.

    Args:
      extra_raw_gts: extra raw ground truth frames.
      extra_gts: extra normalized ground truth frames.
      extra_pds: extra predicted frames.

    Returns:
      Additional reconstruction loss.

    Raises:
      ValueError: in case of unknown loss transformation.
    """
    # TODO(trandustin): This logic should be moved elsewhere.
    if self.hparams.loss.get("targets") == modalities.video_l2_raw_loss:
      recon_loss = tf.losses.mean_squared_error(extra_gts, extra_pds)
    elif "targets" not in self.hparams.loss:
      shape = common_layers.shape_list(extra_pds)
      updated_shape = shape[:-1] + [3, 256]
      extra_pds = tf.reshape(extra_pds, updated_shape)
      # Merge time and batch
      logits = tf.reshape(extra_pds, [-1] + updated_shape[2:])
      targets = extra_raw_gts
      targets_shape = common_layers.shape_list(targets)
      targets = tf.reshape(targets, [-1] + targets_shape[2:])
      targets_weights_fn = self.hparams.weights_fn.get(
          "targets",
          modalities.get_weights_fn(self._target_modality))
      numerator, denominator = common_layers.padded_cross_entropy(
          logits,
          targets,
          self.hparams.label_smoothing,
          cutoff=getattr(self.hparams, "video_modality_loss_cutoff", 0.01),
          weights_fn=targets_weights_fn)
      recon_loss = numerator / denominator
    else:
      raise ValueError("internal loss only supports specific hparams.loss.")
    tf.summary.scalar("recon_extra", recon_loss)
    return recon_loss 
Developer ID: tensorflow, Project: tensor2tensor, Lines of code: 52, Source file: base.py

Example 15: _wsc_inputs

# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import case [as alias]
def _wsc_inputs(x):
  """Given an example from SuperGLUE WSC, compute the 'inputs' value.

  The output will look like a fill in the blank with the pronoun blanked out.
  For example, the text
    'Mitchell asked Tom if he could lend some money.'
  would be transformed to
    'Mitchell asked Tom if X could lend some money.'

  Args:
    x: A dict that is an example from the WSC task of SuperGLUE.

  Returns:
    A scalar string tensor.
  """
  words = tf.strings.split([x['text']], sep=' ').values

  # We would need some special logic to handle the case where the pronoun is the
  # first or last word in the text. None of the examples in WSC seem to have
  # this, so we are ignoring these cases.
  with tf.control_dependencies([
      tf.assert_greater(x['span2_index'], 0),
      tf.assert_less(x['span2_index'], tf.size(words)),
  ]):
    pronoun_index = tf.identity(x['span2_index'])

  def create_input():
    with tf.control_dependencies(
        [tf.assert_equal(words[pronoun_index], x['span2_text'])]):
      return tf.strings.join(
          [
              tf.strings.reduce_join(words[:pronoun_index], separator=' '),
              'X',
              tf.strings.reduce_join(
                  words[pronoun_index + 1:], separator=' '),
          ],
          separator=' ',
      )

  # Handle some special cases.
  return tf.case(
      {
          # The issue here is that the pronoun is 'him,"' in the text.
          tf.equal(
              x['text'],
              'The boy continued to whip the pony , and eventually the pony threw him over. John laughed out quite loud. \"Good for him,\" he said. '
          ):
              lambda:
              'The boy continued to whip the pony , and eventually the pony threw him over. John laughed out quite loud. "Good for X ," he said.',
          # Using the span2_index, we get 'use' instead of 'it'.
          tf.equal(
              x['text'],
              'When they had eventually calmed down a bit , and had gotten home, Mr. Farley put the magic pebble in an iron safe . Some day they might want to use it , but really for now, what more could they wish for?'
          ):
              lambda:
              'When they had eventually calmed down a bit , and had gotten home, Mr. Farley put the magic pebble in an iron safe . Some day they might want to use X , but really for now, what more could they wish for?'
      },
      default=create_input,
      exclusive=True) 
Developer ID: google-research, Project: text-to-text-transfer-transformer, Lines of code: 61, Source file: preprocessors.py
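A small hedged sketch of the dict form used above, with invented inputs: exclusive=True makes tf.case assert at run time that at most one predicate fires, which guards against overlapping special cases.

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

text = tf.placeholder(tf.string, shape=[])
out = tf.case(
    {
        tf.equal(text, 'foo'): lambda: tf.constant('special foo'),
        tf.equal(text, 'bar'): lambda: tf.constant('special bar'),
    },
    default=lambda: tf.strings.join(['generic: ', text]),
    exclusive=True)

with tf.Session() as sess:
  print(sess.run(out, {text: 'baz'}))  # b'generic: baz'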


Note: The tensorflow.compat.v1.case method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by many developers, and copyright remains with the original authors. Please consult the corresponding project's license before distributing or using the code; do not reproduce without permission.