

Python utils.linear Method Code Examples

This article collects typical usage examples of the Python `utils.linear` method. If you are wondering how `utils.linear` is used in practice, the curated code examples below may help. You can also explore further usage examples from the `utils` module where this method lives.


Below are 3 code examples of the `utils.linear` method, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Python code examples.

Example 1: __init__

# Module to import: import utils [as alias]
# or: from utils import linear [as alias]
def __init__(self, x_bxu, z_size, name, var_min=0.0):
    """Create an input dependent diagonal Gaussian distribution.

    Args:
      x_bxu: The input tensor from which the mean and variance are computed
        via a linear transformation, i.e.
          mu = Wx + b, log(var) = Mx + c
      z_size: The size of the distribution.
      name:  The name to prefix to learned variables.
      var_min (optional): Minimal variance allowed.  This is an additional
        way to control the amount of information getting through the stochastic
        layer.
    """
    size_bxn = tf.stack([tf.shape(x_bxu)[0], z_size])
    self.mean_bxn = mean_bxn = linear(x_bxu, z_size, name=(name+"/mean"))
    logvar_bxn = linear(x_bxu, z_size, name=(name+"/logvar"))
    if var_min > 0.0:
      logvar_bxn = tf.log(tf.exp(logvar_bxn) + var_min)
    self.logvar_bxn = logvar_bxn

    self.noise_bxn = noise_bxn = tf.random_normal(size_bxn)
    self.noise_bxn.set_shape([None, z_size])
    self.sample_bxn = mean_bxn + tf.exp(0.5 * logvar_bxn) * noise_bxn 
Developer ID: ringringyi, Project: DOTA_models, Lines of code: 25, Source file: distributions.py
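For intuition, the constructor above implements the reparameterization trick: sample `z = mu + sigma * eps` with `eps ~ N(0, I)`, where `mu` and `log(var)` are linear functions of the input. Here is a minimal NumPy sketch (not the original TensorFlow code; the weight matrices `W_mu`, `b_mu`, `W_lv`, `c_lv` stand in for the parameters that `utils.linear` would learn):

```python
import numpy as np

def diag_gaussian_sample(x, W_mu, b_mu, W_lv, c_lv, var_min=0.0, rng=None):
    """Sample from an input-dependent diagonal Gaussian via the
    reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    mean = x @ W_mu + b_mu                         # mu = Wx + b
    logvar = x @ W_lv + c_lv                       # log(var) = Mx + c
    if var_min > 0.0:
        # Floor the variance, exactly as in the snippet above.
        logvar = np.log(np.exp(logvar) + var_min)
    noise = rng.standard_normal(mean.shape)
    return mean + np.exp(0.5 * logvar) * noise
```

The `var_min` floor keeps the variance from collapsing to zero, which limits how much information the stochastic layer can squeeze through.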

Example 2: __call__

# Module to import: import utils [as alias]
# or: from utils import linear [as alias]
def __call__(self, inputs, state, scope=None):
    """Gated recurrent unit (GRU) function.

    Args:
      inputs: A 2D batch x input_dim tensor of inputs.
      state: The previous state from the last time step.
      scope (optional): TF variable scope for defined GRU variables.

    Returns:
      A tuple (state, state), where state is the newly computed state at time t.
      It is returned twice to respect an interface that works for LSTMs.
    """

    x = inputs
    h = state
    if inputs is not None:
      xh = tf.concat(axis=1, values=[x, h])
    else:
      xh = h

    with tf.variable_scope(scope or type(self).__name__):  # "GRU"
      with tf.variable_scope("Gates"):  # Reset gate and update gate.
        # We start with bias of 1.0 to not reset and not update.
        r, u = tf.split(axis=1, num_or_size_splits=2, value=linear(xh,
                                     2 * self._num_units,
                                     alpha=self._weight_scale,
                                     name="xh_2_ru",
                                     collections=self._collections))
        r, u = tf.sigmoid(r), tf.sigmoid(u + self._forget_bias)
      with tf.variable_scope("Candidate"):
        xrh = tf.concat(axis=1, values=[x, r * h])
        c = tf.tanh(linear(xrh, self._num_units, name="xrh_2_c",
                           collections=self._collections))
      new_h = u * h + (1 - u) * c
      new_h = tf.clip_by_value(new_h, -self._clip_value, self._clip_value)

    return new_h, new_h 
Developer ID: ringringyi, Project: DOTA_models, Lines of code: 39, Source file: lfads.py
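The gating math in the GRU above can be sketched in plain NumPy (an illustrative reimplementation, not the original TF code; the weight matrices `W_ru` and `W_c` stand in for the two `linear` layers, mapping `[x, h]` to the stacked reset/update pre-activations and `[x, r*h]` to the candidate, respectively):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, W_ru, b_ru, W_c, b_c, forget_bias=1.0, clip_value=np.inf):
    """One GRU step mirroring the update rule above (NumPy sketch)."""
    n = h.shape[1]
    xh = np.concatenate([x, h], axis=1)
    ru = xh @ W_ru + b_ru                         # stacked reset/update
    r = sigmoid(ru[:, :n])
    u = sigmoid(ru[:, n:] + forget_bias)          # bias toward "don't update"
    xrh = np.concatenate([x, r * h], axis=1)
    c = np.tanh(xrh @ W_c + b_c)                  # candidate state
    new_h = u * h + (1.0 - u) * c                 # convex blend of old and new
    return np.clip(new_h, -clip_value, clip_value)
```

With zero weights, the update gate sits at `sigmoid(forget_bias)`, so the state mostly carries over: that is why the original starts with a bias of 1.0, "to not reset and not update".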

Example 3: apply_model_semi

# Module to import: import utils [as alias]
# or: from utils import linear [as alias]
def apply_model_semi(img_unsup, img_sup, is_training, outputs, **kw):
  """Passes `img_unsup` and/or `img_sup` through the model.

  Args:
    img_unsup: The unsupervised input, could be None.
    img_sup: The supervised input, could be None.
    is_training: Train or test mode?
    outputs: A dict-like of {name: number} defining the desired output layers
      of the network. A linear layer with `number` outputs is added for each
      entry, with the given `name`.
    **kw: Extra keyword-args to be passed to `net()`.

  Returns:
    end_points: A dictionary of {name: tensor} mappings, partially dependent on
      which network is used. Additional entries are present for all entries in
      `outputs` and named accordingly.
      If both `img_unsup` and `img_sup` are given, every entry in `end_points`
      comes with two additional entries, suffixed "_unsup" and "_sup", holding
      the parts that correspond to the respective inputs.
  """
  # If both inputs are given, we concat them along the batch dimension.
  if img_unsup is not None and img_sup is not None:
    img_all = tf.concat([img_unsup, img_sup], axis=0)
  elif img_unsup is not None:
    img_all, split_idx = img_unsup, None
  elif img_sup is not None:
    img_all, split_idx = img_sup, None
  else:
    assert False, 'Either `img_unsup` or `img_sup` needs to be passed.'

  net = model_utils.get_net()
  _, end_points = net(img_all, is_training, spatial_squeeze=False, **kw)

  # TODO(xzhai): Try adding batch norm here.
  pre_logits = end_points['pre_logits']

  for name, nout in outputs.items():
    end_points[name] = utils.linear(pre_logits, nout, name)

  # Now, if both inputs were given, loop over all end_points, including the
  # final output we're usually interested in, and split them for the
  # convenience of the caller.
  if img_unsup is not None and img_sup is not None:
    split_idx = img_unsup.get_shape().as_list()[0]
    for name, val in end_points.copy().items():
      end_points[name + '_unsup'] = val[:split_idx]
      end_points[name + '_sup'] = val[split_idx:]

  elif img_unsup is not None:
    for name, val in end_points.copy().items():
      end_points[name + '_unsup'] = val

  elif img_sup is not None:
    for name, val in end_points.copy().items():
      end_points[name + '_sup'] = val

  else:
    raise ValueError('You must set at least one of {img_unsup, img_sup}.')

  return end_points 
Developer ID: google-research, Project: s4l, Lines of code: 62, Source file: utils.py
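The concat-then-split pattern in Example 3 (run the network once over the combined batch, then slice the results apart) can be shown with a framework-free NumPy sketch (`fn` stands in for the shared network; this is an illustration of the pattern, not the original code):

```python
import numpy as np

def apply_shared(fn, x_unsup, x_sup):
    """Run `fn` once on the concatenated batch, then split the result
    back into its unsupervised and supervised parts."""
    if x_unsup is not None and x_sup is not None:
        out = fn(np.concatenate([x_unsup, x_sup], axis=0))
        split = x_unsup.shape[0]               # batch size of the first input
        return out[:split], out[split:]
    if x_unsup is not None:
        return fn(x_unsup), None
    if x_sup is not None:
        return None, fn(x_sup)
    raise ValueError('Either x_unsup or x_sup must be given.')
```

Running the shared model once on the concatenated batch avoids a second forward pass and keeps batch statistics (e.g. in batch norm) computed over both streams together, which is often exactly what semi-supervised setups want.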


Note: the `utils.linear` examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Please refer to each project's license before distributing or using the code; do not republish without permission.