

Python vgslspecs.py Code Examples

This article collects typical usage examples of the vgslspecs module in Python. If you are wondering how exactly vgslspecs.py is used, or are looking for concrete examples of it in action, the curated code examples below may help. You can also explore further usage examples from the vgslspecs module.


Two code examples involving vgslspecs.py are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Python code examples.

Example 1: InitNetwork

# Required import: import vgslspecs
def InitNetwork(input_pattern,
                model_spec,
                mode='eval',
                initial_learning_rate=0.00005,
                final_learning_rate=0.00005,
                halflife=1600000,
                optimizer_type='Adam',
                num_preprocess_threads=1,
                reader=None):
  """Constructs a python tensor flow model defined by model_spec.

  Args:
    input_pattern: File pattern of the data in tfrecords of Example.
    model_spec: Concatenation of input spec, model spec and output spec.
      See Build below for input/output spec. For model spec, see vgslspecs.py
    mode: One of 'train', 'eval'
    initial_learning_rate: Initial learning rate for the network.
    final_learning_rate: Final learning rate for the network.
    halflife: Number of steps over which to halve the difference between
              initial and final learning rate for the network.
    optimizer_type: One of 'GradientDescent', 'AdaGrad', 'Momentum', 'Adam'.
    num_preprocess_threads: Number of threads to use for image processing.
    reader: Function that returns an actual reader to read Examples from input
      files. If None, uses tf.TFRecordReader().

  Note: eval tasks need only specify input_pattern and model_spec.

  Returns:
    A VGSLImageModel class.

  Raises:
    ValueError: if the model spec syntax is incorrect.
  """
  model = VGSLImageModel(mode, model_spec, initial_learning_rate,
                         final_learning_rate, halflife)
  left_bracket = model_spec.find('[')
  right_bracket = model_spec.rfind(']')
  if left_bracket < 0 or right_bracket < 0:
    raise ValueError('Failed to find [] in model spec! ', model_spec)
  input_spec = model_spec[:left_bracket]
  layer_spec = model_spec[left_bracket:right_bracket + 1]
  output_spec = model_spec[right_bracket + 1:]
  model.Build(input_pattern, input_spec, layer_spec, output_spec,
              optimizer_type, num_preprocess_threads, reader)
  return model 
Author: ringringyi, Project: DOTA_models, Lines of code: 46, Source file: vgsl_model.py
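To make the spec-splitting step in InitNetwork concrete, the following minimal sketch (not part of DOTA_models; the spec string is an illustrative assumption) separates a full VGSL spec into its input, layer, and output parts exactly as the code above does:

# A hypothetical VGSL spec string: input spec, layer spec in [...], output spec.
# Input spec '1,60,0,1' = batch 1, height 60, variable width, greyscale depth 1.
model_spec = '1,60,0,1[Ct5,5,16 Mp3,3 Lfys64 Lfx128]O1c105'

left_bracket = model_spec.find('[')
right_bracket = model_spec.rfind(']')
if left_bracket < 0 or right_bracket < 0:
  raise ValueError('Failed to find [] in model spec! ', model_spec)

input_spec = model_spec[:left_bracket]                   # '1,60,0,1'
layer_spec = model_spec[left_bracket:right_bracket + 1]  # '[Ct5,5,16 Mp3,3 Lfys64 Lfx128]'
output_spec = model_spec[right_bracket + 1:]             # 'O1c105'
print(input_spec, layer_spec, output_spec)

The three parts are then passed to model.Build, shown in Example 2, where layer_spec goes to vgslspecs.VGSLSpecs and output_spec is parsed into its dimensionality, non-linearity and class count.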

Example 2: Build

# Required import: import vgslspecs
def Build(self, input_pattern, input_spec, model_spec, output_spec,
            optimizer_type, num_preprocess_threads, reader):
    """Builds the model from the separate input/layers/output spec strings.

    Args:
      input_pattern: File pattern of the data in tfrecords of TF Example format.
      input_spec: Specification of the input layer:
        batchsize,height,width,depth (4 comma-separated integers)
          Training will run with batches of batchsize images, but runtime can
          use any batch size.
          height and/or width can be 0 or -1, indicating variable size,
          otherwise all images must be the given size.
          depth must be 1 or 3 to indicate greyscale or color.
          NOTE 1-d image input, treating the y image dimension as depth, can
          be achieved using S1(1x0)1,3 as the first op in the model_spec, but
          the y-size of the input must then be fixed.
      model_spec: Model definition. See vgslspecs.py
      output_spec: Output layer definition:
        O(2|1|0)(l|s|c)n output layer with n classes.
          2 (heatmap) Output is a 2-d vector map of the input (possibly at
            different scale).
          1 (sequence) Output is a 1-d sequence of vector values.
          0 (value) Output is a 0-d single vector value.
          l uses a logistic non-linearity on the output, allowing multiple
            hot elements in any output vector value.
          s uses a softmax non-linearity, with one-hot output in each value.
          c uses a softmax with CTC. Can only be used with s (sequence).
          NOTE Only O1s and O1c are currently supported.
      optimizer_type: One of 'GradientDescent', 'AdaGrad', 'Momentum', 'Adam'.
      num_preprocess_threads: Number of threads to use for image processing.
      reader: Function that returns an actual reader to read Examples from input
        files. If None, uses tf.TFRecordReader().
    """
    self.global_step = tf.Variable(0, name='global_step', trainable=False)
    shape = _ParseInputSpec(input_spec)
    out_dims, out_func, num_classes = _ParseOutputSpec(output_spec)
    self.using_ctc = out_func == 'c'
    images, heights, widths, labels, sparse, _ = vgsl_input.ImageInput(
        input_pattern, num_preprocess_threads, shape, self.using_ctc, reader)
    self.labels = labels
    self.sparse_labels = sparse
    self.layers = vgslspecs.VGSLSpecs(widths, heights, self.mode == 'train')
    last_layer = self.layers.Build(images, model_spec)
    self._AddOutputs(last_layer, out_dims, out_func, num_classes)
    if self.mode == 'train':
      self._AddOptimizer(optimizer_type)

    # For saving the model across training and evaluation
    self.saver = tf.train.Saver() 
Author: ringringyi, Project: DOTA_models, Lines of code: 51, Source file: vgsl_model.py
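The output_spec format documented above, O(2|1|0)(l|s|c)n, can be illustrated with a small standalone parser. This is only a sketch of what the docstring describes, not the project's actual _ParseOutputSpec:

import re


def parse_output_spec(output_spec):
  # O, then dimensionality (0/1/2), non-linearity (l/s/c), number of classes.
  match = re.match(r'^O([012])([lsc])(\d+)$', output_spec)
  if match is None:
    raise ValueError('Invalid output spec: ' + output_spec)
  out_dims = int(match.group(1))
  out_func = match.group(2)
  num_classes = int(match.group(3))
  return out_dims, out_func, num_classes


# 'O1c105': a 1-d sequence output with CTC over a hypothetical 105-class alphabet.
print(parse_output_spec('O1c105'))  # -> (1, 'c', 105)

In Build above, out_func == 'c' sets self.using_ctc, which is then passed to vgsl_input.ImageInput along with the parsed input shape.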


Note: The vgslspecs.py examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code belongs to the original authors. Please refer to the corresponding project's license before distributing or using the code, and do not reproduce this article without permission.