

Python datasets.mnist Method Code Examples

This article collects typical usage examples of the Python method slim.datasets.mnist. If you are wondering what slim.datasets.mnist does, how to call it, or what real-world usage looks like, the curated code examples below may help. You can also explore further usage examples from the slim.datasets module.


The following presents 2 code examples of the datasets.mnist method, ordered by popularity.

Example 1: get_dataset

# Required imports (the function references both dataset modules):
from slim.datasets import mnist
from slim.datasets import mnist_m
def get_dataset(dataset_name,
                split_name,
                dataset_dir,
                file_pattern=None,
                reader=None):
  """Given a dataset name and a split_name, returns a Dataset.

  Args:
    dataset_name: String, the name of the dataset.
    split_name: A train/test split name.
    dataset_dir: The directory where the dataset files are stored.
    file_pattern: The file pattern to use for matching the dataset source files.
    reader: The subclass of tf.ReaderBase. If left as `None`, then the default
      reader defined by each dataset is used.

  Returns:
    A tf-slim `Dataset` instance.

  Raises:
    ValueError: if `dataset_name` isn't recognized.
  """
  # Map each supported dataset name to the slim dataset module that loads it.
  dataset_name_to_module = {'mnist': mnist, 'mnist_m': mnist_m}
  if dataset_name not in dataset_name_to_module:
    raise ValueError('Unknown dataset name: %s.' % dataset_name)

  return dataset_name_to_module[dataset_name].get_split(split_name, dataset_dir,
                                                        file_pattern, reader)
Author: ringringyi, Project: DOTA_models, Lines of code: 29, Source file: dataset_factory.py
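
A minimal usage sketch for this example is shown below. The directory path '/tmp/mnist' and the presence there of MNIST TFRecord files (as produced by tf-slim's download/convert tooling) are assumptions for illustration, not part of the original example.

# Hypothetical usage sketch: '/tmp/mnist' and pre-converted MNIST
# TFRecord files are assumed for illustration.
dataset = get_dataset('mnist', 'train', '/tmp/mnist')
print(dataset.num_classes)  # 10 for MNIST

# An unrecognized dataset name raises ValueError:
try:
  get_dataset('svhn', 'train', '/tmp/svhn')
except ValueError as e:
  print(e)  # Unknown dataset name: svhn.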

Example 2: provide_batch

# Required imports (this example also uses get_dataset from Example 1):
import tensorflow as tf
import tensorflow.contrib.slim as slim
def provide_batch(dataset_name, split_name, dataset_dir, num_readers,
                  batch_size, num_preprocessing_threads):
  """Provides a batch of images and corresponding labels.

  Args:
    dataset_name: String, the name of the dataset.
    split_name: A train/test split name.
    dataset_dir: The directory where the dataset files are stored.
    num_readers: The number of readers used by DatasetDataProvider.
    batch_size: The size of the batch requested.
    num_preprocessing_threads: The number of preprocessing threads for
      tf.train.batch.

  Returns:
    A batch of
      images: tensor of [batch_size, height, width, channels].
      labels: dictionary of labels.
  """
  dataset = get_dataset(dataset_name, split_name, dataset_dir)
  provider = slim.dataset_data_provider.DatasetDataProvider(
      dataset,
      num_readers=num_readers,
      common_queue_capacity=20 * batch_size,
      common_queue_min=10 * batch_size)
  [image, label] = provider.get(['image', 'label'])

  # Convert images to float32 and rescale from [0, 1] to [-1, 1].
  image = tf.image.convert_image_dtype(image, tf.float32)
  image -= 0.5
  image *= 2

  # Batch the data and one-hot encode the class labels.
  labels = {}
  images, labels['classes'] = tf.train.batch(
      [image, label],
      batch_size=batch_size,
      num_threads=num_preprocessing_threads,
      capacity=5 * batch_size)
  labels['classes'] = slim.one_hot_encoding(labels['classes'],
                                            dataset.num_classes)

  # Convert MNIST to RGB and 32x32 so that it matches MNIST-M.
  if dataset_name == 'mnist':
    images = tf.image.grayscale_to_rgb(images)
    images = tf.image.resize_images(images, [32, 32])
  return images, labels
Author: ringringyi, Project: DOTA_models, Lines of code: 51, Source file: dataset_factory.py
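
A minimal sketch of consuming provide_batch in a TF1-style session follows. The path, batch size, and thread counts are illustrative assumptions; queue runners must be started because DatasetDataProvider and tf.train.batch are queue-based.

# Hypothetical usage sketch (TF1 queue-based input pipeline). The path,
# batch size, and thread counts below are illustrative assumptions.
images, labels = provide_batch('mnist', 'train', '/tmp/mnist',
                               num_readers=4, batch_size=32,
                               num_preprocessing_threads=4)

with tf.Session() as sess:
  coord = tf.train.Coordinator()
  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
  try:
    image_batch, label_batch = sess.run([images, labels['classes']])
    print(image_batch.shape)  # (32, 32, 32, 3) after the MNIST-to-RGB resize
    print(label_batch.shape)  # (32, 10) one-hot class labels
  finally:
    coord.request_stop()
    coord.join(threads)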


Note: The slim.datasets.mnist examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets are copyrighted by their original authors; for distribution and reuse, refer to each project's license.