

Python datasets.mnist Method Code Examples

This article collects typical usage examples of the Python method slim.datasets.mnist. If you are unsure how to use datasets.mnist, or are looking for concrete examples of it in practice, the curated code samples below may help. You can also explore further usage examples from the containing package, slim.datasets.


Two code examples of the datasets.mnist method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.

Example 1: get_dataset

# Required import: from slim import datasets [as alias]
# or: from slim.datasets import mnist [as alias]
def get_dataset(dataset_name,
                split_name,
                dataset_dir,
                file_pattern=None,
                reader=None):
  """Given a dataset name and a split_name returns a Dataset.

  Args:
    dataset_name: String, the name of the dataset.
    split_name: A train/test split name.
    dataset_dir: The directory where the dataset files are stored.
    file_pattern: The file pattern to use for matching the dataset source files.
    reader: The subclass of tf.ReaderBase. If left as `None`, then the default
      reader defined by each dataset is used.

  Returns:
    A tf-slim `Dataset` class.

  Raises:
    ValueError: if `dataset_name` isn't recognized.
  """
  dataset_name_to_module = {'mnist': mnist, 'mnist_m': mnist_m}
  if dataset_name not in dataset_name_to_module:
    raise ValueError('Unknown dataset name: %s.' % dataset_name)

  return dataset_name_to_module[dataset_name].get_split(split_name, dataset_dir,
                                                        file_pattern, reader)
Developer: ringringyi, Project: DOTA_models, Lines of code: 29, Source file: dataset_factory.py
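The core of get_dataset above is a name-to-module dispatch: look the dataset name up in a dict, fail loudly on an unknown key, and delegate to that module's get_split. The dispatch can be sketched in plain Python without tf-slim; the `_FakeDatasetModule` class and its return value here are hypothetical stand-ins for the real slim.datasets.mnist and mnist_m modules:

```python
# Minimal sketch of the dispatch pattern used by get_dataset.
# _FakeDatasetModule stands in for slim.datasets.mnist / mnist_m;
# the real modules return a tf-slim Dataset from get_split().

class _FakeDatasetModule:
  def __init__(self, name):
    self.name = name

  def get_split(self, split_name, dataset_dir):
    # A real module would build and return a slim Dataset here.
    return '%s/%s from %s' % (self.name, split_name, dataset_dir)

mnist = _FakeDatasetModule('mnist')
mnist_m = _FakeDatasetModule('mnist_m')

def get_dataset(dataset_name, split_name, dataset_dir):
  # Same structure as the tf-slim example: dict lookup, then delegate.
  name_to_module = {'mnist': mnist, 'mnist_m': mnist_m}
  if dataset_name not in name_to_module:
    raise ValueError('Unknown dataset name: %s.' % dataset_name)
  return name_to_module[dataset_name].get_split(split_name, dataset_dir)
```

The benefit of the pattern is that adding a new dataset only requires one new dict entry, while callers keep a single entry point.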

Example 2: provide_batch

# Required import: from slim import datasets [as alias]
# or: from slim.datasets import mnist [as alias]
def provide_batch(dataset_name, split_name, dataset_dir, num_readers,
                  batch_size, num_preprocessing_threads):
  """Provides a batch of images and corresponding labels.

  Args:
    dataset_name: String, the name of the dataset.
    split_name: A train/test split name.
    dataset_dir: The directory where the dataset files are stored.
    num_readers: The number of readers used by DatasetDataProvider.
    batch_size: The size of the batch requested.
    num_preprocessing_threads: The number of preprocessing threads for
      tf.train.batch.

  Returns:
    A batch of
      images: tensor of [batch_size, height, width, channels].
      labels: dictionary of labels.
  """
  dataset = get_dataset(dataset_name, split_name, dataset_dir)
  provider = slim.dataset_data_provider.DatasetDataProvider(
      dataset,
      num_readers=num_readers,
      common_queue_capacity=20 * batch_size,
      common_queue_min=10 * batch_size)
  [image, label] = provider.get(['image', 'label'])

  # Convert images to float32
  image = tf.image.convert_image_dtype(image, tf.float32)
  image -= 0.5
  image *= 2

  # Load the data.
  labels = {}
  images, labels['classes'] = tf.train.batch(
      [image, label],
      batch_size=batch_size,
      num_threads=num_preprocessing_threads,
      capacity=5 * batch_size)
  labels['classes'] = slim.one_hot_encoding(labels['classes'],
                                            dataset.num_classes)

  # Convert mnist to RGB and 32x32 so that it can match mnist_m.
  if dataset_name == 'mnist':
    images = tf.image.grayscale_to_rgb(images)
    images = tf.image.resize_images(images, [32, 32])
  return images, labels 
Developer: ringringyi, Project: DOTA_models, Lines of code: 51, Source file: dataset_factory.py
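In provide_batch, tf.image.convert_image_dtype first scales pixels into [0, 1]; the subsequent `image -= 0.5; image *= 2` then remaps them into [-1, 1], a common input range for GAN-style domain-adaptation models. The arithmetic can be checked on plain floats, independent of TensorFlow (a sketch, not the library call itself):

```python
# Sketch of provide_batch's normalization step on plain floats.
# convert_image_dtype maps uint8 pixels to [0, 1]; the two in-place
# ops below then shift and scale that range to [-1, 1].

def normalize_pixel(value_uint8):
  """Map a uint8 pixel (0..255) to the [-1, 1] range used by the model."""
  value01 = value_uint8 / 255.0   # what convert_image_dtype does for uint8
  value01 -= 0.5                  # center at zero: [-0.5, 0.5]
  value01 *= 2                    # stretch to [-1, 1]
  return value01
```

Black (0) maps to -1.0, mid-gray (127.5 in float terms) to 0.0, and white (255) to 1.0, so the batch is zero-centered with unit half-range.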


Note: the slim.datasets.mnist examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by various developers, and copyright remains with the original authors. Consult each project's license before using or redistributing the code; do not reproduce this article without permission.