

Python mnist.IMAGE_PIXELS Attribute Code Examples

This article collects typical usage examples of the Python attribute tensorflow.examples.tutorials.mnist.mnist.IMAGE_PIXELS. If you are wondering how mnist.IMAGE_PIXELS is used in practice, or are looking for concrete examples of it, the curated examples below may help. You can also explore further usage examples of the module it belongs to, tensorflow.examples.tutorials.mnist.mnist.


The sections below show 10 code examples of the mnist.IMAGE_PIXELS attribute, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
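For context, in the tutorial module tensorflow.examples.tutorials.mnist.mnist, IMAGE_SIZE is 28 and IMAGE_PIXELS is IMAGE_SIZE * IMAGE_SIZE = 784, i.e. the length of one flattened MNIST image. A minimal sanity-check sketch, assuming a TensorFlow 1.x installation that still ships the tutorial package:

from tensorflow.examples.tutorials.mnist import mnist

# IMAGE_PIXELS is the flattened image length: 28 * 28.
print(mnist.IMAGE_SIZE)    # 28
print(mnist.IMAGE_PIXELS)  # 784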

Example 1: placeholder_inputs

# Required module import: from tensorflow.examples.tutorials.mnist import mnist [as alias]
# Or: from tensorflow.examples.tutorials.mnist.mnist import IMAGE_PIXELS [as alias]
def placeholder_inputs(batch_size):
  """Generate placeholder variables to represent the input tensors.

  These placeholders are used as inputs by the rest of the model building
  code and will be fed from the downloaded data in the .run() loop, below.

  Args:
    batch_size: The batch size will be baked into both placeholders.

  Returns:
    images_placeholder: Images placeholder.
    labels_placeholder: Labels placeholder.
  """
  # Note that the shapes of the placeholders match the shapes of the full
  # image and label tensors, except the first dimension is now batch_size
  # rather than the full size of the train or test data sets.
  images_placeholder = tf.placeholder(tf.float32, shape=(batch_size,
                                                         mnist.IMAGE_PIXELS))
  labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
  return images_placeholder, labels_placeholder 
Developer ID: GoogleCloudPlatform, Project: cloudml-samples, Lines of code: 22, Source file: task.py
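A minimal usage sketch for these placeholders in a TF 1.x feed loop; the data directory and the train_op referenced in the final comment are assumptions for illustration, not part of the sample above:

from tensorflow.examples.tutorials.mnist import input_data

data_sets = input_data.read_data_sets('/tmp/mnist_data')  # assumed data directory
images_placeholder, labels_placeholder = placeholder_inputs(batch_size=100)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  # Feed one batch of flattened images (shape [100, mnist.IMAGE_PIXELS]) and labels.
  images_feed, labels_feed = data_sets.train.next_batch(100)
  feed_dict = {images_placeholder: images_feed, labels_placeholder: labels_feed}
  # sess.run(train_op, feed_dict=feed_dict)  # train_op comes from the full model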

Example 2: placeholder_inputs

# Required module import: from tensorflow.examples.tutorials.mnist import mnist [as alias]
# Or: from tensorflow.examples.tutorials.mnist.mnist import IMAGE_PIXELS [as alias]
def placeholder_inputs():
  """Generate placeholder variables to represent the input tensors.

  These placeholders are used as inputs by the rest of the model building
  code and will be fed from the downloaded data in the .run() loop, below.

  Returns:
    keys_placeholder: Keys placeholder.
    images_placeholder: Images placeholder.
    labels_placeholder: Labels placeholder.
  """
  # Note that the shapes of the placeholders match the shapes of the full
  # image and label tensors, except the first dimension is now batch_size
  # rather than the full size of the train or test data sets. We set
  # batch_size to None in order to allow us to use a variable number of
  # instances for online prediction.
  keys_placeholder = tf.placeholder(tf.int64, shape=(None,))
  images_placeholder = tf.placeholder(tf.float32, shape=(None,
                                                         mnist.IMAGE_PIXELS))
  labels_placeholder = tf.placeholder(tf.int32, shape=(None,))
  return keys_placeholder, images_placeholder, labels_placeholder 
Developer ID: GoogleCloudPlatform, Project: cloudml-samples, Lines of code: 23, Source file: task.py
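Because every first dimension is None here, the same graph can score a variable number of instances, which is what makes this shape suitable for online prediction. A small sketch with illustrative dummy data; the output tensor named in the final comment is hypothetical:

import numpy as np

keys_placeholder, images_placeholder, labels_placeholder = placeholder_inputs()

with tf.Session() as sess:
  # Any batch size works because the placeholders use a None first dimension.
  images = np.random.rand(5, 784).astype(np.float32)  # 784 == mnist.IMAGE_PIXELS
  keys = np.arange(5, dtype=np.int64)
  feed_dict = {keys_placeholder: keys, images_placeholder: images}
  # sess.run(predictions, feed_dict=feed_dict)  # predictions: hypothetical output tensor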

Example 3: placeholder_inputs

# Required module import: from tensorflow.examples.tutorials.mnist import mnist [as alias]
# Or: from tensorflow.examples.tutorials.mnist.mnist import IMAGE_PIXELS [as alias]
def placeholder_inputs(batch_size):
    """Generate placeholder variables to represent the input tensors.
    These placeholders are used as inputs by the rest of the model building
    code and will be fed from the downloaded data in the .run() loop, below.
    Args:
        batch_size: The batch size will be baked into both placeholders.
    Returns:
        images_placeholder: Images placeholder.
        labels_placeholder: Labels placeholder.
    """
    # Note that the shapes of the placeholders match the shapes of the full
    # image and label tensors, except the first dimension is now batch_size
    # rather than the full size of the train or test data sets.
    images_placeholder = tf.placeholder(tf.float32, shape=(batch_size,
                                                           mnist.IMAGE_PIXELS))
    labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
    return images_placeholder, labels_placeholder 
Developer ID: gradientzoo, Project: python-gradientzoo, Lines of code: 19, Source file: tensorflow_mnist.py

Example 4: read_and_decode

# Required module import: from tensorflow.examples.tutorials.mnist import mnist [as alias]
# Or: from tensorflow.examples.tutorials.mnist.mnist import IMAGE_PIXELS [as alias]
def read_and_decode(filename_queue):
  reader = tf.TFRecordReader()
  _, serialized_example = reader.read(filename_queue)
  features = tf.parse_single_example(
      serialized_example,
      # Defaults are not specified since both keys are required.
      features={
          'image_raw': tf.FixedLenFeature([], tf.string),
          'label': tf.FixedLenFeature([], tf.int64),
      })

  # Convert from a scalar string tensor (whose single string has
  # length mnist.IMAGE_PIXELS) to a uint8 tensor with shape
  # [mnist.IMAGE_PIXELS].
  image = tf.decode_raw(features['image_raw'], tf.uint8)
  image.set_shape([mnist.IMAGE_PIXELS])
  print("image:",image)

  # OPTIONAL: Could reshape into a 28x28 image and apply distortions
  # here.  Since we are not applying any distortions in this
  # example, and the next step expects the image to be flattened
  # into a vector, we don't bother.

  # Convert from [0, 255] -> [-0.5, 0.5] floats.
  image = tf.cast(image, tf.float32) * (1. / 255) - 0.5

  # Convert label from a scalar uint8 tensor to an int32 scalar.
  label = tf.cast(features['label'], tf.int32)
  print("label:",label)

  return image, label 
Developer ID: cheyang, Project: mnist-examples, Lines of code: 33, Source file: mnist_train.py
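read_and_decode expects TFRecords whose examples carry an 'image_raw' bytes feature and an int64 'label'. A hedged sketch of a matching writer; the function name, output path, and the in-memory images/labels arrays are all hypothetical:

def write_mnist_tfrecord(images, labels, filename='/tmp/mnist_train.tfrecords'):
  # images: uint8 array of shape [N, mnist.IMAGE_PIXELS]; labels: int array of shape [N].
  with tf.python_io.TFRecordWriter(filename) as writer:
    for image, label in zip(images, labels):
      example = tf.train.Example(features=tf.train.Features(feature={
          'image_raw': tf.train.Feature(
              bytes_list=tf.train.BytesList(value=[image.tobytes()])),
          'label': tf.train.Feature(
              int64_list=tf.train.Int64List(value=[int(label)])),
      }))
      writer.write(example.SerializeToString())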

Example 5: inputs

# Required module import: from tensorflow.examples.tutorials.mnist import mnist [as alias]
# Or: from tensorflow.examples.tutorials.mnist.mnist import IMAGE_PIXELS [as alias]
def inputs(train, batch_size, num_epochs):
  """Reads input data num_epochs times.
  Args:
    train: Selects between the training (True) and validation (False) data.
    batch_size: Number of examples per returned batch.
    num_epochs: Number of times to read the input data, or 0/None to
       train forever.
  Returns:
    A tuple (images, labels), where:
    * images is a float tensor with shape [batch_size, mnist.IMAGE_PIXELS]
      in the range [-0.5, 0.5].
    * labels is an int32 tensor with shape [batch_size] with the true label,
      a number in the range [0, mnist.NUM_CLASSES).
    Note that a tf.train.QueueRunner is added to the graph, which
    must be run using e.g. tf.train.start_queue_runners().
  """
  if not num_epochs: num_epochs = None
  filename = os.path.join(FLAGS.train_dir,
                          TRAIN_FILE if train else VALIDATION_FILE)

  with tf.name_scope('input'):
    filename_queue = tf.train.string_input_producer(
        [filename], num_epochs=num_epochs)

    # Even when reading in multiple threads, share the filename
    # queue.
    image, label = read_and_decode(filename_queue)

    # Shuffle the examples and collect them into batch_size batches.
    # (Internally uses a RandomShuffleQueue.)
    # We run this in two threads to avoid being a bottleneck.
    images, sparse_labels = tf.train.shuffle_batch(
        [image, label], batch_size=batch_size, num_threads=2,
        capacity=1000 + 3 * batch_size,
        # Ensures a minimum amount of shuffling of examples.
        min_after_dequeue=1000)

    return images, sparse_labels 
Developer ID: cheyang, Project: mnist-examples, Lines of code: 40, Source file: mnist_train.py
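A sketch of driving this queue-based pipeline end to end in TF 1.x; the loop body is left as a comment because the loss and optimizer belong to the surrounding script:

images, labels = inputs(train=True, batch_size=100, num_epochs=2)

with tf.Session() as sess:
  # local_variables_initializer is needed for the num_epochs counter.
  sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
  coord = tf.train.Coordinator()
  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
  try:
    while not coord.should_stop():
      batch_images, batch_labels = sess.run([images, labels])
      # ... run one training step on the batch here ...
  except tf.errors.OutOfRangeError:
    pass  # all epochs consumed
  finally:
    coord.request_stop()
    coord.join(threads)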

Example 6: read_and_decode

# Required module import: from tensorflow.examples.tutorials.mnist import mnist [as alias]
# Or: from tensorflow.examples.tutorials.mnist.mnist import IMAGE_PIXELS [as alias]
def read_and_decode(filename_queue):
  reader = tf.TFRecordReader()
  _, serialized_example = reader.read(filename_queue)
  features = tf.parse_single_example(
      serialized_example,
      # Defaults are not specified since both keys are required.
      features={
          'image_raw': tf.FixedLenFeature([], tf.string),
          'label': tf.FixedLenFeature([], tf.int64),
      })

  # Convert from a scalar string tensor (whose single string has
  # length mnist.IMAGE_PIXELS) to a uint8 tensor with shape
  # [mnist.IMAGE_PIXELS].
  image = tf.decode_raw(features['image_raw'], tf.uint8)
  image.set_shape([mnist.IMAGE_PIXELS])

  # OPTIONAL: Could reshape into a 28x28 image and apply distortions
  # here.  Since we are not applying any distortions in this
  # example, and the next step expects the image to be flattened
  # into a vector, we don't bother.

  # Convert from [0, 255] -> [-0.5, 0.5] floats.
  image = tf.cast(image, tf.float32) * (1. / 255) - 0.5

  # Convert label from a scalar uint8 tensor to an int32 scalar.
  label = tf.cast(features['label'], tf.int32)

  return image, label 
Developer ID: yaroslavvb, Project: stuff, Lines of code: 31, Source file: fully_connected_reader.py

Example 7: inputs

# Required module import: from tensorflow.examples.tutorials.mnist import mnist [as alias]
# Or: from tensorflow.examples.tutorials.mnist.mnist import IMAGE_PIXELS [as alias]
def inputs(train, batch_size, num_epochs):
  """Reads input data num_epochs times.

  Args:
    train: Selects between the training (True) and validation (False) data.
    batch_size: Number of examples per returned batch.
    num_epochs: Number of times to read the input data, or 0/None to
       train forever.

  Returns:
    A tuple (images, labels), where:
    * images is a float tensor with shape [batch_size, mnist.IMAGE_PIXELS]
      in the range [-0.5, 0.5].
    * labels is an int32 tensor with shape [batch_size] with the true label,
      a number in the range [0, mnist.NUM_CLASSES).
    Note that a tf.train.QueueRunner is added to the graph, which
    must be run using e.g. tf.train.start_queue_runners().
  """
  if not num_epochs: num_epochs = None
  filename = os.path.join(FLAGS.train_dir,
                          TRAIN_FILE if train else VALIDATION_FILE)

  with tf.name_scope('input'):
    filename_queue = tf.train.string_input_producer(
        [filename], num_epochs=num_epochs)

    # Even when reading in multiple threads, share the filename
    # queue.
    image, label = read_and_decode(filename_queue)

    # Shuffle the examples and collect them into batch_size batches.
    # (Internally uses a RandomShuffleQueue.)
    # We run this in two threads to avoid being a bottleneck.
    images, sparse_labels = tf.train.batch(
        [image, label], batch_size=batch_size, num_threads=10, capacity=60000)

    return images, sparse_labels 
Developer ID: yaroslavvb, Project: stuff, Lines of code: 39, Source file: fully_connected_reader.py

Example 8: placeholder_inputs

# Required module import: from tensorflow.examples.tutorials.mnist import mnist [as alias]
# Or: from tensorflow.examples.tutorials.mnist.mnist import IMAGE_PIXELS [as alias]
def placeholder_inputs(batch_size):
  """Generate placeholder variables to represent the input tensors."""
  images_placeholder = tf.placeholder(tf.float32,
                                      shape=(batch_size, mnist.IMAGE_PIXELS))
  labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
  return images_placeholder, labels_placeholder 
Developer ID: Ram81, Project: Tensorflow_Practice, Lines of code: 9, Source file: fully_connected_mnist.py

Example 9: placeholder_inputs

# Required module import: from tensorflow.examples.tutorials.mnist import mnist [as alias]
# Or: from tensorflow.examples.tutorials.mnist.mnist import IMAGE_PIXELS [as alias]
def placeholder_inputs():
  """Generate placeholder variables to represent the input tensors.

  Returns:
    images_placeholder: Images placeholder.
    labels_placeholder: Labels placeholder.
  """
  images_placeholder = tf.placeholder(tf.float32, shape=(None,
                                                         mnist.IMAGE_PIXELS))
  labels_placeholder = tf.placeholder(tf.int32, shape=(None))
  return images_placeholder, labels_placeholder 
Developer ID: rashmitripathi, Project: DeepLearning_VirtualReality_BigData_Project, Lines of code: 13, Source file: mnist.py

Example 10: inference

# Required module import: from tensorflow.examples.tutorials.mnist import mnist [as alias]
# Or: from tensorflow.examples.tutorials.mnist.mnist import IMAGE_PIXELS [as alias]
def inference(inp, num_clusters, hidden1_units, hidden2_units):
  """Build the MNIST model up to where it may be used for inference.

  Args:
    inp: input data
    num_clusters: number of clusters of input features to train.
    hidden1_units: Size of the first hidden layer.
    hidden2_units: Size of the second hidden layer.

  Returns:
    logits: Output tensor with the computed logits.
    clustering_loss: Clustering loss.
    kmeans_training_op: An op to train the clustering.
  """
  # Clustering
  kmeans = tf.contrib.factorization.KMeans(
      inp,
      num_clusters,
      distance_metric=tf.contrib.factorization.COSINE_DISTANCE,
      # TODO(agarwal): kmeans++ is currently causing crash in dbg mode.
      # Enable this after fixing.
      # initial_clusters=tf.contrib.factorization.KMEANS_PLUS_PLUS_INIT,
      use_mini_batch=True)

  all_scores, _, clustering_scores, kmeans_training_op = kmeans.training_graph()
  # Some heuristics to approximately whiten this output.
  all_scores = (all_scores[0] - 0.5) * 5
  # Here we avoid passing the gradients from the supervised objective back to
  # the clusters by creating a stop_gradient node.
  all_scores = tf.stop_gradient(all_scores)
  clustering_loss = tf.reduce_sum(clustering_scores[0])
  # Hidden 1
  with tf.name_scope('hidden1'):
    weights = tf.Variable(
        tf.truncated_normal([num_clusters, hidden1_units],
                            stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
        name='weights')
    biases = tf.Variable(tf.zeros([hidden1_units]),
                         name='biases')
    hidden1 = tf.nn.relu(tf.matmul(all_scores, weights) + biases)
  # Hidden 2
  with tf.name_scope('hidden2'):
    weights = tf.Variable(
        tf.truncated_normal([hidden1_units, hidden2_units],
                            stddev=1.0 / math.sqrt(float(hidden1_units))),
        name='weights')
    biases = tf.Variable(tf.zeros([hidden2_units]),
                         name='biases')
    hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)
  # Linear
  with tf.name_scope('softmax_linear'):
    weights = tf.Variable(
        tf.truncated_normal([hidden2_units, NUM_CLASSES],
                            stddev=1.0 / math.sqrt(float(hidden2_units))),
        name='weights')
    biases = tf.Variable(tf.zeros([NUM_CLASSES]),
                         name='biases')
    logits = tf.matmul(hidden2, weights) + biases
  return logits, clustering_loss, kmeans_training_op 
Developer ID: rashmitripathi, Project: DeepLearning_VirtualReality_BigData_Project, Lines of code: 61, Source file: mnist.py
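A hedged sketch of wiring inference() to a flattened-image placeholder; the cluster count and hidden-layer sizes below are arbitrary illustrative values:

images_placeholder = tf.placeholder(tf.float32, shape=(None, mnist.IMAGE_PIXELS))
logits, clustering_loss, kmeans_training_op = inference(
    images_placeholder, num_clusters=384, hidden1_units=128, hidden2_units=32)
# logits feeds a softmax cross-entropy loss on the true labels, while
# kmeans_training_op is run alongside the supervised train op to update the clusters.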


Note: The tensorflow.examples.tutorials.mnist.mnist.IMAGE_PIXELS attribute examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. Please consult each project's License before distributing or using the code, and do not repost without permission.