This article collects typical usage examples of the Python method differential_privacy.dp_sgd.dp_optimizer.utils.BuildNetwork. If you have been wondering exactly what utils.BuildNetwork does, how to call it, or where to find examples of it, the hand-picked code samples below may help. You can also explore further usage examples of the module this method lives in, differential_privacy.dp_sgd.dp_optimizer.utils.
The following shows 1 code example of the utils.BuildNetwork method; by default, examples are sorted by popularity.
Example 1: Eval
# Required import: from differential_privacy.dp_sgd.dp_optimizer import utils [as alias]
# Or: from differential_privacy.dp_sgd.dp_optimizer.utils import BuildNetwork [as alias]
def Eval(mnist_data_file, network_parameters, num_testing_images,
         randomize, load_path, save_mistakes=False):
  """Evaluate MNIST for a number of steps.

  Args:
    mnist_data_file: Path of a file containing the MNIST images to process.
    network_parameters: parameters for defining and training the network.
    num_testing_images: the number of images we will evaluate on.
    randomize: if True, read the testing images in a random order;
      otherwise, read them sequentially.
    load_path: path where to load trained parameters from.
    save_mistakes: save the mistakes if True.

  Returns:
    A tuple (accuracy, mistakes): the evaluation accuracy as a float, and
    the list of misclassified examples (None unless save_mistakes is True).
  """
  batch_size = 100

  # Like for training, we need a session for executing the TensorFlow graph.
  with tf.Graph().as_default(), tf.Session() as sess:
    # Create the basic Mnist model.
    images, labels = MnistInput(mnist_data_file, batch_size, randomize)
    logits, _, _ = utils.BuildNetwork(images, network_parameters)
    softmax = tf.nn.softmax(logits)

    # Load the variables.
    ckpt_state = tf.train.get_checkpoint_state(load_path)
    if not (ckpt_state and ckpt_state.model_checkpoint_path):
      raise ValueError("No model checkpoint to eval at %s\n" % load_path)

    saver = tf.train.Saver()
    saver.restore(sess, ckpt_state.model_checkpoint_path)
    coord = tf.train.Coordinator()
    _ = tf.train.start_queue_runners(sess=sess, coord=coord)

    total_examples = 0
    correct_predictions = 0
    image_index = 0
    mistakes = []
    for _ in xrange((num_testing_images + batch_size - 1) // batch_size):
      predictions, label_values = sess.run([softmax, labels])

      # Count how many were predicted correctly.
      for prediction, label_value in zip(predictions, label_values):
        total_examples += 1
        if np.argmax(prediction) == label_value:
          correct_predictions += 1
        elif save_mistakes:
          mistakes.append({"index": image_index,
                           "label": label_value,
                           "pred": np.argmax(prediction)})
        image_index += 1

    # Note: the accuracy relies on true division (Python 3, or
    # `from __future__ import division` in the enclosing module).
    return (correct_predictions / total_examples,
            mistakes if save_mistakes else None)
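
For context, here is a minimal sketch of how this Eval function might be invoked. It is not taken from the original script: the file paths are placeholders, and the NetworkParameters/LayerParameters fields shown are assumptions about the companion utils module; in practice, network_parameters should be constructed exactly as it was for training so that the checkpoint variables match what utils.BuildNetwork creates.

# Minimal usage sketch (assumptions: paths, layer sizes, and field names
# below are illustrative placeholders, not values from the original script).
network_parameters = utils.NetworkParameters()
network_parameters.input_size = 28 * 28          # flattened MNIST image

hidden = utils.LayerParameters()
hidden.name = "hidden0"
hidden.num_units = 128
hidden.relu = True
network_parameters.layer_parameters.append(hidden)

accuracy, mistakes = Eval("/tmp/mnist_test_data",        # hypothetical data file
                          network_parameters,
                          num_testing_images=10000,
                          randomize=False,
                          load_path="/tmp/mnist_train",  # hypothetical checkpoint dir
                          save_mistakes=True)
print("eval accuracy: %.4f" % accuracy)

Setting randomize=False keeps the read order deterministic, which makes the reported accuracy reproducible across runs.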