This article collects typical usage examples of the Python method verification.evaluate. If you have been wondering what verification.evaluate does and how to call it, the curated code example below may help. You can also explore other usage examples from the verification module the method belongs to.
The section below presents 1 code example of the verification.evaluate method, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
Example 1: main
# Required import: import verification [as alias]
# Or: from verification import evaluate [as alias]
import time

import numpy as np
import tensorflow as tf
from scipy import interpolate
from scipy.optimize import brentq
from sklearn import metrics

from verification import evaluate
# load_data and load_model are helpers defined elsewhere in the project.

def main(args):
    with tf.Graph().as_default():
        with tf.Session() as sess:
            # Prepare the validation datasets.
            ver_list = []
            ver_name_list = []
            for db in args.eval_datasets:
                print('begin db %s convert.' % db)
                data_set = load_data(db, args.image_size, args)
                ver_list.append(data_set)
                ver_name_list.append(db)
            # Load the model.
            load_model(args.model)
            # Get the input and output tensors; phase_train_placeholder is skipped
            # because it has a default value.
            inputs_placeholder = tf.get_default_graph().get_tensor_by_name("input:0")
            embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0")
            # image_size = images_placeholder.get_shape()[1]  # Does not work for frozen graphs.
            embedding_size = embeddings.get_shape()[1]
            for db_index in range(len(ver_list)):
                # Run a forward pass to calculate the embeddings.
                print('\nRunning forward pass on {} images'.format(ver_name_list[db_index]))
                start_time = time.time()
                data_sets, issame_list = ver_list[db_index]
                nrof_batches = data_sets.shape[0] // args.test_batch_size
                emb_array = np.zeros((data_sets.shape[0], embedding_size))
                for index in range(nrof_batches):
                    start_index = index * args.test_batch_size
                    end_index = min((index + 1) * args.test_batch_size, data_sets.shape[0])
                    feed_dict = {inputs_placeholder: data_sets[start_index:end_index, ...]}
                    emb_array[start_index:end_index, :] = sess.run(embeddings, feed_dict=feed_dict)
                tpr, fpr, accuracy, val, val_std, far = evaluate(emb_array, issame_list, nrof_folds=args.eval_nrof_folds)
                duration = time.time() - start_time
                print("total time %.3fs to evaluate %d images of %s" % (duration, data_sets.shape[0], ver_name_list[db_index]))
                print('Accuracy: %1.3f+-%1.3f' % (np.mean(accuracy), np.std(accuracy)))
                print('Validation rate: %2.5f+-%2.5f @ FAR=%2.5f' % (val, val_std, far))
                print('mean fpr and tpr: %1.3f %1.3f' % (np.mean(fpr, 0), np.mean(tpr, 0)))
                auc = metrics.auc(fpr, tpr)
                print('Area Under Curve (AUC): %1.3f' % auc)
                eer = brentq(lambda x: 1. - x - interpolate.interp1d(fpr, tpr)(x), 0., 1.)
                print('Equal Error Rate (EER): %1.3f' % eer)
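The AUC and EER lines at the end of the example can be tried in isolation. The sketch below uses a small hand-made ROC curve (the fpr/tpr values are synthetic, chosen only for illustration, not real verification output) to show the same two computations: metrics.auc integrates the curve, and brentq finds the point where the false positive rate equals the false negative rate (fpr == 1 - tpr).

```python
import numpy as np
from scipy import interpolate
from scipy.optimize import brentq
from sklearn import metrics

# Synthetic ROC curve points (illustrative only).
fpr = np.array([0.0, 0.1, 0.3, 1.0])
tpr = np.array([0.0, 0.7, 0.9, 1.0])

# Area under the ROC curve via the trapezoidal rule.
auc = metrics.auc(fpr, tpr)

# EER: the operating point where false accepts equal false rejects,
# i.e. where fpr == 1 - tpr on the interpolated ROC curve, found by
# root-finding on 1 - x - tpr(x) over the fpr axis.
eer = brentq(lambda x: 1. - x - interpolate.interp1d(fpr, tpr)(x), 0., 1.)

print('AUC: %1.3f, EER: %1.3f' % (auc, eer))
```

On this synthetic curve the interpolated tpr on the segment fpr in [0.1, 0.3] is 0.6 + fpr, so the EER condition 1 - fpr = tpr is met at fpr = 0.2.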