This article collects typical usage examples of the Python method neon.callbacks.callbacks.Callbacks.add_validation_callback. If you have been wondering what Callbacks.add_validation_callback does, how to call it, or what real uses of it look like, the curated code examples below should help. You can also look further into the containing class, neon.callbacks.callbacks.Callbacks, for more usage examples.
Two code examples of Callbacks.add_validation_callback are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code examples.
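Before the full examples, here is a minimal usage sketch of the method. It is pieced together from the examples below rather than taken from a complete script, and it assumes that the model mlp, the data iterators train_set and valid_set, and the optimizer and cost objects have already been constructed elsewhere.
# minimal sketch: assumes mlp, train_set, valid_set, optimizer and cost already exist
from neon.callbacks.callbacks import Callbacks

# attach the standard callback container to the model and training data
callbacks = Callbacks(mlp, train_set)
# evaluate on valid_set once per epoch (the second argument is the epoch frequency)
callbacks.add_validation_callback(valid_set, 1)
# run training with the configured callbacks
mlp.fit(train_set, optimizer=optimizer, num_epochs=10, cost=cost, callbacks=callbacks)
Both examples below follow this same pattern; they differ mainly in how the network layers, optimizer, and checkpointing are configured.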
Example 1: Model
# Module to import: from neon.callbacks.callbacks import Callbacks [as alias]
# Or: from neon.callbacks.callbacks.Callbacks import add_validation_callback [as alias]
# initialize model object
mlp = Model(layers=layers)
if args.model_file:
    assert os.path.exists(args.model_file), '%s not found' % args.model_file
    logger.info('loading initial model state from %s' % args.model_file)
    mlp.load_weights(args.model_file)
# setup standard fit callbacks
callbacks = Callbacks(mlp, train_set, output_file=args.output_file,
                      progress_bar=args.progress_bar)
# add a callback to calculate validation metrics every args.validation_freq epochs
if args.validation_freq:
# setup validation trial callbacks
    callbacks.add_validation_callback(valid_set, args.validation_freq)
if args.serialize > 0:
    # add callback for saving checkpoint file
    # every args.serialize epochs
    checkpoint_schedule = args.serialize
    checkpoint_model_path = args.save_path
    callbacks.add_serialize_callback(checkpoint_schedule, checkpoint_model_path)
# run fit
mlp.fit(train_set, optimizer=optimizer, num_epochs=num_epochs, cost=cost, callbacks=callbacks)
print('Misclassification error = %.1f%%' % (mlp.eval(valid_set, metric=Misclassification())*100))
Example 2: GeneralizedCost
# Module to import: from neon.callbacks.callbacks import Callbacks [as alias]
# Or: from neon.callbacks.callbacks.Callbacks import add_validation_callback [as alias]
layers.append(Conv((3, 3, 384), pad=1, init=init2, bias=Constant(0), activation=relu))
layers.append(Conv((3, 3, 256), pad=1, init=init2, bias=Constant(1), activation=relu))
layers.append(Conv((3, 3, 256), pad=1, init=init2, bias=Constant(1), activation=relu))
layers.append(Pooling(3, strides=2))
layers.append(Affine(nout=4096, init=init1, bias=Constant(1), activation=relu))
layers.append(Dropout(keep=0.5))
layers.append(Affine(nout=4096, init=init1, bias=Constant(1), activation=relu))
layers.append(Dropout(keep=0.5))
layers.append(Affine(nout=1000, init=init1, bias=Constant(-7), activation=Softmax()))
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
opt = MultiOptimizer({'default': opt_gdm, 'Bias': opt_biases})
mlp = Model(layers=layers)
# configure callbacks
callbacks = Callbacks(mlp, train, output_file=args.output_file)
if args.validation_freq:
    callbacks.add_validation_callback(test, args.validation_freq)
if args.save_path:
    checkpoint_schedule = range(1, args.epochs)
    callbacks.add_serialize_callback(checkpoint_schedule, args.save_path, history=2)
mlp.fit(train, optimizer=opt, num_epochs=args.epochs, cost=cost, callbacks=callbacks)
test.exit_batch_provider()
train.exit_batch_provider()