This article collects typical usage examples of the Python method tensorpack.utils.logger.error. If you are wondering how logger.error is used in Python, or are looking for concrete examples of it, the code samples selected below may help. You can also explore further usage examples from the module it belongs to, tensorpack.utils.logger.
Two code examples of the logger.error method are shown below, sorted by popularity by default.
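Before going through the examples, here is a minimal sketch of a typical logger.error call (my own illustration, not taken from the examples below; the load_checkpoint helper and the file path are hypothetical). tensorpack.utils.logger exposes error() as a module-level function that forwards to the underlying standard-library logger, so it accepts an ordinary message string:

from tensorpack.utils import logger

def load_checkpoint(path):
    try:
        with open(path, 'rb') as f:
            return f.read()
    except IOError:
        # Report the failure through tensorpack's logger, then re-raise
        # so the caller still sees the exception.
        logger.error("Cannot read checkpoint file: {}".format(path))
        raise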
Example 1: convert_param_name
# Required import: from tensorpack.utils import logger [as alias]
# Or: from tensorpack.utils.logger import error [as alias]
def convert_param_name(param):
    """Rename Caffe-style parameter names to tensorpack ResNet names."""
    resnet_param = {}
    for k, v in six.iteritems(param):
        try:
            newname = name_conversion(k)
        except Exception:
            # Log the offending layer name before re-raising, so the failure
            # is easy to locate in a long conversion run.
            logger.error("Exception when processing caffe layer {}".format(k))
            raise
        logger.info("Name Transform: " + k + ' --> ' + newname)
        resnet_param[newname] = v
    return resnet_param
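A short usage sketch for the function above (the dictionary key and the numpy placeholder are hypothetical; in the original script the parameters are loaded from a converted Caffe model dump):

import numpy as np

caffe_param = {'conv1/W': np.zeros((7, 7, 3, 64))}  # hypothetical layer name and value
resnet_param = convert_param_name(caffe_param)
# If name_conversion cannot handle a key, logger.error reports the offending
# layer name and the exception propagates to the caller.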
Example 2: get_config
# Required import: from tensorpack.utils import logger [as alias]
# Or: from tensorpack.utils.logger import error [as alias]
def get_config(model, nr_tower):
    batch = TOTAL_BATCH_SIZE // nr_tower
    logger.info("Running on {} towers. Batch size per tower: {}".format(nr_tower, batch))

    dataset_train = get_data('train', batch)
    dataset_val = get_data('val', batch)

    step_size = 1280000 // TOTAL_BATCH_SIZE
    max_iter = 3 * 10**5
    max_epoch = (max_iter // step_size) + 1
    callbacks = [
        ModelSaver(),
        ScheduledHyperParamSetter('learning_rate',
                                  [(0, 0.5), (max_iter, 0)],
                                  interp='linear', step_based=True),
        EstimatedTimeLeft()
    ]
    infs = [ClassificationError('wrong-top1', 'val-error-top1'),
            ClassificationError('wrong-top5', 'val-error-top5')]
    if nr_tower == 1:
        # single-GPU inference with queue prefetch
        callbacks.append(InferenceRunner(QueueInput(dataset_val), infs))
    else:
        # multi-GPU inference (with mandatory queue prefetch)
        callbacks.append(DataParallelInferenceRunner(
            dataset_val, infs, list(range(nr_tower))))

    return TrainConfig(
        model=model,
        dataflow=dataset_train,
        callbacks=callbacks,
        steps_per_epoch=step_size,
        max_epoch=max_epoch,
    )
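Note that this example itself only calls logger.info; a natural spot for logger.error in the same training script would be validating the tower count before building the config. A minimal sketch under assumptions (the divisibility check is my own illustration and not part of the original example; TOTAL_BATCH_SIZE and model are assumed to be defined by the surrounding script):

from tensorpack.utils import logger

nr_tower = 3  # hypothetical value, e.g. the number of GPUs requested
if TOTAL_BATCH_SIZE % nr_tower != 0:
    logger.error("TOTAL_BATCH_SIZE ({}) is not divisible by nr_tower ({})".format(
        TOTAL_BATCH_SIZE, nr_tower))
    raise ValueError("Adjust the batch size or the number of GPUs.")
config = get_config(model, nr_tower)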