This article collects typical usage examples of Python's ignite.metrics.Metric. If you are wondering how metrics.Metric is actually used, the selected code examples below may help. You can also explore the ignite.metrics module further.
Four code examples of metrics.Metric are shown below, sorted by popularity by default.
Example 1: __init__
# Required import: from ignite import metrics
# Or: from ignite.metrics import Metric
def __init__(self, src=None, alpha=0.98, output_transform=None):
    if not (isinstance(src, Metric) or src is None):
        raise TypeError("Argument src should be a Metric or None.")
    if not (0.0 < alpha <= 1.0):
        raise ValueError("Argument alpha should be a float between 0.0 and 1.0.")

    if isinstance(src, Metric):
        # Track the running average of another Metric; output_transform is not allowed here.
        if output_transform is not None:
            raise ValueError("Argument output_transform should be None if src is a Metric.")
        self.src = src
        self._get_src_value = self._get_metric_value
        self.iteration_completed = self._metric_iteration_completed
    else:
        # Track the running average of a value extracted from the process function's output.
        if output_transform is None:
            raise ValueError("Argument output_transform should not be None if src corresponds "
                             "to the output of process function.")
        self._get_src_value = self._get_output_value
        self.update = self._output_update

    self.alpha = alpha
    super(RunningAverage, self).__init__(output_transform=output_transform)
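This __init__ comes from ignite's RunningAverage metric, which keeps an exponential moving average of either another Metric or a value taken from the process function's output. A minimal usage sketch (the trainer engine and the shape of its output are assumptions, not part of the example above):

from ignite.metrics import RunningAverage

# running average of the raw loss returned by the trainer's process function
RunningAverage(output_transform=lambda out: out[0], alpha=0.98).attach(trainer, "running_loss")
# alternatively, wrap an existing Metric instance:
# RunningAverage(src=Accuracy()).attach(trainer, "running_acc")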
Example 2: __init__
# Required import: from ignite import metrics
# Or: from ignite.metrics import Metric
def __init__(self, metric: Metric):
    # Wrap a source metric, reset internal state, and drop the last element of the
    # engine output before forwarding it (output_transform=lambda x: x[:-1]).
    self.source_metric = metric
    self.reset()
    super().__init__(lambda x: x[:-1])
Example 3: update
# Required import: from ignite import metrics
# Or: from ignite.metrics import Metric
# Also needed: from ignite.metrics import MetricsLambda
def update(self, output):
    if not isinstance(self.source_metric, MetricsLambda):
        self.source_metric.update(output)
        return
    # If the source metric is composed of several metrics, e.g. a MetricsLambda,
    # we need to update each sub-metric separately.
    for source in self.source_metric.args:
        if isinstance(source, Metric):
            source.update(output)
    return
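For context, a MetricsLambda is a metric composed from other metrics, and it keeps its constituent metrics in .args; that is what the loop above walks over. A small illustrative sketch of such a composed metric (the F1 construction below is only an example, not taken from the code above):

from ignite.metrics import MetricsLambda, Precision, Recall

precision = Precision(average=False)
recall = Recall(average=False)
# f1.args holds (precision, recall), so each sub-metric must receive its own update() calls
f1 = MetricsLambda(lambda p, r: (2 * p * r / (p + r + 1e-20)).mean().item(), precision, recall)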
Example 4: create_supervised_evaluator
# Required import: from ignite import metrics
# Or: from ignite.metrics import Metric
# Also needed: import torch; from typing import Any, Callable, Dict, Optional, Sequence, Tuple, Union;
#   and typically from ignite.engine import Engine, _prepare_batch
def create_supervised_evaluator(
    model: torch.nn.Module,
    metrics: Optional[Dict[str, Metric]] = None,
    device: Optional[Union[str, torch.device]] = None,
    non_blocking: bool = False,
    prepare_batch: Callable = _prepare_batch,
    output_transform: Callable = lambda x, y, y_pred: (y_pred, y),
) -> Engine:
    """
    Factory function for creating an evaluator for supervised models.

    Args:
        model (`torch.nn.Module`): the model to evaluate.
        metrics (dict of str - :class:`~ignite.metrics.Metric`): a map of metric names to Metrics.
        device (str, optional): device type specification (default: None).
            Applies to batches after starting the engine. Model *will not* be moved.
        non_blocking (bool, optional): if True and this copy is between CPU and GPU, the copy may occur
            asynchronously with respect to the host. For other cases, this argument has no effect.
        prepare_batch (callable, optional): function that receives `batch`, `device`, `non_blocking` and outputs
            a tuple of tensors `(batch_x, batch_y)`.
        output_transform (callable, optional): function that receives 'x', 'y', 'y_pred' and returns the value
            to be assigned to the engine's state.output after each iteration. Default is returning `(y_pred, y)`,
            which fits the output expected by metrics. If you change it, you should use `output_transform` in metrics.

    Note:
        `engine.state.output` for this engine is defined by the `output_transform` parameter and is
        a tuple of `(batch_pred, batch_y)` by default.

    .. warning::
        The internal use of `device` has changed.
        `device` will now *only* be used to move the input data to the correct device.
        The `model` should be moved by the user before creating an optimizer.
        For more information see:

        - `PyTorch Documentation <https://pytorch.org/docs/stable/optim.html#constructing-it>`_
        - `PyTorch's Explanation <https://github.com/pytorch/pytorch/issues/7844#issuecomment-503713840>`_

    Returns:
        Engine: an evaluator engine with supervised inference function.
    """
    metrics = metrics or {}

    def _inference(engine: Engine, batch: Sequence[torch.Tensor]) -> Union[Any, Tuple[torch.Tensor]]:
        model.eval()
        with torch.no_grad():
            x, y = prepare_batch(batch, device=device, non_blocking=non_blocking)
            y_pred = model(x)
            return output_transform(x, y, y_pred)

    evaluator = Engine(_inference)

    for name, metric in metrics.items():
        metric.attach(evaluator, name)

    return evaluator
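A usage sketch for this factory (the model and val_loader objects are assumed to exist, and the printed metric values are purely illustrative):

import torch.nn.functional as F
from ignite.metrics import Accuracy, Loss

evaluator = create_supervised_evaluator(
    model,
    metrics={"accuracy": Accuracy(), "nll": Loss(F.nll_loss)},
    device="cuda" if torch.cuda.is_available() else "cpu",
)
state = evaluator.run(val_loader)
print(state.metrics)  # e.g. {'accuracy': 0.91, 'nll': 0.27}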