

Python PyTorch detect_anomaly Usage and Code Examples


This article briefly introduces the usage of torch.autograd.detect_anomaly in Python.

Usage:

class torch.autograd.detect_anomaly

Context-manager that enables anomaly detection for the autograd engine.

This does two things:

  • Running the forward pass with detection enabled allows the backward pass to print the traceback of the forward operation that created the failing backward function.

  • Any backward computation that generates “nan” values will raise an error (see the sketch after this list).
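
A minimal sketch of the second point: torch.sqrt has an infinite gradient at zero, so the chain rule produces 0 * inf = nan during the backward pass, and anomaly mode turns that into an error (the exact backward-node name in the message varies by PyTorch version):

>>> import torch
>>> from torch import autograd
>>> x = torch.zeros(1, requires_grad=True)
>>> with autograd.detect_anomaly():
...     out = (torch.sqrt(x) * 0).sum()  # forward value is a finite 0.0
...     out.backward()                   # backward computes 0 / (2*sqrt(0)) = nan
    RuntimeError: Function 'SqrtBackward0' returned nan values in its 0th output.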

Warning

This mode should be enabled only for debugging, as the additional checks will slow down your program execution.
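
When the problematic region is not known in advance, the same checks can also be toggled globally with torch.autograd.set_detect_anomaly, which takes a boolean mode:

>>> import torch
>>> torch.autograd.set_detect_anomaly(True)   # enable the checks for everything that follows
>>> # ... run the training step you want to debug ...
>>> torch.autograd.set_detect_anomaly(False)  # disable again to restore full speed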

Example

>>> import torch
>>> from torch import autograd
>>> class MyFunc(autograd.Function):
...     @staticmethod
...     def forward(ctx, inp):
...         return inp.clone()
...     @staticmethod
...     def backward(ctx, gO):
...         # Error during the backward pass
...         raise RuntimeError("Some error in backward")
...         return gO.clone()
>>> def run_fn(a):
...     out = MyFunc.apply(a)
...     return out.sum()
>>> inp = torch.rand(10, 10, requires_grad=True)
>>> out = run_fn(inp)
>>> out.backward()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/your/pytorch/install/torch/_tensor.py", line 93, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/your/pytorch/install/torch/autograd/__init__.py", line 90, in backward
        allow_unreachable=True)  # allow_unreachable flag
      File "/your/pytorch/install/torch/autograd/function.py", line 76, in apply
        return self._forward_cls.backward(self, *args)
      File "<stdin>", line 8, in backward
    RuntimeError: Some error in backward
>>> with autograd.detect_anomaly():
...     inp = torch.rand(10, 10, requires_grad=True)
...     out = run_fn(inp)
...     out.backward()
    Traceback of forward call that caused the error:
      File "tmp.py", line 53, in <module>
        out = run_fn(inp)
      File "tmp.py", line 44, in run_fn
        out = MyFunc.apply(a)
    Traceback (most recent call last):
      File "<stdin>", line 4, in <module>
      File "/your/pytorch/install/torch/_tensor.py", line 93, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/your/pytorch/install/torch/autograd/__init__.py", line 90, in backward
        allow_unreachable=True)  # allow_unreachable flag
      File "/your/pytorch/install/torch/autograd/function.py", line 76, in apply
        return self._forward_cls.backward(self, *args)
      File "<stdin>", line 8, in backward
    RuntimeError: Some error in backward
