This article collects typical usage examples of the Python method fairseq.tasks.FairseqTask. If you are wondering how to use tasks.FairseqTask, or looking for concrete examples of it, the selected code samples here may help. You can also explore further usage examples from its containing module, fairseq.tasks.
One code example of the tasks.FairseqTask method is shown below. Examples are sorted by popularity by default; you can upvote the ones you like or find useful, and your feedback helps the system recommend better Python code samples.
Example 1: load_diverse_ensemble_for_inference
# Required import: from fairseq import tasks [as alias]
# Or: from fairseq.tasks import FairseqTask [as alias]
from typing import List, Optional

import torch
from fairseq import tasks
# PathManager is assumed here to come from fairseq's file I/O wrapper; the
# original project may import it from its own utility module instead.
from fairseq.file_io import PathManager


def load_diverse_ensemble_for_inference(
    filenames: List[str], task: Optional[tasks.FairseqTask] = None
):
    """Load an ensemble of diverse models for inference.

    This method is similar to fairseq.utils.load_ensemble_for_inference
    but allows loading diverse models with non-uniform args.

    Args:
        filenames: List of file names to checkpoints
        task: Optional[FairseqTask]. If this isn't provided, we set up the task
            using the first checkpoint's model args loaded from the saved state.

    Return:
        models, args: Tuple of lists. models contains the loaded models, args
            the corresponding configurations.
        task: Either the input task or the task created within this function
            using args
    """
    # Load model architectures and weights, mapping each checkpoint to the CPU.
    checkpoints_data = []
    for filename in filenames:
        if not PathManager.exists(filename):
            raise IOError("Model file not found: {}".format(filename))
        with PathManager.open(filename, "rb") as f:
            checkpoints_data.append(
                torch.load(
                    f,
                    map_location=lambda s, l: torch.serialization.default_restore_location(
                        s, "cpu"
                    ),
                )
            )
    # Build the ensemble: derive the task from the first checkpoint's args if
    # none was given, then build and populate one model per checkpoint.
    ensemble = []
    if task is None:
        if hasattr(checkpoints_data[0]["args"], "mode"):
            checkpoints_data[0]["args"].mode = "eval"
        task = tasks.setup_task(checkpoints_data[0]["args"])
    for checkpoint_data in checkpoints_data:
        model = task.build_model(checkpoint_data["args"])
        model.load_state_dict(checkpoint_data["model"])
        ensemble.append(model)
    args_list = [s["args"] for s in checkpoints_data]
    return ensemble, args_list, task
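
As a brief usage sketch (the checkpoint paths below are hypothetical placeholders, not part of the original example), the function can be called with a list of checkpoint files and the returned models switched to evaluation mode before decoding:

# Usage sketch with hypothetical checkpoint paths.
checkpoint_paths = ["/checkpoints/model_a.pt", "/checkpoints/model_b.pt"]

models, args_list, task = load_diverse_ensemble_for_inference(checkpoint_paths)

# Put every model in evaluation mode before running inference.
for model in models:
    model.eval()

print("Loaded {} models for task {}".format(len(models), task.__class__.__name__))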