

Python comm.get_world_size Method: Code Examples

This article collects typical usage examples of the Python method maskrcnn_benchmark.utils.comm.get_world_size. If you are wondering what comm.get_world_size does, how to call it, or what real usages look like, the curated examples below should help. You can also explore further usage examples from the containing module, maskrcnn_benchmark.utils.comm.


Four code examples of the comm.get_world_size method are shown below, sorted by popularity by default.
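For context, get_world_size reports how many processes participate in the current torch.distributed job, falling back to 1 when distributed training is not active. The following is a minimal sketch of the usual implementation in maskrcnn_benchmark's comm.py; verify it against the version you depend on:

import torch.distributed as dist

def get_world_size():
    # Report a single-process "world" when torch.distributed is
    # unavailable or no process group has been initialized.
    if not dist.is_available():
        return 1
    if not dist.is_initialized():
        return 1
    return dist.get_world_size()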

Example 1: reduce_loss_dict

# Required import: from maskrcnn_benchmark.utils import comm [as alias]
# Or: from maskrcnn_benchmark.utils.comm import get_world_size [as alias]
import torch
import torch.distributed as dist
from maskrcnn_benchmark.utils.comm import get_world_size

def reduce_loss_dict(loss_dict):
    """
    Reduce the loss dictionary from all processes so that process with rank
    0 has the averaged results. Returns a dict with the same fields as
    loss_dict, after reduction.
    """
    world_size = get_world_size()
    if world_size < 2:
        return loss_dict
    with torch.no_grad():
        loss_names = []
        all_losses = []
        for k in sorted(loss_dict.keys()):
            loss_names.append(k)
            all_losses.append(loss_dict[k])
        all_losses = torch.stack(all_losses, dim=0)
        dist.reduce(all_losses, dst=0)
        if dist.get_rank() == 0:
            # only main process gets accumulated, so only divide by
            # world_size in this case
            all_losses /= world_size
        reduced_losses = {k: v for k, v in zip(loss_names, all_losses)}
    return reduced_losses 
Developer: Res2Net, Project: Res2Net-maskrcnn, Lines of code: 25, Source: trainer.py
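As a usage sketch, reduce_loss_dict is typically applied to the per-iteration loss dict for logging, while the backward pass still uses the local, unreduced losses. The model, images, targets, and optimizer names below are placeholders, not part of the example above:

loss_dict = model(images, targets)   # e.g. {"loss_classifier": ..., "loss_box_reg": ...}
losses = sum(loss for loss in loss_dict.values())

# Averaged across ranks for logging only; rank 0 receives the result.
loss_dict_reduced = reduce_loss_dict(loss_dict)
losses_reduced = sum(loss for loss in loss_dict_reduced.values())

optimizer.zero_grad()
losses.backward()
optimizer.step()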

Example 2: reduce_loss_dict

# Required import: from maskrcnn_benchmark.utils import comm [as alias]
# Or: from maskrcnn_benchmark.utils.comm import get_world_size [as alias]
# (torch and torch.distributed as dist are also assumed, as in Example 1)
def reduce_loss_dict(loss_dict):
    """
    Reduce the loss dictionary from all processes so that process with rank
    0 has the averaged results. Returns a dict with the same fields as
    loss_dict, after reduction.
    """
    world_size = get_world_size()
    if world_size < 2:
        return loss_dict
    with torch.no_grad():
        loss_names = []
        all_losses = []
        # Iterates in insertion order; every rank must build loss_dict
        # with the same key order for the element-wise reduce to line up.
        for k, v in loss_dict.items():
            loss_names.append(k)
            all_losses.append(v)
        all_losses = torch.stack(all_losses, dim=0)
        dist.reduce(all_losses, dst=0)
        if dist.get_rank() == 0:
            # only main process gets accumulated, so only divide by
            # world_size in this case
            all_losses /= world_size
        reduced_losses = {k: v for k, v in zip(loss_names, all_losses)}
    return reduced_losses 
Developer: HRNet, Project: HRNet-MaskRCNN-Benchmark, Lines of code: 25, Source: trainer.py
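The one difference from Example 1 is the iteration order: this version walks loss_dict.items() in insertion order instead of sorting the keys. dist.reduce sums the stacked tensor element-wise across ranks, so every process must stack its losses in the same order; sorting the keys, as Examples 1 and 3 do, makes that ordering explicit rather than relying on each rank constructing the dict identically.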

Example 3: reduce_loss_dict

# Required import: from maskrcnn_benchmark.utils import comm [as alias]
# Or: from maskrcnn_benchmark.utils.comm import get_world_size [as alias]
# (torch and torch.distributed as dist are also assumed, as in Example 1)
def reduce_loss_dict(loss_dict):
    """
    Reduce the loss dictionary from all processes so that process with rank
    0 has the averaged results. Returns a dict with the same fields as
    loss_dict, after reduction.
    """
    world_size = get_world_size()
    if world_size < 2:
        return loss_dict
    with torch.no_grad():
        loss_names = []
        all_losses = []
        for k in sorted(loss_dict.keys()):
            loss_names.append(k)
            all_losses.append(loss_dict[k])
        all_losses = torch.stack(all_losses, dim=0)
        dist.reduce(all_losses, dst=0)
        if dist.get_rank() == 0:
            # only main process gets accumulated, so only divide by
            # world_size in this case
            all_losses /= world_size
        reduced_losses = {k: v for k, v in zip(loss_names, all_losses)}
    return reduced_losses

# Instead of zeroing, set parameter grads to None;
# this prevents an extraneous copy since we're not accumulating gradients
Developer: mlperf, Project: training, Lines of code: 28, Source: trainer.py
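The trailing comment refers to a helper that follows in the mlperf trainer; below is a minimal sketch of the idea (the function name is illustrative, not taken from the source):

def set_grads_to_none(model):
    # Setting .grad to None, rather than zeroing in place, lets the next
    # backward pass allocate fresh gradient tensors and skips the
    # zero-fill, which is safe when gradients are not accumulated.
    for param in model.parameters():
        param.grad = None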

Example 4: make_init_data_loader

# Required import: from maskrcnn_benchmark.utils import comm [as alias]
# Or: from maskrcnn_benchmark.utils.comm import get_world_size [as alias]
# Excerpt from build.py; helpers such as import_file, build_transforms,
# build_dataset, save_labels, make_data_sampler, make_batch_data_sampler,
# and BatchCollator are defined or imported earlier in that file.
def make_init_data_loader(cfg, is_distributed=False, images_per_batch=32):
    num_gpus = get_world_size()
    assert (
        images_per_batch % num_gpus == 0
    ), "SOLVER.IMS_PER_BATCH ({}) must be divisible by the number of GPUs ({}) used.".format(
        images_per_batch, num_gpus)
    
    images_per_gpu = images_per_batch // num_gpus
    shuffle = is_distributed
    num_iters = None # 1 epoch
    
    start_iter = 0

    if images_per_gpu > 1:
        logger = logging.getLogger(__name__)
        logger.warning(
            "When using more than one image per GPU you may encounter "
            "an out-of-memory (OOM) error if your GPU does not have "
            "sufficient memory. If this happens, you can reduce "
            "SOLVER.IMS_PER_BATCH (for training) or "
            "TEST.IMS_PER_BATCH (for inference). For training, you must "
            "also adjust the learning rate and schedule length according "
            "to the linear scaling rule. See for example: "
            "https://github.com/facebookresearch/Detectron/blob/master/configs/getting_started/tutorial_1gpu_e2e_faster_rcnn_R-50-FPN.yaml#L14"
        )

    # Group images with similar aspect ratios. Here only two groups are
    # formed (width / height > 1 and width / height <= 1), though the
    # code supports more general grouping strategies.
    aspect_grouping = [1] if cfg.DATALOADER.ASPECT_RATIO_GROUPING else []

    paths_catalog = import_file(
        "maskrcnn_benchmark.config.paths_catalog", cfg.PATHS_CATALOG, True
    )
    DatasetCatalog = paths_catalog.DatasetCatalog
    dataset_list = cfg.DATASETS.TRAIN

    # Build the training-time transforms and the datasets they apply to
    transforms = build_transforms(cfg, is_train=True)
    datasets = build_dataset(dataset_list, transforms, DatasetCatalog, is_train=True)

    save_labels(datasets, cfg.OUTPUT_DIR)

    data_loaders = []
    for dataset in datasets:
        sampler = make_data_sampler(dataset, shuffle, is_distributed)
        batch_sampler = make_batch_data_sampler(
            dataset, sampler, aspect_grouping, images_per_gpu, num_iters, start_iter
        )
        collator = BatchCollator(cfg.DATALOADER.SIZE_DIVISIBILITY)
        num_workers = cfg.DATALOADER.NUM_WORKERS
        data_loader = torch.utils.data.DataLoader(
            dataset,
            num_workers=num_workers,
            batch_sampler=batch_sampler,
            collate_fn=collator,
        )
        data_loaders.append(data_loader)
    return data_loaders[0] 
Developer: ChenJoya, Project: sampling-free, Lines of code: 61, Source: build.py
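A hypothetical call site, assuming cfg is a maskrcnn_benchmark config node loaded elsewhere; the unpacking below follows BatchCollator's (images, targets, img_ids) output:

data_loader = make_init_data_loader(cfg, is_distributed=False, images_per_batch=32)
for images, targets, _ in data_loader:
    pass  # one full pass over the dataset, since num_iters=None means a single epoch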


Note: The maskrcnn_benchmark.utils.comm.get_world_size examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub/MSDocs. The snippets were selected from open-source projects contributed by their authors; copyright belongs to the original authors, and distribution or use should follow each project's license. Do not repost without permission.