

Python muji.Allreduce Method: Code Examples

This article collects typical usage examples of the Python method caffe2.python.muji.Allreduce. If you are wondering what muji.Allreduce does or how to call it, the examples selected below may help. You can also browse other usage examples from the caffe2.python.muji module.

Two code examples of muji.Allreduce are shown below, ordered by popularity.

Example 1: _add_allreduce_graph

# Required import: from caffe2.python import muji
# Or, as an alias: from caffe2.python.muji import Allreduce
def _add_allreduce_graph(model):
    """Construct the graph that performs Allreduce on the gradients."""
    # Need to all-reduce the per-GPU gradients if training with more than 1 GPU
    all_params = model.TrainableParams()
    assert len(all_params) % cfg.NUM_GPUS == 0
    # The model parameters are replicated on each GPU; get the number of
    # distinct parameter blobs (i.e., the number of parameter blobs on
    # each GPU)
    params_per_gpu = int(len(all_params) / cfg.NUM_GPUS)
    with c2_utils.CudaScope(0):
        # Iterate over distinct parameter blobs
        for i in range(params_per_gpu):
            # Gradients from all GPUs for this parameter blob
            gradients = [
                model.param_to_grad[p] for p in all_params[i::params_per_gpu]
            ]
            if len(gradients) > 0:
                if cfg.USE_NCCL:
                    model.net.NCCLAllreduce(gradients, gradients)
                else:
                    muji.Allreduce(model.net, gradients, reduced_affix='') 
Author: yihui-he · Project: KL-Loss · Lines: 23 · File: optimizer.py
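The slicing expression `all_params[i::params_per_gpu]` in the example above is the key to grouping gradients: because the trainable-parameter list is laid out GPU-major (all of GPU 0's blobs, then all of GPU 1's, and so on), a stride of `params_per_gpu` picks out the i-th blob from every GPU. The sketch below demonstrates this with hypothetical blob names (the names are illustrative, not from the caffe2 API):

```python
# Hypothetical parameter list replicated across 2 GPUs; each GPU holds the
# same 3 distinct parameter blobs, listed GPU-major.
NUM_GPUS = 2
all_params = [
    'gpu_0/conv_w', 'gpu_0/conv_b', 'gpu_0/fc_w',
    'gpu_1/conv_w', 'gpu_1/conv_b', 'gpu_1/fc_w',
]
params_per_gpu = len(all_params) // NUM_GPUS  # 3 distinct blobs

# all_params[i::params_per_gpu] gathers the i-th blob from every GPU, which
# is exactly the group that one Allreduce call reduces together.
groups = [all_params[i::params_per_gpu] for i in range(params_per_gpu)]
print(groups[0])  # ['gpu_0/conv_w', 'gpu_1/conv_w']
print(groups[1])  # ['gpu_0/conv_b', 'gpu_1/conv_b']
```

This is why the example asserts `len(all_params) % cfg.NUM_GPUS == 0`: the stride trick only groups blobs correctly when every GPU contributes the same number of parameter blobs.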


Example 2: build_data_parallel_model

# Required import: from caffe2.python import muji
# Or, as an alias: from caffe2.python.muji import Allreduce
def build_data_parallel_model(model, single_gpu_build_func):
    if model.train:
        all_loss_gradients = {}  # Will include loss gradients from all GPUs
        # Build the model on each GPU with correct name and device scoping
        for gpu_id in range(cfg.NUM_GPUS):
            with core.NameScope('gpu_{}'.format(gpu_id)):
                with core.DeviceScope(muji.OnGPU(gpu_id)):
                    all_loss_gradients.update(
                        single_gpu_build_func(model))
        # Add backward pass on all GPUs
        model.AddGradientOperators(all_loss_gradients)
        if cfg.NUM_GPUS > 1:
            # Need to all-reduce the per-GPU gradients if training with more
            # than 1 GPU
            all_params = model.TrainableParams()
            assert len(all_params) % cfg.NUM_GPUS == 0, \
                'This should not happen.'
            # The model parameters are replicated on each GPU; get the number of
            # distinct parameter blobs (i.e., the number of parameter blobs on
            # each GPU)
            params_per_gpu = int(len(all_params) / cfg.NUM_GPUS)
            with core.DeviceScope(muji.OnGPU(cfg.ROOT_GPU_ID)):
                # Iterate over distinct parameter blobs
                for i in range(params_per_gpu):
                    # Gradients from all GPUs for this parameter blob
                    gradients = [
                        model.param_to_grad[p]
                        for p in all_params[i::params_per_gpu]
                    ]
                    if len(gradients) > 0:
                        if cfg.USE_NCCL:
                            model.net.NCCLAllreduce(gradients, gradients)
                        else:
                            muji.Allreduce(
                                model.net, gradients, reduced_affix='')
        for gpu_id in range(cfg.NUM_GPUS):
            # After all-reduce, all GPUs perform SGD updates on their identical
            # params and gradients in parallel
            add_parameter_update_ops(model, gpu_id)
    else:
        # Testing only supports running on a single GPU
        with core.NameScope('gpu_{}'.format(cfg.ROOT_GPU_ID)):
            with core.DeviceScope(muji.OnGPU(cfg.ROOT_GPU_ID)):
                single_gpu_build_func(model) 
Author: facebookresearch · Project: DetectAndTrack · Lines: 46 · File: model_builder.py
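Both examples rely on the same allreduce contract: after the operation, every replica holds the elementwise sum of all replicas' gradients, so each GPU can then run an identical SGD update in parallel. The toy sketch below shows that contract with plain Python lists; it is a framework-free illustration of the semantics, not how muji works internally (muji.Allreduce adds Copy/Add operators to the caffe2 net graph rather than executing eagerly like this):

```python
def naive_allreduce(per_gpu_grads):
    """Sum the gradients across replicas, then broadcast the result back.

    per_gpu_grads: a list with one equal-length gradient list per GPU.
    Returns a new list where every replica holds the elementwise sum.
    """
    summed = [sum(vals) for vals in zip(*per_gpu_grads)]
    # Broadcast: after an allreduce, all replicas agree on the reduced value.
    return [list(summed) for _ in per_gpu_grads]

grads = [[1.0, 2.0], [3.0, 4.0]]  # 2 GPUs, a 2-element gradient each
print(naive_allreduce(grads))  # [[4.0, 6.0], [4.0, 6.0]]
```

Because the reduced gradients are identical everywhere, the per-GPU `add_parameter_update_ops` calls in Example 2 keep the replicated parameters in lockstep without any further synchronization.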



Note: the caffe2.python.muji.Allreduce examples in this article were collected from open-source projects hosted on platforms such as GitHub and MSDocs. Copyright for each snippet remains with its original authors; consult the corresponding project's license before reusing or redistributing the code.