This article collects typical usage examples of the torch.nn.parallel method in Python. If you are wondering what nn.parallel is for or how to use it, the curated code example below may help. You can also explore other usage examples from the torch.nn module.
One code example of the nn.parallel method is shown below; examples on this page are ordered by popularity by default.
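For orientation, here is a minimal, self-contained sketch of how torch.nn.parallel is commonly used to replicate a model across GPUs. The toy module and tensor shapes are illustrative placeholders and are not taken from the example that follows.

```python
import torch
import torch.nn as nn

# A toy model; any nn.Module works the same way.
model = nn.Linear(128, 10)

if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    model = model.cuda()
    # nn.parallel.DataParallel splits each input batch across the visible GPUs,
    # runs the replicas in parallel, and gathers the outputs on device 0.
    model = nn.parallel.DataParallel(model)

inputs = torch.randn(32, 128)
if torch.cuda.is_available():
    inputs = inputs.cuda()
outputs = model(inputs)  # shape: (32, 10)
```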
Example 1: get_parser
# Required import: from torch import nn [as alias]
# Or: from torch.nn import parallel [as alias]
import argparse
import os

import numpy as np

# Note: mem_info() and random_int() are helpers from the original project and are
# not defined in this snippet; mem_info() presumably reports per-GPU memory usage
# and random_int() returns a random integer used to pick a port.

def get_parser():
    parser = argparse.ArgumentParser(description='PSMNet')
    parser.add_argument('-cfg', '--cfg', '--config', default='./configs/default/config_car.py', help='config path')
    parser.add_argument('--data_path', default='./data/kitti/training/', help='data_path')
    parser.add_argument('--epochs', type=int, default=60, help='number of epochs to train')
    parser.add_argument('--loadmodel', default=None, help='load model')
    parser.add_argument('--savemodel', default=None, help='save model')
    parser.add_argument('--debug', action='store_true', default=False, help='debug mode')
    parser.add_argument('--seed', type=int, default=1, metavar='S', help='random seed (default: 1)')
    parser.add_argument('--devices', '-d', type=str, default=None)
    parser.add_argument('--lr_scale', type=int, default=40, metavar='S', help='lr scale')
    parser.add_argument('--split_file', default='./data/kitti/train.txt', help='split file')
    parser.add_argument('--btrain', '-btrain', type=int, default=None)
    parser.add_argument('--start_epoch', type=int, default=None)
    parser.add_argument('-j', '--workers', default=4, type=int, metavar='N',
                        help='number of data loading workers (default: 4)')
    ## for distributed training
    parser.add_argument('--world-size', default=1, type=int,
                        help='number of nodes for distributed training')
    parser.add_argument('--rank', default=0, type=int,
                        help='node rank for distributed training')
    parser.add_argument('--dist-url', type=str,
                        help='url used to set up distributed training')
    parser.add_argument('--dist-backend', default='nccl', type=str,
                        help='distributed backend')
    parser.add_argument('--multiprocessing-distributed', action='store_true',
                        help='Use multi-processing distributed training to launch '
                             'N processes per node, which has N GPUs. This is the '
                             'fastest way to use PyTorch for either single node or '
                             'multi node data parallel training')
    args = parser.parse_args()

    # If no devices were given, pick the GPU with the smallest reported memory usage.
    if not args.devices:
        args.devices = str(np.argmin(mem_info()))

    # Expand a range spec such as "0-3" into the comma-separated list "0,1,2,3".
    if args.devices is not None and '-' in args.devices:
        gpus = args.devices.split('-')
        gpus[0] = 0 if not gpus[0].isdigit() else int(gpus[0])
        gpus[1] = len(mem_info()) if not gpus[1].isdigit() else int(gpus[1]) + 1
        args.devices = ','.join(map(lambda x: str(x), list(range(*gpus))))

    # Choose a random local port for torch.distributed if none was supplied.
    if not args.dist_url:
        args.dist_url = "tcp://127.0.0.1:{}".format(random_int() % 30000)

    print('Using GPU:{}'.format(args.devices))
    os.environ['CUDA_VISIBLE_DEVICES'] = args.devices
    return args
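The parser above only prepares the distributed-training arguments; in the original repository they would typically be handed to torch.nn.parallel.DistributedDataParallel once the training processes are running. The sketch below shows that wiring under common assumptions; the setup_model name, the gpu argument, and the model passed in are illustrative placeholders, not code from the example.

```python
import torch
import torch.distributed as dist
import torch.nn as nn

def setup_model(args, gpu, model):
    # Initialise the process group from the URL/backend parsed by get_parser().
    dist.init_process_group(backend=args.dist_backend,
                            init_method=args.dist_url,
                            world_size=args.world_size,
                            rank=args.rank)
    torch.cuda.set_device(gpu)
    model = model.cuda(gpu)
    # Wrap the model so gradients are all-reduced across processes during backward().
    model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
    return model
```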