

Python model_helper.ModelHelper Method Code Examples

This article collects typical usage examples of the Python method caffe2.python.model_helper.ModelHelper. If you are wondering what model_helper.ModelHelper does, how to call it, or what real-world usage looks like, the selected code examples below may help. You can also explore further usage examples from the caffe2.python.model_helper module.


The following presents 15 code examples of the model_helper.ModelHelper method, sorted by popularity by default.
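
Before the examples, a minimal, self-contained sketch of the typical ModelHelper workflow may be useful (the net and blob names below are made up for illustration):

import numpy as np
from caffe2.python import brew, model_helper, workspace

# ModelHelper bundles two nets: param_init_net (one-off parameter
# initialization) and net (the forward/backward graph built via brew).
model = model_helper.ModelHelper(name="toy_net")
fc = brew.fc(model, "data", "fc", dim_in=4, dim_out=2)
softmax = brew.softmax(model, fc, "softmax")

workspace.FeedBlob("data", np.random.rand(1, 4).astype(np.float32))
workspace.RunNetOnce(model.param_init_net)   # initialize weights and biases once
workspace.CreateNet(model.net)
workspace.RunNet(model.net)                  # one forward pass
print(workspace.FetchBlob("softmax"))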

Example 1: AddTrainingOperators

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def AddTrainingOperators(model, softmax, label):
    """Adds training operators to the model."""
    xent = model.LabelCrossEntropy([softmax, label], 'xent')
    # compute the expected loss
    loss = model.AveragedLoss(xent, "loss")
    # track the accuracy of the model
    AddAccuracy(model, softmax, label)
    # use the average loss we just computed to add gradient operators to the model
    model.AddGradientOperators([loss])
    # do a simple stochastic gradient descent
    ITER = brew.iter(model, "iter")
    # set the learning rate schedule
    LR = model.LearningRate(
        ITER, "LR", base_lr=-0.1, policy="step", stepsize=1, gamma=0.999 )
    # ONE is a constant value that is used in the gradient update. We only need
    # to create it once, so it is explicitly placed in param_init_net.
    ONE = model.param_init_net.ConstantFill([], "ONE", shape=[1], value=1.0)
    # Now, for each parameter, we do the gradient updates.
    for param in model.params:
        # Note how we get the gradient of each parameter - ModelHelper keeps
        # track of that.
        param_grad = model.param_to_grad[param]
        # The update is a simple weighted sum: param = param + param_grad * LR
        model.WeightedSum([param, ONE, param_grad, LR], param) 
Developer: Azure, Project: batch-shipyard, Lines: 26, Source: mnist.py
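
A hedged usage sketch of how this helper is typically driven end to end; the tiny fully connected model, the stand-in AddAccuracy helper, and all blob names below are illustration-only assumptions (the real AddAccuracy lives elsewhere in mnist.py). Note that base_lr is negative because the update is param = param + LR * param_grad (WeightedSum adds), so descending the gradient requires LR < 0.

import numpy as np
from caffe2.python import brew, model_helper, workspace

def AddAccuracy(model, softmax, label):
    # minimal stand-in for the tutorial's AddAccuracy helper (an assumption)
    return brew.accuracy(model, [softmax, label], "accuracy")

model = model_helper.ModelHelper(name="toy_train")
fc = brew.fc(model, "data", "fc", dim_in=8, dim_out=4)
softmax = brew.softmax(model, fc, "softmax")
AddTrainingOperators(model, softmax, "label")   # the function shown above

workspace.FeedBlob("data", np.random.rand(2, 8).astype(np.float32))
workspace.FeedBlob("label", np.array([1, 3], dtype=np.int32))
workspace.RunNetOnce(model.param_init_net)
workspace.CreateNet(model.net)
workspace.RunNet(model.net, 10)                 # ten SGD steps
print(workspace.FetchBlob("loss"))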

Example 2: AddTrainingOperators

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def AddTrainingOperators(model, softmax, label):
    """Adds training operators to the model."""
    xent = model.LabelCrossEntropy([softmax, label], 'xent')
    # compute the expected loss
    loss = model.AveragedLoss(xent, "loss")
    # track the accuracy of the model
    AddAccuracy(model, softmax, label)
    # use the average loss we just computed to add gradient operators to the
    # model
    model.AddGradientOperators([loss])
    # do a simple stochastic gradient descent
    ITER = brew.iter(model, "iter")
    # set the learning rate schedule
    LR = model.LearningRate(
        ITER, "LR", base_lr=-0.1, policy="step", stepsize=1, gamma=0.999)
    # ONE is a constant value that is used in the gradient update. We only need
    # to create it once, so it is explicitly placed in param_init_net.
    ONE = model.param_init_net.ConstantFill([], "ONE", shape=[1], value=1.0)
    # Now, for each parameter, we do the gradient updates.
    for param in model.params:
        # Note how we get the gradient of each parameter - ModelHelper keeps
        # track of that.
        param_grad = model.param_to_grad[param]
        # The update is a simple weighted sum: param = param + param_grad * LR
        model.WeightedSum([param, ONE, param_grad, LR], param) 
Developer: lanpa, Project: tensorboardX, Lines: 27, Source: demo_caffe2.py

Example 3: __init__

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def __init__(self, device_option: DeviceOption):
        super(Caffe2Network, self).__init__()
        self.device_option = device_option

        self.train_model = model_helper.ModelHelper(name="train_default_net")
        self.test_model = model_helper.ModelHelper(name="test_default_net", init_params=False)
        self.train_net = self.train_model.net
        self.test_net = self.test_model.net
        self.train_init_net = self.train_model.param_init_net
        self.test_init_net = self.test_model.param_init_net
        self.workspace = workspace
        self.output_dict = {}
        self.param_names = None
        # dict that helps us remember that we already added the gradients to the graph for a given loss
        self.gradients_by_loss = {}
        self.is_cuda = (device_option.device_type == caffe2_pb2.CUDA) 
Developer: deep500, Project: deep500, Lines: 18, Source: caffe2_network.py
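
A brief, hedged instantiation sketch (the DeviceOption construction is standard Caffe2; everything else assumes the Caffe2Network class above is in scope):

from caffe2.proto import caffe2_pb2
from caffe2.python import core

# CPU option; on a CUDA build one could pass caffe2_pb2.CUDA and a GPU id instead.
cpu_option = core.DeviceOption(caffe2_pb2.CPU, 0)
network = Caffe2Network(cpu_option)   # class defined above
print(network.is_cuda)                # False for the CPU option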

Example 4: create_model

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def create_model(model_builder, model, enable_tensor_core, float16_compute, loss_scale=1.0):
    """Creates one model replica.

    :param obj model_builder: A model instance that contains `forward_pass_builder` method.
    :param model: Caffe2's model helper class instances.
    :type model: :py:class:`caffe2.python.model_helper.ModelHelper`
    :param bool enable_tensor_core: If true, Volta's tensor core ops are enabled.
    :param bool float16_compute: If true, float16 compute is requested for supported ops.
    :param float loss_scale: Scale loss for multi-GPU training.
    :return: Head nodes (softmax or loss depending on phase)
    """
    initializer = (pFP16Initializer if model_builder.dtype == 'float16' else Initializer)
    with brew.arg_scope([brew.conv, brew.fc],
                        WeightInitializer=initializer,
                        BiasInitializer=initializer,
                        enable_tensor_core=enable_tensor_core,
                        float16_compute=float16_compute):
        outputs = model_builder.forward_pass_builder(model, loss_scale=loss_scale)
    return outputs 
Developer: HewlettPackard, Project: dlcookbook-dlbs, Lines: 20, Source: benchmarks.py
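
A hedged sketch of the same arg_scope pattern applied directly; the import path of the FP16 initializers is an assumption (some Caffe2 versions expose PseudoFP16Initializer instead of pFP16Initializer):

from caffe2.python import brew, model_helper
# Import path is an assumption; adjust to your Caffe2 version.
from caffe2.python.modeling.initializers import Initializer, pFP16Initializer

model = model_helper.ModelHelper(name="fp16_demo")
use_fp16 = True
initializer = pFP16Initializer if use_fp16 else Initializer
with brew.arg_scope([brew.conv, brew.fc],
                    WeightInitializer=initializer,
                    BiasInitializer=initializer,
                    enable_tensor_core=True,
                    float16_compute=True):
    # Layers created inside the scope pick up the FP16-friendly settings.
    fc1 = brew.fc(model, "data", "fc1", dim_in=128, dim_out=64)
    fc2 = brew.fc(model, fc1, "fc2", dim_in=64, dim_out=10)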

Example 5: test_inference

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def test_inference(self):
        """caffe2_benchmarks ->  TestCaffe2Benchmarks::test_inference    [Caffe2 CPU/GPU inference.]"""
        print("Testing inference")
        for params in itertools.product(self.models, self.batch_sizes, self.devices):
            if params[0] in self.gpu_skip_models:
                continue
            model = model_helper.ModelHelper(name=params[0])
            name, times = benchmark_inference(
                model,
                {'model':params[0], 'phase':'inference', 'batch_size':params[1],
                 'num_batches':self.num_batches, 'num_warmup_batches':self.num_warmup_iters,
                 'num_gpus':self.num_gpus, 'device':params[2], 'dtype':'float',
                 'enable_tensor_core':False}
            )
            self.assertEqual(len(times), self.num_batches)
            print("model=%s, name=%s, batch=%d, device=%s, time=%f" %\
                  (params[0], name, params[1], params[2], 1000.0*np.mean(times)))
            workspace.ResetWorkspace() 
Developer: HewlettPackard, Project: dlcookbook-dlbs, Lines: 20, Source: test_benchmarks.py

Example 6: test_training_gpu

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def test_training_gpu(self):
        """caffe2_benchmarks ->  TestCaffe2Benchmarks::test_training_gpu [Caffe2 GPU training.]"""
        print("Testing GPU training")
        for params in itertools.product(self.models, self.batch_sizes, self.gpus):
            if params[0] in self.gpu_skip_models:
                continue
            model = model_helper.ModelHelper(name=params[0])
            name, times = benchmark_training(
                model,
                {'model':params[0], 'phase':'training', 'batch_size':params[1],
                 'num_batches':self.num_batches, 'num_warmup_batches':self.num_warmup_iters,
                 'num_gpus':len(params[2].split()), 'device':'gpu', 'dtype':'float',
                 'enable_tensor_core':False}
            )
            self.assertEqual(len(times), self.num_batches)
            print("model=%s, name=%s, batch=%d, gpus=%s, time=%f" %\
                  (params[0], name, params[1], params[2], 1000.0*np.mean(times)))
            workspace.ResetWorkspace() 
Developer: HewlettPackard, Project: dlcookbook-dlbs, Lines: 20, Source: test_benchmarks.py

Example 7: main

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def main():
    args = parser.parse_args()
    args.gpu_id = 0

    model = model_helper.ModelHelper(name="le_net", init_params=False)

    # Bring in the init net from init_net.pb
    init_net_proto = caffe2_pb2.NetDef()
    with open(args.c2_init, "rb") as f:
        init_net_proto.ParseFromString(f.read())
    model.param_init_net = core.Net(init_net_proto)  # model.param_init_net.AppendNet(core.Net(init_net_proto)) #

    # bring in the predict net from predict_net.pb
    predict_net_proto = caffe2_pb2.NetDef()
    with open(args.c2_predict, "rb") as f:
        predict_net_proto.ParseFromString(f.read())
    model.net = core.Net(predict_net_proto)  # model.net.AppendNet(core.Net(predict_net_proto))

    # CUDA performance not impressive
    #device_opts = core.DeviceOption(caffe2_pb2.PROTO_CUDA, args.gpu_id)
    #model.net.RunAllOnGPU(gpu_id=args.gpu_id, use_cudnn=True)
    #model.param_init_net.RunAllOnGPU(gpu_id=args.gpu_id, use_cudnn=True)

    input_blob = model.net.external_inputs[0]
    model.param_init_net.GaussianFill(
        [],
        input_blob.GetUnscopedName(),
        shape=(args.batch_size, 3, args.img_size, args.img_size),
        mean=0.0,
        std=1.0)
    workspace.RunNetOnce(model.param_init_net)
    workspace.CreateNet(model.net, overwrite=True)
    workspace.BenchmarkNet(model.net.Proto().name, 5, 20, True) 
Developer: rwightman, Project: gen-efficientnet-pytorch, Lines: 35, Source: caffe2_benchmark.py
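
For reference, a hedged sketch of the inverse operation, serializing a ModelHelper's nets into the init_net.pb / predict_net.pb pair that this benchmark loads (the file names are placeholders):

def save_nets(model, init_path="init_net.pb", predict_path="predict_net.pb"):
    # Both nets are protobuf messages, so they can be written byte-for-byte.
    with open(init_path, "wb") as f:
        f.write(model.param_init_net.Proto().SerializeToString())
    with open(predict_path, "wb") as f:
        f.write(model.net.Proto().SerializeToString())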

Example 8: train

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def train(INIT_NET, PREDICT_NET, epochs, batch_size, device_opts) :

    data, gt_segmentation = get_data(batch_size)
    workspace.FeedBlob("data", data, device_option=device_opts)
    workspace.FeedBlob("gt_segmentation", gt_segmentation, device_option=device_opts)

    train_model= model_helper.ModelHelper(name="train_net", arg_scope = {"order": "NHWC"})
    output_segmentation = create_unet_model(train_model, device_opts=device_opts, is_test=0)
    add_training_operators(output_segmentation, train_model, device_opts=device_opts)
    with core.DeviceScope(device_opts):
        brew.add_weight_decay(train_model, 0.001)

    workspace.RunNetOnce(train_model.param_init_net)
    workspace.CreateNet(train_model.net)

    print('\ntraining for', epochs, 'epochs')
    for j in range(0, epochs):
        data, gt_segmentation = get_data(batch_size, 4)

        workspace.FeedBlob("data", data, device_option=device_opts)
        workspace.FeedBlob("gt_segmentation", gt_segmentation, device_option=device_opts)

        workspace.RunNet(train_model.net, 1)   # one iteration per epoch
        print(str(j) + ': ' + str(workspace.FetchBlob("avg_loss")))

    print('training done')
    test_model= model_helper.ModelHelper(name="test_net", arg_scope = {"order": "NHWC"}, init_params=False)
    create_unet_model(test_model, device_opts=device_opts, is_test=1)
    workspace.RunNetOnce(test_model.param_init_net)
    workspace.CreateNet(test_model.net, overwrite=True)

    print('\nsaving test model')
    save_net(INIT_NET, PREDICT_NET, test_model) 
Developer: peterneher, Project: peters-stuff, Lines: 35, Source: segmentation_no_db_example.py

Example 9: test_simple_cnnmodel

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def test_simple_cnnmodel(self):
        model = cnn.CNNModelHelper("NCHW", name="overfeat")
        workspace.FeedBlob("data", np.random.randn(1, 3, 64, 64).astype(np.float32))
        workspace.FeedBlob("label", np.random.randn(1, 1000).astype(np.int))
        with core.NameScope("conv1"):
            conv1 = model.Conv("data", "conv1", 3, 96, 11, stride=4)
            relu1 = model.Relu(conv1, conv1)
            pool1 = model.MaxPool(relu1, "pool1", kernel=2, stride=2)
        with core.NameScope("classifier"):
            fc = model.FC(pool1, "fc", 4096, 1000)
            pred = model.Softmax(fc, "pred")
            xent = model.LabelCrossEntropy([pred, "label"], "xent")
            loss = model.AveragedLoss(xent, "loss")

        blob_name_tracker = {}
        graph = tb.model_to_graph_def(
            model,
            blob_name_tracker=blob_name_tracker,
            shapes={},
            show_simplified=False,
        )

        compare_proto(graph, self)

    # cnn.CNNModelHelper is deprecated, so we also test with
    # model_helper.ModelHelper. The model used in this test is taken from the
    # Caffe2 MNIST tutorial. Also use show_simplified=False here. 
Developer: lanpa, Project: tensorboardX, Lines: 29, Source: test_caffe2.py

Example 10: test_simple_model

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def test_simple_model(self):
        model = model_helper.ModelHelper(name="mnist")
        # how come those inputs don't break the forward pass =.=a
        workspace.FeedBlob("data", np.random.randn(1, 3, 64, 64).astype(np.float32))
        workspace.FeedBlob("label", np.random.randn(1, 1000).astype(np.int))

        with core.NameScope("conv1"):
            conv1 = brew.conv(model, "data", 'conv1', dim_in=1, dim_out=20, kernel=5)
            # Image size: 24 x 24 -> 12 x 12
            pool1 = brew.max_pool(model, conv1, 'pool1', kernel=2, stride=2)
            # Image size: 12 x 12 -> 8 x 8
            conv2 = brew.conv(model, pool1, 'conv2', dim_in=20, dim_out=100, kernel=5)
            # Image size: 8 x 8 -> 4 x 4
            pool2 = brew.max_pool(model, conv2, 'pool2', kernel=2, stride=2)
        with core.NameScope("classifier"):
            # 100 * 4 * 4 is dim_out from the previous layer multiplied by the 4 x 4 image size
            fc3 = brew.fc(model, pool2, 'fc3', dim_in=100 * 4 * 4, dim_out=500)
            relu = brew.relu(model, fc3, fc3)
            pred = brew.fc(model, relu, 'pred', 500, 10)
            softmax = brew.softmax(model, pred, 'softmax')
            xent = model.LabelCrossEntropy([softmax, "label"], 'xent')
            # compute the expected loss
            loss = model.AveragedLoss(xent, "loss")
        model.net.RunAllOnMKL()
        model.param_init_net.RunAllOnMKL()
        model.AddGradientOperators([loss], skip=1)
        blob_name_tracker = {}
        graph = tb.model_to_graph_def(
            model,
            blob_name_tracker=blob_name_tracker,
            shapes={},
            show_simplified=False,
        )

        compare_proto(graph, self) 
Developer: lanpa, Project: tensorboardX, Lines: 37, Source: test_caffe2.py
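
A hedged sketch of inspecting the GraphDef that model_to_graph_def returns; the tensorboardX import path is an assumption based on how these tests alias it:

import numpy as np
from caffe2.python import brew, model_helper, workspace
from tensorboardX import caffe2_graph   # import path is an assumption

model = model_helper.ModelHelper(name="graph_demo")
brew.fc(model, "data", "fc", dim_in=4, dim_out=2)
workspace.FeedBlob("data", np.zeros((1, 4), dtype=np.float32))

graph = caffe2_graph.model_to_graph_def(model, blob_name_tracker={}, shapes={},
                                        show_simplified=False)
for node in graph.node:
    print(node.op, node.name)   # e.g. the FC op and its initializer fills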

Example 11: run_n_times

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def run_n_times(model, num_warmup_batches, num_batches):
    """ Runs **model** multiple times (**num_warmup_batches** + **num_batches**).

    :param model: Caffe2's model helper class instances.
    :type model: :py:class:`caffe2.python.model_helper.ModelHelper`
    :param int num_warmup_batches: Number of warmup batches to process (do not contribute to computing average batch time)
    :param int num_batches: Number of batches to process (contribute to computing average batch time)
    :return: Batch times (excluding warmup batches) in seconds.
    :rtype: Numpy array of length = **num_batches**.
    """
    net_name = model.net.Proto().name
    start_time = timeit.default_timer()
    if num_warmup_batches > 0:
        workspace.RunNet(net_name, num_iter=num_warmup_batches)
        print("Average warmup batch time %f ms across %d batches" %\
              (1000.0*(timeit.default_timer() - start_time)/num_warmup_batches,\
               num_warmup_batches))
    else:
        print("Warning - no warmup iterations has been performed.")

    batch_times = np.zeros(num_batches)
    for i in range(num_batches):
        start_time = timeit.default_timer()
        workspace.RunNet(net_name, 1)
        batch_times[i] = timeit.default_timer() - start_time
    return batch_times 
Developer: HewlettPackard, Project: dlcookbook-dlbs, Lines: 28, Source: benchmarks.py
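
A hedged usage sketch (the tiny network and blob names are placeholders; in the benchmark itself the net is built by a model builder before this helper is called):

import numpy as np
from caffe2.python import brew, model_helper, workspace

model = model_helper.ModelHelper(name="timing_demo")
brew.fc(model, "data", "fc", dim_in=256, dim_out=256)

workspace.FeedBlob("data", np.random.rand(32, 256).astype(np.float32))
workspace.RunNetOnce(model.param_init_net)
workspace.CreateNet(model.net)   # run_n_times expects the net to already exist

times = run_n_times(model, num_warmup_batches=2, num_batches=10)
print("mean batch time: %f ms" % (1000.0 * np.mean(times)))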

Example 12: _test_model_fun

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def _test_model_fun(self, model_name, model_fun, inputs=None, input_shape=None, input_dtype=None,
                        test_outputs=True, feed_dict_override=None, can_run=True, can_convert=True):
        from caffe2.python.model_helper import ModelHelper

        if inputs is not None:
            assert utils.is_unique(inputs, key=lambda input: input.name)
            assert input_shape is None
            assert input_dtype is None
            if not isinstance(inputs, (list, tuple)):
                inputs = [inputs]
        if input_dtype is None:
            input_dtype = DTYPE_ID_FLOAT

        numbered_model_name = self._get_network_name(model_name)
        model = ModelHelper(name=numbered_model_name)
        outputs = model_fun(model)
        if outputs is None:
            outputs = []
        if not isinstance(outputs, (list, tuple)):
            outputs = [outputs]
        model.net.AddExternalOutputs(*outputs)
        if inputs is None:
            inputs = [Input(str(input), input_shape, input_dtype) for input in model.net.external_inputs]
        paths = self._save_model(dir=os.path.join('out', 'caffe2_orig', numbered_model_name),
                                 predict_net=model.net.Proto(),
                                 init_net=model.param_init_net.Proto(),
                                 value_info={
                                     input.name: [input.dtype, input.shape if input.shape else [1]]
                                     for input in inputs
                                 })
        debug_model_outputs = False
        if debug_model_outputs:
            if can_run:
                self._debug_model_outputs(*paths, feed_dict_override=feed_dict_override)
        else:
            self._test_model(*paths, test_outputs=test_outputs,
                             feed_dict_override=feed_dict_override, can_run=can_run, can_convert=can_convert) 
Developer: KhronosGroup, Project: NNEF-Tools, Lines: 39, Source: caffe2_test_runner.py

Example 13: Add_Original_CIFAR10_Model

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def Add_Original_CIFAR10_Model(model, data, num_classes, image_height, image_width, image_channels):
    # Convolutional layer 1
    conv1 = brew.conv(model, data, 'conv1', dim_in=image_channels, dim_out=32, kernel=5, stride=1, pad=2)
    h,w = update_dims(height=image_height, width=image_width, kernel=5, stride=1, pad=2)
    # Pooling layer 1
    pool1 = brew.max_pool(model, conv1, 'pool1', kernel=3, stride=2)
    h,w = update_dims(height=h, width=w, kernel=3, stride=2, pad=0)
    # ReLU layer 1
    relu1 = brew.relu(model, pool1, 'relu1')
    
    # Convolutional layer 2
    conv2 = brew.conv(model, relu1, 'conv2', dim_in=32, dim_out=32, kernel=5, stride=1, pad=2)
    h,w = update_dims(height=h, width=w, kernel=5, stride=1, pad=2)
    # ReLU layer 2
    relu2 = brew.relu(model, conv2, 'relu2')
    # Pooling layer 2
    pool2 = brew.average_pool(model, relu2, 'pool2', kernel=3, stride=2)
    h,w = update_dims(height=h, width=w, kernel=3, stride=2, pad=0)
    
    # Convolutional layer 3
    conv3 = brew.conv(model, pool2, 'conv3', dim_in=32, dim_out=64, kernel=5, stride=1, pad=2)
    h,w = update_dims(height=h, width=w, kernel=5, stride=1, pad=2)
    # ReLU layer 3
    relu3 = brew.relu(model, conv3, 'relu3')
    # Pooling layer 3
    pool3 = brew.average_pool(model, relu3, 'pool3', kernel=3, stride=2)
    h,w = update_dims(height=h, width=w, kernel=3, stride=2, pad=0)
    
    # Fully connected layers
    fc1 = brew.fc(model, pool3, 'fc1', dim_in=64*h*w, dim_out=64)
    fc2 = brew.fc(model, fc1, 'fc2', dim_in=64, dim_out=num_classes)
    
    # Softmax layer
    softmax = brew.softmax(model, fc2, 'softmax')
    return softmax


# ## Test Saved Model From Part 1
# 
# ### Construct Model for Testing
# 
# The first thing we need is a model helper object that we can attach the lmdb reader to.

# In[4]:


# Create a ModelHelper object with init_params=False 
Developer: facebookarchive, Project: tutorials, Lines: 49, Source: CIFAR10_Part2.py
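
A hedged sketch of what that testing setup typically looks like; the lmdb path, batch size, and the use of brew.db_input are assumptions, and the tutorial's own input helper may differ:

from caffe2.python import brew, model_helper

# init_params=False: parameters come from the saved init net, not fresh initializers.
test_model = model_helper.ModelHelper(
    name="cifar10_test", arg_scope={"order": "NCHW"}, init_params=False)

# Attach an lmdb reader that fills the "data" and "label" blobs each iteration.
data, label = brew.db_input(
    test_model,
    blobs_out=["data", "label"],
    batch_size=100,
    db="testing_lmdb",   # placeholder path
    db_type="lmdb",
)
softmax = Add_Original_CIFAR10_Model(test_model, data, num_classes=10,
                                     image_height=32, image_width=32,
                                     image_channels=3)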

Example 14: train

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def train(INIT_NET, PREDICT_NET, epochs, batch_size, device_opts) :

    data, label = get_data(batch_size)
    workspace.FeedBlob("data", data, device_option=device_opts)
    workspace.FeedBlob("label", label, device_option=device_opts)

    train_model= model_helper.ModelHelper(name="train_net")
    softmax = create_model(train_model, device_opts=device_opts)
    add_training_operators(softmax, train_model, device_opts=device_opts)
    with core.DeviceScope(device_opts):
        brew.add_weight_decay(train_model, 0.001)  # any effect???

    workspace.RunNetOnce(train_model.param_init_net)
    workspace.CreateNet(train_model.net)

    print('\ntraining for', epochs, 'epochs')

    for j in range(0, epochs):
        data, label = get_data(batch_size)

        workspace.FeedBlob("data", data, device_option=device_opts)
        workspace.FeedBlob("label", label, device_option=device_opts)

        workspace.RunNet(train_model.net, 10)   # run for 10 times
        print(str(j) + ': ' + str(workspace.FetchBlob("loss")) + ' - ' + str(workspace.FetchBlob("accuracy")))

    print('training done')

    print('\nrunning test model')

    test_model= model_helper.ModelHelper(name="test_net", init_params=False)
    create_model(test_model, device_opts=device_opts)
    workspace.RunNetOnce(test_model.param_init_net)
    workspace.CreateNet(test_model.net, overwrite=True)

    data = np.zeros((1,1,30,30)).astype('float32')
    workspace.FeedBlob("data", data, device_option=device_opts)
    workspace.RunNet(test_model.net, 1)
    print "\nInput: zeros"
    print "Output:", workspace.FetchBlob("softmax")
    print "Output class:", np.argmax(workspace.FetchBlob("softmax"))

    data = np.ones((1,1,30,30)).astype('float32')
    workspace.FeedBlob("data", data, device_option=device_opts)
    workspace.RunNet(test_model.net, 1)
    print "\nInput: ones"
    print "Output:", workspace.FetchBlob("softmax")
    print "Output class:", np.argmax(workspace.FetchBlob("softmax"))

    print('\nsaving test model')

    save_net(INIT_NET, PREDICT_NET, test_model) 
Developer: peterneher, Project: peters-stuff, Lines: 54, Source: classification_no_db_example.py
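
A hedged sketch of how this training entry point would be invoked; get_data, create_model, add_training_operators, and save_net are defined elsewhere in classification_no_db_example.py and are assumed to be in scope, and the .pb file names are placeholders:

from caffe2.proto import caffe2_pb2
from caffe2.python import core

# CPU device; on a CUDA build one could pass caffe2_pb2.CUDA and a GPU id.
device_opts = core.DeviceOption(caffe2_pb2.CPU, 0)
train("init_net.pb", "predict_net.pb", epochs=20, batch_size=50,
      device_opts=device_opts)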

Example 15: AddLayerWrapper

# Module to import: from caffe2.python import model_helper [as alias]
# Or: from caffe2.python.model_helper import ModelHelper [as alias]
def AddLayerWrapper(self, layer, inp_blobs, out_blobs,
                        add_prefix=True, reset_grad=False, **kwargs):
        # auxiliary routine to adjust tags
        def adjust_tag(blobs, on_device):
            if blobs.__class__ == str:
                _blobs = on_device + blobs
            elif blobs.__class__ == list:
                _blobs = list(map(lambda tag: on_device + tag, blobs))
            else:  # blobs.__class__ == model_helper.ModelHelper or something else
                _blobs = blobs
            return _blobs

        if self.ndevices > 1 and add_prefix:
            # add layer on multiple devices
            ll = []
            for d in range(self.ndevices):
                # add prefix on_device
                on_device = "gpu_" + str(d) + "/"
                _inp_blobs = adjust_tag(inp_blobs, on_device)
                _out_blobs = adjust_tag(out_blobs, on_device)
                # WARNING: reset_grad option was exclusively designed for WeightedSum
                #         with inp_blobs=[w, tag_one, "", lr], where "" will be replaced
                if reset_grad:
                    w_grad = self.gradientMap[_inp_blobs[0]]
                    _inp_blobs[2] = w_grad
                # add layer to the model
                with core.DeviceScope(core.DeviceOption(workspace.GpuDeviceType, d)):
                    if kwargs:
                        new_layer = layer(_inp_blobs, _out_blobs, **kwargs)
                    else:
                        new_layer = layer(_inp_blobs, _out_blobs)
                ll.append(new_layer)
            return ll
        else:
            # add layer on a single device
            # WARNING: reset_grad option was exclusively designed for WeightedSum
            #          with inp_blobs=[w, tag_one, "", lr], where "" will be replaced
            if reset_grad:
                w_grad = self.gradientMap[inp_blobs[0]]
                inp_blobs[2] = w_grad
            # add layer to the model
            if kwargs:
                new_layer = layer(inp_blobs, out_blobs, **kwargs)
            else:
                new_layer = layer(inp_blobs, out_blobs)
            return new_layer 
Developer: intel, Project: optimized-models, Lines: 48, Source: dlrm_s_caffe2.py
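
The per-device handling above relies on manually prefixed blob names plus core.DeviceScope; a minimal hedged sketch of the same pattern outside this class (device count, blob names, and layer choice are illustrative):

from caffe2.python import brew, core, model_helper, workspace

model = model_helper.ModelHelper(name="multi_device_demo")
ndevices = 2
for d in range(ndevices):
    prefix = "gpu_%d/" % d
    with core.DeviceScope(core.DeviceOption(workspace.GpuDeviceType, d)):
        # Manually prefixed blob names keep each replica's blobs distinct,
        # matching the adjust_tag convention used above.
        brew.fc(model, prefix + "data", prefix + "fc", dim_in=16, dim_out=4)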


Note: The caffe2.python.model_helper.ModelHelper examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by their respective authors, and copyright remains with them; refer to each project's license before distributing or using the code. Do not reproduce this article without permission.