

Python optimizers.GradientDescentMomentum Method Code Examples

This article collects typical usage examples of the Python method neon.optimizers.GradientDescentMomentum. If you are wondering what optimizers.GradientDescentMomentum does, how to call it, or what real-world usage looks like, the curated examples below may help. You can also explore other usage examples from the neon.optimizers module.


Two code examples of the optimizers.GradientDescentMomentum method are shown below, drawn from open-source projects and sorted by popularity by default.
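Before the examples, here is a minimal sketch of how the optimizer itself is typically constructed in neon. The learning rate, momentum, weight decay, and schedule values below are illustrative assumptions, not taken from either project:

from neon.optimizers import GradientDescentMomentum, Schedule

# drop the learning rate by 10x at epochs 10 and 20 (illustrative values)
schedule = Schedule(step_config=[10, 20], change=0.1)
opt = GradientDescentMomentum(learning_rate=0.01,
                              momentum_coef=0.9,
                              wdecay=0.0005,
                              schedule=schedule)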

Example 1: create_model

# Required module: from neon import optimizers [as alias]
# or: from neon.optimizers import GradientDescentMomentum [as alias]
# Additional imports assumed so the example runs on its own (standard neon modules):
from neon.initializers import Gaussian, Constant
from neon.layers import Conv, Pooling, Affine, Dropout
from neon.transforms import Rectlin, Softmax
from neon.models import Model
from neon.optimizers import GradientDescentMomentum
def create_model(args, hyper_params):
    # setup layers
    imagenet_layers = [
        Conv((11, 11, 64), init=Gaussian(scale=0.01), bias=Constant(0), activation=Rectlin(),
             padding=3, strides=4),
        Pooling(3, strides=2),
        Conv((5, 5, 192), init=Gaussian(scale=0.01), bias=Constant(1), activation=Rectlin(),
             padding=2),
        Pooling(3, strides=2),
        Conv((3, 3, 384), init=Gaussian(scale=0.03), bias=Constant(0), activation=Rectlin(),
             padding=1),
        Conv((3, 3, 256), init=Gaussian(scale=0.03), bias=Constant(1), activation=Rectlin(),
             padding=1),
        Conv((3, 3, 256), init=Gaussian(scale=0.03), bias=Constant(1), activation=Rectlin(),
             padding=1),
        Pooling(3, strides=2),
        Affine(nout=4096, init=Gaussian(scale=0.01), bias=Constant(1), activation=Rectlin()),
        Dropout(keep=0.5),
        Affine(nout=4096, init=Gaussian(scale=0.01), bias=Constant(1), activation=Rectlin()),
        # The following layers are used in Alexnet, but are not used in the new model
        Dropout(keep=0.5),
        # Affine(nout=1000, init=Gaussian(scale=0.01), bias=Constant(-7), activation=Softmax())
    ]
    
    target_layers = imagenet_layers + [    
        Affine(nout=4096, init=Gaussian(scale=0.005), bias=Constant(.1), activation=Rectlin()),
        Dropout(keep=0.5),
        Affine(nout=21, init=Gaussian(scale=0.01), bias=Constant(0), activation=Softmax())]
    
    # setup optimizer
    opt = GradientDescentMomentum(hyper_params.learning_rate_scale, 
                                  hyper_params.momentum, wdecay=0.0005,
                                  schedule=hyper_params.learning_rate_sched)
    
    # setup model
    if args.model_file:
        model = Model(layers=args.model_file)
    else:
        model = Model(layers=target_layers)
    
    return model, opt 
Developer: NervanaSystems, Project: ModelZoo, Lines of code: 43, Source file: transfer_learning.py
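A hypothetical way to use the returned (model, opt) pair with neon's training loop; `train_set`, `valid_set`, and `hyper_params.num_epochs` are assumed to be defined elsewhere in the original script:

from neon.layers import GeneralizedCost
from neon.transforms import CrossEntropyMulti
from neon.callbacks.callbacks import Callbacks

model, opt = create_model(args, hyper_params)
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
callbacks = Callbacks(model, eval_set=valid_set)
model.fit(train_set, cost=cost, optimizer=opt,
          num_epochs=hyper_params.num_epochs, callbacks=callbacks)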

Example 2: get_function

# Required module: from neon import optimizers [as alias]
# or: from neon.optimizers import GradientDescentMomentum [as alias]
# Additional imports assumed so the example runs on its own:
import neon.transforms.activation
import neon.transforms.cost
import neon.optimizers.optimizer
from neon.optimizers import GradientDescentMomentum
def get_function(name):
    mapping = {}

    # activation
    mapping['relu'] = neon.transforms.activation.Rectlin
    mapping['sigmoid'] = neon.transforms.activation.Logistic
    mapping['tanh'] = neon.transforms.activation.Tanh
    mapping['linear'] = neon.transforms.activation.Identity

    # loss
    mapping['mse'] = neon.transforms.cost.MeanSquared
    mapping['binary_crossentropy'] = neon.transforms.cost.CrossEntropyBinary
    mapping['categorical_crossentropy'] = neon.transforms.cost.CrossEntropyMulti

    # optimizer
    def SGD(learning_rate=0.01, momentum_coef=0.9, gradient_clip_value=5):
        # pass gradient_clip_value by keyword; the third positional argument of
        # GradientDescentMomentum is not the clipping threshold
        return GradientDescentMomentum(learning_rate, momentum_coef,
                                       gradient_clip_value=gradient_clip_value)

    mapping['sgd'] = SGD
    mapping['rmsprop'] = neon.optimizers.optimizer.RMSProp
    mapping['adam'] = neon.optimizers.optimizer.Adam
    mapping['adagrad'] = neon.optimizers.optimizer.Adagrad
    mapping['adadelta'] = neon.optimizers.optimizer.Adadelta

    mapped = mapping.get(name)
    if not mapped:
        raise Exception('No neon function found for "{}"'.format(name))

    return mapped 
Developer: ECP-CANDLE, Project: Benchmarks, Lines of code: 31, Source file: p1b3_baseline_neon.py
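Hypothetical usage of get_function: look up a factory by name, then instantiate it. The argument values are illustrative:

Rectlin = get_function('relu')            # neon.transforms.activation.Rectlin
activation = Rectlin()

make_sgd = get_function('sgd')            # the SGD wrapper defined above
opt = make_sgd(learning_rate=0.05, momentum_coef=0.9)

# get_function('does-not-exist')          # would raise Exception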


Note: The neon.optimizers.GradientDescentMomentum examples in this article were compiled from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects, and copyright remains with their original authors; please consult each project's license before distributing or reusing the code. Do not reproduce this article without permission.