This article collects typical usage examples of the torch.nn.HingeEmbeddingLoss method in Python. If you are unsure what nn.HingeEmbeddingLoss does or how to use it, the curated code examples below may help. You can also browse the containing module, torch.nn, for related methods.
Two code examples of nn.HingeEmbeddingLoss are shown below, ordered by popularity by default.
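Before the examples, a minimal standalone sketch of how nn.HingeEmbeddingLoss is called. The tensors here are made-up illustration values, not from the examples below. The loss expects an input of distances x and targets y in {1, -1}: for y == 1 the loss is x itself (similar pairs should be close), and for y == -1 it is max(0, margin - x) (dissimilar pairs are penalized until their distance exceeds the margin, default 1.0):

```python
import torch
from torch import nn

criterion = nn.HingeEmbeddingLoss(margin=1.0)

# Hypothetical pairwise embedding distances and similarity labels
distances = torch.tensor([0.2, 0.9, 1.5])
targets = torch.tensor([1.0, -1.0, -1.0])  # 1 = similar pair, -1 = dissimilar pair

# Per-element losses: [0.2, max(0, 1.0 - 0.9), max(0, 1.0 - 1.5)] = [0.2, 0.1, 0.0]
# Default reduction is the mean: 0.1
loss = criterion(distances, targets)
print(loss.item())
```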
Example 1: setup
# Required module: from torch import nn [as alias]
# Or: from torch.nn import HingeEmbeddingLoss [as alias]
def setup(model, opt):
    # Select the loss function named by opt.criterion
    if opt.criterion == "l1":
        criterion = nn.L1Loss().cuda()
    elif opt.criterion == "mse":
        criterion = nn.MSELoss().cuda()
    elif opt.criterion == "crossentropy":
        criterion = nn.CrossEntropyLoss().cuda()
    elif opt.criterion == "hingeEmbedding":
        criterion = nn.HingeEmbeddingLoss().cuda()
    elif opt.criterion == "tripletmargin":
        criterion = nn.TripletMarginLoss(margin=opt.margin, swap=opt.anchorswap).cuda()

    # Optimize only parameters that require gradients
    parameters = filter(lambda p: p.requires_grad, model.parameters())
    if opt.optimType == 'sgd':
        optimizer = optim.SGD(parameters, lr=opt.lr, momentum=opt.momentum,
                              nesterov=opt.nesterov, weight_decay=opt.weightDecay)
    elif opt.optimType == 'adam':
        optimizer = optim.Adam(parameters, lr=opt.maxlr, weight_decay=opt.weightDecay)

    if opt.weight_init:
        utils.weights_init(model, opt)

    return model, criterion, optimizer
Example 2: __init__
# Required module: from torch import nn [as alias]
# Or: from torch.nn import HingeEmbeddingLoss [as alias]
def __init__(self, margin=2.0):
    super(KCL, self).__init__()
    self.kld = KLDiv()
    self.hingeloss = nn.HingeEmbeddingLoss(margin)
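To see what the non-default margin=2.0 in Example 2 changes, here is a hedged sketch: dissimilar pairs (target -1) keep incurring loss until their distance or divergence exceeds 2.0 rather than the default 1.0. The divergence values below are placeholders, since the KLDiv implementation is not shown in the excerpt:

```python
import torch
from torch import nn

# Same configuration as the KCL example above: margin raised to 2.0
hingeloss = nn.HingeEmbeddingLoss(margin=2.0)

# Hypothetical pairwise divergences, all pairs labeled dissimilar (-1)
div = torch.tensor([0.5, 1.5, 2.5])
target = torch.tensor([-1.0, -1.0, -1.0])

# Per-element losses: [max(0, 2.0 - 0.5), max(0, 2.0 - 1.5), max(0, 2.0 - 2.5)]
#                   = [1.5, 0.5, 0.0]; mean = 2/3
print(hingeloss(div, target).item())
```

With the default margin of 1.0, only the first pair (divergence 0.5) would still be penalized; the larger margin pushes dissimilar pairs further apart before their gradient vanishes.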