

Python scipy.subtract Method Code Examples

This article collects typical usage examples of the scipy.subtract method in Python. If you are wondering how scipy.subtract is used in practice, what it looks like in real code, or where to find examples of it, the curated snippets below should help. You can also explore further usage examples from the scipy module, where this method lives.


Eight code examples of the scipy.subtract method are shown below, sorted by popularity by default.
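Before the examples, a quick note on the call itself: in older SciPy releases the top-level namespace re-exported NumPy's ufuncs, so scipy.subtract(a, b) is plain element-wise subtraction, equivalent to numpy.subtract(a, b) (and to a - b for arrays). Recent SciPy versions have removed these aliases, so new code should call NumPy directly. A minimal sketch, with made-up array values for illustration:

import numpy as np
try:
    import scipy as sp
    sp.subtract          # older SciPy exposes NumPy ufuncs at the top level
except (ImportError, AttributeError):
    import numpy as sp   # newer SciPy dropped the alias; numpy.subtract is equivalent

predicted = np.array([0.75, 0.25, 0.5, 0.625])   # made-up probabilities
print(sp.subtract(1, predicted))                 # element-wise 1 - predicted: [0.25 0.75 0.5 0.375]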

Example 1: log_loss

# Required module: import scipy [aliased as sp in the code below]
# Or: from scipy import subtract [as an alias]
def log_loss(actual, predicted, epsilon=1e-15):
    """
    Calculates and returns the log loss (error) of a set of predicted probabilities
    (hint: see sklearn classifier's predict_proba methods).

    Source: https://www.kaggle.com/wiki/LogarithmicLoss
    
    In plain English, this error metric is typically used where you have to predict
    that something is true or false with a probability (likelihood) ranging from
    definitely true (1) through equally likely (0.5) to definitely false (0).

    Note: also see (and use) scikit-learn:
    http://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html#sklearn.metrics.log_loss
    """
    # clip predictions away from 0 and 1 so the logarithms stay finite
    predicted = sp.maximum(epsilon, predicted)
    predicted = sp.minimum(1-epsilon, predicted)
    ll = sum(actual*sp.log(predicted) + sp.subtract(1,actual)*sp.log(sp.subtract(1,predicted)))
    ll = ll * -1.0/len(actual)
    return ll 
Developer ID: SMAPPNYU, Project: smappPy, Lines of code: 21, Source: math_util.py
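A hypothetical usage sketch (the label/probability arrays are invented for illustration, not taken from the original project): log_loss expects a vector of 0/1 ground-truth labels and a matching vector of predicted probabilities for the positive class, e.g. predict_proba(X)[:, 1] from a scikit-learn classifier, and computes ll = -(1/N) * sum(y*log(p) + (1-y)*log(1-p)).

import numpy as np

actual = np.array([1, 0, 1, 1, 0])                 # made-up ground-truth labels
predicted = np.array([0.9, 0.1, 0.8, 0.7, 0.3])    # made-up P(y=1), e.g. clf.predict_proba(X)[:, 1]

print(log_loss(actual, predicted))                 # ≈ 0.23; perfect predictions would approach 0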

Example 2: binary_logloss

# Required module: import scipy [aliased as sp in the code below]
# Or: from scipy import subtract [as an alias]
def binary_logloss(p, y):
    epsilon = 1e-15
    p = sp.maximum(epsilon, p)
    p = sp.minimum(1-epsilon, p)
    res = sum(y * sp.log(p) + sp.subtract(1, y) * sp.log(sp.subtract(1, p)))
    res *= -1.0/len(y)
    return res 
Developer ID: lllcho, Project: CAPTCHA-breaking, Lines of code: 9, Source: np_utils.py

Example 3: logloss

# Required module: import scipy [aliased as sp in the code below]
# Or: from scipy import subtract [as an alias]
def logloss(p, y):
    epsilon = 1e-15
    p = sp.maximum(epsilon, p)
    p = sp.minimum(1-epsilon, p)
    ll = sum(y*sp.log(p) + sp.subtract(1,y)*sp.log(sp.subtract(1,p)))
    ll = ll * -1.0/len(y)
    return ll

# B. Apply hash trick of the original csv row
# for simplicity, we treat both integer and categorical features as categorical
# INPUT:
#     csv_row: a csv dictionary, ex: {'Lable': '1', 'I1': '357', 'I2': '', ...}
#     D: the max index that we can hash to
# OUTPUT:
#     x: a list of indices whose corresponding feature values are 1 (see the sketch after this example)
Developer ID: ivanliu1989, Project: Predict-click-through-rates-on-display-ads, Lines of code: 17, Source: py_lh_20Sep2014.py
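The trailing comments in Example 3 describe the next step of that project (hashing a csv row into feature indices), but the function body itself is not included in the snippet. Below is a minimal sketch of one common way to implement such a hash trick; the function name hash_row, the 'name_value' hashing scheme, and the reserved bias index 0 are assumptions for illustration, not the original project's code:

def hash_row(csv_row, D):
    """Hash one csv row (dict of column name -> raw string value) into feature indices.

    Every feature, integer or categorical, is treated as the categorical string
    'name_value' and hashed into one of D buckets; each returned index stands for
    a feature whose value is 1.
    """
    x = [0]                                   # index 0 is reserved for a bias term
    for key, value in csv_row.items():
        if key == 'Lable':                    # the label column is not a feature
            continue
        index = hash(key + '_' + value) % D   # Python's % keeps the result in [0, D)
        x.append(index)
    return x

# e.g. hash_row({'Lable': '1', 'I1': '357', 'I2': ''}, D=2 ** 20)
# note: Python 3's built-in hash() is salted per process unless PYTHONHASHSEED is fixed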

Example 4: logloss

# Required module: import scipy [aliased as sp in the code below]
# Or: from scipy import subtract [as an alias]
def logloss(p, y):
    epsilon = 1e-15
    p = max(min(p, 1. - epsilon), epsilon)
    ll = y*sp.log(p) + sp.subtract(1,y)*sp.log(sp.subtract(1,p))
    ll = ll * -1.0/1
    return ll

# B. Apply hash trick of the original csv row
# for simplicity, we treat both integer and categorical features as categorical
# INPUT:
#     csv_row: a csv dictionary, ex: {'Lable': '1', 'I1': '357', 'I2': '', ...}
#     D: the max index that we can hash to
# OUTPUT:
#     x: a list of indices whose corresponding feature values are 1
Developer ID: ivanliu1989, Project: Predict-click-through-rates-on-display-ads, Lines of code: 16, Source: py_lh_20Sep2014.py

Example 5: logloss

# Required module: import scipy [aliased as sp in the code below]
# Or: from scipy import subtract [as an alias]
def logloss(act, pred):
    '''
    The loss function provided by the competition organizers (the official metric).
    :param act: ground-truth labels
    :param pred: predicted probabilities
    :return: the log loss
    '''
    epsilon = 1e-15
    pred = sp.maximum(epsilon, pred)
    pred = sp.minimum(1 - epsilon, pred)
    ll = sum(act * sp.log(pred) + sp.subtract(1, act) * sp.log(sp.subtract(1, pred)))
    ll = ll * -1.0 / len(act)
    return ll 
Developer ID: xjtushilei, Project: pCVR, Lines of code: 15, Source: utils.py

Example 6: log_loss

# Required module: import scipy [aliased as sp in the code below]
# Or: from scipy import subtract [as an alias]
def log_loss( act, pred ):
	epsilon = 1e-15
	pred = sp.maximum(epsilon, pred)
	pred = sp.minimum(1-epsilon, pred)
	ll = sum(act*sp.log(pred) + sp.subtract(1,act)*sp.log(sp.subtract(1,pred)))
	ll = ll * -1.0/len(act)
	return ll 
Developer ID: zygmuntz, Project: classifier-calibration, Lines of code: 9, Source: log_loss.py

Example 7: llfun

# Required module: import scipy [aliased as sp in the code below]
# Or: from scipy import subtract [as an alias]
def llfun(act, pred):
    p_true = pred[:, 1]
    epsilon = 1e-15
    p_true = sp.maximum(epsilon, p_true)
    p_true = sp.minimum(1 - epsilon, p_true)
    ll = sum(act * sp.log(p_true) + sp.subtract(1, act) * sp.log(sp.subtract(1, p_true)))
    ll = ll * -1.0 / len(act)
    return ll 
Developer ID: mkneierV, Project: kaggle_avazu_benchmark, Lines of code: 10, Source: ml.py
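Unlike the previous examples, llfun takes the full two-column output of a binary classifier's predict_proba and keeps only column 1, the positive-class probability. A hypothetical call, with made-up arrays for illustration:

import numpy as np

act = np.array([1, 0, 1])
pred = np.array([[0.2, 0.8],     # columns: P(y=0), P(y=1), e.g. from clf.predict_proba(X)
                 [0.7, 0.3],
                 [0.4, 0.6]])

print(llfun(act, pred))          # log loss computed from pred[:, 1] only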

Example 8: logloss

# Required module: import scipy [aliased as sp in the code below]
# Or: from scipy import subtract [as an alias]
def logloss(act, pred):
    epsilon = 1e-15
    pred = sp.maximum(epsilon, pred)
    pred = sp.minimum(1-epsilon, pred)
    ll = sum(act*sp.log(pred) + sp.subtract(1,act)*sp.log(sp.subtract(1,pred)))
    ll = ll * -1.0/len(act)
    return ll 
Developer ID: DeepinSC, Project: PyTorch-Luna16, Lines of code: 9, Source: classify_nodes.py


Note: The scipy.subtract method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors, and copyright of the source code remains with the original authors; please refer to each project's license before redistributing or using the code. Do not reproduce this article without permission.