

Python CalibratedClassifierCV.score Method Code Examples

This article collects typical usage examples of the sklearn.calibration.CalibratedClassifierCV.score method in Python. If you are wondering what CalibratedClassifierCV.score does, how to call it, or where to find real usage examples, the curated code snippets below should help. You can also explore further usage examples of the containing class, sklearn.calibration.CalibratedClassifierCV.


Two code examples of the CalibratedClassifierCV.score method are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Python code examples.
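Before working through the examples, here is a minimal, self-contained sketch of how CalibratedClassifierCV.score is typically used: score returns the mean accuracy of the calibrated classifier on held-out data. The synthetic dataset and the LinearSVC base estimator below are illustrative choices, not taken from the examples on this page.

# Minimal sketch: fit a calibrated classifier, then evaluate it with .score
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Wrap an uncalibrated base estimator; sigmoid (Platt) scaling with 3-fold CV
clf = CalibratedClassifierCV(LinearSVC(), method='sigmoid', cv=3)
clf.fit(X_train, y_train)

# .score follows the standard classifier interface: mean accuracy on the test set
print('Test accuracy: {:.4f}'.format(clf.score(X_test, y_test)))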

Example 1: ReportPerfCV

# Required import: from sklearn.calibration import CalibratedClassifierCV [as alias]
# Or: from sklearn.calibration.CalibratedClassifierCV import score [as alias]
# Additional imports this snippet relies on (GetDataset and logger are
# module-level helpers defined elsewhere in the project's ml.py):
import numpy as np
from sklearn.cross_validation import StratifiedKFold  # legacy API (sklearn < 0.18)
from sklearn.metrics import accuracy_score, log_loss

def ReportPerfCV(model, feature_set, y, calibrated=False, n_folds=5,
                 short=False):
    """Stratified K-fold CV: report per-fold accuracy (via .score) and log loss."""
    kcv = StratifiedKFold(y, n_folds, shuffle=True)
    i = 1
    res = np.empty((len(y), len(np.unique(y))))
    X, Xtest = GetDataset(feature_set)
    if calibrated:
        logger.info("Enabling probability calibration...")
        # Wrap the base model so its predicted probabilities are sigmoid-calibrated
        model = CalibratedClassifierCV(model, method='sigmoid', cv=n_folds - 1)
    for train_idx, valid_idx in kcv:
        logger.info("Running fold %d...", i)
        model.fit(X[train_idx], y[train_idx])
        # CalibratedClassifierCV.score returns the mean accuracy on the validation fold
        logger.info("Fold %i Accuracy: %.4f", i,
                    model.score(X[valid_idx], y[valid_idx]))
        res[valid_idx, :] = model.predict_proba(X[valid_idx])
        logger.info("Fold %i Log Loss: %.4f", i,
                    log_loss(y[valid_idx], res[valid_idx]))
        i += 1
        if short:
            break
    if short:
        return -log_loss(y[valid_idx], res[valid_idx])
    # Labels are strings ending in the class digit (Otto dataset: 'Class_1' ... 'Class_9')
    yhat = np.argmax(res, axis=1) + 1
    Y = np.array([int(label[-1]) for label in y])
    logger.info("CV Accuracy: %.5f", accuracy_score(Y, yhat))
    logger.info("CV Log Loss: %.4f", log_loss(y, res))
    return res, -log_loss(y, res)
Author: cwjacklin | Project: Otto | Lines: 26 | Source file: ml.py
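The fold loop in Example 1 uses the legacy iterable StratifiedKFold API from the old sklearn.cross_validation module. For reference, here is a minimal sketch of the same per-fold score/log-loss pattern with the current sklearn.model_selection API; the synthetic data and the LogisticRegression base model are illustrative assumptions, not part of the Otto project code.

# Sketch of Example 1's per-fold accuracy/log-loss reporting with the modern API
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=500, n_classes=3, n_informative=6, random_state=0)
res = np.empty((len(y), len(np.unique(y))))
model = CalibratedClassifierCV(LogisticRegression(max_iter=1000), method='sigmoid', cv=4)

kcv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, valid_idx) in enumerate(kcv.split(X, y), start=1):
    model.fit(X[train_idx], y[train_idx])
    res[valid_idx, :] = model.predict_proba(X[valid_idx])
    # .score gives the fold's mean accuracy; log_loss uses the stored probabilities
    print('Fold %d Accuracy: %.4f, Log Loss: %.4f'
          % (fold, model.score(X[valid_idx], y[valid_idx]),
             log_loss(y[valid_idx], res[valid_idx])))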

Example 2: dmatrices

# Required import: from sklearn.calibration import CalibratedClassifierCV [as alias]
# Or: from sklearn.calibration.CalibratedClassifierCV import score [as alias]
# Additional imports this snippet relies on (df_train and df_test are pandas
# DataFrames holding the Titanic training and test data, loaded earlier in the script):
import numpy as np
import pandas as pd
from patsy import dmatrices, dmatrix
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures

# Fill missing ages in the test set with the median age
df_test.loc[df_test['Age'].isnull(), 'Age'] = np.nanmedian(df_test['Age'])

# Training/testing array creation via patsy formulas
y_train, X_train = dmatrices('Survived ~ Age + Sex + Pclass + SibSp + Parch + Embarked', df_train)
X_test = dmatrix('Age + Sex + Pclass + SibSp + Parch + Embarked', df_test)

# Create processing pipelines; hyperparameters were selected with cross-validation
steps1 = [('poly_features', PolynomialFeatures(3, interaction_only=True)),
          ('logistic', LogisticRegression(C=5555., max_iter=16, penalty='l2'))]
steps2 = [('rforest', RandomForestClassifier(min_samples_split=15, n_estimators=73, criterion='entropy'))]
pipeline1 = Pipeline(steps=steps1)
pipeline2 = Pipeline(steps=steps2)

# Logistic model with cubic interaction features
pipeline1.fit(X_train, y_train.ravel())
print('Accuracy (Logistic Regression-Poly Features (cubic)): {:.4f}'.format(pipeline1.score(X_train, y_train.ravel())))

# Random forest with probability calibration. Note that with cv=3 the wrapper
# clones and refits the pipeline inside its own CV on the rows passed to fit,
# so only the slice given to calibratedpipe2.fit is actually used for training.
pipeline2.fit(X_train[:600], y_train[:600].ravel())
calibratedpipe2 = CalibratedClassifierCV(pipeline2, cv=3, method='sigmoid')
calibratedpipe2.fit(X_train[600:], y_train[600:].ravel())
# .score reports the mean accuracy of the calibrated model (here on the full training set)
print('Accuracy (Random Forest - Calibration): {:.4f}'.format(calibratedpipe2.score(X_train, y_train.ravel())))

# Create the output dataframe
output = pd.DataFrame(columns=['PassengerId', 'Survived'])
output['PassengerId'] = df_test['PassengerId']

# Predict the survivors with the logistic pipeline and write the submission CSV
output['Survived'] = pipeline1.predict(X_test).astype(int)
output.to_csv('output.csv', index=False)
Author: jonikmen | Project: Librivox-Grabber | Lines: 32 | Source file: pandas.py
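In Example 2 the random forest pipeline is first fitted on one slice of the training data and then wrapped with cv=3, which makes CalibratedClassifierCV refit clones of the pipeline on the calibration slice. A common alternative is to calibrate an already-fitted model directly; the sketch below assumes a scikit-learn release that still accepts cv='prefit', and the synthetic data and split sizes are illustrative choices.

# Sketch: calibrate an already-fitted classifier with cv='prefit' and score it
# (cv='prefit' is assumed to be available in the installed scikit-learn release)
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_fit, X_rest, y_fit, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_fit, y_fit)

# With cv='prefit' only the sigmoid calibrator is trained; rf itself is not refit
calibrated_rf = CalibratedClassifierCV(rf, method='sigmoid', cv='prefit')
calibrated_rf.fit(X_cal, y_cal)
print('Calibrated accuracy: {:.4f}'.format(calibrated_rf.score(X_test, y_test)))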


Note: The sklearn.calibration.CalibratedClassifierCV.score examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by various developers, and copyright remains with the original authors. Please consult each project's license before using or redistributing the code, and do not reproduce this article without permission.