This article collects typical usage examples of the XGBClassifier.get_params method from xgboost.sklearn in Python. If you are wondering how to use XGBClassifier.get_params in practice, the curated code examples below may help. You can also explore other usages of the containing class, xgboost.sklearn.XGBClassifier.
Two code examples of XGBClassifier.get_params are shown below, ordered by popularity.
Example 1: XGBClassifier
# Required import: from xgboost.sklearn import XGBClassifier [as alias]
# Or: from xgboost.sklearn.XGBClassifier import get_params [as alias]
import pandas as pd
import xgboost as xgb
from xgboost.sklearn import XGBClassifier

# `train`/`test` are DataFrames loaded earlier, and `x` a column selected for
# removal upstream (this snippet is excerpted from a longer script).
train.drop(x, axis=1, inplace=True)
test.drop(x, axis=1, inplace=True)
y_train = train['TARGET'].values
X_train = train.drop(['ID', 'TARGET'], axis=1).values
y_test = test['ID']
X_test = test.drop(['ID'], axis=1).values
xgb1 = XGBClassifier(
    learning_rate=0.1,
    n_estimators=600,
    max_depth=5,
    min_child_weight=1,
    gamma=0,
    subsample=0.6815,
    colsample_bytree=0.701,
    objective='binary:logistic',
    nthread=4,
    scale_pos_weight=1,
    seed=27)
xgtrain = xgb.DMatrix(X_train, label=y_train)
# Cross-validate to find the best number of boosting rounds, then fix
# n_estimators to it before the final fit.
# (Newer xgboost versions use verbose_eval in place of show_progress.)
cvresult = xgb.cv(xgb1.get_xgb_params(), xgtrain, num_boost_round=xgb1.get_params()['n_estimators'],
                  nfold=5, metrics=['auc'], early_stopping_rounds=50, show_progress=False)
xgb1.set_params(n_estimators=cvresult.shape[0])
xgb1.fit(X_train, y_train, eval_metric='auc')
output = xgb1.predict_proba(X_test)[:, 1]  # probability of TARGET == 1
submission = pd.DataFrame({"ID": y_test, "TARGET": output})
submission.to_csv("submission.csv", index=False)
Example 2: LabelEncoder
# Required import: from xgboost.sklearn import XGBClassifier [as alias]
# Or: from xgboost.sklearn.XGBClassifier import get_params [as alias]
from sklearn.preprocessing import LabelEncoder

# `y_train` holds string class labels; encode them as integers for XGBoost.
label_encoder = LabelEncoder()
encoded_y_train = label_encoder.fit_transform(y_train)
# `args` is an argparse-style namespace of tuned hyperparameters supplied
# elsewhere. Note the variable name `xgb` shadows the common
# `import xgboost as xgb` alias.
xgb = XGBClassifier(
    max_depth=args.max_depth,
    learning_rate=args.learning_rate,
    n_estimators=args.n_estimators,
    objective="multi:softprob",
    gamma=0,
    min_child_weight=1,
    max_delta_step=0,
    subsample=args.subsample,
    colsample_bytree=args.colsample_bytree,
    colsample_bylevel=args.colsample_bylevel,
    reg_alpha=0,
    reg_lambda=1,
    scale_pos_weight=1,
    base_score=0.5,
    missing=None,
    silent=True,
    nthread=-1,
    seed=42
)
# Old-style API from sklearn.cross_validation (pre-0.18); modern scikit-learn
# uses model_selection.KFold(n_splits=10) instead.
from sklearn.cross_validation import KFold, cross_val_score

# `ndcg_scorer` is a custom NDCG scorer defined elsewhere in the project.
kf = KFold(len(x_train), n_folds=10, random_state=42)
score = cross_val_score(xgb, x_train, encoded_y_train,
                        cv=kf, scoring=ndcg_scorer)
print(xgb.get_params(), score.mean())