

Python GradientBoostingClassifier.fit_transform Method Code Examples

This article collects typical usage examples of the Python method sklearn.ensemble.GradientBoostingClassifier.fit_transform. If you are wondering what GradientBoostingClassifier.fit_transform does or how to use it, the curated code examples below may help. You can also explore further usage examples of sklearn.ensemble.GradientBoostingClassifier, the class this method belongs to.


The following shows 2 code examples of GradientBoostingClassifier.fit_transform, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
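A note on the API: in older scikit-learn releases, tree-ensemble estimators such as GradientBoostingClassifier inherited a transform/fit_transform that fit the model and then kept only the columns whose feature importance passed a threshold; that method was deprecated and later removed, so the examples below only run on older versions. A minimal sketch of a rough modern equivalent, assuming a current scikit-learn and synthetic data (the names here are illustrative, not taken from the examples):

# Minimal sketch: reproduce the old "fit, then keep the important features"
# behaviour with SelectFromModel instead of GradientBoostingClassifier.fit_transform.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
selector = SelectFromModel(GradientBoostingClassifier(random_state=0))
X_reduced = selector.fit_transform(X, y)  # analogous to clf.fit_transform(X, y) in old scikit-learn
print(X_reduced.shape)  # fewer columns: only features above the importance threshold

SelectFromModel's default threshold is the mean feature importance, which gives behaviour close to the old method.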

Example 1: GradientBoostingClassifier

# Required import: from sklearn.ensemble import GradientBoostingClassifier [as alias]
# Or: from sklearn.ensemble.GradientBoostingClassifier import fit_transform [as alias]
# Assumed imports for this snippet; x_train/x_test/y_train/y_test and the DataFrames X, df
# are prepared earlier in the original script.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_curve as skrc, auc  # skrc is the alias used below
from sklearn.model_selection import train_test_split

# Fit a baseline gradient boosting model and evaluate it by accuracy and ROC AUC.
gb = GradientBoostingClassifier()
gb.fit(x_train, y_train)
gb.score(x_test, y_test)
proba = pd.DataFrame(gb.predict_proba(x_test))[1]  # probability of the positive class
false_positive_rate, true_positive_rate, thresholds = skrc(y_test, proba)
auc(false_positive_rate, true_positive_rate)

# Find the best single features for gradient boosting:
# feature selection based on the AUC each feature achieves on its own.
X, X_test, y, y_test = train_test_split(X, y, train_size=.9)
model = GradientBoostingClassifier()
features = []
scores = []
for i in X:  # iterate over column names
    features.append(i)
    model.fit_transform(X[[i]], y)  # fit on a single feature (fit_transform also drops low-importance columns in old sklearn)
    proba = model.predict_proba(X_test[[i]])
    proba = pd.DataFrame(proba)[1]
    false_positive_rate, true_positive_rate, thresholds = skrc(y_test, proba)
    scores.append(auc(false_positive_rate, true_positive_rate))
df_f = pd.DataFrame({'features': features, 'scores': scores})
df_f = df_f.sort_values(by='scores', ascending=False)
best = df_f.features  # feature names ranked by single-feature AUC


# Find the best AUC:
# build new train and test sets with '2015h' as the target column.
train, test = train_test_split(df, train_size=.9)
y_train = train['2015h']
x_train = train.drop('2015h', axis=1)
y_test = test['2015h']
Author: absulier | Project: waterquality | Lines: 33 | Source file: test_model.py
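For readers on a current scikit-learn, where fit_transform is no longer available on the classifier, here is a self-contained sketch of the same single-feature AUC screening idea; the synthetic data, column names, and the roc_auc_score shortcut are substitutions of mine, not part of the original project:

# Rank columns by the AUC a small model reaches using that column alone.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(X.shape[1])])
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.9, random_state=0)

scores = {}
for col in X_train.columns:
    model = GradientBoostingClassifier(random_state=0)
    model.fit(X_train[[col]], y_train)               # plain fit replaces fit_transform here
    proba = model.predict_proba(X_test[[col]])[:, 1]
    scores[col] = roc_auc_score(y_test, proba)       # one call instead of roc_curve + auc

ranking = pd.Series(scores).sort_values(ascending=False)  # best single features first
print(ranking)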

Example 2: write2X

# Required import: from sklearn.ensemble import GradientBoostingClassifier [as alias]
# Or: from sklearn.ensemble.GradientBoostingClassifier import fit_transform [as alias]
# Assumes the project-specific helpers write2X, clear_trainning_set and the label list y
# are defined elsewhere in the original script.
from sklearn.ensemble import GradientBoostingClassifier

X = write2X(aspects_1)[:1000]
clear_trainning_set(X, y)
#clear_trainning_set(X2,y2)
#balance_trainning_set(X,y)

# Count how many positive and negative labels the training set contains.
y1 = 0
y0 = 0
for i in range(len(y)):
    if y[i] == 1:
        y1 += 1
    else:
        y0 += 1

print("We got X for " + str(len(X)) + " and Y for " + str(len(y)))
print("we have " + str(y1) + " for 1 and " + str(y0) + " for 0")

clf = GradientBoostingClassifier(n_estimators=47, learning_rate=0.03, max_depth=3, random_state=0)
test_X = clf.fit_transform(X, y)  # fit the model and keep only the important features (old-sklearn behaviour)
#clf.fit(X,y)
importances = clf.feature_importances_
position_propotion = 0.0  # features 0-8
vertical_propotion = 0.0  # features 9-74
query_propotion = 0.0  # features 75-77
text_propotion = 0.0  # features 78 - last
#print "size of importances " + str(len(importances))

#indices = np.argsort(importances)[::-1]
#for f in range(10):
#	print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))

#print len(test_X[0])
#clf2 = svm.SVC(C=1.0, kernel='rbf', degree=3, gamma=0.0, coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, random_state=None)
test = ['accuracy', 'recall_macro', 'f1_macro', 'roc_auc']  # scoring metric names for later evaluation
Author: luochengleo | Project: User-Preference-Predicting | Lines: 33 | Source file: predicting.py
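The trailing test list looks like a set of scorer names for a later evaluation step; the snippet cuts off before showing it, so the cross_validate call below is only a hedged guess at how such a list is typically consumed, reusing clf, test_X, and y from the code above:

# Hypothetical follow-up: evaluate the classifier on the reduced features with several metrics.
from sklearn.model_selection import cross_validate

results = cross_validate(clf, test_X, y, scoring=test, cv=5)
for metric in test:
    print(metric, results["test_" + metric].mean())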


Note: The sklearn.ensemble.GradientBoostingClassifier.fit_transform examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers, and their copyright remains with the original authors; please follow the corresponding project's license when redistributing or using them, and do not republish without permission.