This article collects typical usage examples of the `classes_` attribute of Python's `sklearn.ensemble.RandomForestClassifier`. If you have been wondering what `RandomForestClassifier.classes_` does, or how it is used in practice, the curated code examples below may help. You can also read more about the class it belongs to, `sklearn.ensemble.RandomForestClassifier`.
One code example of `RandomForestClassifier.classes_` is shown below. Examples are sorted by popularity by default; you can upvote the ones you find useful, and your feedback helps the system recommend better Python code examples.
Example 1: range
# Module to import: from sklearn.ensemble import RandomForestClassifier [as alias]
# Or: from sklearn.ensemble.RandomForestClassifier import classes_ [as alias]
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

feature_importance = []
for k in range(5):
    # Note: base_estimator was renamed to estimator in scikit-learn 1.2.
    model = AdaBoostClassifier(n_estimators=100, learning_rate=1,
                               base_estimator=DecisionTreeClassifier(max_depth=1, class_weight={True: 0.8, False: 0.2}))
    # model1 = RandomForestClassifier(n_estimators=100, max_depth=1, class_weight={True: 0.8, False: 0.2})
    # model2 = BaggingClassifier(base_estimator=DecisionTreeClassifier(max_depth=1, class_weight={True: 0.8, False: 0.2}),
    #                            n_estimators=100, max_samples=1.0, max_features=1.0)
    # model1 = LinearSVC(C=1, class_weight={True: 0.8, False: 0.2})
    # model1 = SGDClassifier(shuffle=True, loss="log", class_weight={True: 0.8, False: 0.2})
    model1 = SVC(kernel="rbf", probability=True, class_weight={True: 0.8, False: 0.2})
    # model2 = SVC(kernel="poly", probability=True, class_weight={True: 0.8, False: 0.2}, degree=2)
    # model1 = LinearDiscriminantAnalysis()
    # svm.classes_ = [True, False]
    # pre_model = BernoulliRBM(learning_rate=0.1, n_components=10, n_iter=20)
    # model3 = Pipeline(steps=[("rbm", pre_model), ("svm", svm)])
    # Caution: fit() recomputes classes_ from the training labels, so these
    # manual assignments are overwritten when fit() is called below.
    model.classes_ = [True, False]
    model1.classes_ = [True, False]
    # model2.classes_ = [True, False]
    # Indices of all positive examples; keep a random 80% subsample for training.
    all_positive = [i for i in range(len(train_1_labels)) if train_1_labels[i]]
    positive_train_labels = list(np.random.choice(all_positive, size=int(len(all_positive) * 0.8), replace=False))
    # positive_test_labels = [i for i in range(len(test_1_labels)) if test_1_labels[i]]
    # Sample an equal number of negatives, excluding every positive index (not just
    # the 80% subsample), so that dropped positives cannot leak in as "negatives".
    negative_train_labels = list(np.random.choice([i for i in range(len(train_1_labels)) if i not in all_positive],
                                                  replace=False, size=len(positive_train_labels)))
    # negative_test_labels = list(np.random.choice([i for i in range(len(test_1_labels)) if i not in positive_test_labels], replace=False, size=len(positive_test_labels)))
    # test_data = [test_data[i] for i in positive_test_labels + negative_test_labels]
    train_data1 = [train_data[i] for i in positive_train_labels + negative_train_labels]
    # test_1_labels = [test_1_labels[i] for i in positive_test_labels + negative_test_labels]
    train_1_labels1 = [train_1_labels[i] for i in positive_train_labels + negative_train_labels]
    model.fit(train_data1, train_1_labels1)
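Since the example above assigns `classes_` by hand, it is worth noting that in scikit-learn this attribute is set by `fit()` itself, which stores the sorted unique labels seen during training; manual assignments before `fit()` are discarded. A minimal sketch of the intended usage (the toy data here is invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data: 4 samples, 2 features, boolean labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([False, False, True, True])

clf = RandomForestClassifier(n_estimators=10, random_state=0)
clf.fit(X, y)

# fit() populates classes_ with the sorted unique labels: array([False, True]).
print(clf.classes_)
# The columns of predict_proba follow the order of classes_,
# so classes_ is how you map probability columns back to labels.
print(clf.predict_proba(X).shape)  # one column per entry in classes_
```

Reading `classes_` after fitting, rather than assigning it, is the supported pattern; the same holds for `AdaBoostClassifier` and the other classifiers tried in the example above.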