

Python PLSRegression.fit_transform Method Code Examples

This article collects typical usage examples of the Python method sklearn.cross_decomposition.PLSRegression.fit_transform. If you are wondering how to call PLSRegression.fit_transform, what it returns, or what it looks like in real code, the selected examples below should help. You can also explore further usage examples of the containing class, sklearn.cross_decomposition.PLSRegression.


Three code examples of PLSRegression.fit_transform are shown below, sorted by popularity by default.
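Before diving into the examples, here is a minimal, self-contained sketch of the method on synthetic data (the array shapes, component count, and random data below are illustrative assumptions, not taken from the examples). When a target y is passed, fit_transform returns the pair (x_scores, y_scores), which is why the examples below index result[0] and result[1]; calling transform with X alone returns only the X scores.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Illustrative synthetic data: 100 samples, 5 features, 1 response
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

pls = PLSRegression(n_components=2)

# With y supplied, fit_transform returns (x_scores, y_scores)
x_scores, y_scores = pls.fit_transform(X, y)
print(x_scores.shape, y_scores.shape)  # (100, 2) (100, 2)

# Without y, transform returns only the projected X
print(pls.transform(X).shape)          # (100, 2)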

Example 1: PLS

# Required import: from sklearn.cross_decomposition import PLSRegression [as an alias]
# Or equivalently: from sklearn.cross_decomposition.PLSRegression import fit_transform [as an alias]
# (the snippet starts mid-plot; the loop header below is reconstructed from the matching loop in part b)
for c, i in zip("rb", target_names):
    ax.scatter(X_r[y == i, 0], X_r[y == i, 1], X_r[y == i, 2], c=c)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.axis('equal')
ax.set_xlim([-1000,4000])
ax.set_ylim([-1000,4000])
ax.set_zlim([-1000,4000])

plt.show()

# part b
PLS1 = PLS(n_components=3)
number_map = {"M": 0,"B": 1}
numeric_y = np.array(map(lambda x : number_map[x],y))
result = PLS1.fit_transform(x,numeric_y)
X_r = result[0]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for c, i, target_name in zip("rb", target_names, target_names):
    ax.scatter(X_r[y == i, 0], X_r[y == i, 1], X_r[y == i, 2], c=c)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.axis('equal')

plt.show()

validation = data[:100]
test = data[100:200]
train = data[200:]
Developer ID: js345, Project: applied-machine-learning, Lines of code: 33, Source: p3.7.py
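Example 1's data loading and its first plotting loop are cut off in the snippet above, and the M/B labels suggest the Wisconsin breast cancer data. For reference, here is a self-contained sketch of the same idea, using sklearn's built-in load_breast_cancer as an assumed stand-in for the data file that p3.7.py actually loads:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection on older matplotlib)
from sklearn.datasets import load_breast_cancer
from sklearn.cross_decomposition import PLSRegression as PLS

data = load_breast_cancer()
x, y_num = data.data, data.target            # 0 = malignant, 1 = benign

pls = PLS(n_components=3)
x_scores, _ = pls.fit_transform(x, y_num)    # keep only the X scores for plotting

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for color, cls, name in zip("rb", [0, 1], ["malignant", "benign"]):
    ax.scatter(x_scores[y_num == cls, 0],
               x_scores[y_num == cls, 1],
               x_scores[y_num == cls, 2],
               c=color, label=name)
ax.set_xlabel('PLS component 1')
ax.set_ylabel('PLS component 2')
ax.set_zlabel('PLS component 3')
ax.legend()
plt.show()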

Example 2: LinearSVC

# Required import: from sklearn.cross_decomposition import PLSRegression [as an alias]
# Or equivalently: from sklearn.cross_decomposition.PLSRegression import fit_transform [as an alias]
# (X, y, pca and pls are defined earlier in feexp.py; the snippet begins here)
# Make predictions using an SVM with PCA and PLS
pca_error = 0
pls_error = 0
n_folds = 10

svc = LinearSVC()

for train_inds, test_inds in KFold(n_splits=n_folds).split(X):  # current API; the original used the pre-0.18 KFold(n, n_folds=...)
    X_train, X_test = X[train_inds], X[test_inds]
    y_train, y_test = y[train_inds], y[test_inds]

    # Use PCA and then classify using an SVM
    X_train2 = pca.fit_transform(X_train)
    X_test2 = pca.transform(X_test)

    svc.fit(X_train2, y_train)
    y_pred = svc.predict(X_test2)
    pca_error += zero_one_loss(y_test, y_pred)

    # Use PLS and then classify using an SVM
    X_train2, y_train2 = pls.fit_transform(X_train, y_train)
    X_test2 = pls.transform(X_test)

    svc.fit(X_train2, y_train)
    y_pred = svc.predict(X_test2)
    pls_error += zero_one_loss(y_test, y_pred)

print(pca_error / n_folds)
print(pls_error / n_folds)
Developer ID: charanpald, Project: tyre-hug, Lines of code: 31, Source: feexp.py
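The snippet begins after the data loading and model definitions in feexp.py, so X, y, pca and pls are not shown. A minimal set of assumed definitions that makes the loop above runnable on its own (the dataset and component counts are placeholders, not the ones used in feexp.py):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import LinearSVC
from sklearn.model_selection import KFold
from sklearn.metrics import zero_one_loss

# Placeholder data and models; feexp.py builds its own
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
pca = PCA(n_components=5)
pls = PLSRegression(n_components=5)

With these in place, the K-fold loop compares the average zero-one loss of an SVM trained on PCA scores against one trained on PLS scores.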

Example 3: zip

# Required import: from sklearn.cross_decomposition import PLSRegression [as an alias]
# Or equivalently: from sklearn.cross_decomposition.PLSRegression import fit_transform [as an alias]
# (plt, np, df, PLS, number_map and the PCA results X_r, y, target_names come from earlier in p3.4.py)
plt.figure()
for c, i, target_name in zip("rgb", ["Iris-setosa", "Iris-versicolor", "Iris-virginica"], target_names):
    plt.scatter(X_r[y == i, 0], X_r[y == i, 1], c=c, label=target_name)
plt.legend()
plt.title('PCA of IRIS dataset')
plt.axis('equal')
plt.show()

# PLS1
PLS1 = PLS(n_components=2)
X = df.values[:, :4]  # DataFrame.values instead of the removed as_matrix()
y = np.array([number_map[label] for label in df.values[:, 4]])  # list comprehension for Python 3
string_map = {-1.2206555615733703: "Iris-setosa", 0: "Iris-versicolor", 1.2206555615733703: "Iris-virginica"}

result = PLS1.fit_transform(X, y)  # (x_scores, y_scores)
# Map the transformed y scores back to species names via exact float keys
# (kept from the original; it assumes the y scores collapse to the three scaled label values)
y = np.array([string_map[val] for val in result[1]])
target_names = ["Iris-setosa", "Iris-versicolor", "Iris-virginica"]
for c, i, target_name in zip("rgb", ["Iris-setosa", "Iris-versicolor", "Iris-virginica"], target_names):
    plt.scatter(result[0][y == i, 0],result[0][y == i, 1], c=c, label=target_name)
plt.legend()
plt.title('PLS1 of IRIS dataset')
plt.axis('equal')

plt.show()

# PLS2
PLS2 = PLS(n_components=2)
X = df.values[:, :4]
y = np.array([number_map[label] for label in df.values[:, 4]])
one_hot_y = np.zeros((len(y),3))
Developer ID: js345, Project: applied-machine-learning, Lines of code: 32, Source: p3.4.py
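The snippet breaks off right after allocating one_hot_y for the PLS2 variant. As a hedged illustration of where it is headed (a plausible completion, not the actual remainder of p3.4.py), here is a self-contained PLS2 sketch on sklearn's built-in iris data, fitting against a one-hot, three-column target:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.cross_decomposition import PLSRegression as PLS

iris = load_iris()
X, y = iris.data, iris.target                   # y is 0/1/2 for the three species

one_hot_y = np.zeros((len(y), 3))
one_hot_y[np.arange(len(y)), y] = 1             # one indicator column per class

PLS2 = PLS(n_components=2)
x_scores, _ = PLS2.fit_transform(X, one_hot_y)  # (x_scores, y_scores)

for c, cls, name in zip("rgb", [0, 1, 2], iris.target_names):
    plt.scatter(x_scores[y == cls, 0], x_scores[y == cls, 1], c=c, label=name)
plt.legend()
plt.title('PLS2 of IRIS dataset')
plt.axis('equal')
plt.show()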


Note: The sklearn.cross_decomposition.PLSRegression.fit_transform examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. For distribution and use, please refer to the corresponding project's license. Do not reproduce without permission.