This page collects typical usage examples of the Python method loader.Loader.save_data. If you have been wondering how Loader.save_data works, how to call it, or what real uses of it look like, the hand-picked code examples below may help. You can also read more about the containing class, loader.Loader.
Three code examples of Loader.save_data are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code examples.
Example 1: algorithms
# Required import: from loader import Loader [as alias]
# Or: from loader.Loader import save_data [as alias]
# Apply the clustering algorithms to the same dataset to which you just applied the dimensionality reduction algorithms (you've probably already done this), treating the clusters as if they were new features. In other words, treat the clustering algorithms as if they were dimensionality reduction algorithms. Again, rerun your neural network learner on the newly projected data.
print("Run the clustering algorithms on the data sets and describe what you see.")
k_means_results('KMeans Curious George No Feature Selection', [X,y], [X_test, y_test], '1st Feature', '2nd Feature', colormap = False)
print("Standardize Data")
stdsc = StandardScaler()
start = time.time()
X_scaled = stdsc.fit_transform(X)
end = time.time()
print("Fit Time: " + str(end - start))
X_test_scaled = stdsc.transform(X_test)
k_means_results('KMeans Curious George Standardized Data No Feature Selection', [X_scaled,y], [X_test_scaled, y_test], '1st Feature', '2nd Feature', colormap = False)
ld.save_data('datasets/Curious_George_train_features_100_percent_standardize_features.csv', [X_scaled, y])
ld.save_data('datasets/Curious_George_test_features_standardize_features.csv', [X_test_scaled, y_test])
print("Apply the dimensionality reduction algorithms to the two datasets and describe what you see.")
stdsc = StandardScaler()
print("Get Explained Variance")
pca = decomposition.PCA()
X_pca = stdsc.fit_transform(X)
start = time.time()
pca.fit(X_pca)
end = time.time()
print("Fit Time: " + str(end - start))
plt.figure('Explained Variance')
plt.plot(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_)
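The explained-variance plot above is the usual tool for deciding how many principal components to keep. A minimal numpy-only sketch of the same idea (standing in for sklearn's `explained_variance_ratio_`; the function names and the 95% threshold are illustrative, not part of the original code):

```python
import numpy as np

def variance_ratio(X):
    """Explained-variance ratio of each principal component of X."""
    Xc = X - X.mean(axis=0)
    # Squared singular values of the centered data are proportional
    # to the variance captured by each component.
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    return var / var.sum()

def components_for(ratio, threshold=0.95):
    """Smallest number of components whose cumulative ratio >= threshold."""
    return int(np.searchsorted(np.cumsum(ratio), threshold) + 1)

rng = np.random.default_rng(0)
# Nearly rank-2 data: two strong directions embedded in 5 dimensions.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))
X += 0.01 * rng.normal(size=(200, 5))

ratio = variance_ratio(X)
k = components_for(ratio, 0.95)
```

With data like this, almost all variance concentrates in the first two components, so `k` comes out small; the same cumulative-sum reading applies to the `pca.explained_variance_ratio_` plotted above.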
Example 2: like
# Required import: from loader import Loader [as alias]
# Or: from loader.Loader import save_data [as alias]
# You are to run a number of experiments. Come up with at least two datasets. If you'd like (and it makes a lot of sense in this case) you can use the ones you used in the first assignment.
# Run the clustering algorithms on the data sets and describe what you see.
# Apply the dimensionality reduction algorithms to the two datasets and describe what you see.
# Reproduce your clustering experiments, but on the data after you've run dimensionality reduction on it.
# Apply the dimensionality reduction algorithms to one of your datasets from assignment #1 (if you've reused the datasets from assignment #1 to do experiments 1-3 above then you've already done this) and rerun your neural network learner on the newly projected data.
# Apply the clustering algorithms to the same dataset to which you just applied the dimensionality reduction algorithms (you've probably already done this), treating the clusters as if they were new features. In other words, treat the clustering algorithms as if they were dimensionality reduction algorithms. Again, rerun your neural network learner on the newly projected data.
print("Run the clustering algorithms on the data sets and describe what you see.")
k_means_results('KMeans Curious George No Feature Selection', [X,y], [X_test, y_test], '1st Feature', '2nd Feature', colormap = False)
for i in range(100):
    print("Random Projection Data components")
    stdsc = StandardScaler()
    rp = random_projection.GaussianRandomProjection(n_components=2)
    X_rp = stdsc.fit_transform(X)
    X_test_rp = stdsc.transform(X_test)
    start = time.time()
    X_rp = rp.fit_transform(X_rp)
    end = time.time()
    print("Fit Time: " + str(end - start))
    X_test_rp = rp.transform(X_test_rp)
    k_means_results('KMeans Curious George RP Feature Selection ' + str(i), [X_rp, y], [X_test_rp, y_test], '1st RP Component', '2nd RP Component', colormap=True)
    plot_scatter('KMeans Curious George Feature Selection ' + str(i), 'RP', X_rp, y, '1st RP Component', '2nd RP Component')
    ld.save_data('datasets/Curious_George_train_features_100_percent_random_projection_' + str(i) + '.csv', [X_rp, y])
    ld.save_data('datasets/Curious_George_test_features_random_projection_' + str(i) + '.csv', [X_test_rp, y_test])
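`GaussianRandomProjection` in the loop above simply multiplies the data by a random Gaussian matrix, which is why the example repeats it for 100 different draws. A minimal numpy sketch of the same operation (the function name is illustrative; the scaling matches how sklearn draws its components from N(0, 1/n_components)):

```python
import numpy as np

def gaussian_random_projection(X, n_components, seed=0):
    """Project X onto n_components random Gaussian directions,
    scaled by 1/sqrt(n_components)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    R = rng.normal(scale=1.0 / np.sqrt(n_components),
                   size=(n_features, n_components))
    return X @ R

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 20))
X_rp = gaussian_random_projection(X, n_components=2)
```

Because the projection is a fixed linear map once the seed is chosen, test data must be projected with the same matrix, mirroring the `rp.fit_transform` / `rp.transform` split in the example.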
Example 3: plot_scatter
# Required import: from loader import Loader [as alias]
# Or: from loader.Loader import save_data [as alias]
k_means_results(
    "KMeans Live Linear Kernel PCA Feature Selection",
    [X_pca, y],
    [X_test_pca, y_test],
    "1st Kernel Principal Component",
    "2nd Kernel Principal Component",
    colormap=True,
)
plot_scatter(
    "KMeans Live Feature Selection ",
    "Linear Kernel PCA",
    X_pca,
    y,
    "1st Kernel Principal Component",
    "2nd Kernel Principal Component",
)
ld.save_data("datasets/Live_train_features_100_percent_linear_kernel_pca_components.csv", [X_pca, y])
ld.save_data("datasets/Live_test_features_linear_kernel_pca_components.csv", [X_test_pca, y_test])
start = time.time()
print("Poly Kernel PCA Data components")
stdsc = StandardScaler()
pca = decomposition.KernelPCA(n_components=2, kernel="poly")
X_pca = stdsc.fit_transform(X)
X_test_pca = stdsc.transform(X_test)
start = time.time()
X_pca = pca.fit_transform(X_pca)
end = time.time()
print("Fit time: " + str(end - start))
X_test_pca = pca.transform(X_test_pca)
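The assignment text quoted in these examples ends with treating cluster assignments as new features before rerunning the neural network. A minimal numpy-only sketch of that step, assuming a plain Lloyd's k-means in place of sklearn's `KMeans` (all names here are illustrative, not from the original code):

```python
import numpy as np

def kmeans_labels(X, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means; returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Distance from every point to every center, then nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def append_cluster_features(X, labels, k):
    """One-hot encode the cluster labels and append them as new columns."""
    onehot = np.eye(k)[labels]
    return np.hstack([X, onehot])

rng = np.random.default_rng(1)
# Two well-separated blobs as stand-in data.
X = np.vstack([rng.normal(0, 0.3, size=(50, 2)),
               rng.normal(5, 0.3, size=(50, 2))])
labels = kmeans_labels(X, k=2)
X_aug = append_cluster_features(X, labels, k=2)
```

The augmented matrix `X_aug` (original features plus one one-hot column per cluster) is what would then be fed to the neural network learner, exactly as the reduced `X_pca` or `X_rp` matrices are saved with `ld.save_data` above.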