

Python Loader.load_data Method Code Examples

This article collects typical usage examples of the loader.Loader.load_data method in Python. If you are asking yourself how Loader.load_data is called in practice, what its arguments look like, or where to find working examples, the hand-picked code samples below should help. You can also browse further usage examples of the enclosing loader.Loader class.


Four code examples of the Loader.load_data method are shown below, sorted by popularity by default. Upvoting the examples you find useful helps the site recommend better Python code samples.
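Before the examples, here is a rough idea of what the load_data call does in all of them: it takes the path of a CSV file and returns a feature matrix together with a label vector. The actual Loader class lives in the author's repository and is not reproduced on this page, so the following is only a minimal stand-in sketch, assuming the label sits in the last CSV column:

import numpy as np

class Loader(object):
    # Hypothetical stand-in for loader.Loader: the real class is not shown on
    # this page and may differ. Assumed behaviour: read a CSV file and return
    # (features, labels), with the label in the last column.
    def load_data(self, path):
        data = np.genfromtxt(path, delimiter=',', skip_header=1)
        X = data[:, :-1]   # feature columns
        y = data[:, -1]    # label column
        return [X, y]

# Usage, matching the calls in the examples below:
# ld = Loader()
# [X, y] = ld.load_data('datasets/train.csv')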

Example 1: Loader

# Required import: from loader import Loader [as alias]
# Or: from loader.Loader import load_data [as alias]
from __future__ import print_function
from loader import Loader
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

ld = Loader()

plt_X = []
plt_Y = []

# Load the training features and labels.
[X, y] = ld.load_data('datasets/train.csv')

# Fit k-means for an increasing number of clusters and record the inertia
# (within-cluster sum of squares) to build an elbow curve.
for n_clusters in range(1, 100):
    km = KMeans(n_clusters=n_clusters)
    km.fit(X)
    plt_X.append(n_clusters)
    plt_Y.append(km.inertia_)

plt.plot(plt_X, plt_Y)
plt.ylabel('Within groups sum of squares')
plt.xlabel('Number of Clusters')
plt.savefig('figures/' + 'Raw_Kmeans_Curious_George_Elbow_Curve.png')

Developer ID: onaclov2000, Project: GATech, Lines: 27, Source: raw_curious_elbow_kmeans.py
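Example 1 sweeps k and saves the elbow curve, leaving the choice of k to visual inspection. As a complement (not part of the original file), scikit-learn's silhouette score can turn the same sweep into a numeric criterion; X here is assumed to be the feature matrix returned by load_data:

from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k_by_silhouette(X, k_min=2, k_max=15):
    # Silhouette score needs at least 2 clusters; higher is better.
    scores = {}
    for k in range(k_min, k_max + 1):
        labels = KMeans(n_clusters=k).fit_predict(X)
        scores[k] = silhouette_score(X, labels)
    return max(scores, key=scores.get), scores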

Example 2: Loader

# Required import: from loader import Loader [as alias]
# Or: from loader.Loader import load_data [as alias]
from loader import Loader
import matplotlib.pyplot as plt
import numpy as np


def plot_clusters(figure_identifier, points, labels, colors, name, classifier,
                  x_label, y_label):
    # NOTE: assumed signature and cluster loop -- the original excerpt begins
    # mid-function, so everything above plt.scatter is a reconstruction.
    for i in np.unique(labels).astype(int):
        px = points[labels == i, 0]
        py = points[labels == i, 1]
        plt.scatter(px, py, c=colors[i])
    plt.xlabel(x_label)
    plt.ylabel(y_label)
    plt.title(name)
    plt.savefig('figures/' + name.replace(' ', '_') + classifier + '.png')
    figure_identifier.clf()
    plt.close(figure_identifier)


ld = Loader()
# np.random.seed(5)

centers = [[1, 1], [-1, -1], [1, -1]]

# Load the training and test feature sets.
[X, y] = ld.load_data('datasets/Curious_George_train_features_100_percent.csv')
[X_test, y_test] = ld.load_data('datasets/Curious_George_test_features.csv')


# You are to implement (or find the code for) six algorithms. The first two are clustering algorithms:

    # k-means clustering
    # Expectation Maximization

# You can choose your own measures of distance/similarity. Naturally, you'll have to justify your choices, but you're practiced at that sort of thing by now.

# The last four algorithms are dimensionality reduction algorithms:

    # PCA
    # ICA
    # Randomized Projections
Developer ID: onaclov2000, Project: GATech, Lines: 33, Source: curious_random_projection_kmeans.py
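The file name curious_random_projection_kmeans.py and the comments above suggest that the part of the file cut off by this excerpt combines randomized projections with k-means. A minimal sketch of that combination with scikit-learn, assuming X is the training matrix loaded above and using placeholder values for the number of components and clusters:

from sklearn.random_projection import GaussianRandomProjection
from sklearn.cluster import KMeans

# Project the features into a lower-dimensional random subspace, then cluster.
rp = GaussianRandomProjection(n_components=10, random_state=0)
X_rp = rp.fit_transform(X)

km = KMeans(n_clusters=3, random_state=0)
labels = km.fit_predict(X_rp)
print('inertia after random projection:', km.inertia_)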

Example 3: Loader

# Required import: from loader import Loader [as alias]
# Or: from loader.Loader import load_data [as alias]
from loader import Loader
import matplotlib.pyplot as plt


def plot_training_results(title, named_results):
    # NOTE: assumed signature and subplot loop -- the original excerpt begins
    # mid-function, so everything above plt.xticks is a reconstruction.
    for i, (name, result) in enumerate(named_results):
        plt.subplot(1, len(named_results), i + 1)
        plt.plot(result, label=name)
        plt.xticks(())
        plt.yticks(())
        plt.title(name)

    plt.legend(loc='lower right', prop=dict(size=12))

    plt.savefig('figures/' + title.replace(' ', '_') + '_Training_results.png')
    # plt.show()


ld = Loader()
# np.random.seed(5)

centers = [[1, 1], [-1, -1], [1, -1]]

# Load the training and test feature sets.
[X, y] = ld.load_data('datasets/Live_Television.train_features_100_percent.csv')
[X_test, y_test] = ld.load_data('datasets/Live_Television.test_features.csv')

# You are to implement (or find the code for) six algorithms. The first two are clustering algorithms:
    # Expectation Maximization

# The last four algorithms are dimensionality reduction algorithms:
    # PCA
    # ICA
    # Randomized Projections
    # Any other feature selection algorithm you desire

# You are to run a number of experiments. Come up with at least two datasets. If you'd like (and it makes a lot of sense in this case) you can use the ones you used in the first assignment.

    # Run the clustering algorithms on the data sets and describe what you see.
    # Apply the dimensionality reduction algorithms to the two datasets and describe what you see.
Developer ID: onaclov2000, Project: GATech, Lines: 33, Source: live_gmm.py
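The file name live_gmm.py and the Expectation Maximization bullet indicate that this excerpt belongs to a Gaussian-mixture experiment whose model-fitting code is cut off. A minimal sketch of EM-style clustering with scikit-learn's GaussianMixture, assuming X and X_test are the arrays loaded above and using a placeholder component count:

from sklearn.mixture import GaussianMixture

# Fit a Gaussian mixture with EM and assign each point to its most likely component.
gmm = GaussianMixture(n_components=3, covariance_type='full', random_state=0)
gmm.fit(X)
train_labels = gmm.predict(X)
test_labels = gmm.predict(X_test)
print('BIC on training data:', gmm.bic(X))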

Example 4: Loader

# Required import: from loader import Loader [as alias]
# Or: from loader.Loader import load_data [as alias]
from __future__ import print_function
from loader import Loader
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

ld = Loader()

plt_X = []
plt_Y = []

# Load the training features and labels.
[X, y] = ld.load_data("datasets/Curious_George_train_features_100_percent.csv")

# Fit k-means for k = 1..15 and record the inertia (within-cluster sum of
# squares) to build an elbow curve.
for n_clusters in range(1, 16):
    km = KMeans(n_clusters=n_clusters)
    km.fit(X)
    plt_X.append(n_clusters)
    plt_Y.append(km.inertia_)

plt.plot(plt_X, plt_Y)
plt.ylabel("Within groups sum of squares")
plt.xlabel("Number of Clusters")
plt.savefig("figures/" + "Kmeans_Curious_George_Elbow_Curve.png")
Developer ID: onaclov2000, Project: GATech, Lines: 28, Source: curious_elbow_kmeans.py


Note: The loader.Loader.load_data examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by their respective authors, and copyright remains with those authors; for distribution and use, please refer to each project's license. Do not reproduce this article without permission.