This article briefly introduces the usage of pyspark.mllib.regression.LassoModel.

Usage:
class pyspark.mllib.regression.LassoModel(weights, intercept)
A linear regression model derived from a least-squares fit with an L1 penalty term.
New in version 0.9.0.
Examples:
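For context, lasso regression (the fit this model represents) minimizes a least-squares loss plus an L1 penalty on the weights. A sketch of that objective in standard notation (the symbols below are illustrative and not taken from the original page):

    \min_{w,\,b}\ \frac{1}{2n}\sum_{i=1}^{n}\bigl(y_i - w^{\top}x_i - b\bigr)^2 + \lambda\,\lVert w\rVert_1

Here λ plays the role of the regParam argument of LassoWithSGD.train, and the L1 term tends to drive individual weights exactly to zero, which is what distinguishes lasso from plain least-squares regression.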
>>> import numpy as np
>>> from pyspark.mllib.linalg import SparseVector
>>> from pyspark.mllib.regression import LabeledPoint, LassoModel, LassoWithSGD
>>> from pyspark.mllib.regression import LinearRegressionWithSGD
>>> # `sc` below is an already-running SparkContext (e.g. the pyspark shell default).
>>> data = [
...     LabeledPoint(0.0, [0.0]),
...     LabeledPoint(1.0, [1.0]),
...     LabeledPoint(3.0, [2.0]),
...     LabeledPoint(2.0, [3.0])
... ]
>>> lrm = LassoWithSGD.train(
...     sc.parallelize(data), iterations=10, initialWeights=np.array([1.0]))
>>> abs(lrm.predict(np.array([0.0])) - 0) < 0.5
True
>>> abs(lrm.predict(np.array([1.0])) - 1) < 0.5
True
>>> abs(lrm.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5
True
>>> abs(lrm.predict(sc.parallelize([[1.0]])).collect()[0] - 1) < 0.5
True
>>> import os, tempfile
>>> path = tempfile.mkdtemp()
>>> lrm.save(sc, path)
>>> sameModel = LassoModel.load(sc, path)
>>> abs(sameModel.predict(np.array([0.0])) - 0) < 0.5
True
>>> abs(sameModel.predict(np.array([1.0])) - 1) < 0.5
True
>>> abs(sameModel.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5
True
>>> from shutil import rmtree
>>> try:
...     rmtree(path)
... except OSError:
...     pass
>>> data = [
...     LabeledPoint(0.0, SparseVector(1, {0: 0.0})),
...     LabeledPoint(1.0, SparseVector(1, {0: 1.0})),
...     LabeledPoint(3.0, SparseVector(1, {0: 2.0})),
...     LabeledPoint(2.0, SparseVector(1, {0: 3.0}))
... ]
>>> lrm = LinearRegressionWithSGD.train(sc.parallelize(data), iterations=10,
...     initialWeights=np.array([1.0]))
>>> abs(lrm.predict(np.array([0.0])) - 0) < 0.5
True
>>> abs(lrm.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5
True
>>> lrm = LassoWithSGD.train(sc.parallelize(data), iterations=10, step=1.0,
...     regParam=0.01, miniBatchFraction=1.0, initialWeights=np.array([1.0]),
...     intercept=True, validateData=True)
>>> abs(lrm.predict(np.array([0.0])) - 0) < 0.5
True
>>> abs(lrm.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5
True
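Beyond training with LassoWithSGD as in the doctest above, a LassoModel can also be constructed directly from a weight vector and an intercept, since the class signature at the top takes exactly those two arguments. The snippet below is a minimal sketch of that; the weight and intercept values are made up for illustration, and no SparkContext is needed when predicting on a single local vector.

>>> import numpy as np
>>> from pyspark.mllib.linalg import Vectors
>>> from pyspark.mllib.regression import LassoModel
>>> # Hypothetical coefficients, e.g. copied from a previously trained model.
>>> model = LassoModel(Vectors.dense([0.9]), 0.05)
>>> # predict() on a local vector computes weights.dot(x) + intercept.
>>> abs(model.predict(np.array([2.0])) - (0.9 * 2.0 + 0.05)) < 1e-9
True
>>> model.weights
DenseVector([0.9])
>>> model.intercept
0.05

The fitted parameters of a trained model are exposed the same way, through the weights and intercept attributes shown above.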
Related usage
- Python pyspark LDA.setLearningDecay usage and code examples
- Python pyspark LogisticRegressionWithLBFGS.train usage and code examples
- Python pyspark LDA.setDocConcentration usage and code examples
- Python pyspark LDA usage and code examples
- Python pyspark LDAModel usage and code examples
- Python pyspark LinearRegressionModel usage and code examples
- Python pyspark LDA.setOptimizer usage and code examples
- Python pyspark LinearSVC usage and code examples
- Python pyspark LDA.setK usage and code examples
- Python pyspark LDA.setLearningOffset usage and code examples
- Python pyspark LinearRegression usage and code examples
- Python pyspark LDA.setTopicDistributionCol usage and code examples
- Python pyspark LogisticRegressionModel usage and code examples
- Python pyspark LogisticRegression usage and code examples
- Python pyspark LDA.setKeepLastCheckpoint usage and code examples
- Python pyspark LDA.setSubsamplingRate usage and code examples
- Python pyspark LDA.setTopicConcentration usage and code examples
- Python pyspark LDA.setOptimizeDocConcentration usage and code examples
- Python pyspark create_map usage and code examples
- Python pyspark date_add usage and code examples
- Python pyspark DataFrame.to_latex usage and code examples
- Python pyspark DataStreamReader.schema usage and code examples
- Python pyspark MultiIndex.size usage and code examples
- Python pyspark arrays_overlap usage and code examples
- Python pyspark Series.asof usage and code examples
Note: This article was selected and compiled by 纯净天空 from the original English work pyspark.mllib.regression.LassoModel on spark.apache.org. Unless otherwise stated, copyright of the original code belongs to its original authors; please do not reproduce or copy this translation without permission or authorization.