

Python pyspark Normalizer Usage and Code Examples


This article briefly introduces the usage of pyspark.ml.feature.Normalizer.

Usage:

class pyspark.ml.feature.Normalizer(*, p=2.0, inputCol=None, outputCol=None)

Normalizes a vector to unit norm using the given p-norm, i.e. each vector is divided by its p-norm so the result has norm 1.

New in version 1.4.0.
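As a plain-Python sanity check (no Spark required), the p-norm scaling that the transformer applies can be reproduced by hand; the helper below is a minimal sketch, and its outputs match the doctest values in the Examples section:

```python
def p_normalize(v, p):
    """Divide each component by the vector's p-norm: ||v||_p = (sum(|x|^p))^(1/p)."""
    norm = sum(abs(x) ** p for x in v) ** (1.0 / p)
    return [x / norm for x in v]

# p=2: [3.0, -4.0] has L2 norm 5.0, so the result is [0.6, -0.8]
print(p_normalize([3.0, -4.0], 2.0))

# p=1: the L1 norm is 7.0, so the result is roughly [0.4286, -0.5714]
print(p_normalize([3.0, -4.0], 1.0))
```

The same arithmetic applies per row of the DataFrame column: Normalizer is a stateless transformer (no `fit` step), so each vector is scaled independently.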

Examples

>>> from pyspark.ml.linalg import Vectors
>>> svec = Vectors.sparse(4, {1: 4.0, 3: 3.0})
>>> df = spark.createDataFrame([(Vectors.dense([3.0, -4.0]), svec)], ["dense", "sparse"])
>>> normalizer = Normalizer(p=2.0)
>>> normalizer.setInputCol("dense")
Normalizer...
>>> normalizer.setOutputCol("features")
Normalizer...
>>> normalizer.transform(df).head().features
DenseVector([0.6, -0.8])
>>> normalizer.setParams(inputCol="sparse", outputCol="freqs").transform(df).head().freqs
SparseVector(4, {1: 0.8, 3: 0.6})
>>> params = {normalizer.p: 1.0, normalizer.inputCol: "dense", normalizer.outputCol: "vector"}
>>> normalizer.transform(df, params).head().vector
DenseVector([0.4286, -0.5714])
>>> normalizerPath = temp_path + "/normalizer"
>>> normalizer.save(normalizerPath)
>>> loadedNormalizer = Normalizer.load(normalizerPath)
>>> loadedNormalizer.getP() == normalizer.getP()
True
>>> loadedNormalizer.transform(df).take(1) == normalizer.transform(df).take(1)
True



Note: This article was selected and compiled by 纯净天空 from the original English work pyspark.ml.feature.Normalizer at spark.apache.org. Unless otherwise stated, copyright in the original code belongs to its authors; do not reproduce or copy this translation without permission.