This article briefly introduces the usage of pyspark.ml.clustering.PowerIterationClustering.

Usage:
class pyspark.ml.clustering.PowerIterationClustering(*, k=2, maxIter=20, initMode='random', srcCol='src', dstCol='dst', weightCol=None)
Power Iteration Clustering (PIC) is a scalable graph clustering algorithm developed by Lin and Cohen. From the abstract: PIC finds a very low-dimensional embedding of a dataset by using truncated power iteration on a normalized pairwise similarity matrix of the data.
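To make the "truncated power iteration on a normalized similarity matrix" idea concrete, here is a minimal single-machine NumPy sketch of that update. It is an illustration only, not Spark's distributed implementation; the function name pic_embedding, its parameters, and the stopping rule are illustrative assumptions.

# Minimal sketch of the power-iteration idea behind PIC (not Spark's code).
import numpy as np

def pic_embedding(affinity, max_iter=40, tol=1e-8, seed=None):
    """Return a 1-D embedding from truncated power iteration on the
    row-normalized affinity matrix W = D^-1 A (assumes positive row sums)."""
    rng = np.random.default_rng(seed)
    n = affinity.shape[0]
    degrees = affinity.sum(axis=1)
    W = affinity / degrees[:, None]          # row-normalize: W = D^-1 A
    v = rng.random(n)
    v /= np.abs(v).sum()                     # start on the L1 sphere
    prev_delta = np.inf
    for _ in range(max_iter):
        v_new = W @ v
        v_new /= np.abs(v_new).sum()         # renormalize each step
        delta = np.abs(v_new - v).max()
        v = v_new
        if abs(prev_delta - delta) < tol:    # stop once the change stabilizes
            break
        prev_delta = delta
    return v

Clustering the entries of the returned vector (for example with k-means into k groups) yields cluster assignments analogous to what assignClusters() produces.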
This class is not yet an Estimator/Transformer; use the assignClusters() method to run the PowerIterationClustering algorithm.

New in version 2.4.0.
Notes:
See Wikipedia on Spectral clustering.
Examples:
>>> data = [(1, 0, 0.5),
...         (2, 0, 0.5), (2, 1, 0.7),
...         (3, 0, 0.5), (3, 1, 0.7), (3, 2, 0.9),
...         (4, 0, 0.5), (4, 1, 0.7), (4, 2, 0.9), (4, 3, 1.1),
...         (5, 0, 0.5), (5, 1, 0.7), (5, 2, 0.9), (5, 3, 1.1), (5, 4, 1.3)]
>>> df = spark.createDataFrame(data).toDF("src", "dst", "weight").repartition(1)
>>> pic = PowerIterationClustering(k=2, weightCol="weight")
>>> pic.setMaxIter(40)
PowerIterationClustering...
>>> assignments = pic.assignClusters(df)
>>> assignments.sort(assignments.id).show(truncate=False)
+---+-------+
|id |cluster|
+---+-------+
|0  |0      |
|1  |0      |
|2  |0      |
|3  |0      |
|4  |0      |
|5  |1      |
+---+-------+
...
>>> pic_path = temp_path + "/pic"
>>> pic.save(pic_path)
>>> pic2 = PowerIterationClustering.load(pic_path)
>>> pic2.getK()
2
>>> pic2.getMaxIter()
40
>>> pic2.assignClusters(df).take(6) == assignments.take(6)
True
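The doctest above assumes an existing spark session and a temp_path variable for the save/load round trip. A self-contained variant (without persistence) might look like the following sketch; the local[2] master and the app name are arbitrary choices made for this example.

# Standalone adaptation of the example above; assumes PySpark >= 2.4 is installed.
from pyspark.sql import SparkSession
from pyspark.ml.clustering import PowerIterationClustering

spark = SparkSession.builder.master("local[2]").appName("pic-example").getOrCreate()

# Each row is an undirected edge (src, dst) with a similarity weight.
data = [(1, 0, 0.5),
        (2, 0, 0.5), (2, 1, 0.7),
        (3, 0, 0.5), (3, 1, 0.7), (3, 2, 0.9),
        (4, 0, 0.5), (4, 1, 0.7), (4, 2, 0.9), (4, 3, 1.1),
        (5, 0, 0.5), (5, 1, 0.7), (5, 2, 0.9), (5, 3, 1.1), (5, 4, 1.3)]
df = spark.createDataFrame(data, ["src", "dst", "weight"])

pic = PowerIterationClustering(k=2, maxIter=40, weightCol="weight")
assignments = pic.assignClusters(df)   # DataFrame with columns `id` and `cluster`
assignments.sort("id").show(truncate=False)

spark.stop()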
Related usage
- Python pyspark PowerIterationClusteringModel usage and code examples
- Python pyspark PolynomialExpansion usage and code examples
- Python pyspark PandasCogroupedOps.applyInPandas usage and code examples
- Python pyspark PCA usage and code examples
- Python pyspark PrefixSpanModel usage and code examples
- Python pyspark PrefixSpan usage and code examples
- Python pyspark ParamGridBuilder usage and code examples
- Python pyspark create_map usage and code examples
- Python pyspark date_add usage and code examples
- Python pyspark DataFrame.to_latex usage and code examples
- Python pyspark DataStreamReader.schema usage and code examples
- Python pyspark MultiIndex.size usage and code examples
- Python pyspark arrays_overlap usage and code examples
- Python pyspark Series.asof usage and code examples
- Python pyspark DataFrame.align usage and code examples
- Python pyspark Index.is_monotonic_decreasing usage and code examples
- Python pyspark IsotonicRegression usage and code examples
- Python pyspark DataFrame.plot.bar usage and code examples
- Python pyspark DataFrame.to_delta usage and code examples
- Python pyspark element_at usage and code examples
- Python pyspark explode usage and code examples
- Python pyspark MultiIndex.hasnans usage and code examples
- Python pyspark Series.to_frame usage and code examples
- Python pyspark DataFrame.quantile usage and code examples
- Python pyspark Column.withField usage and code examples
Note: This article was selected and compiled by 純淨天空 from the original English work pyspark.ml.clustering.PowerIterationClustering on spark.apache.org. Unless otherwise stated, the copyright of the original code belongs to its original authors; please do not reproduce or copy this translation without permission or authorization.