This article briefly introduces the usage of
pyspark.pandas.DataFrame.spark.persist
Signature:
spark.persist(storage_level: pyspark.storagelevel.StorageLevel = StorageLevel(True, True, False, False, 1)) → CachedDataFrame
Yields and caches the current DataFrame with a specific storage level. If a StorageLevel is not given, the
MEMORY_AND_DISK
level is used by default, as in PySpark. The pandas-on-Spark DataFrame is yielded as a protected resource and its corresponding data is cached; once execution of the context ends, the data is uncached.
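The default in the signature above, StorageLevel(True, True, False, False, 1), packs five positional flags: use disk, use memory, use off-heap, deserialized, and replication count. A minimal sketch with a stand-in namedtuple (a hypothetical mirror for illustration, not the real pyspark.storagelevel.StorageLevel class) shows how the common levels map onto those flags:

```python
from collections import namedtuple

# Hypothetical stand-in for pyspark.storagelevel.StorageLevel, used only to
# illustrate what the five positional constructor arguments mean; the real
# class takes (useDisk, useMemory, useOffHeap, deserialized, replication).
Level = namedtuple(
    "Level",
    ["use_disk", "use_memory", "use_off_heap", "deserialized", "replication"],
)

# persist() default: keep in memory AND spill to disk, serialized, 1 replica
MEMORY_AND_DISK = Level(True, True, False, False, 1)
# keep in memory only, no disk fallback
MEMORY_ONLY = Level(False, True, False, False, 1)
# store on disk only
DISK_ONLY = Level(True, False, False, False, 1)

print(MEMORY_AND_DISK)
```

Since deserialized is False in all three, each level stores data in serialized form, which matches the "Serialized" wording in the storage_level output shown in the examples below.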
Examples:
>>> import pyspark
>>> import pyspark.pandas as ps
>>> df = ps.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)],
...                   columns=['dogs', 'cats'])
>>> df
   dogs  cats
0   0.2   0.3
1   0.0   0.6
2   0.6   0.0
3   0.2   0.1
Set the StorageLevel to
MEMORY_ONLY
.

>>> with df.spark.persist(pyspark.StorageLevel.MEMORY_ONLY) as cached_df:
...     print(cached_df.spark.storage_level)
...     print(cached_df.count())
...
Memory Serialized 1x Replicated
dogs    4
cats    4
dtype: int64
Set the StorageLevel to
DISK_ONLY
.

>>> with df.spark.persist(pyspark.StorageLevel.DISK_ONLY) as cached_df:
...     print(cached_df.spark.storage_level)
...     print(cached_df.count())
...
Disk Serialized 1x Replicated
dogs    4
cats    4
dtype: int64
If a StorageLevel is not given,
MEMORY_AND_DISK
is used by default.

>>> with df.spark.persist() as cached_df:
...     print(cached_df.spark.storage_level)
...     print(cached_df.count())
...
Disk Memory Serialized 1x Replicated
dogs    4
cats    4
dtype: int64
>>> df = df.spark.persist()
>>> df.to_pandas().mean(axis=1)
0    0.25
1    0.30
2    0.30
3    0.15
dtype: float64
To uncache the DataFrame, use the
unpersist
function.

>>> df.spark.unpersist()
Related usage
- Python pyspark DataFrame.spark.to_table usage and code examples
- Python pyspark DataFrame.spark.frame usage and code examples
- Python pyspark DataFrame.spark.cache usage and code examples
- Python pyspark DataFrame.spark.to_spark_io usage and code examples
- Python pyspark DataFrame.spark.coalesce usage and code examples
- Python pyspark DataFrame.spark.repartition usage and code examples
- Python pyspark DataFrame.spark.hint usage and code examples
- Python pyspark DataFrame.spark.apply usage and code examples
- Python pyspark DataFrame.sum usage and code examples
- Python pyspark DataFrame.sort_index usage and code examples
- Python pyspark DataFrame.sem usage and code examples
- Python pyspark DataFrame.sort_values usage and code examples
- Python pyspark DataFrame.sampleBy usage and code examples
- Python pyspark DataFrame.select usage and code examples
- Python pyspark DataFrame.style usage and code examples
- Python pyspark DataFrame.sortWithinPartitions usage and code examples
- Python pyspark DataFrame.skew usage and code examples
- Python pyspark DataFrame.set_index usage and code examples
- Python pyspark DataFrame.sub usage and code examples
- Python pyspark DataFrame.shape usage and code examples
- Python pyspark DataFrame.sample usage and code examples
- Python pyspark DataFrame.std usage and code examples
- Python pyspark DataFrame.schema usage and code examples
- Python pyspark DataFrame.size usage and code examples
- Python pyspark DataFrame.show usage and code examples
Note: This article was selected and compiled by 純淨天空 from the original English work pyspark.pandas.DataFrame.spark.persist on spark.apache.org. Unless otherwise stated, the copyright of the original code belongs to the original author; do not reproduce or copy this translation without permission or authorization.