

Python LogisticRegression.explainParams Method Code Examples

This article collects typical usage examples of the pyspark.ml.classification.LogisticRegression.explainParams method in Python. If you are wondering what LogisticRegression.explainParams does, how to call it, or what it looks like in practice, the curated examples below may help. You can also explore further usage examples of the enclosing class, pyspark.ml.classification.LogisticRegression.


Two code examples of the LogisticRegression.explainParams method are shown below, ordered by popularity by default.

Example 1: explainParams

# Required import: from pyspark.ml.classification import LogisticRegression [as alias]
# Or: from pyspark.ml.classification.LogisticRegression import explainParams [as alias]
# COMMAND ----------

# MAGIC %md The evaluator currently accepts 2 kinds of metrics - areaUnderROC and areaUnderPR.
# MAGIC We can set it to areaUnderPR by using evaluator.setMetricName("areaUnderPR").
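
# COMMAND ----------

# Note: this cell is not part of the original snippet. It is a minimal sketch of
# switching the evaluator metric; the evaluator referred to above is assumed to be a
# BinaryClassificationEvaluator, which the original notebook creates in a cell that
# is not shown here.
from pyspark.ml.evaluation import BinaryClassificationEvaluator

evaluator = BinaryClassificationEvaluator()   # metricName defaults to "areaUnderROC"
evaluator.setMetricName("areaUnderPR")        # switch to area under the precision-recall curve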

# COMMAND ----------

# MAGIC %md
# MAGIC Now we will try tuning the model with the ParamGridBuilder and the CrossValidator.
# MAGIC 
# MAGIC If you are unsure what params are available for tuning, you can use explainParams() to print a list of all params.

# COMMAND ----------

print(lr.explainParams())

# COMMAND ----------

# MAGIC %md Since we specify 5 values for regParam, 4 values for maxIter, and 5 values for elasticNetParam, this grid contains 5 x 4 x 5 = 100 parameter settings for CrossValidator to choose from. We will create a 5-fold cross validator.

# COMMAND ----------

from pyspark.ml.tuning import ParamGridBuilder, CrossValidator

# Create ParamGrid for Cross Validation
paramGrid = (ParamGridBuilder()
             .addGrid(lr.regParam, [0.01, 0.1, 0.5, 1.0, 2.0])
             .addGrid(lr.elasticNetParam, [0.0, 0.1, 0.5, 0.8, 1.0])
             .addGrid(lr.maxIter, [1, 5, 10, 20])
             .build())
Developer: yoavfreund, Project: databricks, Lines of code: 32, Source file: Binary+Classification+Algorithms.py
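
Example 1 stops at the grid definition. As a minimal sketch of the 5-fold cross validator the notebook text refers to, the grid could be wired into a CrossValidator roughly as follows; the evaluator and the training DataFrame (here called trainingData, a placeholder name) are assumed to come from cells that are not shown above.

from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator

evaluator = BinaryClassificationEvaluator()          # areaUnderROC by default
cv = CrossValidator(estimator=lr,
                    estimatorParamMaps=paramGrid,
                    evaluator=evaluator,
                    numFolds=5)                      # 5-fold cross validation
# cvModel = cv.fit(trainingData)                     # trainingData stands in for the training DataFrame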

Example 2: SparkContext

# Required import: from pyspark.ml.classification import LogisticRegression [as alias]
# Or: from pyspark.ml.classification.LogisticRegression import explainParams [as alias]
# Imports added to make this snippet self-contained (Vectors is taken from
# pyspark.ml.linalg here; older Spark 1.x code used pyspark.mllib.linalg instead):
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

sc = SparkContext(appName="ML Example")
sc.setLogLevel("FATAL")
sqlContext = SQLContext(sc)

# Prepare training data from a list of (label, features) tuples.
training = sqlContext.createDataFrame([
    (1.0, Vectors.dense([0.0, 1.1, 0.1])),
    (0.0, Vectors.dense([2.0, 1.0, -1.0])),
    (0.0, Vectors.dense([2.0, 1.3, 1.0])),
    (1.0, Vectors.dense([0.0, 1.2, -0.5]))], ["label", "features"])

# Create a LogisticRegression instance. This instance is an Estimator.
lr = LogisticRegression(maxIter=10, regParam=0.01)
# Print out the parameters, documentation, and any default values.
print("LogisticRegression parameters:\n" + lr.explainParams() + "\n")

# Learn a LogisticRegression model. This uses the parameters stored in lr.
model1 = lr.fit(training)

# Since model1 is a Model (i.e., a transformer produced by an Estimator),
# we can view the parameters it used during fit().
# This prints the parameter (name: value) pairs, where names are unique IDs for this
# LogisticRegression instance.
print("Model 1 was fit using parameters: ")
print(model1.extractParamMap())

# We may alternatively specify parameters using a Python dictionary as a paramMap
paramMap = {lr.maxIter: 20}
paramMap[lr.maxIter] = 30 # Specify 1 Param, overwriting the original maxIter.
paramMap.update({lr.regParam: 0.1, lr.threshold: 0.55}) # Specify multiple Params.
Developer: cnglen, Project: learning_spark, Lines of code: 32, Source file: ml_demo.py
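
The second snippet is truncated before the paramMap is used. A minimal sketch of how such a param map is typically applied, assuming the lr, training, and paramMap objects defined above:

# Fit a new model using paramMap; its values override parameters previously set on lr.
model2 = lr.fit(training, paramMap)
print("Model 2 was fit using parameters: ")
print(model2.extractParamMap())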


Note: The pyspark.ml.classification.LogisticRegression.explainParams examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers, and copyright of the source code belongs to the original authors. Please follow the license of the corresponding project when distributing or using the code; do not reproduce this article without permission.