If parameters of a model specification need to be modified, update() can be used in lieu of recreating the object from scratch.
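For example, a minimal sketch of the basic pattern (parsnip attached; the specification and argument values below are illustrative only):

library(parsnip)

# adjust one main argument of an existing specification instead of rebuilding it
spec <- linear_reg(penalty = 10, mixture = 0.5)
update(spec, penalty = 0.1)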
Usage
# S3 method for bag_mars
update(
object,
parameters = NULL,
num_terms = NULL,
prod_degree = NULL,
prune_method = NULL,
fresh = FALSE,
...
)
# S3 method for bag_mlp
update(
object,
parameters = NULL,
hidden_units = NULL,
penalty = NULL,
epochs = NULL,
fresh = FALSE,
...
)
# S3 method for bag_tree
update(
object,
parameters = NULL,
cost_complexity = NULL,
tree_depth = NULL,
min_n = NULL,
class_cost = NULL,
fresh = FALSE,
...
)
# S3 method for bart
update(
object,
parameters = NULL,
trees = NULL,
prior_terminal_node_coef = NULL,
prior_terminal_node_expo = NULL,
prior_outcome_range = NULL,
fresh = FALSE,
...
)
# S3 method for boost_tree
update(
object,
parameters = NULL,
mtry = NULL,
trees = NULL,
min_n = NULL,
tree_depth = NULL,
learn_rate = NULL,
loss_reduction = NULL,
sample_size = NULL,
stop_iter = NULL,
fresh = FALSE,
...
)
# S3 method for C5_rules
update(
object,
parameters = NULL,
trees = NULL,
min_n = NULL,
fresh = FALSE,
...
)
# S3 method for cubist_rules
update(
object,
parameters = NULL,
committees = NULL,
neighbors = NULL,
max_rules = NULL,
fresh = FALSE,
...
)
# S3 method for decision_tree
update(
object,
parameters = NULL,
cost_complexity = NULL,
tree_depth = NULL,
min_n = NULL,
fresh = FALSE,
...
)
# S3 method for discrim_flexible
update(
object,
num_terms = NULL,
prod_degree = NULL,
prune_method = NULL,
fresh = FALSE,
...
)
# S3 method for discrim_linear
update(
object,
penalty = NULL,
regularization_method = NULL,
fresh = FALSE,
...
)
# S3 method for discrim_quad
update(object, regularization_method = NULL, fresh = FALSE, ...)
# S3 method for discrim_regularized
update(
object,
frac_common_cov = NULL,
frac_identity = NULL,
fresh = FALSE,
...
)
# S3 method for gen_additive_mod
update(
object,
select_features = NULL,
adjust_deg_free = NULL,
parameters = NULL,
fresh = FALSE,
...
)
# S3 method for linear_reg
update(
object,
parameters = NULL,
penalty = NULL,
mixture = NULL,
fresh = FALSE,
...
)
# S3 method for logistic_reg
update(
object,
parameters = NULL,
penalty = NULL,
mixture = NULL,
fresh = FALSE,
...
)
# S3 method for mars
update(
object,
parameters = NULL,
num_terms = NULL,
prod_degree = NULL,
prune_method = NULL,
fresh = FALSE,
...
)
# S3 method for mlp
update(
object,
parameters = NULL,
hidden_units = NULL,
penalty = NULL,
dropout = NULL,
epochs = NULL,
activation = NULL,
learn_rate = NULL,
fresh = FALSE,
...
)
# S3 method for multinom_reg
update(
object,
parameters = NULL,
penalty = NULL,
mixture = NULL,
fresh = FALSE,
...
)
# S3 method for naive_Bayes
update(object, smoothness = NULL, Laplace = NULL, fresh = FALSE, ...)
# S3 method for nearest_neighbor
update(
object,
parameters = NULL,
neighbors = NULL,
weight_func = NULL,
dist_power = NULL,
fresh = FALSE,
...
)
# S3 method for pls
update(
object,
parameters = NULL,
predictor_prop = NULL,
num_comp = NULL,
fresh = FALSE,
...
)
# S3 method for poisson_reg
update(
object,
parameters = NULL,
penalty = NULL,
mixture = NULL,
fresh = FALSE,
...
)
# S3 method for proportional_hazards
update(
object,
parameters = NULL,
penalty = NULL,
mixture = NULL,
fresh = FALSE,
...
)
# S3 method for rand_forest
update(
object,
parameters = NULL,
mtry = NULL,
trees = NULL,
min_n = NULL,
fresh = FALSE,
...
)
# S3 method for rule_fit
update(
object,
parameters = NULL,
mtry = NULL,
trees = NULL,
min_n = NULL,
tree_depth = NULL,
learn_rate = NULL,
loss_reduction = NULL,
sample_size = NULL,
penalty = NULL,
fresh = FALSE,
...
)
# S3 method for surv_reg
update(object, parameters = NULL, dist = NULL, fresh = FALSE, ...)
# S3 method for survival_reg
update(object, parameters = NULL, dist = NULL, fresh = FALSE, ...)
# S3 method for svm_linear
update(
object,
parameters = NULL,
cost = NULL,
margin = NULL,
fresh = FALSE,
...
)
# S3 method for svm_poly
update(
object,
parameters = NULL,
cost = NULL,
degree = NULL,
scale_factor = NULL,
margin = NULL,
fresh = FALSE,
...
)
# S3 method for svm_rbf
update(
object,
parameters = NULL,
cost = NULL,
rbf_sigma = NULL,
margin = NULL,
fresh = FALSE,
...
)
Arguments
- object: A model specification.
- parameters: A 1-row tibble or named list with main parameters to update. Use either parameters or the main arguments directly when updating. If the main arguments are used, they supersede the values in parameters. Also, using engine arguments in this object will result in an error. (A brief sketch of this behavior follows the argument list.)
- num_terms: The number of features that will be retained in the final model, including the intercept.
- prod_degree: The highest possible interaction degree.
- prune_method: The pruning method.
- fresh: A logical for whether the arguments should be modified in place or replaced wholesale.
- ...: Not used for update().
- hidden_units: An integer for the number of units in the hidden model.
- penalty: A non-negative number representing the amount of regularization used by some of the engines.
- epochs: An integer for the number of training iterations.
- cost_complexity: A positive number for the cost/complexity parameter (a.k.a. Cp) used by CART models (specific engines only).
- tree_depth: An integer for the maximum depth of the tree.
- min_n: An integer for the minimum number of data points in a node that are required for the node to be split further.
- class_cost: A non-negative scalar for a class cost (where a cost of 1 means no extra cost). This is useful when the first level of the outcome factor is the minority class. If this is not the case, values between zero and one can be used to bias toward the second level of the factor.
- trees: An integer for the number of trees contained in the ensemble.
- prior_terminal_node_coef: A coefficient for the prior probability that a node is a terminal node.
- prior_terminal_node_expo: An exponent in the prior probability that a node is a terminal node.
- prior_outcome_range: A positive value that defines the width of the prior that the predicted outcome is within a certain range. For regression it is related to the observed range of the data; the prior is the number of standard deviations of a Gaussian distribution defined by the observed range of the data. For classification, it is defined as the range of +/-3 (assumed to be on the logit scale). The default value is 2.
- mtry: A number for the number (or proportion) of predictors that will be randomly sampled at each split when creating the tree models (specific engines only).
- learn_rate: A number for the rate at which the boosting algorithm adapts from iteration to iteration (specific engines only). This is sometimes referred to as the shrinkage parameter.
- loss_reduction: A number for the reduction in the loss function required to split further (specific engines only).
- sample_size: A number for the number (or proportion) of data that is exposed to the fitting routine. For xgboost, the sampling is done at each iteration, while C5.0 samples once during training.
- stop_iter: The number of iterations without improvement before stopping (specific engines only).
- committees: A non-negative integer (no greater than 100) for the number of members of the ensemble.
- neighbors: An integer between zero and nine for the number of training set instances that are used to adjust the model-based prediction.
- max_rules: The largest number of rules.
- regularization_method: A character string for the type of regularized estimation. Possible values are: "diagonal", "min_distance", "shrink_cov", and "shrink_mean" (sparsediscrim engine only).
- frac_common_cov, frac_identity: Numeric values between zero and one.
- select_features: TRUE or FALSE. If TRUE, the model has the ability to eliminate predictors (via penalization). Increasing adjust_deg_free will increase the likelihood of removing predictors.
- adjust_deg_free: If select_features = TRUE, then this acts as a multiplier for smoothness. Increase this beyond 1 to produce smoother models.
- mixture: A number between zero and one (inclusive) denoting the proportion of L1 regularization (i.e. lasso) in the model.
  - mixture = 1 specifies a pure lasso model,
  - mixture = 0 specifies a ridge regression model, and
  - 0 < mixture < 1 specifies an elastic net model, interpolating lasso and ridge.
  Available for specific engines only.
- dropout: A number between 0 (inclusive) and 1 denoting the proportion of model parameters randomly set to zero during model training.
- activation: A single character string denoting the type of relationship between the original predictors and the hidden unit layer. The activation function between the hidden and output layers is automatically set to either "linear" or "softmax" depending on the type of outcome. Possible values are: "linear", "softmax", "relu", and "elu".
- smoothness: A non-negative number representing the relative smoothness of the class boundary. Smaller values result in more flexible boundaries, while larger values generate class boundaries that are less adaptable.
- Laplace: A non-negative value for the Laplace correction used to smooth low-frequency counts.
- weight_func: A single character string for the type of kernel function used to weight the distances between samples. Valid choices are: "rectangular", "triangular", "epanechnikov", "biweight", "triweight", "cos", "inv", "gaussian", "rank", or "optimal".
- dist_power: A single number for the parameter used when calculating the Minkowski distance.
- predictor_prop: The maximum proportion of original predictors that can have non-zero coefficients for each PLS component (via regularization). This value is used for all PLS components for X.
- num_comp: The number of PLS components to retain.
- dist: A character string for the probability distribution of the outcome. The default is "weibull".
- cost: A positive number for the cost of predicting a sample within or on the wrong side of the margin.
- margin: A positive number for the epsilon in the SVM insensitive loss function (regression only).
- degree: A positive number for the polynomial degree.
- scale_factor: A positive number for the polynomial scaling factor.
- rbf_sigma: A positive number for the radial basis function.
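As referenced above, here is a brief sketch of how the parameters tibble and the fresh argument behave (parsnip and tibble attached; the specification and values are illustrative only):

library(parsnip)
library(tibble)

spec <- boost_tree(mtry = 10, min_n = 3)

# supply several main arguments at once via a one-row tibble
new_vals <- tibble(mtry = 5, tree_depth = 4)
update(spec, parameters = new_vals)

# fresh = TRUE replaces the existing arguments rather than merging into them,
# so only the arguments supplied in this call remain set afterwards
update(spec, min_n = 10, fresh = TRUE)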
Examples
# ------------------------------------------------------------------------------
model <- C5_rules(trees = 10, min_n = 2)
model
#> ! parsnip could not locate an implementation for `C5_rules` model
#> specifications.
#> ℹ The parsnip extension package rules implements support for this
#> specification.
#> ℹ Please install (if needed) and load to continue.
#> C5.0 Model Specification (classification)
#>
#> Main Arguments:
#> trees = 10
#> min_n = 2
#>
#> Computational engine: C5.0
#>
update(model, trees = 1)
#> ! parsnip could not locate an implementation for `C5_rules` model
#> specifications.
#> ℹ The parsnip extension package rules implements support for this
#> specification.
#> ℹ Please install (if needed) and load to continue.
#> C5.0 Model Specification (classification)
#>
#> Main Arguments:
#> trees = 1
#> min_n = 2
#>
#> Computational engine: C5.0
#>
update(model, trees = 1, fresh = TRUE)
#> ! parsnip could not locate an implementation for `C5_rules` model
#> specifications.
#> ℹ The parsnip extension package rules implements support for this
#> specification.
#> ℹ Please install (if needed) and load to continue.
#> C5.0 Model Specification (classification)
#>
#> Main Arguments:
#> trees = 1
#>
#> Computational engine: C5.0
#>
# ------------------------------------------------------------------------------
model <- cubist_rules(committees = 10, neighbors = 2)
model
#> ! parsnip could not locate an implementation for `cubist_rules` model
#> specifications.
#> ℹ The parsnip extension package rules implements support for this
#> specification.
#> ℹ Please install (if needed) and load to continue.
#> Cubist Model Specification (regression)
#>
#> Main Arguments:
#> committees = 10
#> neighbors = 2
#>
#> Computational engine: Cubist
#>
update(model, committees = 1)
#> ! parsnip could not locate an implementation for `cubist_rules` model
#> specifications.
#> ℹ The parsnip extension package rules implements support for this
#> specification.
#> ℹ Please install (if needed) and load to continue.
#> Cubist Model Specification (regression)
#>
#> Main Arguments:
#> committees = 1
#> neighbors = 2
#>
#> Computational engine: Cubist
#>
update(model, committees = 1, fresh = TRUE)
#> ! parsnip could not locate an implementation for `cubist_rules` model
#> specifications.
#> ℹ The parsnip extension package rules implements support for this
#> specification.
#> ℹ Please install (if needed) and load to continue.
#> Cubist Model Specification (regression)
#>
#> Main Arguments:
#> committees = 1
#>
#> Computational engine: Cubist
#>
model <- pls(predictor_prop = 0.1)
model
#> ! parsnip could not locate an implementation for `pls` model
#> specifications.
#> ℹ The parsnip extension package plsmod implements support for this
#> specification.
#> ℹ Please install (if needed) and load to continue.
#> PLS Model Specification (unknown mode)
#>
#> Main Arguments:
#> predictor_prop = 0.1
#>
#> Computational engine: mixOmics
#>
update(model, predictor_prop = 1)
#> ! parsnip could not locate an implementation for `pls` model
#> specifications.
#> ℹ The parsnip extension package plsmod implements support for this
#> specification.
#> ℹ Please install (if needed) and load to continue.
#> PLS Model Specification (unknown mode)
#>
#> Main Arguments:
#> predictor_prop = 1
#>
#> Computational engine: mixOmics
#>
update(model, predictor_prop = 1, fresh = TRUE)
#> ! parsnip could not locate an implementation for `pls` model
#> specifications.
#> ℹ The parsnip extension package plsmod implements support for this
#> specification.
#> ℹ Please install (if needed) and load to continue.
#> PLS Model Specification (unknown mode)
#>
#> Main Arguments:
#> predictor_prop = 1
#>
#> Computational engine: mixOmics
#>
# ------------------------------------------------------------------------------
model <- rule_fit(trees = 10, min_n = 2)
model
#> ! parsnip could not locate an implementation for `rule_fit` model
#> specifications.
#> ℹ The parsnip extension packages agua and rules implement support for
#> this specification.
#> ℹ Please install (if needed) and load to continue.
#> RuleFit Model Specification (unknown mode)
#>
#> Main Arguments:
#> trees = 10
#> min_n = 2
#>
#> Computational engine: xrf
#>
update(model, trees = 1)
#> ! parsnip could not locate an implementation for `rule_fit` model
#> specifications.
#> ℹ The parsnip extension packages agua and rules implement support for
#> this specification.
#> ℹ Please install (if needed) and load to continue.
#> RuleFit Model Specification (unknown mode)
#>
#> Main Arguments:
#> trees = 1
#> min_n = 2
#>
#> Computational engine: xrf
#>
update(model, trees = 1, fresh = TRUE)
#> ! parsnip could not locate an implementation for `rule_fit` model
#> specifications.
#> ℹ The parsnip extension packages agua and rules implement support for
#> this specification.
#> ℹ Please install (if needed) and load to continue.
#> RuleFit Model Specification (unknown mode)
#>
#> Main Arguments:
#> trees = 1
#>
#> Computational engine: xrf
#>
model <- boost_tree(mtry = 10, min_n = 3)
model
#> Boosted Tree Model Specification (unknown mode)
#>
#> Main Arguments:
#> mtry = 10
#> min_n = 3
#>
#> Computational engine: xgboost
#>
update(model, mtry = 1)
#> Boosted Tree Model Specification (unknown mode)
#>
#> Main Arguments:
#> mtry = 1
#> min_n = 3
#>
#> Computational engine: xgboost
#>
update(model, mtry = 1, fresh = TRUE)
#> Boosted Tree Model Specification (unknown mode)
#>
#> Main Arguments:
#> mtry = 1
#>
#> Computational engine: xgboost
#>
param_values <- tibble::tibble(mtry = 10, tree_depth = 5)
model %>% update(param_values)
#> Boosted Tree Model Specification (unknown mode)
#>
#> Main Arguments:
#> mtry = 10
#> min_n = 3
#> tree_depth = 5
#>
#> Computational engine: xgboost
#>
model %>% update(param_values, mtry = 3)
#> Boosted Tree Model Specification (unknown mode)
#>
#> Main Arguments:
#> mtry = 10
#> min_n = 3
#> tree_depth = 5
#>
#> Computational engine: xgboost
#>
param_values$verbose <- 0
# Fails due to engine argument
# model %>% update(param_values)
model <- linear_reg(penalty = 10, mixture = 0.1)
model
#> Linear Regression Model Specification (regression)
#>
#> Main Arguments:
#> penalty = 10
#> mixture = 0.1
#>
#> Computational engine: lm
#>
update(model, penalty = 1)
#> Linear Regression Model Specification (regression)
#>
#> Main Arguments:
#> penalty = 1
#> mixture = 0.1
#>
#> Computational engine: lm
#>
update(model, penalty = 1, fresh = TRUE)
#> Linear Regression Model Specification (regression)
#>
#> Main Arguments:
#> penalty = 1
#>
#> Computational engine: lm
#>
See also
- R parsnip predict.model_fit: Model predictions
- R parsnip proportional_hazards: Proportional hazards regression
- R parsnip logistic_reg: Logistic regression
- R parsnip linear_reg: Linear regression
- R parsnip C5_rules: C5.0 rule-based classification models
- R parsnip set_engine: Declare a computational engine and specific arguments
- R parsnip condense_control: Condense a control object into a smaller one
- R parsnip control_parsnip: Control the fit function
- R parsnip augment: Augment data with predictions
- R parsnip repair_call: Repair a model call object
- R parsnip dot-model_param_name_key: Translate the names of model tuning parameters
- R parsnip glm_grouped: Fit a grouped binomial outcome from a data set with case weights
- R parsnip rule_fit: RuleFit models
- R parsnip svm_rbf: Radial basis function support vector machines
- R parsnip set_args: Change elements of a model specification
- R parsnip translate: Resolve a model specification for a computational engine
- R parsnip max_mtry_formula: Determine the largest value of mtry from a formula; this function potentially caps the value of mtry based on the formula and data set, which is a safe approach for survival and/or multivariate models
- R parsnip svm_linear: Linear support vector machines
- R parsnip set_new_model: Tools to register models
- R parsnip rand_forest: Random forest
- R parsnip mlp: Single layer neural network
- R parsnip nearest_neighbor: K-nearest neighbors
- R parsnip fit: Fit a model specification to a dataset
- R parsnip boost_tree: Boosted trees
- R parsnip bart: Bayesian additive regression trees (BART)