step_lda(): Calculate LDA Dimension Estimates of Tokens

Create a specification of a recipe step that will return the LDA dimension estimates of a text variable.
Usage
step_lda(
  recipe,
  ...,
  role = "predictor",
  trained = FALSE,
  columns = NULL,
  lda_models = NULL,
  num_topics = 10L,
  prefix = "lda",
  keep_original_cols = FALSE,
  skip = FALSE,
  id = rand_id("lda")
)
Arguments
- recipe: A recipe object. The step will be added to the sequence of operations for this recipe.
- ...: One or more selector functions to choose which variables are affected by the step. See recipes::selections() for more details.
- role: For model terms created by this step, what analysis role should they be assigned? By default, the function assumes that the new columns created from the original variables will be used as predictors in a model.
- trained: A logical to indicate whether the quantities for preprocessing have been estimated.
- columns: A character string of variable names that will (eventually) be populated by the terms argument. This is NULL until the step is trained by recipes::prep.recipe().
- lda_models: A WarpLDA model object from the text2vec package. If left as NULL (the default), the step will train its own model based on the training data. See the examples for how to fit a WarpLDA model.
- num_topics: Integer; the number of latent topics desired.
- prefix: A prefix for the generated column names; defaults to "lda".
- keep_original_cols: A logical to keep the original variables in the output. Defaults to FALSE.
- skip: A logical. Should the step be skipped when the recipe is baked by recipes::bake.recipe()? While all operations are baked when recipes::prep.recipe() is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using skip = TRUE, as it may affect the computations for subsequent operations.
- id: A character string that is unique to this step, used to identify it.
Tidying

When you tidy() this step, a tibble is returned with columns terms (the selectors or variables selected) and num_topics (the number of topics).
See also

Other steps for creating numeric variables from tokens: step_texthash(), step_tfidf(), step_tf(), step_word_embeddings()
Examples
library(recipes)
library(textrecipes)
library(modeldata)
data(tate_text)
tate_rec <- recipe(~., data = tate_text) %>%
step_tokenize(medium) %>%
step_lda(medium)
tate_obj <- tate_rec %>%
prep()
#> 'as(<dgTMatrix>, "dgCMatrix")' is deprecated.
#> Use 'as(., "CsparseMatrix")' instead.
#> See help("Deprecated") and help("Matrix-deprecated").
bake(tate_obj, new_data = NULL) %>%
slice(1:2)
#> # A tibble: 2 × 14
#> id artist title year lda_medium_1 lda_medium_2 lda_medium_3
#> <dbl> <fct> <fct> <dbl> <dbl> <dbl> <dbl>
#> 1 21926 Absalon Prop… 1990 0.7 0.0143 0.0143
#> 2 20472 Auerbach, Frank Mich… 1990 0 0 0
#> # ℹ 7 more variables: lda_medium_4 <dbl>, lda_medium_5 <dbl>,
#> # lda_medium_6 <dbl>, lda_medium_7 <dbl>, lda_medium_8 <dbl>,
#> # lda_medium_9 <dbl>, lda_medium_10 <dbl>
tidy(tate_rec, number = 2)
#> # A tibble: 1 × 3
#> terms num_topics id
#> <chr> <int> <chr>
#> 1 medium 10 lda_UfL6S
tidy(tate_obj, number = 2)
#> # A tibble: 1 × 3
#> terms num_topics id
#> <chr> <int> <chr>
#> 1 medium 10 lda_UfL6S
# Changing the number of topics.
recipe(~., data = tate_text) %>%
step_tokenize(medium, artist) %>%
step_lda(medium, artist, num_topics = 20) %>%
prep() %>%
bake(new_data = NULL) %>%
slice(1:2)
#> # A tibble: 2 × 43
#> id title year lda_medium_1 lda_medium_2 lda_medium_3 lda_medium_4
#> <dbl> <fct> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 21926 Proposa… 1990 0.0286 0.0286 0 0
#> 2 20472 Michael 1990 0 0 0 0
#> # ℹ 36 more variables: lda_medium_5 <dbl>, lda_medium_6 <dbl>,
#> # lda_medium_7 <dbl>, lda_medium_8 <dbl>, lda_medium_9 <dbl>,
#> # lda_medium_10 <dbl>, lda_medium_11 <dbl>, lda_medium_12 <dbl>,
#> # lda_medium_13 <dbl>, lda_medium_14 <dbl>, lda_medium_15 <dbl>,
#> # lda_medium_16 <dbl>, lda_medium_17 <dbl>, lda_medium_18 <dbl>,
#> # lda_medium_19 <dbl>, lda_medium_20 <dbl>, lda_artist_1 <dbl>,
#> # lda_artist_2 <dbl>, lda_artist_3 <dbl>, lda_artist_4 <dbl>, …
# Supplying a pre-trained LDA model fitted with text2vec
library(text2vec)
tokens <- word_tokenizer(tolower(tate_text$medium))
it <- itoken(tokens, ids = seq_along(tate_text$medium))
v <- create_vocabulary(it)
dtm <- create_dtm(it, vocab_vectorizer(v))
lda_model <- LDA$new(n_topics = 15)
recipe(~., data = tate_text) %>%
step_tokenize(medium, artist) %>%
step_lda(medium, artist, lda_models = lda_model) %>%
prep() %>%
bake(new_data = NULL) %>%
slice(1:2)
#> # A tibble: 2 × 33
#> id title year lda_medium_1 lda_medium_2 lda_medium_3 lda_medium_4
#> <dbl> <fct> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 21926 Proposa… 1990 0.0143 0.129 0 0.0143
#> 2 20472 Michael 1990 0 0 0 0
#> # ℹ 26 more variables: lda_medium_5 <dbl>, lda_medium_6 <dbl>,
#> # lda_medium_7 <dbl>, lda_medium_8 <dbl>, lda_medium_9 <dbl>,
#> # lda_medium_10 <dbl>, lda_medium_11 <dbl>, lda_medium_12 <dbl>,
#> # lda_medium_13 <dbl>, lda_medium_14 <dbl>, lda_medium_15 <dbl>,
#> # lda_artist_1 <dbl>, lda_artist_2 <dbl>, lda_artist_3 <dbl>,
#> # lda_artist_4 <dbl>, lda_artist_5 <dbl>, lda_artist_6 <dbl>,
#> # lda_artist_7 <dbl>, lda_artist_8 <dbl>, lda_artist_9 <dbl>, …
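Once prepped, the recipe reuses its trained LDA model when applied to new rows rather than refitting. The sketch below is not part of the original page; new_rows is simply the first rows of tate_text standing in for genuinely unseen data, and num_topics = 5 is chosen only to keep the output small:

```r
library(recipes)
library(textrecipes)
library(modeldata)
data(tate_text)

# Train the recipe once on the full data.
rec <- recipe(~., data = tate_text) %>%
  step_tokenize(medium) %>%
  step_lda(medium, num_topics = 5) %>%
  prep()

# Stand-in for new observations; in practice this would be unseen data.
new_rows <- tate_text[1:3, ]
res <- bake(rec, new_data = new_rows)

# One numeric column per topic: lda_medium_1 through lda_medium_5,
# replacing the original `medium` column (keep_original_cols = FALSE).
names(res)
```

Because skip = FALSE by default, the step runs on new data at bake() time using the topic model estimated during prep().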