Regularized structural equation modeling with stability selection.
Psychological Methods (IF 10.929). Pub Date: 2021-01-28, DOI: 10.1037/met0000389
Xiaobei Li, Ross Jacobucci

Regularization methods such as the least absolute shrinkage and selection operator (LASSO) are commonly used with high-dimensional data to achieve sparser solutions. Recently, methods such as regularized structural equation modeling (SEM) and penalized likelihood SEM have been proposed in an effort to transfer the benefits of regularization to models commonly used in social and behavioral research. These methods allow researchers to estimate large models even in the presence of small sample sizes. However, some drawbacks of the LASSO, such as high false positive rates (FPRs) and inconsistency in selection results, nevertheless persist. We propose the application of stability selection, a method based on repeated resampling of the data to select stable coefficients, to regularized SEM as a mechanism to overcome these limitations. Across two simulation studies, we find that stability selection greatly improves upon the LASSO in selecting the correct paths, specifically by reducing the number of false positives. We close the article by demonstrating the application of stability selection in two empirical examples and presenting several future research directions.
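The core idea of stability selection, repeatedly refitting a penalized model on random subsamples of the data and retaining only the coefficients that are selected in a large fraction of the fits, can be illustrated outside the SEM context. Below is a minimal sketch applying it to an ordinary LASSO regression in Python; the function name, tuning values (penalty, subsample fraction, selection threshold), and toy data are illustrative assumptions and do not reproduce the authors' regularized-SEM implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.1, n_resamples=100,
                        subsample_frac=0.5, threshold=0.8, seed=0):
    """Count how often each LASSO coefficient is nonzero across random
    subsamples, and keep predictors selected in at least `threshold`
    proportion of the resamples (illustrative settings)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_resamples):
        idx = rng.choice(n, size=int(subsample_frac * n), replace=False)
        fit = Lasso(alpha=alpha, max_iter=10000).fit(X[idx], y[idx])
        counts += (np.abs(fit.coef_) > 1e-8)
    selection_prob = counts / n_resamples
    return selection_prob, selection_prob >= threshold

# Toy example: 5 true predictors among 50, modest sample size.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 50))
beta = np.zeros(50)
beta[:5] = 1.0
y = X @ beta + rng.normal(size=100)
probs, stable = stability_selection(X, y)
print("Stable predictors:", np.flatnonzero(stable))
```

Because only coefficients that survive across many subsamples are retained, unstable selections that a single LASSO fit would include are filtered out, which is the mechanism the abstract credits for the reduced false positive rate.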
