Principal component-guided sparse regression
The Canadian Journal of Statistics (IF 0.6), Pub Date: 2021-04-16, DOI: 10.1002/cjs.11617
Jingyi K. Tay, Jerome Friedman, Robert Tibshirani

We propose a new method for supervised learning, the “principal components lasso” (“pcLasso”). It combines the lasso (ℓ1) penalty with a quadratic penalty that shrinks the coefficient vector toward the feature matrix's leading principal components (PCs). pcLasso can be especially powerful if the features are preassigned to groups. In that case, pcLasso shrinks each group-wise component of the solution toward the leading PCs of that group. The pcLasso method also carries out selection of feature groups. We provide some theory and illustrate the method on some simulated and real data examples.
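
For intuition, below is a minimal numerical sketch (in Python/NumPy, not the authors' code) of the kind of objective the abstract describes: a squared-error fit, plus a lasso (ℓ1) penalty, plus a quadratic penalty that is zero along the leading principal-component direction and grows for directions with smaller singular values. The specific penalty form diag(d1^2 - dj^2), the function name pclasso_sketch, and the proximal-gradient solver are illustrative assumptions, not the paper's implementation.

import numpy as np

def pclasso_sketch(X, y, lam, theta, n_iter=500, step=None):
    """Proximal-gradient sketch of a lasso + PC-shrinkage objective.

    Hypothetical illustration of the idea described in the abstract:
        (1/2)||y - X b||^2 + lam * ||b||_1
            + (theta/2) * b' V diag(d1^2 - dj^2) V' b,
    where X = U diag(d) V' is the SVD of X. Not the authors' implementation.
    """
    n, p = X.shape
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt.T
    # Quadratic penalty matrix: zero along the leading PC direction,
    # increasingly strong for directions with smaller singular values.
    Q = V @ np.diag(d[0] ** 2 - d ** 2) @ V.T
    # Lipschitz constant of the smooth part (data fit + quadratic penalty).
    L = np.linalg.norm(X, 2) ** 2 + theta * (d[0] ** 2 - d[-1] ** 2)
    step = step if step is not None else 1.0 / L
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) + theta * (Q @ b)
        b = b - step * grad
        # Soft-thresholding: proximal step for the l1 penalty.
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)
    return b

# Toy usage on synthetic, correlated features (values are arbitrary):
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20)) @ rng.standard_normal((20, 20))
y = X[:, :3] @ np.array([2.0, -1.0, 1.5]) + rng.standard_normal(100)
beta_hat = pclasso_sketch(X, y, lam=5.0, theta=1.0)

Setting theta = 0 recovers an ordinary lasso fit by proximal gradient, while larger theta pulls the coefficient vector toward the leading PC directions, which is the trade-off the abstract describes.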
