A small-sample choice of the tuning parameter in ridge regression
Statistica Sinica (IF 1.5), Pub Date: 2015-01-01, DOI: 10.5705/ss.2013.284
Philip S. Boonstra, Bhramar Mukherjee, Jeremy M. G. Taylor

Ridge regression is a penalized likelihood method for regularizing linear regression coefficients. We propose new approaches for choosing its shrinkage parameter when the number of observations is small relative to the number of parameters. Existing methods may lead to extreme choices of this parameter, either failing to shrink the coefficients enough or shrinking them too much. Within this "small-n, large-p" context, we suggest a correction to the common generalized cross-validation (GCV) method that preserves the asymptotic optimality of the original GCV. We also introduce the notion of a "hyperpenalty", which shrinks the shrinkage parameter itself, and make a specific recommendation for a hyperpenalty that empirically works well in a broad range of scenarios. A simple algorithm jointly estimates the shrinkage parameter and regression coefficients under the hyperpenalized likelihood. In a comprehensive simulation study of small-sample scenarios, our proposed approaches offer better prediction than nine existing methods.
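For reference, the standard quantities the abstract builds on can be written out explicitly. The display below gives the ordinary ridge estimator, the uncorrected GCV criterion, and a schematic hyperpenalized likelihood; the paper's small-sample correction to GCV and its specific recommended hyperpenalty are not reproduced here, so the hyperpenalty term h(lambda) is left unspecified.

\[
  \hat{\beta}_\lambda = (X^\top X + \lambda I_p)^{-1} X^\top y ,
  \qquad
  H_\lambda = X (X^\top X + \lambda I_p)^{-1} X^\top ,
\]
\[
  \mathrm{GCV}(\lambda)
    = \frac{ \tfrac{1}{n} \, \lVert y - H_\lambda y \rVert^2 }
           { \left( 1 - \tfrac{1}{n} \, \mathrm{tr}(H_\lambda) \right)^2 } ,
  \qquad
  \hat{\lambda}_{\mathrm{GCV}} = \arg\min_{\lambda > 0} \, \mathrm{GCV}(\lambda) ,
\]
\[
  \ell_{\mathrm{hp}}(\beta, \lambda)
    = \ell(\beta \mid y, X) - \frac{\lambda}{2} \, \lVert \beta \rVert^2 + h(\lambda) .
\]

In this schematic, maximizing the hyperpenalized likelihood jointly over (beta, lambda) couples the choice of the tuning parameter to the fit, with h(lambda) playing the role the abstract describes: it shrinks the shrinkage parameter itself, guarding against the extreme values of lambda that criteria such as GCV can select when n is small.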
