Equivalence between adaptive Lasso and generalized ridge estimators in linear regression with orthogonal explanatory variables after optimizing regularization parameters
Annals of the Institute of Statistical Mathematics (IF 1) Pub Date: 2019-10-25, DOI: 10.1007/s10463-019-00734-2
Mineaki Ohishi, Hirokazu Yanagihara, Shuichi Kawano

In this paper, we deal with a penalized least-squares (PLS) method for a linear regression model with orthogonal explanatory variables. The penalties used are an adaptive Lasso (AL)-type $$\ell_1$$ penalty (AL penalty) and a generalized ridge (GR)-type $$\ell_2$$ penalty (GR penalty). Since the estimators obtained by minimizing the PLS criteria depend strongly on the regularization parameters, we optimize these parameters by a model selection criterion (MSC) minimization method. The estimators based on the AL penalty and the GR penalty have different properties, and they are universally regarded as completely different estimators. However, in this paper we show the interesting result that, when the explanatory variables are orthogonal, the two estimators are exactly equal after the regularization parameters are optimized by the MSC minimization method.
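The mechanism behind the equivalence is visible in the closed forms of the two estimators under an orthogonal design: the AL penalty soft-thresholds each ordinary least-squares coordinate, while the GR penalty shrinks it multiplicatively, and for each coordinate a GR parameter can be chosen to reproduce the AL value exactly. The sketch below illustrates this coordinate-wise matching with arbitrary illustrative penalty weights; it does not implement the paper's MSC minimization, and all variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthogonal design: columns of X are orthonormal (X^T X = I), built via QR.
n, k = 20, 5
X, _ = np.linalg.qr(rng.standard_normal((n, k)))
beta_true = np.array([2.0, -1.5, 0.0, 0.8, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# With orthonormal columns the OLS estimator decouples: beta_ols[j] = x_j^T y.
beta_ols = X.T @ y

def adaptive_lasso(beta_ols, lam):
    # AL-type l1 penalty: minimize ||y - Xb||^2 + 2 * sum_j lam[j] * |b_j|
    # has the per-coordinate soft-thresholding solution.
    return np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam, 0.0)

def generalized_ridge(beta_ols, theta):
    # GR-type l2 penalty: minimize ||y - Xb||^2 + sum_j theta[j] * b_j^2
    # shrinks each OLS coordinate multiplicatively.
    return beta_ols / (1.0 + theta)

# Illustrative weights (NOT the MSC-optimal values from the paper).
lam = np.full(k, 0.3)
beta_al = adaptive_lasso(beta_ols, lam)

# Coordinate-wise matching: theta[j] = lam[j] / (|beta_ols[j]| - lam[j]) when
# the threshold is not binding; theta[j] -> infinity (estimate 0) when it is.
with np.errstate(divide="ignore"):
    theta = np.where(np.abs(beta_ols) > lam,
                     lam / (np.abs(beta_ols) - lam),
                     np.inf)
beta_gr = generalized_ridge(beta_ols, theta)

print(np.allclose(beta_al, beta_gr))  # the two estimators coincide
```

The paper's result is stronger than this algebraic matching: it shows the MSC minimization itself selects regularization parameters for the two penalties that make the estimators identical, so the agreement is automatic rather than engineered.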

Updated: 2019-10-25