Generalized Newton–Raphson algorithm for high dimensional LASSO regression
Statistics and Its Interface (IF 0.3), Pub Date: 2021-01-01, DOI: 10.4310/20-sii643
Yueyong Shi, Jian Huang, Yuling Jiao, Yicheng Kang, Hu Zhang

The least absolute shrinkage and selection operator (LASSO) penalized regression is a state-of-the-art statistical method for high dimensional data analysis, where the number of predictors exceeds the number of observations. The commonly used Newton–Raphson algorithm is not well suited to the non-smooth optimization problem in LASSO. In this paper, we propose a fast generalized Newton–Raphson (GNR) algorithm for LASSO-type problems. The proposed algorithm, derived from suitable Karush–Kuhn–Tucker (KKT) conditions based on generalized Newton derivatives, is a non-smooth Newton-type method. We first establish the local one-step convergence of GNR and then show that it is very efficient and accurate when coupled with a continuation strategy. We also develop a novel parameter selection method. Numerical studies on simulated and real data suggest that the GNR algorithm, with better (or comparable) accuracy, is faster than the algorithm implemented in the popular glmnet package.
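To make the abstract's ingredients concrete, the sketch below implements a generic non-smooth Newton-type (primal-dual active set) iteration for the LASSO KKT fixed point b = soft_threshold(b + d, λ) with dual d = Xᵀ(y − Xb), combined with a continuation (warm-start) strategy over a decreasing λ path. This is an illustrative sketch under those assumptions, not the authors' GNR implementation; all function and variable names here are hypothetical.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of lam * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def gnr_lasso_sketch(X, y, lam, n_path=20, max_inner=30):
    """Illustrative non-smooth Newton-type (primal-dual active set) solver for
        min_b 0.5 * ||X b - y||^2 + lam * ||b||_1,
    coupled with a continuation strategy: solve along a geometrically
    decreasing lambda path, warm-starting each problem from the previous one.

    NOTE: a generic sketch of the same KKT fixed point the paper starts from
    (b = soft_threshold(b + d, lam), d = X^T (y - X b)); it is NOT the
    authors' GNR algorithm or code.
    """
    p = X.shape[1]
    b = np.zeros(p)
    d = X.T @ y                      # dual variable at b = 0
    lam_max = np.max(np.abs(d))      # smallest lambda with all-zero solution
    lams = np.geomspace(lam_max * 0.99, lam, n_path)
    for lk in lams:
        A_prev = None
        for _ in range(max_inner):
            z = b + d
            A = np.flatnonzero(np.abs(z) > lk)   # predicted active set
            s = np.sign(z[A])                    # predicted signs
            b = np.zeros(p)
            XA = X[:, A]
            # Newton-type step: exact solve of the KKT system on the active set
            b[A] = np.linalg.solve(XA.T @ XA, XA.T @ y - lk * s)
            d = X.T @ (y - X @ b)
            if A_prev is not None and np.array_equal(A, A_prev):
                break                            # active set stabilized
            A_prev = A
    return b
```

The inner loop mirrors the local one-step behavior mentioned in the abstract: once the correct active set is identified, a single restricted solve yields the exact solution for that λ, and the continuation path keeps each warm start close enough for this to happen quickly.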

Updated: 2021-02-10