Variable Screening for Sparse Online Regression
Journal of Computational and Graphical Statistics (IF 1.4), Pub Date: 2022-09-15, DOI: 10.1080/10618600.2022.2099872
Jingwei Liang, Clarice Poon

Abstract

Sparsity-promoting regularizers are widely used to impose low-complexity structure (e.g., the ℓ1-norm for sparsity) on the regression coefficients of supervised learning. In the realm of deterministic optimization, the sequence generated by iterative algorithms (such as proximal gradient descent) exhibits the "finite activity identification" property: it identifies the low-complexity structure of the solution in a finite number of iterations. However, many online algorithms (such as proximal stochastic gradient descent) do not have this property, owing to their vanishing step sizes and non-vanishing variance. In this article, we show how a screening rule can be used to eliminate useless features from the iterates generated by online algorithms, thereby enforcing finite sparsity identification. One advantage of our scheme is that, when combined with any convergent online algorithm, the sparsity imposed by the regularizer can be exploited to improve computational efficiency. Numerically, significant acceleration can be obtained.
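
The paper's precise screening rule and its guarantees are in the article itself; as a rough illustration of the idea, the sketch below combines a plain proximal stochastic gradient step for the lasso with a generic GAP safe screening rule (in the spirit of Ndiaye et al.), periodically discarding features whose optimal coefficients are provably zero. The function names, step-size schedule, and choice of screening rule here are assumptions made for illustration, not the authors' algorithm.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (entrywise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def gap_safe_screen(X, y, w, lam):
    # Generic GAP safe rule for min_w 0.5*||y - Xw||^2 + lam*||w||_1:
    # build a dual-feasible point from the residual, bound the distance to
    # the dual optimum by the duality gap, and flag every feature j with
    # |x_j' theta| + radius * ||x_j|| < 1, whose optimal coefficient is
    # then provably zero.
    rho = y - X @ w
    theta = rho / max(lam, np.max(np.abs(X.T @ rho)))
    primal = 0.5 * rho @ rho + lam * np.abs(w).sum()
    dual = 0.5 * y @ y - 0.5 * lam**2 * np.sum((theta - y / lam) ** 2)
    radius = np.sqrt(2.0 * max(primal - dual, 0.0)) / lam
    return np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0) < 1.0

def prox_sgd_screened(X, y, lam, n_epochs=50, seed=0):
    # Proximal SGD restricted to the surviving ("active") features, with
    # one screening pass per epoch that permanently removes features the
    # rule certifies as inactive at the optimum.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w, active = np.zeros(p), np.arange(p)
    gamma0 = 1.0 / (n * np.max(np.sum(X**2, axis=1)))  # crude step scale
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            t += 1
            xa = X[i, active]
            g = n * xa * (xa @ w[active] - y[i])  # unbiased gradient estimate
            gamma = gamma0 / np.sqrt(t)           # vanishing step size
            w[active] = soft_threshold(w[active] - gamma * g, gamma * lam)
        drop = gap_safe_screen(X[:, active], y, w[active], lam)
        w[active[drop]] = 0.0
        active = active[~drop]
        if active.size == 0:
            break
    return w, active

A hypothetical toy run (unit-norm columns, illustrative names) might look as follows. Because screened features are removed from the active set permanently, every subsequent stochastic gradient and proximal step runs in the reduced dimension, which is where the acceleration described in the abstract comes from.

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 500))
X /= np.linalg.norm(X, axis=0)
w_true = np.zeros(500)
w_true[:5] = 1.0
y = X @ w_true + 0.01 * rng.standard_normal(200)
lam = 0.5 * np.max(np.abs(X.T @ y))
w_hat, active = prox_sgd_screened(X, y, lam)
print(len(active))  # far fewer than 500 features survive screening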



Updated: 2022-09-15