Efficient Gradient Support Pursuit With Less Hard Thresholding for Cardinality-Constrained Learning
IEEE Transactions on Neural Networks and Learning Systems (IF 10.4), Pub Date: 2021-06-23, DOI: 10.1109/tnnls.2021.3087805
Fanhua Shang, Bingkun Wei, Hongying Liu, Yuanyuan Liu, Pan Zhou, Maoguo Gong

Recently, stochastic hard thresholding (HT) optimization methods [e.g., stochastic variance reduced gradient hard thresholding (SVRGHT)] have become increasingly attractive for solving large-scale sparsity/rank-constrained problems. However, they incur much higher HT oracle complexities, especially for high-dimensional data or large-scale matrices. To address this issue, and inspired by the well-known Gradient Support Pursuit (GraSP) method, this article proposes a new Relaxed Gradient Support Pursuit (RGraSP) framework. Unlike GraSP, RGraSP requires only an approximate solution at each iteration. Based on this property of RGraSP, we also present an efficient stochastic variance reduced gradient support pursuit (SVRGSP) algorithm and its fast version, SVRGSP+. We prove that the gradient oracle complexity of both our algorithms is half that of SVRGHT. In particular, their HT complexity is lower than that of SVRGHT by a factor of about $\kappa_{\widehat{s}}$, where $\kappa_{\widehat{s}}$ is the restricted condition number. Moreover, we prove that our algorithms enjoy fast linear convergence to an approximately global optimum, and we also present an asynchronous parallel variant to handle very high-dimensional and sparse data. Experimental results on both synthetic and real-world datasets show that our algorithms outperform the state-of-the-art gradient HT methods.
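To make the claim about fewer HT calls concrete, below is a minimal NumPy sketch (ours, not the authors' code) of the HT operator and a GraSP-style outer loop with an inexact SVRG inner solver, applied to cardinality-constrained least squares: min_x (1/2n)||Ax - b||^2 s.t. ||x||_0 <= s. All names (hard_threshold, svrgsp_sketch), the step size eta, and the epoch length m are illustrative assumptions, not the paper's notation; the point the sketch shows is that the variance-reduced inner steps run unthresholded and HT is invoked only once per outer iteration, rather than after every stochastic step as in SVRGHT.

import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x and zero out the rest."""
    z = np.zeros_like(x)
    if s <= 0:
        return z
    keep = np.argpartition(np.abs(x), -s)[-s:]
    z[keep] = x[keep]
    return z

def svrgsp_sketch(A, b, s, eta=0.05, outer_iters=20, m=None, seed=0):
    """GraSP-style outer loop with an inexact SVRG inner solver: the
    restricted subproblem is solved only approximately (the RGraSP
    relaxation), and HT is applied once per outer iteration."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = m or n
    x = np.zeros(d)
    for _ in range(outer_iters):
        grad = A.T @ (A @ x - b) / n
        # Merge support: top-2s entries of the gradient plus supp(x).
        k = min(2 * s, d)
        Z = np.argpartition(np.abs(grad), -k)[-k:]
        T = np.union1d(Z, np.flatnonzero(x)).astype(int)
        AT = A[:, T]
        # Approximately minimize over support T with m SVRG steps
        # (no thresholding inside the inner loop).
        z = x[T].copy()
        z_snap = z.copy()
        full_grad = AT.T @ (AT @ z_snap - b) / n
        for _ in range(m):
            i = rng.integers(n)
            gi = AT[i] * (AT[i] @ z - b[i])
            gi_snap = AT[i] * (AT[i] @ z_snap - b[i])
            z -= eta * (gi - gi_snap + full_grad)
        # Single HT call per outer iteration: prune back to s entries.
        x = np.zeros(d)
        x[T] = z
        x = hard_threshold(x, s)
    return x

By contrast, an SVRGHT-style loop would call hard_threshold after each of the m inner stochastic steps; running the inner solver unthresholded and pruning once per outer iteration is what drives down the HT oracle complexity described above.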
