On Asymptotic Linear Convergence of Projected Gradient Descent for Constrained Least Squares
IEEE Transactions on Signal Processing (IF 5.4). Pub Date: 2022-07-19. DOI: 10.1109/TSP.2022.3192142
Trung Vu, Raviv Raich

Many recent problems in signal processing and machine learning such as compressed sensing, image restoration, matrix/tensor recovery, and non-negative matrix factorization can be cast as constrained optimization. Projected gradient descent is a simple yet efficient method for solving such constrained optimization problems. Local convergence analysis furthers our understanding of its asymptotic behavior near the solution, offering sharper bounds on the convergence rate compared to global convergence analysis. However, local guarantees often appear scattered in problem-specific areas of machine learning and signal processing. This manuscript presents a unified framework for the local convergence analysis of projected gradient descent in the context of constrained least squares. The proposed analysis offers insights into pivotal local convergence properties such as the conditions for linear convergence, the region of convergence, the exact asymptotic rate of convergence, and the bound on the number of iterations needed to reach a certain level of accuracy. To demonstrate the applicability of the proposed approach, we present a recipe for the convergence analysis of projected gradient descent and demonstrate it via a beginning-to-end application of the recipe on four fundamental problems, namely, linear equality-constrained least squares, sparse recovery, least squares with the unit norm constraint, and matrix completion.
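The projected gradient descent iteration discussed in the abstract can be illustrated with a minimal sketch (not the paper's code): for constrained least squares, each iteration takes a gradient step on the residual and then projects back onto the constraint set. The example below assumes a non-negativity constraint, where the projection is element-wise clipping, and uses a synthetic problem with a feasible ground truth; the step size 1/L with L the squared spectral norm of A is a standard choice, not one prescribed by the paper.

```python
import numpy as np

# Hedged sketch: projected gradient descent (PGD) for constrained least squares
#   min_x ||A x - b||^2   subject to   x >= 0,
# where the projection onto the non-negative orthant is element-wise clipping.
rng = np.random.default_rng(0)
m, n = 50, 20
A = rng.standard_normal((m, n))
x_true = np.abs(rng.standard_normal(n))    # feasible (non-negative) ground truth
b = A @ x_true                             # consistent measurements

eta = 1.0 / np.linalg.norm(A, 2) ** 2      # step size 1/L, L = ||A||_2^2
x = np.zeros(n)
errs = []
for _ in range(500):
    x = np.clip(x - eta * A.T @ (A @ x - b), 0.0, None)  # gradient step + projection
    errs.append(np.linalg.norm(x - x_true))

# Near the solution, the error shrinks by a roughly constant factor per
# iteration -- the asymptotic linear convergence the paper analyzes.
rate = errs[-1] / errs[-2]
print(f"final error {errs[-1]:.2e}, empirical per-iteration rate {rate:.3f}")
```

Tracking the ratio of successive errors, as done here, gives an empirical estimate of the asymptotic rate that the paper's framework characterizes exactly.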

Updated: 2022-07-19