Fast, Provably convergent IRLS Algorithm for p-norm Linear Regression
arXiv - CS - Data Structures and Algorithms. Pub Date: 2019-07-16, DOI: arXiv:1907.07167
Deeksha Adil, Richard Peng, Sushant Sachdeva

Linear regression in the $\ell_p$-norm is a canonical optimization problem that arises in several applications, including sparse recovery, semi-supervised learning, and signal processing. Generic convex optimization algorithms for solving $\ell_p$-regression are slow in practice. Iteratively Reweighted Least Squares (IRLS) is an easy-to-implement family of algorithms for solving these problems that has been studied for over 50 years. However, these algorithms often diverge for $p > 3$, and since the work of Osborne (1985), it has been an open problem whether there is an IRLS algorithm that is guaranteed to converge rapidly for $p > 3$. We propose p-IRLS, the first IRLS algorithm that provably converges geometrically for any $p \in [2,\infty)$. Our algorithm is simple to implement and is guaranteed to find a $(1+\varepsilon)$-approximate solution in $O(p^{3.5} m^{\frac{p-2}{2(p-1)}} \log \frac{m}{\varepsilon}) \le O_p(\sqrt{m} \log \frac{m}{\varepsilon})$ iterations. Our experiments demonstrate that it performs even better than our theoretical bounds, beats the standard Matlab/CVX implementation for solving these problems by 10--50x, and is the fastest among available implementations in the high-accuracy regime.
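To make the setup concrete, the sketch below shows the *classical* IRLS scheme the abstract refers to: each iteration solves a weighted least-squares problem with weights $w_i = |r_i|^{p-2}$ derived from the current residuals. This is a textbook illustration only, not the paper's p-IRLS algorithm (which adds the careful damping needed for the geometric convergence guarantee); the function name and parameters are our own.

```python
import numpy as np

def irls_p_regression(A, b, p=4, iters=50, eps=1e-8):
    """Classical IRLS sketch for min_x ||Ax - b||_p with p >= 2.

    Note: this is the naive scheme, which can diverge for p > 3;
    the paper's p-IRLS modifies it to guarantee convergence.
    """
    # Warm start from the ordinary (p = 2) least-squares solution.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - b
        # Reweighting: w_i = |r_i|^(p-2), regularized away from zero
        # so the weighted normal equations stay well-conditioned.
        w = np.abs(r) ** (p - 2) + eps
        # Solve the weighted least-squares problem
        #   min_x sum_i w_i (A_i x - b_i)^2
        # via its normal equations A^T W A x = A^T W b.
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x
```

On small well-conditioned instances this already beats the plain least-squares solution in $\ell_p$-residual; the contribution of the paper is turning this heuristic into a method with a provable $O_p(\sqrt{m}\log\frac{m}{\varepsilon})$ iteration bound.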

Updated: 2020-01-13