
Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems

  • Published in: Journal of Global Optimization 79, 853–883 (2021)

Abstract

The \(\ell _p\) regularization problem with \(0< p< 1\) has been widely studied for finding sparse solutions of linear inverse problems and has found successful applications in various fields of mathematics and applied science. The proximal gradient algorithm is one of the most popular algorithms for solving the \(\ell _p\) regularization problem. In the present paper, we investigate the linear convergence of one inexact descent method and two inexact proximal gradient algorithms (PGAs). To this end, an optimality theorem is established that provides the equivalence among a local minimum, the second-order optimality condition and the second-order growth property of the \(\ell _p\) regularization problem. By virtue of the second-order optimality condition and the second-order growth property, we establish the linear convergence of the inexact descent method and the inexact PGAs under some simple assumptions. Both linear convergence to a local minimal value and linear convergence to a local minimum are provided. Finally, the linear convergence results of these methods are extended to infinite-dimensional Hilbert spaces. Our results cannot be established under the framework of Kurdyka–Łojasiewicz theory.
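To make the iteration concrete, the following is a minimal Python sketch of an exact proximal gradient step for the \(\ell _p\) regularization problem \(\min _x \frac{1}{2}\Vert Ax-b\Vert ^2 + \lambda \Vert x\Vert _p^p\). It is illustrative only: the function names (`prox_lp`, `pga_lp`), the brute-force grid evaluation of the scalar proximal mapping, and the stopping rule are our own choices and do not reproduce the paper's inexact variants or their error criteria.

```python
import numpy as np

def prox_lp(v, tau, p, grid=200):
    """Elementwise proximal map of z -> tau*|z|^p (0 < p < 1), evaluated by
    brute force: the nonzero minimizer, if any, has the same sign as v and
    magnitude in (0, |v|], so a grid search compared against z = 0 suffices."""
    out = np.zeros_like(v)
    for i, vi in enumerate(v):
        a = abs(vi)
        if a == 0.0:
            continue
        z = np.linspace(a / grid, a, grid)        # nonzero candidates in (0, |v|]
        f = 0.5 * (z - a) ** 2 + tau * z ** p     # scalar prox objective
        j = np.argmin(f)
        if f[j] < 0.5 * a ** 2:                   # compare against the value at z = 0
            out[i] = np.sign(vi) * z[j]
    return out

def pga_lp(A, b, lam, p=0.5, max_iter=1000, tol=1e-8):
    """Proximal gradient iteration for min_x 0.5*||Ax - b||^2 + lam*||x||_p^p."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2           # step size 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        # forward (gradient) step on the smooth part, then backward (prox) step
        x_new = prox_lp(x - t * A.T @ (A @ x - b), lam * t, p)
        if np.linalg.norm(x_new - x) <= tol:
            break
        x = x_new
    return x
```

A call such as `x = pga_lp(A, b, lam=0.1, p=0.5)` recovers a sparse vector from a design matrix `A` and data `b`. The inexact PGAs analyzed in the paper additionally allow the proximal subproblem to be solved only approximately, with the inexactness controlled so that linear convergence is retained; this exact sketch does not model that error control.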


Notes

  1. This condition is satisfied automatically for the \(\ell _p\) regularization problem (1).
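For reference, problem (1) is not reproduced on this page. In the \(\ell _p\) regularization literature it standardly takes the following form, which we presume here (with \(A\) the linear operator, \(b\) the observed data, and \(\lambda > 0\) the regularization parameter):

```latex
\min_{x \in \mathbb{R}^n} \; \frac{1}{2}\Vert Ax - b\Vert_2^2 + \lambda \Vert x\Vert_p^p,
\qquad \Vert x\Vert_p^p := \sum_{i=1}^{n} |x_i|^p, \quad 0 < p < 1.
```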


Acknowledgements

The authors are grateful to the editor and the anonymous reviewer for their valuable comments and suggestions toward the improvement of this paper. Yaohua Hu’s work was supported in part by the National Natural Science Foundation of China (12071306, 11871347), Natural Science Foundation of Guangdong Province of China (2019A1515011917, 2020B1515310008), Project of Educational Commission of Guangdong Province of China (2019KZDZX1007), Natural Science Foundation of Shenzhen (JCYJ20190808173603590, JCYJ20170817100950436) and Interdisciplinary Innovation Team of Shenzhen University. Chong Li’s work was supported in part by the National Natural Science Foundation of China (11971429) and Zhejiang Provincial Natural Science Foundation of China (LY18A010004). Kaiwen Meng’s work was supported in part by the National Natural Science Foundation of China (11671329) and the Fundamental Research Funds for the Central Universities (JBK1805001). Xiaoqi Yang’s work was supported in part by the Research Grants Council of Hong Kong (PolyU 152342/16E).

Author information


Corresponding author

Correspondence to Kaiwen Meng.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Hu, Y., Li, C., Meng, K. et al. Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems. J Glob Optim 79, 853–883 (2021). https://doi.org/10.1007/s10898-020-00955-3

