
On the Linear Convergence of Forward–Backward Splitting Method: Part I—Convergence Analysis

Published in: Journal of Optimization Theory and Applications

Abstract

In this paper, we study the complexity of the forward–backward splitting method with Beck–Teboulle's line search for solving convex optimization problems in which the objective function splits into the sum of a differentiable function and a nonsmooth function. We show that the method converges weakly to an optimal solution in Hilbert spaces under mild standing assumptions that do not require global Lipschitz continuity of the gradient of the differentiable function involved. Our standing assumptions are weaker than the corresponding conditions in the paper of Salzo (SIAM J Optim 27:2153–2181, 2017). The conventional sublinear convergence rate for the function value is also obtained under local Lipschitz continuity of the gradient of the differentiable function. Our main results establish linear convergence of this method (of the quotient type), in terms of both the function value sequence and the iterative sequence, under only a quadratic growth condition. Our proof technique derives directly from the quadratic growth condition and some properties of the forward–backward splitting method, without using error bounds or the Kurdyka–Łojasiewicz inequality as in other publications in this direction.
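The iteration the abstract analyzes can be sketched as a generic proximal-gradient loop with a backtracking line search in the spirit of Beck–Teboulle: at each step, take a forward gradient step on the smooth part, a backward proximal step on the nonsmooth part, and shrink the stepsize until the local quadratic upper model of the smooth part holds at the trial point. The sketch below is illustrative only (the function names, stopping rule, and parameters are assumptions, not taken from the paper), instantiated on a toy Lasso problem whose proximal map is soft-thresholding.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbs_backtracking(f, grad_f, prox_g, x0, alpha0=1.0, shrink=0.5,
                     max_iter=200, tol=1e-10):
    """Forward-backward splitting for min f(x) + g(x), f smooth, g prox-friendly.

    The inner loop is a backtracking line search: the stepsize alpha is
    halved until f(x_new) <= f(x) + <grad_f(x), d> + ||d||^2 / (2*alpha),
    i.e. until the quadratic upper model of f is valid at the trial point.
    """
    x = np.asarray(x0, dtype=float).copy()
    alpha = alpha0
    for _ in range(max_iter):
        gx = grad_f(x)
        while True:
            x_new = prox_g(x - alpha * gx, alpha)   # backward (proximal) step
            d = x_new - x
            if f(x_new) <= f(x) + gx @ d + (d @ d) / (2.0 * alpha):
                break                               # model holds: accept step
            alpha *= shrink                         # otherwise shrink stepsize
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Toy Lasso check with A = I: the minimizer of 0.5*||x - b||^2 + lam*||x||_1
# is exactly soft_threshold(b, lam).
b = np.array([3.0, -0.5, 1.0])
lam = 1.0
x_star = fbs_backtracking(
    f=lambda x: 0.5 * np.dot(x - b, x - b),
    grad_f=lambda x: x - b,
    prox_g=lambda v, a: soft_threshold(v, lam * a),
    x0=np.zeros(3),
)
# x_star is [2.0, 0.0, 0.0] = soft_threshold(b, 1.0)
```

Note that, consistent with the setting of the paper, the sketch never queries a global Lipschitz constant of the gradient; the backtracking test adapts the stepsize locally.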


Notes

  1. We are indebted to some remarks from one referee that allow us to extend the results from finite dimensions to Hilbert spaces in the current version.

  2. We are grateful to one of the referees whose remarks led us to this simple proof.

  3. This observation comes from one of the referees.

References

  1. Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57, 1413–1457 (2004)

  2. Hale, E.T., Yin, W., Zhang, Y.: Fixed-point continuation for \(\ell _1\)-minimization: methodology and convergence. SIAM J. Optim. 19, 1107–1130 (2008)

  3. Tibshirani, R.: Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B 58, 267–288 (1996)

  4. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)

  5. Bredies, K., Lorenz, D.A.: Linear convergence of iterative soft-thresholding. J. Fourier Anal. Appl. 14, 813–837 (2008)

  6. Beck, A., Teboulle, M.: Gradient-based algorithms with applications to signal recovery problems. In: Palomar, D., Eldar, Y. (eds.) Convex Optimization in Signal Processing and Communications, pp. 42–88. Cambridge University Press, Cambridge (2010)

  7. Combettes, P.L., Pesquet, J.-C.: Proximal splitting methods in signal processing. In: Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer Optimization and Its Applications, vol. 49, pp. 185–212. Springer, New York (2011)

  8. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)

  9. Parikh, N., Boyd, S.: Proximal algorithms. Found. Trends Optim. 1, 127–239 (2014)

  10. Davis, D., Yin, W.: Convergence rate analysis of several splitting schemes. In: Splitting Methods in Communication, Imaging, Science, and Engineering. Scientific Computation. Springer, Cham (2016)

  11. Tseng, P.: A modified forward–backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)

  12. Bello Cruz, J.Y., Nghia, T.T.A.: On the convergence of the proximal forward–backward splitting method with linesearches. Optim. Methods Softw. 31, 1209–1238 (2016)

  13. Salzo, S.: The variable metric forward–backward splitting algorithm under mild differentiability assumptions. SIAM J. Optim. 27, 2153–2181 (2017)

  14. Bauschke, H.H., Bolte, J., Teboulle, M.: A descent lemma beyond Lipschitz gradient continuity: first-order methods revisited and applications. Math. Oper. Res. 42, 330–348 (2017)

  15. Csiszár, I.: Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems. Ann. Statist. 19, 2032–2066 (1991)

  16. Vardi, Y., Shepp, L.A., Kaufman, L.: A statistical model for positron emission tomography. J. Amer. Statist. Assoc. 80, 8–37 (1985)

  17. Bello-Cruz, J.Y., Li, G., Nghia, T.T.A.: On the linear convergence of forward–backward splitting method. Quadratic growth condition and uniqueness of optimal solution to Lasso, preprint, Part II (2020)

  18. Bolte, J., Nguyen, T.P., Peypouquet, J., Suter, B.W.: From error bounds to the complexity of first-order descent methods for convex functions. Math. Program. 165, 471–507 (2017)

  19. Garrigos, G., Rosasco, L., Villa, S.: Convergence of the forward–backward algorithm: beyond the worst case with the help of geometry. arXiv:1703.09477 (2017)

  20. Garrigos, G., Rosasco, L., Villa, S.: Thresholding gradient methods in Hilbert spaces: support identification and linear convergence. arXiv:1712.00357 (2017)

  21. Li, G., Pong, T.K.: Calculus of the exponent of Kurdyka–Łojasiewicz inequality and its applications to linear convergence of first-order methods. Found. Comput. Math. 18, 1199–1232 (2018)

  22. Li, G., Mordukhovich, B.S., Nghia, T.T.A., Pham, T.S.: Error bounds for parametric polynomial systems with applications to higher-order stability analysis and convergence rates. Math. Program. 168, 313–346 (2018)

  23. Zhou, Z., So, A.M.-C.: A unified approach to error bounds for structured convex optimization. Math. Program. 165, 689–728 (2017)

  24. Li, G.: Global error bounds for piecewise convex polynomials. Math. Program. 137(1–2), 37–64 (2013)

  25. Drusvyatskiy, D., Lewis, A.: Error bounds, quadratic growth, and linear convergence of proximal methods. Math. Oper. Res. 43, 693–1050 (2018)

  26. Necoara, I., Nesterov, Yu., Glineur, F.: Linear convergence of first order methods for non-strongly convex optimization. Math. Program. 175, 69–107 (2019)

  27. Tao, S., Boley, D., Zhang, S.: Local linear convergence of ISTA and FISTA on the Lasso problem. SIAM J. Optim. 26, 313–336 (2016)

  28. Luo, Z.-Q., Tseng, P.: Error bounds and convergence analysis of feasible descent methods: a general approach. Ann. Oper. Res. 46, 157–178 (1993)

  29. Azé, D., Corvellec, J.-N.: Nonlinear local error bounds via a change of metric. J. Fixed Point Theory Appl. 16, 251–372 (2014)

  30. Aragón Artacho, F.J., Geoffroy, M.H.: Characterizations of metric regularity of subdifferentials. J. Convex Anal. 15, 365–380 (2008)

  31. Drusvyatskiy, D., Mordukhovich, B.S., Nghia, T.T.A.: Second-order growth, tilt stability, and metric regularity of the subdifferential. J. Convex Anal. 21, 1165–1192 (2014)

  32. Liang, J., Fadili, J., Peyré, G.: Local linear convergence of forward–backward under partial smoothness. Adv. Neural Inf. Process. Syst. (2014)

  33. Liang, J., Fadili, J., Peyré, G.: Activity identification and local linear convergence of forward–backward type methods. SIAM J. Optim. 27, 408–437 (2017)

  34. Aragón Artacho, F.J., Geoffroy, M.H.: Metric subregularity of the convex subdifferential in Banach spaces. J. Nonlinear Convex Anal. 15, 35–47 (2014)

  35. Tseng, P., Yun, S.: A coordinate gradient descent method for nonsmooth separable minimization. Math. Program. 117, 387–423 (2009)

  36. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)

  37. Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings. A View from Variational Analysis. Springer, Dordrecht (2009)

  38. Burachik, R.S., Iusem, A.N.: Set-Valued Mappings and Enlargements of Monotone Operators. Springer, Berlin (2008)

Acknowledgements

This work was partially supported by the National Science Foundation (NSF) Grants DMS-1816386 and DMS-1816449, and by a Discovery Project Grant from the Australian Research Council (ARC), DP190100555. The authors are deeply grateful to both anonymous referees for their careful reading and thoughtful suggestions, which allowed us to improve the original presentation significantly. Many insightful remarks from one referee allowed us to rewrite Section 3 in infinite-dimensional spaces.


Corresponding author

Correspondence to Guoyin Li.


Cite this article

Bello-Cruz, Y., Li, G. & Nghia, T.T.A. On the Linear Convergence of Forward–Backward Splitting Method: Part I—Convergence Analysis. J Optim Theory Appl 188, 378–401 (2021). https://doi.org/10.1007/s10957-020-01787-7
