First-Order Frameworks for Continuous Newton-like Dynamics Governed by Maximally Monotone Operators

Abstract

In a Hilbert framework, we discuss a continuous Newton-like model that is well suited to numerical purposes for solving convex minimization and, more generally, monotone inclusion problems. Algorithmic solutions to these problems were recently inspired by implicit temporal discretizations of the (stabilized) continuous version of Nesterov’s accelerated gradient method with an additional Hessian damping term (intended to attenuate the oscillation effects). Unfortunately, due to the presence of the Hessian term, these discrete variants require several gradient or proximal evaluations per iteration. An alternative methodology can be realized by means of a first-order model that no longer involves the Hessian term and that extends to the case of an arbitrary maximally monotone operator. Our first-order model originates from the reformulation of a closely related variant of the Nesterov-like equation. Its dynamics are studied simultaneously with regard to convex minimization and monotone inclusion problems, by considering the model governed by the sum of the gradient of a convex differentiable function and (up to a multiplicative constant) the Yosida approximation of a maximally monotone operator, with an appropriate adjustment of the regularization parameter. It turns out that our model offers a new framework for discrete variants while keeping the main asymptotic features of the (stabilized) Nesterov-like equation. Two new algorithms are then suggested for the optimization problems under consideration.


References

  1. Alvarez, F., Attouch, H., Bolte, J., Redont, P.: A second-order gradient-like dissipative dynamical system with Hessian driven damping. Application to optimization and mechanics. J. Math. Pures Appl. 81(8), 747–779 (2002)

  2. Attouch, H., Balhag, A., Chbani, Z., Riahi, H.: Fast convex optimization via inertial dynamics combining viscous and Hessian-driven damping with time rescaling. Evolution Equations and Control Theory. https://doi.org/10.3934/eect.2021010. arXiv:2009.07620v1 (2021)

  3. Attouch, H., Cabot, A.: Convergence rates of inertial forward-backward algorithms. SIAM J. Optim. 28(1), 849–874 (2018)

  4. Attouch, H., Chbani, Z., Fadili, J., Riahi, H.: First-order optimization algorithms via inertial systems with Hessian driven damping. Math. Programming. https://doi.org/10.1007/s10107-020-01591-1. arXiv:1907.10536 (2019)

  5. Attouch, H., Laszlo, S.C.: Newton-like inertial dynamics and proximal algorithms governed by maximally monotone operators. SIAM J. Optim. 30(4). https://hal.archives-ouvertes.fr/hal-02549730 (2020)

  6. Attouch, H., Laszlo, S.C.: Continuous Newton-like inertial dynamics for monotone inclusions. Set-Valued and Variational Analysis. https://doi.org/10.1007/s11228-020-00564-y (2020)

  7. Attouch, H., Peypouquet, J.: The rate of convergence of Nesterov’s accelerated forward-backward method is actually faster than 1/k². SIAM J. Optim. 26(3), 1824–1834 (2016)

  8. Attouch, H., Peypouquet, J.: Convergence of inertial dynamics and proximal algorithms governed by maximally monotone operators. Math. Programming 174, 391–432 (2019)

  9. Attouch, H., Peypouquet, J., Redont, P.: Fast convex minimization via inertial dynamics with Hessian driven damping. J. Differential Equations 261, 5734–5783 (2016)

  10. Belgioioso, G., Grammatico, S.: Semi-decentralized Nash equilibrium seeking in aggregative games with separable coupling constraints and non-differentiable cost functions. IEEE Control Systems Letters 1(2), 400–405 (2017)

  11. Boţ, R.I., Csetnek, E.R.: Second order forward-backward dynamical systems for monotone inclusion problems. SIAM J. Control Optim. 54(3), 1423–1443 (2016)

  12. Brezis, H.: Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert. Math. Stud., vol. 5. North-Holland, Amsterdam (1973)

  13. Brezis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer, New York (2010). https://doi.org/10.1007/978-0-387-70914-7

  14. Briceño-Arias, L.M., Combettes, P.L.: Monotone operator methods for Nash equilibria in non-potential games. In: Bailey, D., et al. (eds.) Computational and Analytical Mathematics. Springer Proceedings in Mathematics & Statistics, vol. 50. Springer, New York (2013)

  15. Csetnek, E.R.: Continuous dynamics related to monotone inclusions and non-smooth optimization problems. Set-Valued and Variational Analysis 28, 611–642 (2020)

  16. Deimling, K.: Zeros of accretive operators. Manuscripta Mathematica 13(4), 365–374 (1974)

  17. Haraux, A.: Systèmes dynamiques dissipatifs et applications. RMA 17. Masson (1991)

  18. Malitsky, Y.: Proximal extrapolated gradient methods for variational inequalities. Optimization Methods and Software 33(1), 140–164 (2018)

  19. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Applied Optimization, vol. 87. Kluwer Academic Publishers, Boston (2004)

  20. Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Programming 103(1), 127–152 (2005)

  21. Nesterov, Y.: Gradient methods for minimizing composite functions. Math. Programming 140, 125–161 (2013)

  22. Ochs, P., Brox, T., Pock, T.: iPiasco: inertial proximal algorithm for strongly convex optimization. J. Math. Imaging Vision 53, 171–181 (2015)

  23. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Amer. Math. Soc. 73, 591–597 (1967)

  24. Reich, S.: An iterative procedure for constructing zeros of accretive sets in Banach spaces. Nonlinear Analysis: TMA 2(1), 85–92 (1978)

  25. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Fundamental Principles of Mathematical Sciences, vol. 317. Springer, Berlin (1998)

  26. Shi, B., Du, S.S., Jordan, M.I., Su, W.J.: Understanding the acceleration phenomenon via high-resolution differential equations. arXiv:1810.08907v3 (2018)

  27. Su, W., Boyd, S., Candès, E.J.: A differential equation for modeling Nesterov’s accelerated gradient method: theory and insights. Advances in Neural Information Processing Systems 27, 2510–2518 (2014)


Acknowledgements

The authors would like to thank the two anonymous referees for their careful readings of the manuscript and their insightful comments and observations.

Author information

Corresponding author

Correspondence to Paul-Emile Maingé.


Appendix

A.1 The Yosida Approximation

The Yosida approximation enjoys numerous nice properties which often facilitate and simplify calculations. Some of them are recalled below (see [12, 13]); a numerical illustration is sketched right after the proposition:

Proposition A.1

Let \(A : {\mathscr{H}} \to 2^{{\mathscr{H}}}\) be a maximally monotone operator and, for \(\lambda > 0\), set \(J_{\lambda A} := (I + \lambda A)^{-1}\) and \(A_{\lambda} := \frac{1}{\lambda}(I - J_{\lambda A})\) (its resolvent and Yosida approximation, respectively). Then we have the following properties:

  (a) \(J_{\lambda A}\) is single-valued, everywhere defined and nonexpansive;

  (b) \(\forall v\in {\mathscr{H}}, \forall \lambda >0,\ A_{\lambda } v \in A(J_{\lambda A}v)\);

  (c) \(\forall (u,v)\in {\mathscr{H}}^{2}, \forall \lambda >0,\ \langle A_{\lambda } u - A_{\lambda } v, u - v \rangle \geq \lambda \|A_{\lambda } u - A_{\lambda } v\|^{2}\);

  (d) \(\forall (u,v)\in {\mathscr{H}}^{2}, \forall \lambda >0,\ \| A_{\lambda } u - A_{\lambda } v\| \leq \frac {1}{\lambda }\|u - v\|\);

  (e) \(\forall \lambda >0,\ A^{-1}(\{0\}) = A_{\lambda }^{-1}(\{0\})\);

  (f) \(\forall \lambda > 0, \forall \mu > 0,\ (A_{\lambda})_{\mu} = A_{\lambda+\mu}\).
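
The properties above can be illustrated numerically. The sketch below is an assumption of ours, not taken from the paper: it instantiates \(A\) as the subdifferential of the absolute value on the real line, for which the resolvent is the soft-thresholding map, checks (a), (c), (d) on random points, and verifies (f) by computing the resolvent of \(A_{\lambda}\) through a one-dimensional monotone root solve. The helper names resolvent, yosida and yosida_of_yosida are ours.

# Numerical sanity check of Proposition A.1 for a simple maximally monotone
# operator on the real line: A = ∂|.| (the sign operator).  Illustrative only.
import numpy as np
from scipy.optimize import brentq

lam, mu = 0.7, 1.3          # regularization parameters λ, μ > 0
rng = np.random.default_rng(0)

def resolvent(v, lam):
    # J_{λA}(v) = (I + λA)^{-1}(v): soft-thresholding for A = ∂|.|
    return np.sign(v) * max(abs(v) - lam, 0.0)

def yosida(v, lam):
    # A_λ(v) = (v - J_{λA}(v)) / λ, here equal to clip(v/λ, -1, 1)
    return (v - resolvent(v, lam)) / lam

# (a), (d): J_{λA} is nonexpansive and A_λ is (1/λ)-Lipschitz.
# (c): <A_λ u - A_λ v, u - v> >= λ |A_λ u - A_λ v|^2.
for _ in range(1000):
    u, v = rng.normal(size=2) * 5
    assert abs(resolvent(u, lam) - resolvent(v, lam)) <= abs(u - v) + 1e-12
    du = yosida(u, lam) - yosida(v, lam)
    assert abs(du) <= abs(u - v) / lam + 1e-12
    assert (u - v) * du >= lam * du**2 - 1e-12

# (f): (A_λ)_μ = A_{λ+μ}.  The Yosida approximation of A_λ is computed from
# the resolvent of A_λ, obtained here by a 1-D monotone root solve.
def yosida_of_yosida(v, lam, mu):
    w = brentq(lambda w: w + mu * yosida(w, lam) - v, v - mu - 1, v + mu + 1)
    return (v - w) / mu

for v in rng.normal(size=20) * 5:
    assert abs(yosida_of_yosida(v, lam, mu) - yosida(v, lam + mu)) < 1e-8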

Lemma A.1

Let \(\gamma, \delta > 0\) and \(x, y \in {\mathscr{H}}\). Then, for \(z \in A^{-1}(\{0\})\), we have

$$ \|\gamma A_{\gamma}x-\delta A_{\delta}y \| \leq 2\|x-y\| + 2\frac{|\gamma - \delta|}{\gamma}\|x-z\|, \qquad \text{(A.1)} $$

$$ \| A_{\gamma}x- A_{\delta}y \| \le 3\frac{|\delta-\gamma|}{\delta\gamma}\,\|x-z\| + \frac{2}{\delta}\|x-y\|. \qquad \text{(A.2)} $$

Proof

The proof of (A.1) can be found in [8]. Let us prove (A.2). To this end, we simply write

$$ A_{\gamma}x - A_{\delta}y = \frac{1}{\delta}\left(\delta A_{\gamma}x - \delta A_{\delta}y\right) = \frac{1}{\delta}\left((\delta-\gamma) A_{\gamma}x + (\gamma A_{\gamma}x - \delta A_{\delta}y)\right), $$

hence

$$ \| A_{\gamma}x - A_{\delta}y \| \le \frac{1}{\delta}\left(|\delta-\gamma| \, \|A_{\gamma}x\| + \|\gamma A_{\gamma}x - \delta A_{\delta}y\|\right). $$

Consequently, by \( \|A_{\gamma }x\|\le \frac {1}{\gamma } \|x-z\|\) and using (A.1), we obtain

$$ \| A_{\gamma}x - A_{\delta}y \| \le \frac{1}{\delta}\left(\frac{|\delta-\gamma|}{\gamma}\|x-z\| + 2\|x-y\| + 2\frac{|\gamma-\delta|}{\gamma}\|x-z\|\right) = \frac{1}{\delta}\left(3\frac{|\delta-\gamma|}{\gamma}\|x-z\| + 2\|x-y\|\right), $$

which is the desired inequality. □
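
In the same spirit, the inequalities (A.1)–(A.2) can be checked numerically on the toy operator \(A = \partial |\cdot|\) of the sketch above, for which \(z = 0\) is the unique zero of \(A\). The snippet below is a self-contained illustration under that assumption, not part of the argument.

# Numerical check of the bounds (A.1)-(A.2) of Lemma A.1 for A = ∂|.| on the
# real line; z = 0 is the unique zero of A.  Purely illustrative.
import numpy as np

def resolvent(v, lam):
    # J_{λA}(v): soft-thresholding for A = ∂|.|
    return np.sign(v) * max(abs(v) - lam, 0.0)

def yosida(v, lam):
    # A_λ(v) = (v - J_{λA}(v)) / λ
    return (v - resolvent(v, lam)) / lam

rng = np.random.default_rng(1)
z = 0.0                                   # z ∈ A^{-1}({0}) = {0}

for _ in range(2000):
    x, y = rng.normal(size=2) * 5
    gamma, delta = rng.uniform(0.1, 3.0, size=2)
    # inequality (A.1)
    lhs1 = abs(gamma * yosida(x, gamma) - delta * yosida(y, delta))
    rhs1 = 2 * abs(x - y) + 2 * abs(gamma - delta) / gamma * abs(x - z)
    assert lhs1 <= rhs1 + 1e-10
    # inequality (A.2)
    lhs2 = abs(yosida(x, gamma) - yosida(y, delta))
    rhs2 = (3 * abs(delta - gamma) / (delta * gamma) * abs(x - z)
            + 2 / delta * abs(x - y))
    assert lhs2 <= rhs2 + 1e-10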

A.2 A Technical Result

Lemma A.2 ([8], Lemma A.5)

Let \(\omega ,\eta : [0,+\infty [ \to [0,+\infty [\) be absolutely continuous functions such that \(\eta \notin L^{1}(0,+\infty )\) and which satisfy \({\int \limits }_{0}^{+\infty } \omega (t)\eta (t) dt < \infty \), along with \(|\dot {\omega }(t)|\leq \eta (t)\) for almost every t > 0. Then \(\displaystyle \lim _{t\to +\infty } \omega (t) = 0\).
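
As a simple sanity check (this example is not from the paper), the hypotheses of Lemma A.2 are readily verified for \(\omega(t) = \eta(t) = \frac{1}{1+t}\):

$$ \int_{0}^{+\infty} \eta(t)\,dt = \int_{0}^{+\infty} \frac{dt}{1+t} = +\infty, \qquad \int_{0}^{+\infty} \omega(t)\eta(t)\,dt = \int_{0}^{+\infty} \frac{dt}{(1+t)^{2}} = 1 < +\infty, $$

$$ |\dot{\omega}(t)| = \frac{1}{(1+t)^{2}} \le \frac{1}{1+t} = \eta(t) \quad \text{for all } t > 0, $$

so the lemma applies and indeed \(\omega(t) \to 0\) as \(t \to +\infty\).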


Cite this article

Labarre, F., Maingé, PE. First-Order Frameworks for Continuous Newton-like Dynamics Governed by Maximally Monotone Operators. Set-Valued Var. Anal 30, 425–451 (2022). https://doi.org/10.1007/s11228-021-00593-1

