Skip to content
BY 4.0 license Open Access Published by De Gruyter August 25, 2020

Inducing strong convergence of trajectories in dynamical systems associated to monotone inclusions with composite structure

  • Radu Ioan Boţ EMAIL logo , Sorin-Mihai Grad , Dennis Meier and Mathias Staudigl

Abstract

In this work we investigate dynamical systems designed to approach the solution sets of inclusion problems involving the sum of two maximally monotone operators. Our aim is to design methods which guarantee strong convergence of trajectories towards the minimum norm solution of the underlying monotone inclusion problem. To that end, we investigate in detail the asymptotic behavior of dynamical systems perturbed by a Tikhonov regularization where either the maximally monotone operators themselves, or the vector field of the dynamical system is regularized. In both cases we prove strong convergence of the trajectories towards minimum norm solutions to an underlying monotone inclusion problem, and we illustrate numerically qualitative differences between these two complementary regularization strategies. The so-constructed dynamical systems are either of Krasnoselskiĭ-Mann, of forward-backward type or of forward-backward-forward type, and with the help of injected regularization we demonstrate seminal results on the strong convergence of Hilbert space valued evolutions designed to solve monotone inclusion and equilibrium problems.

MSC 2010: 34G25; 37N40; 47H05; 90C25

1 Introduction

In 1974, Bruck showed in [1] that trajectories of the steepest descent system

x˙(t)+Φ(x(t))0 (1.1)

minimize the convex, proper, lower semi-continuous potential Φ defined on a real Hilbert space 𝓗. They weakly converge towards a minimum of Φ and the potential decreases along the trajectory towards its minimal value, provided that Φ attains its minimum. Subsequently, Baillon and Brezis generalized in [2] this result to differential inclusions whose drift is a maximally monotone operator A:𝓗 ⇉ 2𝓗, and dynamics

x˙(t)+A(x(t))0. (1.2)

Baillon provided in [3] an example where the trajectories of the steepest descent system converge weakly but not strongly. A key tool in the study of the convergence of the steepest descent method is the association of Fejér monotonicity with the Opial lemma.

Understanding the asymptotic behavior of continuous systems can be helpful in studying the convergence properties of discrete algorithms as well, although no direct implications between the convergence properties of trajectories of dynamical systems associated to certain problems and the ones of discrete iterative sequences for solving those problems are known yet. Arguments in this direction can be found, for instance in [4, 5], where continuous versions of Nesterov’s accelerated gradient for the minimization of a smooth convex function as well as their discrete counterparts are discussed. We want also to mention [6, 7], where continuous and discrete versions of the same algorithm are combined with Tikhonov regularization terms, and [8, 9], where second order dynamical systems of Nesterov type with Hessian driven damping and corresponding inertial proximal point type algorithms are investigated. In the context of monotone inclusions we refer to [10] for a so-called shadow Douglas-Rachford splitting in both continuous and discrete versions, and to [11] for a forward-backward-forward differential equation and its discrete counterpart.

In 1996, Attouch and Cominetti coupled in [12] approximation methods with the steepest descent system by adding a Tikhonov regularization term

x˙(t)+Φ(x(t))+ϵ(t)x(t)0. (1.3)

The time-varying parameter ϵ(t) tends to zero and the potential field ∂Φ satisfies the usual assumptions for strong existence and uniqueness of trajectories. The striking point of their analysis is the strong convergence of the trajectories when the regularization function tϵ(t) tends to zero at a sufficiently slow rate. In particular, ϵL1(ℝ+; ℝ). Then the strong limit is the point of minimal norm among the minima of the convex function Φ. This is a rather surprising result since we know that if ϵ = 0 we can only expect weak convergence of the induced trajectories under the standard hypotheses, and suddenly with the regularization term the convergence is strong without imposing additional demanding assumptions. These papers were the starting point for a flourishing line of research in which dynamical systems motivated by solving challenging optimization and monotone inclusion problems are studied. The formulation of numerical algorithms as continuous-in-time dynamical systems makes it possible to understand the asymptotic properties of the algorithms by relating them to their descent properties in terms of energy and/or Lyapunov functions, and to derive new numerical algorithms via sophisticated numerical discretization techniques (see, for instance, [13, 14, 15, 16, 17]). This paper follows this line of research. In particular, our main aim in this work is to construct dynamical systems designed to solve Hilbert space valued monotone inclusions of the form

find xH such that 0Ax+Bx, (MIP)

where A : 𝓗 ⇉ 𝓗 is a maximally monotone operator and B : 𝓗 → 𝓗 a β-cocoercive (respectively a (1/β)-Lipschitz continuous) operator with β > 0, such that Zer(A + B) is nonempty, and our focus is to design methods which guarantee strong convergence of the trajectories towards a solution of (MIP). This is a considerable advancement when contrasted with existing methods, where usually only weak convergence of trajectories is to be expected, for the strong one additional demanding hypotheses being imposed. Indeed, departing from the seminal work of Attouch and Cominetti [12] a thriving series of papers on dynamical systems for solving monotone inclusions of type (MIP) emerged, relating continuous-time methods to classical operator splitting iterations. A general overview of this still very active topic is given in [18]. In [19], Bolte studied the weak convergence of the trajectories of the dynamical system

x˙(t)+x(t)=PC(x(t)yΦ(x(t)),x(0)=x0 (1.4)

where Φ : 𝓗 → ℝ is a convex and continuously differentiable function defined on a real Hilbert space 𝓗, and C ⊆ 𝓗 is a closed and convex subset with an easy to evaluate orthogonal projector PC. Bolte shows that the trajectories of the dynamical system converge weakly to a solution of (MIP) with A = NC, the normal cone mapping of C, and B = ∇Φ, which is actually an optimal solution to the optimization problem

infxCΦ(x).

Moreover, in [19, Section 5] a Tikhonov regularization term is added to the differential equation, guaranteeing strong convergence of the trajectories of the perturbed dynamical system. More recently, [20] provided a generalization of Bolte’s work where the authors proposed the dynamical system

x˙(t)+x(t)=JyΦ(x(t)yB(x(t))),x(0)=x0, (1.5)

where Φ : 𝓗 → ℝ ∪ {+∞} is a proper, convex and lower semi-continuous function defined on a real Hilbert space 𝓗, and B : 𝓗 → 𝓗 is a cocoercive operator.

This projection-differential dynamical system relies on the resolvent operator Jy∂Φ ≜ (Id + y ∂Φ)−1, and reduces to the system (1.4) when the function Φ is the indicator function of a closed convex set C ⊆ 𝓗. It is shown that the trajectories of the dynamical system converge weakly to a solution of the associated (MIP) in which A = ∂Φ. In this paper, we continue this line of research and generalize it in two directions. Our first set of results is concerned with dynamical systems of the form (1.5) involving a nonexpansive mapping. Building on [21, 22], we perturb such a dynamical system with a Tikhonov regularization term that induces the strong convergence of the thus generated trajectories. This family of dynamical systems is of Krasnoselskiĭ-Mann type whose explicit or implicit numerical discretizations are well studied (see [23]). Next, we consider a family of asymptotically autonomous semi-flows derived from operator splitting algorithms. These splitting techniques originate from the theoretical analysis of PDEs, and can be traced back to classical work of [24, 25]. In this direction, we generalize recent results of [11] for dynamical systems of forward-backward type, and [11] for dynamical systems of forward-backward-forward type. In both of these papers the strong convergence of the trajectories is guaranteed only under demanding additional hypotheses like strong monotonicity of one of the involved operators. On the other hand, in articles like [21, 26] strong converge of trajectories of dynamical systems involving a single monotone operator or function is achieved by means of a suitable Tikhonov regularization under mild conditions. They motivated us to perturb the mentioned dynamical systems from [11, 22] in a similar manner in order to achieve strong convergence of the trajectories of the resulting Tikhonov regularized dynamical systems under natural assumptions. To the best of our knowledge the only previous contribution in the literature in this direction is the very recent preprint [27], where a Tikhonov regularized dynamical system involving a nonexpansive operator is investigated, whose trajectories strongly converge towards a fixed point of the latter. All these results are special cases of the analysis provided in this paper.

In the first part of our paper we deal with a Tikhonov regularized Krasnoselskiĭ-Mann dynamical system and show that its trajectories strongly converge towards a fixed point of the governing nonexpansive operator. Afterwards a modification of this dynamical system inspired by [27] is proposed, where the involved operator maps a closed convex set to itself and a similar result is obtained under a different hypothesis. The main result of [27] is then recovered as a special case, while another special case concerns a Tikhonov regularized forward-backward dynamical system whose trajectories strongly converge towards a zero of a sum of a maximally monotone operator with a single-valued cocoercive one. Because the regularization term is applied to the whole differential equation, we speak in this case of an outer Tikhonov regularized forward-backward dynamical system. In the next section another forward-backward dynamical system, this time with dynamic stepsizes and an inner Tikhonov regularization of the single-valued operator, is investigated and we show the strong convergence of its trajectories towards the minimum norm zero of a similar sum of operators. Afterwards we consider an implicit forward-backward-forward dynamical system with a similar inner Tikhonov regularization of the involved single-valued operator, whose trajectories strongly converge towards the minimum norm zero of a sum of a maximally monotone operator with a single-valued Lipschitz continuous one. In order to illustrate the theoretical results we present some numerical experiments as well, which shed some light on the role of the regularization parameter on the long-run behavior of trajectories.

2 Setup and preliminaries

We collect in this section some general concepts from variational and functional analysis. We follow standard notation, as developed in [23]. Let 𝓗 be a real Hilbert space. A set-valued operator M : 𝓗 ⇉ 𝓗 maps points in 𝓗 to subsets of 𝓗. We denote by

dom(M){xH|Mx},ran(M){yH|(xH):yMx},gr(M){(x,y)H×H|yMx},Zer(M){xH|0Mx},

its domain, range, graph and set of zeros, respectively. A set-valued operator M : 𝓗 ⇉ 𝓗 is called monotone if

xy,xy0(x,x),(y,y)gr(M).

The operator M : 𝓗 ⇉ 𝓗 is called maximally monotone if it is monotone and there is no monotone operator : 𝓗 ⇉ 𝓗 such that gr(M) ⊆ gr(). M is said to be ρ-strongly monotone if

xy,xyρxy2(x,x),(y,y)gr(M).

If M is maximally monotone and strongly monotone, then Zer(M) is singleton [23, Corollary 23.37].

A single-valued operator T : 𝓗 → 𝓗 is called nonexpansive when ∥TxTy∥ ≤ ∥xy∥ for all x, y ∈ 𝓗, while, given some β > 0, T is said to be β-cocoercive if 〈 xy, TxTy〉 ≥ βTxTy2 for all x, y ∈ 𝓗.

Let α ∈ (0, 1) be fixed. We say that R : 𝓗 → 𝓗 is α-averaged if there exists a nonexpansive operator T : 𝓗 → 𝓗 such that R = (1 − α) Id + α T, where Id is the identity mapping on 𝓗. An important instance of α-averaged operators are firmly nonexpansive mappings, which we recover for α = 1/2. For further insights into averaged operators we refer the reader to [23, Section 4.5].

The resolvent of the maximally monotone operator M is defined as JM ≜ (Id + M)−1. It is a single-valued operator with dom(JM) = 𝓗 and it is firmly nonexpansive, i.e.

JMxJMy2JMxJMy,xyx,yH. (2.1)

For all λ, μ > 0 and x ∈ 𝓗 it holds that (see [23, Proposition 23.28])

JλMxJμMx|λμ|Mλx, (2.2)

where Mλ ≜ (1/λ) (Id − JλM) is the Yosida approximation of the maximal monotone operator M with parameter λ > 0.

By PC we denote the orthogonal projector onto a closed convex set C ⊆ 𝓗, while the normal cone of a set C ⊆ 𝓗 is NC ≜ {z ∈ 𝓗 : 〈 z, yx〉 ≤ 0 ∀ yC} if xC and NC(x) = ∅ otherwise.

We also need the following basic identity (cf. [23])

αx+(1α)y2+α(1α)xy2=αx2+(1α)y2αRx,yH. (2.3)

2.1 Properties of perturbed operators

Let 𝓗 be a real Hilbert space, A : 𝓗 ⇉ 𝓗 a maximally monotone operator and B : 𝓗 → 𝓗 a monotone and (1/β)-Lipschitz continuous operator, for some β > 0. Let ε > 0 and denote BεB + εId : 𝓗 → 𝓗. Since B is maximally monotone and (1/β)-Lipschitz, the perturbed operator Bε is ϵ-strongly monotone and (ε + 1/β)-Lipschitz continuous. In particular, the operator A + Bε is ϵ-strongly monotone. Hence, for every ϵ > 0 the set 𝓢ε ≜ Zer(A + Bε) is a singleton with the unique element denoted xε.

Assumption 2.1

𝓢0 ≜ Zer(A + B) ≠ ∅.

The following lemma is a classical result due to Bruck [28]. A short proof can be found in [21, Lemma 4].

Lemma 2.2

It holds xεx ≜ inf{∥x∥ : x ∈ 𝓢0} as ε → 0.

Lemma 2.2 implies that the net (xε)ε>0 ⊂ 𝓗 is locally bounded. The next result establishes continuity and differentiability properties of the trajectory εxϵ. The proof relies on the characterization of zeros of a monotone operator via its resolvent, and can be found in [12, page 533]. For the reader’s convenience, we include it here as well.

Lemma 2.3

Let ε1, ε2 > 0. Then

xε1xε2xε1ε2|ε1ε2|,

i.e. εxϵ is locally Lipschitz continuous on (0, +∞), and therefore differentiable almost everywhere. Furthermore,

ddεxεxεεε(0,+).

Proof

First, we observe that, for ε > 0, 0 ∈ Axε + Bxε + ε xε is equivalent to

xε=Id+1ε(A+B)1(0)=J1ε(A+B)(0).

Using this fact, and combining it with relation (2.2), we obtain

xε1xε2=J1ε1(A+B)(0)J1ε2(A+B)(0)1ε1ε2xε1,

which is equivalent to

xε1xε2xε1ε2|ε1ε2|.

This proves the first statement.

For the second statement, we note that the previous inequality yields for ϵ = ε1 and ε2 = ϵ + h the estimate

0xεxε+hhxεϵ+hh(0,+).

Passing to the limit h → 0 completes the proof. ◼

2.2 Dynamical systems

In our analysis, we will make use of the following standard terminology from dynamical systems theory.

A continuous function f : [0, T] → 𝓗 (where T > 0) is said to be absolutely continuous when its distributional derivative is Lebesgue integrable on [0, T].

We remark that this definition implies that an absolutely continuous function is differentiable almost everywhere, and its derivative coincides with its distributional derivative almost everywhere. Moreover, one can recover the function from its derivative via the integration formula f(t) = f(0) + 0t g(s)d s for all t ∈ [0, T].

The solutions of the dynamical systems we are considering in this paper are understood in the following sense.

Definition 2.4

We say that x : [0, +∞) → 𝓗 is a strong global solution with initial condition x0 ∈ 𝓗 of the dynamical system

x˙(t)=f(t,x(t))x(0)=x0, (2.4)

where f : [0, +∞) × 𝓗 → 𝓗, if the following properties are satisfied:

  1. x : [0, +∞) → 𝓗 is absolutely continuous on each interval [0, T], 0 < T < +∞;

  2. it holds (t) = f(t, x(t)) for almost every t ≥ 0;

  3. x(0) = x0.

Existence and strong uniqueness of nonautonomous systems of the form (2.4) can be proven by means of the classical Cauchy-Lipschitz Theorem (see, for instance, [29, Proposition 6.2.1] or [30, Theorem 54]). To use this, we need to ensure the following properties enjoyed by the vector field f.

Theorem 2.5

Let f : [0, +∞) × 𝓗 → 𝓗 be a given function satisfying:

  1. f(⋅, x) : [0, +∞) → 𝓗 is measurable for each x ∈ 𝓗;

  2. f(t, ⋅):𝓗 → 𝓗 is continuous for each t ≥ 0;

  3. there exists a function ℓ(⋅) ∈ Lloc1 (ℝ+; ℝ) such that

    f(t,x)f(t,y)(t)xyt[0,b]bR+x,yH; (2.5)
  4. for each x ∈ 𝓗 there exists a function Δ(⋅) ∈ Lloc1 (ℝ+; ℝ) such that

    f(t,x)Δ(t)t[0,b]bR+. (2.6)

Then, the dynamical system (2.4) admits a unique strong solution tx(t), t ≥ 0.

3 A Tikhonov regularized Krasnoselskiĭ-Mann dynamical system

Let T : 𝓗 → 𝓗 be a nonexpansive mapping with Fix(T) ≠ ∅, where Fix(T) denotes the fixed point set of T. We are interested in investigating the trajectories of the following dynamical system

x˙(t)=λ(t)T(x(t))x(t)ϵ(t)x(t)x(0)=x0. (3.1)

where x0 ∈ 𝓗 is a given reference point, and λ(⋅) and ϵ(⋅) are user-defined functions, satisfying the following standing assumption:

Assumption 3.1

λ : [0, +∞) → (0, 1] and ϵ : [0, +∞) → [0, +∞) are Lebesgue measurable functions.

Motivated by [11], where it is shown that the trajectories of the dynamical system

x˙(t)=λ(t)(T(x(t))x(t))x(0)=x0,

converge weakly towards a fixed point of T, and [21], where the strong convergence of the trajectories of a dynamical system involving a maximally monotone operator is induced by means of a Tikhonov regularization, we show that, under mild hypotheses, the trajectories of (3.1) strongly converge to PFix(T)(0), the minimum norm fixed point of T. Moreover we also address the question about viability of trajectories in case where T is defined on a nonempty, closed and convex set D ⊆ 𝓗.

3.1 Existence and uniqueness of global solutions

Existence and uniqueness of solutions to the dynamics (3.1) follow from the general existence statement, i.e. Theorem 2.5. First, notice that the dynamical system (3.1) can be rewritten in the form of (2.4) where f : [0, +∞) × 𝓗 → 𝓗 is defined by f(t, x) ≜ λ(t)[T(x) − x] − ϵ(t)x. This shows that properties (f1) and (f2) in Theorem 2.5 are satisfied. It remains to verify properties (f3) and (f4).

Lemma 3.2

When ϵ Lloc1 (ℝ+; ℝ), then, for each x0 ∈ 𝓗, there exists a unique strong global solution of (3.1).

Proof

  1. Let x, y ∈ 𝓗, then, since T is nonexpansive, we have

    f(t,x)f(t,y)2λ(t)xy+ϵ(t)xy=[2λ(t)+ϵ(t)]xy.

    Since λ is bounded from above and due to the assumption made on ϵ, one has (⋅) ≜ 2λ(⋅) + ϵ(⋅) ∈ Lloc1 (ℝ+; ℝ), so that (f3) holds.

  2. For x ∈ 𝓗 and ∈ Fix(T) one has

    f(t,x)f(t,x¯)+f(t,x)f(t,x¯)ϵ(t)x¯+(t)xx¯Δ(t)

    for any t ∈ [0, +∞). Existence and uniqueness now follow from Theorem 2.5. ◼

3.2 Convergence of the trajectories: first approach

The following observation, which is based on a time rescaling argument similar to [26, Lemma 4.1], will be fundamental for the convergence analysis of the trajectories. We give it without proof since it can be derived as a special case of Theorem 3.6.

Theorem 3.3

Let τ1 : [0, +∞) → [0, +∞) be the function which is implicitly defined by

0τ1(t)λ(s)ds=t,τ1(0)=0.

Similarly, let τ2 : [0, +∞) → [0, +∞) be the function given by

τ2(t)0tλ(s)ds.

Set ϵ̃ϵτ1, λ̃λτ1, and consider the system

u˙(t)=T(u(t))u(t)ϵ~(t)λ~(t)u(t)u(0)=x0, (3.2)

where x0 ∈ 𝓗. If x is a strong solution of (3.1), then uxτ1 is a strong solution of the system (3.2). Conversely, if u is a strong solution of (3.2), then xuτ2 is a strong solution of the system (3.1).

Theorem 3.3 suggests that one can also study the dynamical system (3.2) instead of (3.1). Moreover, in [21, Theorem 9] the strong convergence of the trajectories of the differential inclusion

u˙(t)A(u(t))+ϵ(t)u(t)u(0)=x0, (3.3)

where A : 𝓗 ⇉ 𝓗 is a maximally monotone operator such that A−1(0) ≠ ∅, towards the minimum norm zero of A was obtained provided that limt→+∞ϵ(t) = 0, ϵL1(ℝ+; ℝ) and |ϵ̇| ∈ L1(ℝ+; ℝ). The connection between (3.3) and (3.1) (as well as (3.2)) is achieved through the fact that the nonexpansiveness of T guarantees that the operator A ≜ Id − T is maximally monotone and, furthermore, xA−1(0) holds if and only if x ∈ Fix T.

Now we establish the convergence of the trajectories of the dynamical system (3.1), noting that the employed hypotheses coincide with those of [21, Theorem 9] when λ (t) = 1 for all t ∈ [0, +∞).

Theorem 3.4

Let tx(t), t ≥ 0, be the strong solution of (3.1) with initial condition x0 ∈ 𝓗, and assume that

(i)0+ϵ(t)dt=+,(ii)0+λ(t)dt=+,(iii)ϵandλareabsolutelycontinuousandϵ(t)λ(t)0ast+,(iv)0+ddtϵ(t)λ(t)dt<+.

Then x(t) → PFix(T)(0) as t → +∞.

Proof

By Theorem 3.3, the dynamical system (3.2) has a strong solution, too. Since Id − T is maximally monotone, and x ∈ Fix T holds if and only if x ∈ (Id − T)−1(0), we verify first that the function ϵ~λ~=ϵτ1λτ1 fulfills the assumptions of [21, Theorem 9]. First,

0+ϵ~(t)λ~(t)dt=0+τ1˙(t)ϵ(τ1(t))dt=0+ϵ(s)ds=+.

Further, for almost all t ≥ 0 it holds, taking into consideration that τ1˙ (t) λ (τ1(t)) = 1,

ddtϵ~(t)λ~(t)=ϵ~˙(t)λ~(t)ϵ~(t)λ~˙(t)λ~(t)2=τ1˙(t)ϵ˙(τ1(t))λ~(t)ϵ~(t)τ1˙(t)λ˙(τ1(t))λ~(t)2=ϵ˙(τ1(t))λ(τ1(t))2ϵ(τ1(t))λ˙(τ1(t))λ(τ1(t))3,

hence

0+ddtϵ~(t)λ~(t)dt=0+ϵ˙(τ1(t))λ(τ1(t))2ϵ(τ1(t))λ˙(τ1(t))λ(τ1(t))3dt=0+ϵ˙(s)λ(s)ϵ(s)λ˙(s)λ(s)2ds=0+ddtϵ(t)λ(t)dt<+,

where we used that τ1(t) → +∞ as t → +∞. Therefore the strong convergence of the strong solution of (3.2) with initial condition x0 ∈ 𝓗 towards PFix(T)(0) is proven. The assertion follows by Theorem 3.3. ◼

3.3 Convergence of the trajectories: second approach

Our second convergence statement concerns a generalization of the system (3.1) where T maps from a closed and convex set D ⊆ 𝓗 to D. For such dynamics, a key condition is to ensure invariance with respect to the domain D of the trajectory tx(t), t ≥ 0, when issued from an initial condition x0D. Such viability results are key to the control of dynamical systems [31, 32]. To that end, we consider the differential equation

x˙(t)=λ(t)(T(x(t))x(t))ϵ(t)(x(t)y)x(0)=x0D, (3.4)

where yD is fixed reference point and Fix (T) ≠ ∅. In the very recent note [27] the strong convergence of the trajectory for the case λ(t) = 1 for all t ∈ [0, +∞) towards PFix(T)(y) has been demonstrated in [27, Theorem 4.1] by assuming that ϵ Lloc1 (ℝ+; ℝ) is absolutely continuous and nonincreasing, ϵ(t) → 0 as t → +∞, 0+ ϵ(s) ds = +∞ and limt→+∞ ϵ̇(t)/ϵ2(t) = 0.

First we give the existence and uniqueness statement for the strong global solution of (3.4), whose proof is skipped as it follows Proposition 3.2 and [27, Proposition 4.1].

Proposition 3.5

Assume that ϵ Lloc1 (ℝ+; ℝ). Then, for any pair (x0, y) ∈ D × D the dynamical system (3.4) admits a unique strong global solution tx(t), t ≥ 0 which leaves the domain D forward invariant, i.e. x(t) ∈ D for all t ∈ [0, +∞).

A result similar to Theorem 3.3 for (3.4) is provided next.

Theorem 3.6

Let τ1 : [0, +∞) → [0, +∞) be the function implicitly defined by

0τ1(t)λ(s)ds=t,τ1(0)=0.

Furthermore, let τ2 : [0, +∞) → [0, +∞) the function given by

τ2(t)0tλ(s)ds.

Set ϵ̃ϵτ1, λ̃λτ1 and consider the system

u˙(t)=T(u(t))u(t)ϵ~(t)λ~(t)(u(t)y)u(0)=x0, (3.5)

where (x0, y) ∈ D × D. If tx(t), t ≥ 0, is the strong solution of (3.4), then uxτ1 is a strong solution of the system (3.5). Conversely, if u is a strong solution of (3.5), then xuτ2 is a strong solution of the system (3.4).

Proof

Let tx(t), t ≥ 0, be a strong solution of (3.4). Since we already know that x(t) ∈ D for all t ≥ 0, the first line of (3.4) written at point T1(t) and multiplied by τ1˙ (t) yields

τ1˙(t)x˙(τ1(t))=τ1˙(t)λ(τ1(t))[T(x(τ1(t)))x(τ1(t))]τ1˙(t)ϵ(τ1(t))[x(τ1(t))y].

Since u(t) ≜ x(τ1(t)), (t) = τ1˙ (t) (τ1(t)) and τ1˙ (t) = 1/λ(τ1(t)), we obtain from the line above

u˙(t)=T(u(t))u(t)ϵ~(t)λ~(t)(u(t)y)

for almost every t ≥ 0. Moreover, u(0) = x(τ1(0)) = x0.

Now, let u be a strong solution of (3.2). From [27, Proposition 4.1] we deduce that that u(t) ∈ D for all t ≥ 0. The first line of (3.5) written at point τ2(t) and multiplied by τ2˙ (t) reads

τ2˙(t)u˙(τ2(t))=τ2˙(t)(T(u(τ2(t)))u(τ2(t)))τ2˙(t)ϵ~(τ2(t))λ~(τ2(t))[u(τ2(t))y]t0.

Observing that x(t) = u(τ2(t)), (t) = τ2˙ (t) (τ2(t)), τ2˙ (t) = λ(t) and τ1τ2 = Id, the previous line becomes for almost every t ≥ 0

x˙(t)=λ(t)(T(x(t))x(t))ϵ(t)(x(t)y).

Moreover, x(0) = u(0) = x0. This concludes the proof. ◼

Employing the time rescaling arguments from Theorem 3.6, we are able to derive the following statement, which extends [27, Theorem 4.1] that is recovered as special case when λ(t) = 1 for all t ∈ [0, +∞).

Theorem 3.7

Let tx(t), t ≥ 0, be the strong solution of (3.4) and assume that

  1. 0+ ϵ(t) dt = +∞,

  2. 0+ λ(t) dt = +∞,

  3. ϵ and λ are absolutely continuous, ϵ()λ() is nonincreasing and ϵ(t)λ(t) → 0 as t → +∞,

  4. ϵ˙(t)ϵ(t)2λ˙(t)λ(t)ϵ(t) → 0 as t → +∞.

Then x(t) → PFix(T)(y) as t → +∞.

Proof

In a similar manner to the proof of Theorem 3.4, due to Theorem 3.6 it suffices to check the assumptions in [27, Theorem 4.1] for the function ϵ̃/λ̃. First, we notice that

0+ϵ~(t)λ~(t)dt=0+τ1˙(t)ϵ(τ1(t))dt=0+ϵ(s)ds=+,

where we used that τ1(t) → +∞ as t → +∞. From the proof of Theorem 3.4 we know that for almost all t ≥ 0 one has

ddtϵ~(t)λ~(t)=ϵ˙(τ1(t))λ(τ1(t))2ϵ(τ1(t))λ˙(τ1(t))λ(τ1(t))3,

The last expression divided by ϵ~(t)λ~(t)2 gives

ϵ˙(τ1(t))ϵ(τ1(t))2λ˙(τ1(t))λ(τ1(t))ϵ(τ1(t)),

which, due to the assumptions we made on the functions ϵ and λ, tends to 0 as t → +∞. ◼

In the following two remarks we compare the hypotheses of Theorem 3.4 and Theorem 3.7, noting that, despite the common assumptions (i)-(ii), they do not fully cover each other.

Remark 3.1

The framework of Theorem 3.7 extends the one of Theorem 3.4 by allowing the involved operator T to map a closed convex set to itself, the latter being recovered when choosing D = 𝓗 and y = 0. However, in this setting, fixing β ∈ (0, 1) and taking ϵ(t) = 1/(0.2 + t)β and λ(t) = 0.5 cos(10.2+t) + 0.5, t ≥ 0, one notes that λ(t) ∈ [0, 1] ∀ t ≥ 0, 0+ ϵ(t) dt = 0+ λ(t) dt = +∞ and ϵ(t)/λ(t) is converging to 0 as t → +∞, but there exists an intervall where the function is increasing. Hence assumption (iii) of Theorem 3.7 is violated while the corresponding assumption in Theorem 3.4 is fulfilled. Moreover,

ddtϵ(t)λ(t)=β(0.1+t)(β+1)(0.5cos(10.2+t)+0.5)0.5sin(10.2+t)(0.5cos(10.2+t)+0.5)2(0.2+t)β+2t0,

that is a function of class L1(ℝ+; ℝ). Hence, for the chosen parameter functions ϵ and λ, the assumptions of Theorem 3.7 are not satisfied, while the ones of Theorem 3.4 are.

Remark 3.4

In the situation of Theorem 3.7 we consider again the choice D = 𝓗 and y = 0. In this case Theorem 3.4 is a special instance of Theorem 3.7. In fact, since ϵ/λ is assumed to be nonincreasing and ϵ(t)/λ(t) → 0 as t → +∞ we conclude that

0+ddtϵ(t)λ(t)dt=0+ddtϵ(t)λ(t)dt=lims+ϵ(s)λ(s)+ϵ(0)λ(0)<+,

i.e. assertion (iv) of Theorem 3.4 is fulfilled.

Remark 3.3

One can also compare the hypotheses imposed in Theorem 3.4 and Theorem 3.7 for guaranteeing the strong convergence of the trajectories of a dynamical system towards a fixed point of T with the ones required in [22, Theorem 6 and Remark 17], the weakest of them being (ii) of any of Theorem 3.4 and Theorem 3.7. Taking also into consideration [21, Proposition 5 and Theorem 9] as well as [27, Proposition 4.1 and Theorem 4.1], the assumptions of both Theorem 3.4 and Theorem 3.7 turn out to be natural for achieving the strong convergence of the trajectories of the dynamical system (3.1) towards a fixed point of T.

Fig. 1 
Graph of λ(t) = 0.5 
cos⁡(10.2+t)
$\begin{array}{}
\displaystyle
\cos(\frac{1}{0.2+t})
\end{array}$ + 0.5
Fig. 1

Graph of λ(t) = 0.5 cos(10.2+t) + 0.5

3.4 Special case: an outer Tikhonov regularized forward-backward dynamical system

From the analysis of the strong convergence of the trajectories of the Tikhonov regularized Krasnoselskiĭ-Mann dynamical system (3.1) one can deduce similar assertions for determining zeros of a sum of monotone operators. Let A : 𝓗 ⇉ 𝓗 be a maximally monotone operator and B : 𝓗 → 𝓗 a β-cocoercive operator with β > 0 such that Zer(A + B) is nonempty. The dynamical system employed to this end is a Tikhonov regularized version of [22, equation (14)], namely, when y ∈ (0, 2β), ϵ : [0, +∞) → [0, +∞) and λ : [0, +∞) → [0, (4βy)/(2β)] are Lebesgue measurable functions, and x0 ∈ 𝓗,

x˙(t)=λ(t)(JyA(x(t)yB(x(t)))x(t))ϵ(t)x(t)x(0)=x0. (3.6)

Employing either Theorem 3.4 or Theorem 3.7, one can derive the following statement.

Theorem 3.8

Suppose that either the assumptions of Theorem 3.4 or Theorem 3.7 made on the parameter functions ϵ and λ are fulfilled. Further, let x be the unique strong global solution of the dynamical system (3.6). Then x(t) → PZer(A+B)(0) as t → +∞.

Proof

Since the resolvent of a maximally monotone operator is firmly nonexpansive it is 1/2-averaged, see [23, Remark 4.34(iii)]. Moreover, by [23, Proposition 4.39] Id − y B is y/(2β)-averaged. Combining these two observations with [23, Proposition 4.44] yields that the composed operator TJyA ∘ (Id − y B) is 2β/(4βy)-averaged. Further, it is immediate that the dynamical system (3.6) can be equivalently written as

x˙(t)=λ(t)(T(x(t))x(t))ϵ(t)x(t)x(0)=x0.

As T is 2β/(4βy)-averaged, there exists a nonexpansive operator : 𝓗 → 𝓗 such that T = (1 − 2β/(4βy)) Id + (2β/(4βy)) . Then the dynamical system (3.6) can be further equivalently written as

x˙(t)=λ(t)2β4βy(T^(x(t))x(t))ϵ(t)x(t)x(0)=x0.

Since Fix = T = Zer(A + B) (see [23, Proposition 26.1(iv)(a)]) the assertion follows from Theorem 3.4 or Theorem 3.7. ◼

Remark 3.4

Strong convergence of the trajectories of a forward-backward dynamical system was achieved in [22, Theorem 12] under the more demanding hypothesis of uniform monotonicity (recall that an operator T: 𝓗 → 𝓗 is said to be uniformly monotone if there exists an increasing function ΦT : [0, +∞) → [0, +∞] that vanishes only at 0 such that 〈 xy, uv〉 ≥ ΦT(∥xy∥) for every (x, u), (y, v) ∈ gr (T)) imposed on one of the involved operators.

4 A Tikhonov regularized forward-backward dynamical system

In this section we construct Tikhonov regularized dynamical systems which are strongly converging to solutions of (MIP). The problem formulation involves a maximally monotone operator A : 𝓗 ⇉ 𝓗 and B : 𝓗 → 𝓗 a β-cocoercive operator with β > 0 such that Zer(A + B) is nonempty. Moreover, for t ∈ [0, +∞) denote Bϵ(t)B + ϵ(t) Id : 𝓗 → 𝓗 and Zer(A + Bϵ(t)) = {(ϵ(t))}. We consider the dynamical system

x˙(t)=λ(t)Jy(t)A(x(t)y(t)(Bx(t)+ϵ(t)x(t)))x(t)x(0)=x0, (4.1)

where λ(⋅), ϵ(⋅) obey Assumption 3.1, and y : [0, + ∞) → (0, 2β).

Remark 4.1

Comparing (4.1) with (3.6) one can note two differences: First of all, in (4.1) the stepsizes are provided by the function y : [0, + ∞) → (0, 2β), while in (3.6) y is a positive constant lying in the interval (0, 2β) as well. Secondly, to get (3.6) from the forward-backward dynamical system (cf. [22])

x˙(t)=λ(t)(JyAx(t)yBx(t)x(t))x(0)=x0, (4.2)

an outer perturbation is employed, while for (4.1) an inner one. As illustrated in Section 6, this leads to different performances in concrete applications.

4.1 Existence and uniqueness of strong global solutions

The dynamical system (4.1) can be rewritten as

x˙(t)=f(t,x(t))x(0)=x0,

where f : [0, + ∞) × 𝓗 → 𝓗 is defined by f(t, x) = λ(t)(Tt (x) − x), with TtJy(t)A (Id − y (t) Bϵ(t)). Hence, existence and uniqueness of trajectories follow by verifying the conditions spelled out in Theorem 2.5.

Proposition 4.1

Assume that tϵ (t) is of class Lloc1 (ℝ+; ℝ). Then, for each x0 ∈ 𝓗, there exists a unique strong global solution tx(t), t ≥ 0, of (4.1).

Proof

Conditions (f1), (f2) are clearly satisfied. To show (f3), let x, y ∈ 𝓗 be arbitrary. Since B is (1/β)-Lipschitz continuous, the perturbed operator Bϵ(t) is ((1/β) + ϵ(t))-Lipschitz continuous as well. Hence, by nonexpansiveness of the resolvent, we obtain for all t ∈ [0, +∞)

f(t,x)f(t,y)λ(t)xy+λ(t)TtxTtyλ(t)xy+λ(t)(xy)y(t)(Bϵ(t)xBϵ(t)y)λ(t)2+y(t)1β+ϵ(t)xy.

Since λ and y are bounded and due to the assumption we imposed on ϵ, one has (⋅) ≜ λ(⋅)(2 + y (⋅) [1/β + ϵ(⋅)]) ∈ Lloc1 (ℝ+; ℝ). Condition (f4) is verified by first noting that for ∈ Zer(A + B), we have = Tt() for all t ≥ 0. Therefore, for all x ∈ 𝓗 and all t ≥ 0 it holds

Tt(x)x¯=Jy(t)A(xy(t)Bϵ(t)x)Jy(t)A(x¯y(t)Bx¯)(xx¯)y(t)(Bϵ(t)xBx¯)xx¯+y(t)βxx¯+y(t)ϵ(t)x.

Hence, for all x ∈ 𝓗 and all t ≥ 0 one has

f(t,x)=λ(t)Tt(x)xλ(t)Tt(x)x¯+λ(t)xx¯λ(t)2+y(t)βxx¯+λ(t)y(t)ϵ(t)xΔ(t).

Therefore, (f4) holds as well. ◼

4.2 Convergence of the trajectory

As a preliminary step for proving the convergence statement of the trajectories of (4.1) towards PZer(A+B)(0) we need the following auxiliary result. Recall that Zer(A + Bϵ(t)) = Fix(Tt) = {(ϵ(t))}.

Lemma 4.2

Let tx(t), t ≥ 0, be the strong global solution of (4.1) and suppose that y (t) ≤ (2 β)/(1 + 2β ϵ(t)) for all t ∈ [0, +∞). Then, for almost all t ∈ [0, +∞)

x˙(t),x(t)x¯(ϵ(t))λ(t)2y(t)ϵ(t)(y(t)ϵ(t)2)x(t)x¯(ϵ(t))2.

Proof

By (2.3) we get for almost all t ∈ [0, +∞)

2x˙(t),x(t)x¯(ϵ(t))=x˙(t)+x(t)x¯(ϵ(t))2x˙(t)2x(t)x¯(ϵ(t))2=λ(t)(Tt(x(t))x¯(ϵ(t)))+(1λ(t))(x(t)x¯(ϵ(t)))2x˙(t)2x(t)x¯(ϵ(t))2=λ(t)Tt(x(t))x¯(ϵ(t))2+(1λ(t))x(t)x¯(ϵ(t))2λ(t)(1λ(t))Tt(x(t))x(t)2x˙(t)2x(t)x¯(ϵ(t))2=λ(t)Tt(x(t))x¯(ϵ(t))2λ(t)x(t)x¯(ϵ(t))2λ(t)(1λ(t))Tt(x(t))x(t)2x˙(t)2. (4.3)

On the other hand, for x, y ∈ 𝓗 and for all t ∈ [0, +∞) we obtain

(Idy(t)Bϵ(t))x(Idy(t)Bϵ(t))y2=(1y(t)ϵ(t))(xy)y(t)(BxBy)2=(1y(t)ϵ(t))2xy2+y(t)2BxBy22y(t)(1y(t)ϵ(t))xy,BxBy(1y(t)ϵ(t))2xy2+[y(t)22y(t)β(1y(t)ϵ(t))]BxBy2, (4.4)

where we used the β-cocoercivity of B in the last step and the observation that y (t) ϵ(t) ≤ 1 due to the hypothesis.

By assumption, y (t) ≤ 2 β (1- y (t) ϵ(t)) for all t ∈ [0, +∞). Therefore relation (4.4) yields

(Idy(t)Bϵ(t))x(Idy(t)Bϵ(t))y2(1y(t)ϵ(t))2xy2,

and by the nonexpansiveness of the resolvent

TtxTty2(1y(t)ϵ(t))2xy2t[0,+). (4.5)

Combining (4.3) with (4.5) by neglecting the two nonpositive terms in the last line of (4.3) yields for almost all t ∈ [0, +∞)

2x˙(t),x(t)x¯(ϵ(t))λ(t)(1y(t)ϵ(t))2x(t)x¯(ϵ(t))2λ(t)x(t)x¯(ϵ(t))2=λ(t)y(t)ϵ(t)(y(t)ϵ(t)2)x(t)x¯(ϵ(t))2.

This completes the proof. ◼

The convergence statement follows.

Theorem 4.3

Let tx(t), t ≥ 0, be the strong solution of (4.1). Suppose that y(t)2β1+2βϵ(t) for all t ∈ [0, +∞) and that the following properties are fulfilled

(i)ϵisabsolutelycontinuousandϵ(t)decreasesto0ast+,(ii)ϵ˙(t)ϵ2(t)λ(t)y(t)0ast+,(iii)0+λ(t)y(t)ϵ(t)(2y(t)ϵ(t))dt=+.

Then x(t) → PZer(A+B)(0) as t → + ∞.

Proof

Set θ(t)12x(t)x¯(ϵ(t))2,t0. Then, by using Lemma 4.2

θ˙(t)=x(t)x¯(ϵ(t)),x˙(t)ϵ˙(t)ddϵx¯(ϵ(t))=x(t)x¯(ϵ(t)),x˙(t)x(t)x¯(ϵ(t)),ϵ˙(t)ddϵx¯(ϵ(t))λ(t)2y(t)ϵ(t)(y(t)ϵ(t)2)x(t)x¯(ϵ(t))2x(t)x¯(ϵ(t)),ϵ˙(t)ddϵx¯(ϵ(t)).

We denote L(t)λ(t)2y(t)ϵ(t)(2y(t)ϵ(t)). The previous inequality yields

θ˙(t)2L(t)θ(t)ϵ˙(t)ddϵx¯(ϵ(t))2θ(t),

where we used that ϵ(⋅) is decreasing. Substituting φ2θ yields θ=φ22 and θ̇ = φ φ̇, hence the previous inequality becomes

φ˙(t)+L(t)φ(t)ϵ˙(t)ddϵx¯(ϵ(t)).

By Lemma 2.3,

φ˙(t)+L(t)φ(t)ϵ˙(t)ϵ(t)x¯(ϵ(t)).

Now, we define the integrating factor E(t) ≜ 0t L(s) d s, to get

ddtφ(t)exp(E(t))ϵ˙(t)ϵ(t)x¯(ϵ(t))exp(E(t)).

Hence

0φ(t)exp(E(t))φ(0)0tϵ˙(s)ϵ(s)x¯(ϵ(s))exp(E(s))ds. (4.6)

If 0tϵ˙(s)ϵ(s)x¯(ϵ(s))exp(E(s))ds is bounded, then limt→+∞ φ(t) = 0; otherwise, taking into consideration (iii), we employ L’Hôspital’s rule and obtain

limt+exp(E(t))0tϵ˙(s)ϵ(s)x¯(ϵ(s))exp(E(s))ds=limt+ϵ˙(t)x¯(ϵ(t))ϵ2(t)λ(t)y(t)(2y(t)ϵ(t))=0, (4.7)

where we used assertion (i) with Lemma 2.2 and assertion (ii).

In conclusion, by combining (5.15) and (iii) with (4.6), it follows that φ(t) → 0 as t → + ∞. In particular,

x(t)x¯(ϵ(t))0 as t+. (4.8)

Since

x(t)PZer(A+B)(0)x(t)x¯(ϵ(t))+x¯(ϵ(t))PZer(A+B)(0),

the statement of the theorem follows from Lemma 2.2 and (4.8). ◼

Remark 4.2

Since ϵ(t) must go to zero as t → +∞, the hypothesis y(t)2β1+2βϵ(t) in the previous theorem implies that the stepsize function y is always bounded from above by 2β. This corresponds to the classical assumptions in proving (weak) convergence of the discrete time forward-backward algorithm where, in order to guarantee convergence of the generated iterates, the stepsize has to be taken in the interval (0, 2β), see [23].

Remark 4.3

Comparing the forward-backward dynamical system (4.1) with the Tikhonov regularized Krasnoselskiĭ-Mann dynamical system (3.1) one may observe that the latter needs a constant step size function y (t) ≡ y ∈ (0, 2β). The system (4.1) allows us to vary the stepsizes over time, i.e. we may choose y (⋅) as a function in t.

Remark 4.4

Hypothesis (ii) of Theorem 4.3 is fulfilled when choosing the parameter functions ϵ, λ and y such that ϵ̇(t)/ϵ2(t) → 0 as t → + ∞ and inft→+∞ λ(t) > 0, inft→+∞ y (t) > 0, while Hypothesis (iii) holds true for any choice of parameter functions which satisfy λ(⋅) y (⋅) ϵ(⋅) ∉ L1(ℝ+; ℝ) and ϵ(⋅) ∈ L2(ℝ+; ℝ). A particular instance of parameter β and parameter functions ϵ, λ and y that satisfy the hypotheses of Theorem 4.3 is given by the choice β = 1/2, y (t) = 1/2, λ (t) = cos(1/t) and ϵ (t) = 1/(1 + t)0.6, t ∈ [0, +∞).

5 A Tikhonov regularized forward-backward-forward dynamical system

The Tikhonov regularized forward-backward dynamical system involved a cocoercive single-valued operator B : 𝓗 → 𝓗. In order to handle more general monotone inclusion problems with less demanding regularity assumptions, Tseng constructed in [33] a modified forward-backward scheme which shares the same weak convergence properties as the forward-backward algorithm, but is provably convergent under plain monotonicity assumptions on the involved operators A and B. Motivated by this significant methodological improvement, we are interested in investigating a dynamical system whose trajectories strongly converge towards the minimum norm element of the set Zer(A + B), assumed nonempty, where A : 𝓗 ⇉ 𝓗 is a maximally monotone operator, while B : 𝓗 → 𝓗 is a monotone and (1/β)-Lipschitz continuous operator. The proposed dynamical system is derived from forward-backward-forward splitting algorithms coupled with a Tikhonov regularization of the single-valued operator B. Our starting point is the differential system

z(t)=Jy(t)A(x(t)y(t)Bx(t))0=x˙(t)+x(t)z(t)y(t)(Bx(t)Bz(t))x(0)=x0, (5.1)

recently investigated in [11, 34]. We assume that y : [0, + ∞) → (0, β) is a Lebesgue measurable function and x0 ∈ 𝓗 is a given initial condition.

Given a regularizer function ϵ : [0, +∞) → ℝ, we modify the dynamical system (5.1) to obtain the new dynamical system

z(t)=Jy(t)A(x(t)y(t)(Bx(t)+ϵ(t)x(t)))0=x˙(t)+x(t)z(t)y(t)(Bx(t)Bz(t)+ϵ(t)(x(t)z(t)))x(0)=x0. (5.2)

5.1 Existence and uniqueness of strong global solutions

In this subsection we prove the existence and uniqueness of trajectories of the dynamical system (5.2) by invoking Theorem 2.5.

Let us define the parameterized vector field Vϵ,y : 𝓗 → 𝓗 as

Vϵ,y(x)((IdyBϵ)JyA(IdyBϵ)(IdyBϵ))x, (5.3)

where Bϵ ≜ B + ϵ Id. Notice that the dynamics (5.2) can be equivalently rewritten as

x˙(t)=f(t,x(t))x(0)=x0,

with f : (0, +∞) × 𝓗 → 𝓗, given by f(t, x) ≜ Vϵ(t),y(t)(x). Therefore, measurability in time and local Lipschitz continuity in the spatial variable follow after we have verified these properties for the vector field Vϵ,y(x).

Lemma 5.1

For fixed ϵ ∈ [0, +∞), let 0 < y < βϵβ+1 . Then, for all x, y ∈ 𝓗, it holds

Vϵ,y(x)Vϵ,y(y)6xy.

Proof

Let x, y ∈ 𝓗. For the sake of clarity, we abbreviate Cϵ ≜ Id − y Bϵ and JJyA. First, by using the binomial formula twice we obtain

Vϵ,y(x)Vϵ,y(y)2=CϵJCϵxCϵxCϵJCϵy+Cϵy2=CϵJCϵxCϵJCϵy2+CϵxCϵy22CϵJCϵxCϵJCϵy,CxCy=JCϵxJCϵy2+y2BϵJCϵxBϵJCϵy22yJCϵxJCϵy,BϵJCϵxBϵJCϵy+CϵxCϵy22CϵJCϵxCϵJCϵy,CϵxCϵy.

By invoking the (ϵ + 1/β)-Lipschitz continuity of Bϵ we conclude further

Vϵ,y(x)Vϵ,y(y)21+y2ϵ+1β2CϵxCϵy,JCϵxJCϵy2yJCϵxJCϵy,BϵJCϵxBϵJCϵy+CϵxCϵy22CϵJCϵxCϵJCϵy,CϵxCϵy=1+y2ϵ+1β22CϵxCϵy,JCϵxJCϵy2yJCϵxJCϵy,BϵJCϵxBϵJCϵy+CϵxCϵy2+2yBϵJCϵxBϵJCϵy,CϵxCϵy. (5.4)

On the one hand, the ϵ-strong monotonicity of Bϵ yields

2yJCϵxJCϵy,BϵJCϵxBϵJCϵy2yϵJCϵxJCϵy2, (5.5)

while on the other hand we deduce from the monotonicity of the resolvent and the choice of the involved parameters that

y2ϵ+1β21CϵxCϵy,JCϵxJCϵy0. (5.6)

Taking into account (5.5) and (5.6), using the Cauchy-Schwarz inequality, the firm nonexpansiveness of the resolvent, and the ϵ-strong monotonicity and the Lipschitz-continuity of Bϵ again, we obtain from (5.4)

Vϵ,y(x)Vϵ,y(y)22yϵJCϵxJCy2+CϵxCϵy2+2yBϵJCϵxBϵJCϵyCϵxCϵy2yϵ+1β+1CϵxCϵy22yϵ+1β+1(xy2+y2BϵxBϵy22yxy,BϵxBϵy)2yϵ+1β+11+y2ϵ+1β22yϵxy2.

Further, by the relation imposed on ϵ and y, we get y ϵβ < βy, hence

2yϵ+1β+122yβ+2yβ+1=3

as well as

1+y2ϵ+1β22yϵ=(yϵ1)2+y2β2(2ϵβ+1)<1+1yβ2+y2β2(2ϵβ+1)=22yβ+2y2β2+2y2ϵβ=1+2yβ2(yβ+yϵβ)<2.

Consequently ∥Vϵ,y(x) − Vϵ,y(y)∥2 ≤ 6 ∥xy2, which yields the assertion. ◼

Based on this estimate, we obtain that

f(t,x)f(t,y)Lf(t)xy2t0,x,yH, (5.7)

where Lf :[0, +∞) → ℝ is defined by

Lf(t)2y(t)ϵ(t)+1β+11+y(t)ϵ(t)(y(t)ϵ(t)+2y(t)β2).

Hence, by Assumption 3.1 it follows Lf(⋅) ∈ Lloc1 (ℝ+; ℝ). We now show that tf(t, x) ∈ Lloc1 (ℝ+; 𝓗) for all x ∈ 𝓗. We first establish some continuity estimates of the regularized vector field with respect to the parameters. Define the unregularized forward-backward-forward vector field

Vy(x)(IdyB)JyA(IdyB)x+yBxx, (5.8)

and the residual vector field

Rϵ,y(x)yBJyA(IdyB)BJyA(IdyBϵx+yϵxJyA(IdyBϵ)x. (5.9)

Simple algebra gives the decomposition

Vϵ,y(x)=Vy(x)+Rϵ,y(x).

From [11], we know that the application yVy (x) is continuous on (0, +∞). Furthermore, [11, Lemma 1] gives

limy0+Vy(x)=0xH. (5.10)

Lemma 5.2

If x ∈ dom A, then

lim(ϵ,y)(0,0)+Rϵ,y(x)=0. (5.11)

Proof

Let x ∈ dom A. Nonexpansivenes gives

JyA(xyBϵx)JyA(xyBx)ϵyx.

Since B is (1/β)-Lipschitz, it follows

BJyA(xyBx)BJyA(xyBϵx)ϵyβx.

Furthermore,

xJyA(xyBϵx)xJyA(xyBx)+ϵx.

Summarizing the last two bounds, the triangle inequality yields that

Rϵ,y(x)y2ϵβx+yϵxJyA(xyBx)+yϵ2x.

By [35, Proposition 6.4], we know that limy→0+ JyA(xy Bx) = PcldomA(x) = x. From here the result easily follows. ◼

Lemma 5.3

For all x ∈ dom A, we have

lim(ϵ,y)(0,0)Vϵ,y(x)=0. (5.12)

Proof

We just have to combine (5.10) with the decomposition Vϵ,y(⋅) = Vy (⋅) + Rϵ,y(⋅) and Lemma 5.2. ◼

Define the set

Θ(ϵ,y)R++2|y<βϵβ+1. (5.13)

By nonexpansivenes of the resolvent operator JyA and continuity of B, it follows that the map (ϵ, y ) ↦ R(ϵ,y)(x) is continuous. Furthermore, we can extend it continuously to the closure of the parameter space Θ, denoted as Θ̄, as Lemma 5.3 shows. This allows us to prove the local boundedness of the vector field.

Lemma 5.4

For all (ϵ, y ) ∈ Θ and all x ∈ 𝓗, there exists K > 0 such that

Vϵ,y(x)K(1+x). (5.14)

Proof

Fix ∈ dom A. By Lemma 5.3, the application (ϵ, y) ↦ f(ϵ, y, ⋅) can be continuously extended to the set

Θ¯(ϵ,y)R+2|yβϵβ+1.

Hence, there exists a constant M > 0 such that ∥f(ϵ, y, )∥ ≤ M for all (ϵ, y ) ∈ Θ. Furthermore, using Lemma 5.1, we get

Vϵ,y(x)Vϵ,y(x¯)+Vϵ,y(x)Vϵ,y(x¯)M+3xx¯K(1+x)

where we can choose Kmax{3,M+3x¯}.

All these estimates allow us now to prove existence and uniqueness of solutions to the dynamical system (5.2).

Theorem 5.5

Let (ϵ, y) : [0, +∞) → Θ be measurable. Then, for each x0 ∈ 𝓗, there exists a unique strong solution tx(t), t ≥ 0, of (5.2).

Proof

We verify conditions (f1)-(f4) of Theorem 2.5 for the map f(t, x) = Vϵ(t),y(t)(x). Conditions (f1), (f2) follow from the integrability assumptions on the functions ϵ(t), y (t). For all x, y ∈ 𝓗 and all t ≥ 0 we have

f(t,x)f(t,y)3xy, and f(t,x)K(1+x).

Hence, (f3), (f4) follow as well. ◼

5.2 Convergence of the trajectories

In order to show strong convergence of the strong global solution of (5.2) towards the minimum norm element of Zer(A + B), we need some additional preparatory results.

Lemma 5.6

For almost all t ∈ [0, +∞), we have

0x(t)x¯(ϵ(t))2x(t)z(t)2(1+2ϵ(t)y(t))z(t)x¯(ϵ(t))2+2y(t)Bϵ(t)z(t)Bϵ(t)x(t),z(t)xϵ(t).

Proof

First, we observe that the first line in (5.2) can be equivalently rewritten as

x(t)z(t)y(t)Bϵ(t)x(t)Az(t), (5.15)

hence

x(t)z(t)y(t)+Bϵ(t)z(t)Bϵ(t)x(t)=x˙(t)y(t)(A+Bϵ(t))z(t).

On the other hand 0 ∈ y (t) (A + Bϵ(t))(ϵ(t)). Using the ϵ(t)-strong monotonicity of A + Bϵ(t) yields

2ϵ(t)y(t)z(t)x¯(ϵ(t))22x(t)z(t)+y(t)Bϵ(t)z(t)y(t)Bϵ(t)x(t),z(t)x¯(ϵ(t))=x(t)x¯(ϵ(t))2x(t)z(t)2z(t)x¯(ϵ(t))2+2y(t)Bϵ(t)z(t)Bϵ(t)x(t),z(t)x¯(ϵ(t)).

This shows the assertion. ◼

Lemma 5.7

Let tx(t), t ≥ 0, be the strong global solution of (5.2). Then, for almost all t ∈ [0, +∞)

x(t)x¯(ϵ(t)),x˙(t)y(t)ϵ(t)+y(t)1βx(t)z(t)2ϵ(t)y(t)z(t)x¯(ϵ(t)).

Proof

We have for almost all t ∈ [0, +∞)

2x(t)x¯(ϵ(t)),x˙(t)=2x(t)x¯(ϵ(t)),z(t)x(t)+2y(t)x(t)x¯(ϵ(t)),Bϵ(t)x(t)Bϵ(t)z(t)=z(t)x¯(ϵ(t))2x(t)x¯(ϵ(t))2z(t)x(t)2+2y(t)x(t)x¯(ϵ(t)),Bϵ(t)x(t)Bϵ(t)z(t).

By Lemma 5.6, for almost all t ∈ [0, +∞) one has

z(t)x¯(ϵ(t))2x(t)x¯(ϵ(t))2x(t)z(t)22ϵ(t)y(t)z(t)x¯(ϵ(t))2+2y(t)Bϵ(t)z(t)Bϵ(t)x(t),z(t)x¯(ϵ(t)),

therefore, by using that Bϵ(t) is (ϵ(t) + 1/β)-Lipschitz continuous it holds for almost all t ∈ [0, +∞)

2x(t)x¯(ϵ(t)),x˙(t)2x(t)z(t)22ϵ(t)y(t)z(t)x¯(ϵ(t))2+2y(t)Bϵ(t)z(t)Bϵ(t)x(t),z(t)x(t)
21y(t)ϵ(t)y(t)βx(t)z(t)22ϵ(t)y(t)z(t)x¯(ϵ(t))2.

The convergence statement follows.

Theorem 5.8

Let tx(t), t ≥ 0, be the strong solution of (5.2). Suppose that y(t)<βϵ(t)β+1 for all t ∈ [0, +∞) and that the following properties are fulfilled

(i)ϵisabsolutelycontinuousandϵ(t)decreasesto0ast+,(ii)ϵ˙(t)ϵ2(t)y(t)(β(1y(t)ϵ(t))y(t))0ast+,(iii)0+y(t)ϵ(t)(ββy(t)ϵ(t)y(t))βy(t)ϵ(t)+β+y(t)dt=+.

Then x(t) → PZer(A+B)(0) and z(t) → PZer(A+B)(0) as t → + ∞.

Proof

Define θ(t)12x(t)x¯(ϵ(t))2,t0. Then, by using Lemma 5.7, for almost all t ≥ 0

θ˙(t)=x(t)x¯(ϵ(t)),x˙(t)ϵ˙(t)ddϵx¯(ϵ(t))=x(t)x¯(ϵ(t)),x˙(t)x(t)x¯(ϵ(t)),ϵ˙(t)ddϵx¯(ϵ(t))1y(t)ϵ(t)y(t)βx(t)z(t)2ϵ(t)y(t)z(t)x¯(ϵ(t))x(t)x¯(ϵ(t)),ϵ˙(t)ddϵx¯(ϵ(t)). (5.16)

Further, for almost all t ≥ 0, by ϵ(t)-strong monotonicity of A + Bt one has

ϵ(t)z(t)x¯(ϵ(t))2x(t)z(t)y(t)+Bϵ(t)z(t)Bϵ(t)x(t),z(t)x¯(ϵ(t)),

hence by Cauchy-Schwarz inequality, employing the (ϵ(t) + 1/β)-Lipschitz continuity of Bϵ(t) and rearranging terms, for almost all t ≥ 0 it holds

z(t)x¯(ϵ(t))1+1y(t)ϵ(t)+1βϵ(t)x(t)z(t).

In particular, for almost all t ≥ 0 one has

x(t)x¯(ϵ(t))x(t)z(t)+z(t)x¯(ϵ(t))2+1y(t)ϵ(t)+1βϵ(t)x(t)z(t),

which is equivalent to

x(t)z(t)βy(t)ϵ(t)2βy(t)ϵ(t)+β+y(t)x(t)x¯(ϵ(t)) (5.17)

for almost all t ≥ 0. Inserting (5.17) in (5.16) and dropping the second (nonpositive) term on the right hand side yields, after denoting

L(t)1y(t)ϵ(t)y(t)ββy(t)ϵ(t)2βy(t)ϵ(t)+β+y(t),t0,

we see

θ˙(t)2L(t)θ(t)x(t)x¯(ϵ(t)),ϵ˙(t)ddϵx¯(ϵ(t))2L(t)θ(t)ϵ˙(t)ddϵx¯(ϵ(t))2θ(t),

for almost all t ≥ 0, where in the second inequality we used that ϵ(⋅) is decreasing, thus ϵ̇(⋅) is nowhere positive. From here, we can repeat the arguments from the proof of Theorem 4.3, mutatis mutandis, to obtain the desired result.

Analogously to the proof of [11, Theorem 2] one can show, by replacing in the demonstrations of the intermediate results B by Bϵ(t) and taking into consideration the absolute continuity of ϵ, that limt→+∞ (x(t) − z(t)) = 0, hence z(t) → PZer(A+B)(0) as t → + ∞ as well. ◼

Remark 5.1

If supt→+∞ y (t) < β one can replace assertion (ii) of the previous theorem with

(ii)ϵ˙(t)ϵ2(t)y(t)0 as t+.

Remark 5.2

If we choose, for example, ϵ(t) = 1/(1 + t)0.5 and y (t) ≡ y ∈ (0, β) constant, symbolic computation with Mathematica shows that in this case assertion (iii) holds (as well as assertions (i) and (ii) by choice of ϵ(⋅)).

Remark 5.3

The strong convergence of the trajectories of a forward-backward-forward dynamical system was achieved in [11, Theorem 3] under more demanding hypothesis involving the strong monotonicity of sum of the involved operators.

6 Numerical illustrations

In this section we are going to illustrate by some numerical experiments the theoretical results we achieved. More precisely, we show how adding a Tikhonov regularization term in the considered dynamical systems influences the asymptotic behavior of their trajectories.

6.1 Application to a split feasibility problem

For our first example we consider the following split feasibility problem in ℝ2

find xR2 such that xC and LxQ, (SFP)

where C and Q are nonempty, closed and convex subsets of ℝ2 and L : ℝ2 → ℝ2 a bounded linear operator. For this purpose, we first notice that \eqref{SFP} can be equivalently rewritten as

minxC12LxPQ(Lx)2.

The necessary and sufficient optimality condition for this problem yields

find xR2 such that 0NC(x)+L(IdPQ)Lx. (6.1)

We approach (SFP) by the two Tikhonov regularized forward-backward dynamical systems we developed in this paper as well as by an unregularized version and compare the trajectories. In order to apply the forward-backward dynamical systems to the monotone inclusion problem (6.1), we set A ≜ NC and B ≜ ∇( 12 L(⋅) − PQ (L(⋅)) ∥2) = L ∘ (Id − PQ) ∘ L. It holds for x, y ∈ ℝ2

BxByL2xy+LPQLxPQLy2L2xy,

i.e. B is Lipschitz continuous with constant 2∥L2 and due to the Baillon-Haddad theorem B is (1/(2∥L2))-cocoercive. Hence Theorem 3.8 and Theorem 4.3 as well as the convergence statement [22, Theorem 6] for the non-regularized forward-backward dynamical system can be employed for (6.1) writen by means of the operators A and B. By taking into account that JA = PC we obtain the following dynamical systems

x˙(t)=λ(t)[PC(x(t)yB(x(t)))x(t)]x(0)=x0, (FB)
x˙(t)=λ(t)[PC(x(t)yB(x(t)))x(t)]ϵ(t)x(t)x(0)=x0, (FBOR)

and

x˙(t)=λ(t)[PC[x(t)y(B(x(t))+ϵ(t)x(t))]x(t)]x(0)=x0, (FBIR)

that are special cases of (4.2), (3.6) and (4.1), respectively. For the implementation we take CB1(0) the open ball with center 0 and radius 1 in ℝ2 and Q ≜ {x ∈ ℝ2 : 3x1x2 = 0} a linear subspace. Moreover, we define

L1111

and set x0 = (−3, 3) ∈ ℝ2 as starting point. Obviously, ∥L∥ = 2 . According to [23, Proposition 29.10 and Example 29.18], the projections onto the sets C and Q are given by

PC(x)=xx,x>1,x,else,

and

PQ(x)=x+ηx,uu2u,

with u = (3, −1) ∈ ℝ2 and η = 0, respectively. Further, we choose ϵ(t) ≜ 1/((1 + t)0.5) as the Tikhonov regularization function. For different choices of the parameters λ(t) ≡ λ > 0 and y > 0 the resulted trajectories of the dynamical systems (FB), (FBIR) and (FBOR) are displayed in Figures 2 to 5.

Fig. 2 
Trajectories of (FB), (FBIR) and (FBOR) for λ = 0.5 and y = 0.15
Fig. 2

Trajectories of (FB), (FBIR) and (FBOR) for λ = 0.5 and y = 0.15

Fig. 3 
Trajectories of (FB), (FBIR) and (FBOR) for λ = 0.5 and y = 0.3
Fig. 3

Trajectories of (FB), (FBIR) and (FBOR) for λ = 0.5 and y = 0.3

Fig. 4 
Trajectories of (FB), (FBIR) and (FBOR) for λ = 1 and y = 0.15
Fig. 4

Trajectories of (FB), (FBIR) and (FBOR) for λ = 1 and y = 0.15

Fig. 5 
Trajectories of (FB), (FBIR) and (FBOR) for λ = 1 and y = 0.3
Fig. 5

Trajectories of (FB), (FBIR) and (FBOR) for λ = 1 and y = 0.3

One observes the following: while the trajectories of the unregularized system (FB) approach a solution of (SFP) with positive norm, the regularized dynamical systems (FBIR) and (FBOR) generate trajectories which converge to the minimum norm solution (0, 0) ∈ ℝ2 of (SFP). Furthermore, for small parameters λ and y the outer regularization (FBOR) acts more aggressively than the inner regularization (FBIR), leading to a faster convergence of the trajectories of (FBOR). In contrast, the trajectories of (FBIR) are gently guided to the minimum norm solution and one can recognize the shape of the unregularized trajectories generated by (FB). However, for larger λ and y, the differences between the trajectories generated by the two Tikhonov regularized systems seem to fade.

6.2 Application to a variational inequality

For the second numerical illustration, this time of the forward-backward-forward splitting scheme, we consider the variational inequality

find xR3 such that B(x),yx0yC, (VI)

where B : ℝ3 → ℝ3 is a Lipschitz continuous mapping and C ⊆ ℝ3 a nonempty, closed and convex set. To attach a forward-backward-forward dynamical system to this problem, we note that (VI) can be equivalently rewritten as the monotone inclusion

find xR3 such that 0B(x)+NC(x). (6.2)

Hence, by setting A ≜ NC and taking into consideration that JA = PC, the Tikhonov regularized forward-backward-forward dynamical system (5.2) associated to problem (6.2) reads as

z(t)=PC[x(t)y(t)(Bx(t)+ϵ(t)x(t))]0=x˙(t)+x(t)z(t)y(t)[Bx(t)Bz(t)+ϵ(t)(x(t)z(t))]x(0)=x0. (FBFR)

For the implementation we specify

B00.10.50.100.40.50.40

which defines a linear operator and C ≜ {x ∈ ℝ3 : 3x1x2 + 1 = 0}. Since B is skew-symmetric (i.e. B = − B), it can not be cocoercive, hence our theoretical results on the forward-backward dynamical systems cannnot be used for solving (6.2). However, since B is Lipschitz continuous with constant ∥B∥ ≈ 0.64807 we can apply Theorem 5.8 for finding a solution to (6.2). Similarly as in the previous subsection, according to [23, Example 29.18] the projection onto C is given by

PC(x)=x+ηx,uu2u,

with u = (3, −1, 1) ∈ ℝ3 and η = 0. We choose x0 ≜ (−2, 4, −2) as starting point and ϵ(t) ≜ 1(1+t)β with β ∈ [0, 1) as Tikhonov regularization function. We call β the Tikhonov regularization parameter and note that the choice β = 0 corresponds to the unregularized system (5.1) as investigated in [11]. The trajectories of (FBFR) for the choices of regularization parameters β ∈ {0, 0.1, 0.5, 0.9} and step sizes y ∈ {0.2, 0.5} are pictured in Figures 6 and 7, respectively. One observes that the unregularized trajectories are oscillating with high frequency and converge slowly to zero. As we employ the Tikhonov regularization, the oscillating behaviour flattens out and the convergence speed increases. Since the parameter β is the exponent in the denominator of ϵ, a small value of β corresponds to a stronger impact of the Tikhonov regularization and vice versa. Hence, the two above mentioned effects are most pronounced when β is small. Moreover, comparing Figures 6 and 7 suggests that increasing the step size y results in an acceleration of the convergence behaviour (note the different time scales in Figures 6 and 7).

Fig. 6 
Trajectories of (FBFR) for regularization parameters β ∈ {0, 0.1, 0.5, 0.9} and y = 0.2
Fig. 6

Trajectories of (FBFR) for regularization parameters β ∈ {0, 0.1, 0.5, 0.9} and y = 0.2

Fig. 7 
Trajectories of (FBFR) for regularization parameters β ∈ {0, 0.1, 0.5, 0.9} and y = 0.5
Fig. 7

Trajectories of (FBFR) for regularization parameters β ∈ {0, 0.1, 0.5, 0.9} and y = 0.5

7 Conclusions

In this paper we perturb by means of the Tikhonov regularization several dynamical systems in order to guarantee the strong convergence of their trajectories under reasonable hypotheses. First we investigate a Tikhonov regularized Krasnoselskiĭ-Mann dynamical system and show that its trajectories strongly converge towards a minimum norm fixed point of the involved nonexpansive operator, slightly extending some recent results from the literature. As a special case, a perturbed forward-backward dynamical system with an outer Tikhonov regularization is obtained, whose trajectories strongly converge towards the minimum norm zero of the sum of a maximally monotone operator and a single-valued cocoercive operator. Making the Tikhonov regularization an inner one, by perturbing the single-valued operator and not the whole system as above, another Tikhonov regularized forward-backward dynamical system, this time with dynamic stepsizes (in contrast to the constant ones considered before) is obtained and its trajectories strongly converge towards the minimum norm zero of the mentioned sum of operators as well. Afterwards we consider an implicit forward-backward-forward dynamical system with a similar inner Tikhonov regularization of the involved single-valued operator, that is taken to be only Lipschitz continuous this time. The trajectories of this perturbed dynamical system strongly converge towards the minimum norm zero of the sum of a maximally monotone operator with the mentioned single-valued Lipschitz continuous one. These results improve previous contributions from the literature where only weak convergence of such trajectories was obtained under standard assumptions, more demanding hypotheses of uniform monotonicity or strong monotonicity being employed for deriving strong convergence. In order to illustrate our achievements we present some numerical experiments performed in MATLAB by using the ODE15S function for solving ordinary differential equations. In order to deal with the forward-backward dynamical systems we consider a split feasibility problem, while for the forward-backward-forward dynamical system we use a variational inequality. In both these situations one can note that adding a Tikhonov regularization term in the considered dynamical systems significantly influences the asymptotic behaviour of their trajectories. More precisely, while the trajectories of the unregularized dynamical systems are oscillating with high frequency and converge slowly towards some (random) solutions of the considered problems, the regularized dynamical systems generate trajectories which converge to the corresponding minimum norm solutions. Moreover, the outer regularization acts more aggressively than the inner regularization, leading to a faster convergence of the trajectories.

Acknowledgements

The work of R.I. Boţ was supported by FWF (Austrian Science Fund), project I2419-N32. The work of S.-M. Grad was supported by FWF (Austrian Science Fund), project M-2045, and by DFG (German Research Foundation), project GR 3367/4-1. The work of D. Meier was supported by FWF (Austrian Science Fund), project I2419-N32, by the Doctoral Programme Vienna Graduate School on Computational Optimization (VGSCO), project W1260-N35 and by DFG (German Research Foundation), project GR3367/4-1. M. Staudigl thanks the COST Action CA16228 “European Network for Game Theory” for financial support. The authors thank Phan Tu Vuong for valuable discussions and an anonymous reviewer for suggestions that contributed to improving the quality of the presentation.

References

[1] R.E. Bruck Jr. Asymptotic convergence of nonlinear contraction semigroups in Hilbert space. J. Funct. Anal., 18(1):15–26, 1975. doi:10.1016/0022-1236(75)90027-0

[2] J.B. Baillon and H. Brezis. Une remarque sur le comportement asymptotique des semigroupes non linéaires. Houston J. Math., 2(1):5–7, 1976.

[3] J.B. Baillon. Un exemple concernant le comportement asymptotique de la solution du problème du/dt + ∂φ(u) ∋ 0. J. Funct. Anal., 28(3):369–376, 1978. doi:10.1016/0022-1236(78)90093-9

[4] M. Muehlebach and M.I. Jordan. A dynamical systems perspective on Nesterov acceleration. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 4656–4662. PMLR, 2019.

[5] W. Su, S. Boyd, and E.J. Candes. A differential equation for modeling Nesterov's accelerated gradient method: theory and insights. J. Mach. Learn. Res., 17(153):1–43, 2016.

[6] H. Attouch, Z. Chbani, and H. Riahi. Combining fast inertial dynamics for convex optimization with Tikhonov regularization. J. Math. Anal. Appl., 457(2):1065–1094, 2018. doi:10.1016/j.jmaa.2016.12.017

[7] H. Attouch, A. Cabot, Z. Chbani, and H. Riahi. Inertial forward-backward algorithms with perturbations: application to Tikhonov regularization. J. Optim. Theory Appl., 179(1):1–36, 2018. doi:10.1007/s10957-018-1369-3

[8] H. Attouch, Z. Chbani, J. Fadili, and H. Riahi. First-order optimization algorithms via inertial systems with Hessian driven damping. arXiv:1907.10536v1, 2019. doi:10.1007/s10107-020-01591-1

[9] H. Attouch, Z. Chbani, J. Peypouquet, and P. Redont. Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity. Math. Program., 168(1):123–175, 2018. doi:10.1007/s10107-016-0992-8

[10] E.R. Csetnek, Y. Malitsky, and M.K. Tam. Shadow Douglas-Rachford splitting for monotone inclusions. Appl. Math. Optim., 80(3):665–678, 2019. doi:10.1007/s00245-019-09597-8

[11] S. Banert and R.I. Boţ. A forward-backward-forward differential equation and its asymptotic properties. J. Convex Anal., 25(2):371–388, 2018.

[12] H. Attouch and R. Cominetti. A dynamical approach to convex minimization coupling approximation with the steepest descent method. J. Differ. Equations, 128(2):519–540, 1996. doi:10.1006/jdeq.1996.0104

[13] H. Attouch, J. Bolte, P. Redont, and M. Teboulle. Singular Riemannian barrier methods and gradient-projection dynamical systems for constrained optimization. Optimization, 53(5–6):435–454, 2004. doi:10.1080/02331930412331327184

[14] F. Alvarez, H. Attouch, J. Bolte, and P. Redont. A second-order gradient-like dissipative dynamical system with Hessian damping. Applications to optimization and mechanics. J. Math. Pures Appl., 81(8):774–779, 2002. doi:10.1016/S0021-7824(01)01253-3

[15] A. Wibisono, A.C. Wilson, and M.I. Jordan. A variational perspective on accelerated methods in optimization. Proc. Natl. Acad. Sci. USA, 113(47):E7351–E7358, 2016. doi:10.1073/pnas.1614734113

[16] P. Mertikopoulos and M. Staudigl. On the convergence of gradient-like flows with noisy gradient input. SIAM J. Optim., 28(1):163–197, 2018. doi:10.1137/16M1105682

[17] P. Mertikopoulos and M. Staudigl. Stochastic mirror descent dynamics and their convergence in monotone variational inequalities. J. Optim. Theory Appl., 179(3):838–867, 2018. doi:10.1007/s10957-018-1346-x

[18] J. Peypouquet and S. Sorin. Evolution equations for maximal monotone operators: asymptotic analysis in continuous and discrete time. J. Convex Anal., 17(3–4):1113–1163, 2010.

[19] J. Bolte. Continuous gradient projection method in Hilbert spaces. J. Optim. Theory Appl., 119(2):235–259, 2003. doi:10.1023/B:JOTA.0000005445.21095.02

[20] B. Abbas and H. Attouch. Dynamical systems and forward-backward algorithms associated with the sum of a convex subdifferential and a monotone cocoercive operator. Optimization, 64(10):2223–2252, 2015. doi:10.1080/02331934.2014.971412

[21] R. Cominetti, J. Peypouquet, and S. Sorin. Strong asymptotic convergence of evolution equations governed by maximal monotone operators with Tikhonov regularization. J. Differ. Equations, 245(12):3753–3763, 2008. doi:10.1016/j.jde.2008.08.007

[22] R.I. Boţ and E.R. Csetnek. A dynamical system associated with the fixed points set of a nonexpansive operator. J. Dyn. Differ. Equations, 29(1):155–168, 2017. doi:10.1007/s10884-015-9438-x

[23] H.H. Bauschke and P.L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics, Springer, 2nd edition, 2017. doi:10.1007/978-3-319-48311-5

[24] J.-L. Lions and G. Stampacchia. Variational inequalities. Commun. Pure Appl. Math., 20:493–519, 1967. doi:10.1002/cpa.3160200302

[25] P.L. Lions and B. Mercier. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal., 16(6):964–979, 1979. doi:10.1137/0716071

[26] H. Attouch and M.-O. Czarnecki. Asymptotic behavior of coupled dynamical systems with multiscale aspects. J. Differ. Equations, 248(6):1315–1344, 2010. doi:10.1016/j.jde.2009.06.014

[27] E. Vilches and P. Pérez-Aros. Tikhonov regularization of dynamical systems associated with nonexpansive operators defined in closed and convex sets. arXiv:1904.05718, 2019.

[28] R.E. Bruck Jr. A strongly convergent iterative solution of 0 ∈ U(x) for a maximal monotone operator U in Hilbert space. J. Math. Anal. Appl., 48(1):114–126, 1974. doi:10.1016/0022-247X(74)90219-4

[29] A. Haraux. Systèmes Dynamiques Dissipatifs et Applications. Masson, Paris, 1991.

[30] E.D. Sontag. Mathematical Control Theory: Deterministic Finite Dimensional Systems, volume 6. Springer, 2013.

[31] J.-P. Aubin. Viability Theory. Birkhäuser, Boston, 1991.

[32] J.-P. Aubin and A. Cellina. Differential Inclusions. Springer, Berlin, 1984. doi:10.1007/978-3-642-69512-4

[33] P. Tseng. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim., 38(2):431–446, 2000. doi:10.1137/S0363012998338806

[34] R.I. Boţ, E.R. Csetnek, and P.T. Vuong. The forward-backward-forward method from discrete and continuous perspective for pseudo-monotone variational inequalities in Hilbert spaces. Eur. J. Oper. Res., 287(1):49–60, 2020. doi:10.1016/j.ejor.2020.04.035

[35] E. Pardoux and A. Răşcanu. Stochastic Differential Equations, Backward SDEs and Partial Differential Equations. Springer, 2014. doi:10.1007/978-3-319-05714-9

Received: 2019-11-11
Accepted: 2020-06-18
Published Online: 2020-08-25

© 2021 R.I. Boţ et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
