Research paper
Validated integration of differential equations with state-dependent delay

https://doi.org/10.1016/j.cnsns.2022.106762

Highlights

  • Develops validated integration for differential equations with state-dependent delay.

  • Computer-assisted proof methodology provides explicit, computable, and tight error bounds.

  • Examples and detailed code are provided.

Abstract

We present an implicit method of steps for differential equations with state-dependent delays, together with validated numerics to rigorously enclose solutions of initial-value problems. Our approach uses a combination of contraction mapping arguments, based on a Newton–Kantorovich type theorem, and piecewise polynomial interpolation. Completing multiple steps of integration is challenging; we resolve this by smooth interpolation of the previous solution, resulting in an interval-valued polynomial initial condition for the subsequent step. A set of examples is provided.

Introduction

Functional differential equations (FDE) have been studied for over 200 years, with substantial developments since the 1950s. Broadly, an FDE of retarded type is a differential equation $\dot{x}(t) = f(x_t)$, where $f$ is a functional acting on a function space and $x_t$ denotes a history function that "windows" the solution up to some time in the past. When this function space consists of the continuous functions on a compact interval, and $f$ is at least Lipschitz continuous, there is a well-developed theory popularized by the works of Diekmann, van Gils, Lunel and Walther [1], Hale & Lunel [2] and Krasovskii [3], among others. The situation is far less clear-cut for systems with so-called state-dependent delay, where the functional $f$ cannot be understood as Lipschitz continuous on the typical space of continuous functions. In such cases, there remain several open problems concerning the regularity of the semiflow and so-called solution manifolds and invariant manifolds [4]. For general background on state-dependent delay, the reader may refer to [5].

Consider the differential equation with state-dependent delay (DE-SDD), written in the form $$\dot{x}(t) = f\big(x(t),\, x(t - \tau(t, x(t)))\big). \tag{1}$$ We assume $f:\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}^d$ and $\tau:\mathbb{R}\times\mathbb{R}^d\to\mathbb{R}$ are $C^k$ for some $k\ge 1$. More generally, one could instead allow $f$ and $\tau$ to only be defined on an open domain $\Omega\subset\mathbb{R}^d$. In this paper, we present a method to approximate solutions of initial-value problems for DE-SDD and obtain rigorous, computable, tight bounds for the error. This is in contrast to a more traditional truncation error analysis, where error bounds are coarse and sometimes unquantifiable, as they may require access to some portion of the exact solution to compute; for example, see [6]. Here, the error bounds we obtain are explicit and machine-implementable.

Eq. (1) includes a few typical sub-classes of non-constant delays:

  • Time-varying delays: $\tau(t,x) = r(t)$ for $r:\mathbb{R}\to\mathbb{R}$.

  • Discrete state-dependent delays: $\tau(t,x) = r(x)$ for $r:\mathbb{R}^d\to\mathbb{R}$.

An assumption of causality might be physically reasonable (i.e. $\tau(t,x)\ge 0$), but in many applications causality must be verified a posteriori at the level of a particular solution. A classical example of an equation with time-varying delay is the pantograph equation [7], $\dot{x}(t) = a x(t) + b x(\lambda t)$ for $\lambda\in(0,1)$. As for discrete state-dependent delay problems, these continue to see application in cell biology [8], [9], electrodynamics [10], [11] and other fields. In the present work, we will concentrate on the case of discrete state-dependent delays: that is, formally, $\tau(t,x) = \tau(x)$ for some function $\tau:\mathbb{R}^d\to\mathbb{R}$. However, results will be stated with as much generality as possible.

In many applications, the delay is an implicit function of the state and is not given explicitly. Examples include the two-body problem in electrodynamics and position control models, among others, and we refer the reader to the chapter [5] for several detailed examples. In many such situations, however, it is possible to determine other differential equations that are satisfied by the delay variables. This generally leads to a state-dependent delay differential equation of the form (1), perhaps of higher dimension and with more than one delay. We consider here the case of a single delay, although the ideas developed in this paper could be extended to the case of multiple delays.

Recently, there has been a surge of interest in computer-assisted proofs for delay differential equations. Examples include the analysis of slowly oscillating periodic solutions of Wright's equation [12] and, ultimately, the proofs of Wright's conjecture [13] and Jones' conjecture [14]. There is also the analysis of periodic orbits and bifurcations in the time-delay Duffing equation [15]. Several works have addressed general approaches for validated computation of periodic orbits [16], [17], integration [16], [18] and parameterization of unstable manifolds [19]. Insofar as validated integration is concerned, the Taylor methods of [16] and the Chebyshev spectral methods of [18] appear to be the most recent; they are appropriate for fixed, constant delays. Our objective with the present work is to develop rigorous numerics for an implicit method of steps, using polynomial interpolation.

State-dependent delay equations (SDDE) are very different from those with constant delays. Superficially, the biggest change is that they are always nonlinear. More deeply, the semiflow of a state-dependent delay equation is generally only $C^1$ on its associated solution manifold. For these and related notions, the reader may consult [20], [21].

There is a large body of literature on numerical methods for SDDE, and we will make no effort to describe it here. We will, however, mention some recent work on the development of a posteriori validation approaches for invariant objects of SDDE. There is the work on periodic orbits in state-dependent delayed perturbations of ordinary differential equations by Yang, Gimeno and de la Llave [22], as well as the dissertation of Yang [23] on similar topics. Parameterization of quasi-periodic solutions under a small state-dependent delay and an exponential dichotomy assumption was considered by He and de la Llave [24].

Recall the method of steps for delay differential equations. Without loss of generality, assume a constant unit delay and the task of solving the initial-value problem $$\dot{x}(t) = f(x(t), x(t-1)), \quad t > 0, \qquad x(\theta) = \phi(\theta), \quad \theta\in[-1,0].$$ The method of steps exploits the fact that for $t\in[0,1]$, we have $x(t-1) = \phi(t-1)$. The task of solving the IVP above is equivalent to solving a sequence of ODE initial-value problems. Denote $\phi_0 = \phi$ and consider the sequence of IVPs for $n\in\mathbb{N}$: $$\dot{\phi}_n(t) = f(\phi_n(t), \phi_{n-1}(t-1)), \quad t\in[n-1,n], \qquad \phi_n(n-1) = \phi_{n-1}(n-1).$$ We have $x(t) = \phi_n(t)$ whenever $t\in[n-1,n]$, so the solution of the delay IVP can be compactly written as $$x = \sum_{n\ge 1} \mathbb{1}_{[n-1,n)}\,\phi_n.$$ Here, $\mathbb{1}_X$ is the indicator function on the set $X$ and, by an abuse of notation, we define $\phi_n(t) = 0$ for $t\notin[n-1,n]$ so that the above is well-defined.
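To make the constant-delay construction concrete, here is a minimal, non-rigorous sketch of the method of steps in Python (assuming SciPy is available): each step's dense ODE solution is fed forward as the next step's history, with no attempt at the validated error control developed in this paper.

    # Minimal, non-rigorous sketch of the classical method of steps for
    # x'(t) = f(x(t), x(t-1)) with unit delay; illustration only, not the
    # validated integrator developed in this paper.
    import numpy as np
    from scipy.integrate import solve_ivp

    def method_of_steps(f, phi, n_steps):
        """Solve x'(t) = f(x(t), x(t-1)) on [0, n_steps], history phi on [-1, 0]."""
        history = phi                        # x restricted to the previous interval
        x_left = np.atleast_1d(phi(0.0))     # value at the left endpoint of the step
        segments = []
        for n in range(1, n_steps + 1):
            # On [n-1, n] the delayed term is known: x(t-1) = history(t-1),
            # so the delay IVP reduces to an ODE IVP on this interval.
            rhs = lambda t, x, h=history: np.atleast_1d(f(x, np.atleast_1d(h(t - 1.0))))
            sol = solve_ivp(rhs, (n - 1.0, float(n)), x_left,
                            dense_output=True, rtol=1e-10, atol=1e-12)
            segments.append(sol)
            history = sol.sol                # dense output becomes the next history
            x_left = sol.y[:, -1]
        return segments

    # Example: x'(t) = -x(t-1) with constant history phi == 1.
    segs = method_of_steps(lambda x, xd: -xd, lambda t: 1.0, n_steps=3)
    print(segs[-1].y[0, -1])                 # approximate x(3)

The validated integrator developed in Sections 2–4 plays an analogous game, but replaces each black-box solve with a piecewise polynomial representation whose error is controlled by a contraction argument.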

The Chebyshev spectral approach of [18] makes explicit use of the method of steps by representing the functions $\phi_n$ at each step as Chebyshev series. The power of this spectral method comes from the excellent approximation properties of the Chebyshev polynomials and the Banach algebra associated to the sequence space used in the proofs. Polynomial nonlinearities in $f$ translate to cosine convolutions in the sequence space, and the representation of the derivative operator is diagonally dominant. By comparison, the Taylor method of [16] represents solutions as piecewise Taylor expansions, using a bootstrapping procedure to obtain the high-order derivatives of the solution needed to step the procedure forward iteratively; there, the step size is not necessarily equal to the delay.

Our initial goal was to develop a fully spectral integrator based on Chebyshev expansions. In moving from the constant-delay case to the non-constant or state-dependent case, there are some technical problems that must be either resolved or circumvented. Chiefly, self-compositions, such as $x(t)\mapsto x(t - r(x(t)))$, are highly nonlinear and difficult to characterize at the level of a sequence algebra. The obstructions were so severe that we abandoned the approach. We describe these obstructions in more detail below.

Let $x(t) = a_0 + 2\sum_{n=1}^{\infty} a_n T_n(t)$ denote a Chebyshev series, uniformly convergent on $[-1,1]$ and such that $x(t)\in[-1,1]$. Let $y(t) = b_0 + 2\sum_{n=1}^{\infty} b_n T_n(t)$ denote another Chebyshev series, and let us consider the composition $y\circ x(t) = c_0 + 2\sum_{n\ge 1} c_n T_n(t)$. We might ask: how do we compute the coefficients $\{c_n\}$ from the coefficients $\{a_n\}$ and $\{b_n\}$, and how is the decay of $\{c_n\}$ related to that of $\{a_n\}$ and $\{b_n\}$?

Write the Chebyshev polynomials in the form $T_n(t) = \sum_{k=0}^{n} s_{n,k} t^k$. Then we have $$y(x(t)) = b_0 + 2\sum_{n\ge 1} b_n \sum_{j=0}^{n} s_{n,j}\Big(a_0 + 2\sum_{k\ge 1} a_k T_k(t)\Big)^{j} = b_0 + 2\sum_{n\ge 1} b_n \sum_{j=0}^{n} s_{n,j}\Big(a^{*j}_0 + 2\sum_{k\ge 1} a^{*j}_k T_k(t)\Big),$$ where $a^{*m}$ denotes the $m$-fold cosine convolution of $a$ with itself: $a^{*m} \overset{\mathrm{def}}{=} a^{*(m-1)} * a$ and $(u * v)_k = \sum_{n\in\mathbb{Z}} u_{|n|} v_{|k-n|}$. It follows that the operation $(y,x)\mapsto y\circ x$ is equivalent, at the level of the Chebyshev series coefficients, to the map $(b,a)\mapsto c$, where $$c_n = \sum_{m=0}^{\infty} b_m \sum_{j=0}^{m} s_{m,j}\, a^{*j}_n.$$
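As a quick, non-rigorous illustration of composition at the level of coefficients, the snippet below composes two truncated Chebyshev series with NumPy's Chebyshev class and checks the result against pointwise evaluation. Note that NumPy's convention $c_0 + \sum_{n\ge 1} c_n T_n$ omits the factor of 2 used above, and that this is only a sanity check of the algebra, not the coefficient map used in the paper.

    # Compose y o x for truncated Chebyshev series by Horner's scheme in the
    # Chebyshev algebra, after converting y to the monomial basis.
    import numpy as np
    from numpy.polynomial.chebyshev import Chebyshev, cheb2poly

    # Truncated series on [-1, 1], with the range of x contained in [-1, 1].
    x = Chebyshev.interpolate(lambda t: 0.5 * np.sin(np.pi * t), deg=20)
    y = Chebyshev.interpolate(lambda t: np.exp(t), deg=20)

    beta = cheb2poly(y.coef)                 # y(s) = sum_k beta[k] * s**k
    comp = Chebyshev([beta[-1]])
    for bk in beta[-2::-1]:                  # Horner: ((b_N*x + b_{N-1})*x + ...) + b_0
        comp = comp * x + bk

    ts = np.linspace(-1.0, 1.0, 7)
    print(np.max(np.abs(comp(ts) - y(x(ts)))))   # ~ truncation error (small)
    print(comp.degree())                         # 400 = deg(x) * deg(y)

The printed degree anticipates the growth discussed next: composing truncations of degrees $N$ and $M$ produces a polynomial of degree $NM$.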

The derivation above has a fairly direct consequence for numerical computations. If the coefficients $\{a_n\}$ are finitely supported, with $a_n = 0$ for $n\ge N+1$, then the coefficients of the self-composition $x\circ x$ will have support in $n\in\{0,\dots,N^2\}$. Hence, rigorously evaluating a numerical defect can be expensive even for a low-order approximation, and the amount of information that is lost by taking a finite-mode projection grows quadratically with the order of the approximation.

Far more problematic from the point of view of rigorous Chebyshev (or Fourier) spectral methods: if one prescribes geometric decay of the coefficients, say $\|a\|_\nu \overset{\mathrm{def}}{=} |a_0| + 2\sum_{n=1}^{\infty} \nu^n |a_n| < \infty$ for some $\nu > 1$, it is generally false that the coefficients $a^{\circ 2}$ of the self-composition $x\circ x$ satisfy $\|a^{\circ 2}\|_\nu < \infty$. To see this, consider $x(t) = t(1+t^2)^{-1}$. The poles of $x$ are at $\pm i$, and it is therefore analytic on the Bernstein ellipse $B_\rho$ for any $\rho < 1+\sqrt{2}$, with the associated Chebyshev series expansion being uniformly convergent there. Let $\{a_n\}$ denote the Chebyshev coefficients of $x(t) = t(1+t^2)^{-1}$. Then $\|a\|_\nu < \infty$ for $\nu < 1+\sqrt{2}$. However, consider the self-composition $x\circ x$ and its Chebyshev coefficients $a^{\circ 2}$. The function $x\circ x$ has four poles, all purely imaginary, but for the present discussion it suffices to list only the two smallest ones: $t^* = \pm i\,(2^{-1}(3-\sqrt{5}))^{1/2}$. Let $\rho\in(1, 1+\sqrt{2})$ be such that $t^*$ lies inside the Bernstein ellipse $B_\rho$. Then $\|a\|_\nu < \infty$ for $\nu = \rho$, but $x\circ x$ has a pole in the interior of $B_\rho$. In particular, $\|a^{\circ 2}\|_\nu = \infty$. See Fig. 1.
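A quick floating-point experiment (no interval arithmetic, so merely indicative) is consistent with this picture: a log-linear fit to the Chebyshev coefficients of $x$ and of $x\circ x$ recovers decay rates near $1+\sqrt{2}\approx 2.41$ and near $1.79$ respectively, the latter being determined by the pole of $x\circ x$ closest to $[-1,1]$.

    # Estimate the geometric decay rate of Chebyshev coefficients for
    # x(t) = t/(1+t^2) and for x o x; illustration only (double precision).
    import numpy as np
    from numpy.polynomial.chebyshev import Chebyshev

    x = lambda t: t / (1.0 + t**2)

    def decay_rate(f, deg=81):
        """Fit |c_n| ~ C * rho**(-n) and return rho (skipping zero/rounded modes)."""
        c = np.abs(Chebyshev.interpolate(f, deg=deg).coef)
        n = np.arange(deg + 1)
        keep = (c > 1e-13) & (n >= 5) & (n <= 45)
        slope, _ = np.polyfit(n[keep], np.log(c[keep]), 1)
        return np.exp(-slope)

    print(decay_rate(x))                     # approx 2.41 ~ 1 + sqrt(2)
    print(decay_rate(lambda t: x(x(t))))     # approx 1.79 < 1 + sqrt(2)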

For the purposes of solving an initial-value problem associated to a DE-SDD like (1), the self-composition barrier is not initially present. Indeed, the implicit method of steps introduced in Section 2.1 transforms an initial-value problem for a DE-SDD into a boundary-value problem for an ordinary differential equation. This ordinary differential equation depends on the initial condition, and its solution solves the initial-value problem on an interval of finite length. However, to solve the initial-value problem on a longer interval, the computed solution must be fed forward as the initial condition for a new initial-value problem. Viewing this process more globally, the feedforward structure is "lower triangular" but non-trivially coupled. Solving the initial-value problem $$\dot{x}(t) = f\big(x(t), x(t-\tau(t,x(t)))\big), \qquad x(\theta) = \phi(\theta), \quad \theta\le 0$$ on a suitably long time interval (such that two implicit steps occur; see Section 2.1) is equivalent, after suitable transformations, to a fixed-point problem of the form $$x_1(t) = \phi(0) + \int_0^t F_1(x_1,\phi)(s)\,ds, \qquad x_2(t) = x_1(1) + \int_0^t F_2(x_2,x_1)(s)\,ds,$$ where $F_j : C^0([0,1],\mathbb{R}^d)\times C^0([0,1],\mathbb{R}^d)\to C^0([0,1],\mathbb{R}^d)$ for $j=1,2$ is a function that is related to the vector field $f$ but, importantly, features a compositional term. Specifically, $F_j(y,z)(s)$ contains a term of the form $y(s\delta - \tau(s\delta, z(s)))$ for some constant $\delta$. At the level of a fixed-point problem, the pair of integral equations above defines (almost; see Section 2.2) a map on $C^0([0,1],\mathbb{R}^d)^2$ that contains a self-composition, due to the presence of $F_2$. Therefore, moving to a purely spectral approach would still require understanding a composition formula such as the coefficient map $(b,a)\mapsto c$ above. At present, this seems out of reach. For these reasons, we will avoid the use of purely spectral methods.

The implicit method of steps is developed in Section 2. Section 2.1 is more computational, while Section 2.2 is more theoretically minded, exploring how to translate the implicit method of steps into a zero-finding problem. In Section 3, we review some necessary background on interpolation and introduce the rigorous numerics framework for the zero-finding problem of the previous section. Section 4 contains the theoretical background for rigorous multiple integration steps, as well as extensive details concerning practical implementation. Several examples concerning a scalar state-dependent delay equation appear in Section 5.

Given a function $F : X_1\times X_2\times\cdots\times X_n \to Y$ for Banach spaces $X_1,\dots,X_n$ and $Y$, we denote by $D_j F$ the partial Fréchet derivative of $F$ with respect to variables in $X_j$. For an interval $I$, we denote by $\mathring{I}$ its interior. If $I$ is closed, we denote $I^+ = [\inf I, \sup I)$. $\mathbb{IR}$ denotes the set of real intervals, and we define $\mathbb{IR}^d \overset{\mathrm{def}}{=} (\mathbb{IR})^d$ for any $d\ge 1$.

Section snippets

Solving DE-SDD

In this section we present the implicit method of steps and an associated zero-finding problem.

Interpolation and rigorous numerics for a single implicit step

With Lemma 4 in mind, our focus shifts to proving the existence of a zero of the map $F$ in (18). A secondary goal is to rigorously check the inclusion $t\mapsto h(t\delta,\psi(t)) \in J = \operatorname{dom}(\phi)$. To accomplish this, we will represent approximate solutions $\psi$ using piecewise polynomial interpolants of a given order and design a Newton-like operator for (18) that can be expected to contract in some closed ball around a candidate interpolant $\bar\psi$ and approximate terminal integration time $\bar\delta\in\mathbb{R}$.
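To fix ideas, the following is a generic, finite-dimensional sketch of the kind of Newton–Kantorovich / radii-polynomial check that such a Newton-like operator is subjected to. The bounds Y, Z1, Z2 are assumed inputs here (in the paper they come from enclosures of interpolation and quadrature errors), the affine form Z(r) = Z1 + Z2*r is an assumption of the sketch, and in an actual proof the arithmetic below would itself be carried out with outward rounding.

    # Generic radii-polynomial style existence check (sketch). If T(x) = x - A*F(x),
    # with A injective, satisfies ||A F(xbar)|| <= Y and ||I - A DF(x)|| <= Z1 + Z2*r
    # on the closed ball B_r(xbar), then p(r) = Z2*r**2 - (1 - Z1)*r + Y <= 0 together
    # with Z1 + Z2*r < 1 implies T maps B_r(xbar) into itself and is a contraction
    # there, so F has a unique zero within distance r of xbar.
    import numpy as np

    def radii_interval(Y, Z1, Z2, r_star):
        """Return (r_min, r_max) of admissible radii in (0, r_star], or None."""
        # Require Y > 0 and Z1 < 1 so the contraction constant stays < 1 on the
        # whole returned interval.
        if Y <= 0.0 or Z1 >= 1.0 or Z2 < 0.0:
            return None
        if Z2 == 0.0:
            r_lo = Y / (1.0 - Z1)
            return (r_lo, r_star) if r_lo <= r_star else None
        disc = (1.0 - Z1) ** 2 - 4.0 * Z2 * Y
        if disc < 0.0:
            return None
        r_lo = ((1.0 - Z1) - np.sqrt(disc)) / (2.0 * Z2)
        r_hi = min(r_star, ((1.0 - Z1) + np.sqrt(disc)) / (2.0 * Z2))
        return (r_lo, r_hi) if r_lo <= r_hi else None

    # Hypothetical bounds, for illustration only:
    print(radii_interval(Y=1e-8, Z1=0.3, Z2=50.0, r_star=1e-3))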

Multiple implicit steps

The $Y$ bound of Section 3.4, needed to obtain computer-assisted proofs of solutions of DE-SDD, is not directly implementable. Also, we have not yet explained how the error from one proof should be propagated forward rigorously when one proves a long solution using multiple integration steps. The latter ends up being less than trivial, and there are several non-equivalent ways it can be done, depending on how the numerical data from each step is stored and how this interacts with the

Example

This section concerns an application of our validated integrator to the following initial-value problem: $$\dot{u}(t) = -\gamma u(t) - \kappa u(t - \alpha - c\,u(t)), \qquad u_0 = \phi,$$ with $\gamma,\kappa,\alpha,c$ all non-negative. In particular, we will require $\kappa$ and $c$ to be positive so as not to trivialize the state-dependent delay. A similar equation with two delays was studied in [31], where Hopf bifurcations and periodic orbits were analyzed using delay expansions and normal forms. In particular, (68) is a restricted version of that equation. We have
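As a rough, non-validated illustration of the first implicit step for this example, the sketch below (Python/SciPy, with placeholder parameters and history that are not taken from the paper) integrates the equation while the delayed argument $t-\alpha-c\,u(t)$ stays non-positive, so that the delayed term reads from $\phi$, and detects the implicit step length $\delta$ as the first time that argument reaches zero.

    # First "implicit step" for u'(t) = -gamma*u(t) - kappa*u(t - alpha - c*u(t)),
    # u(theta) = phi(theta) for theta <= 0. While t - alpha - c*u(t) <= 0 the
    # delayed term is phi(t - alpha - c*u(t)), so this is an ODE; the step length
    # delta satisfies delta - alpha - c*u(delta) = 0 and is found by event detection.
    # Placeholder parameters/history; no rigorous error control is attempted.
    import numpy as np
    from scipy.integrate import solve_ivp

    gamma, kappa, alpha, c = 1.0, 2.0, 1.0, 0.5
    phi = lambda s: 0.1 * np.cos(s)          # history, defined for s <= 0

    def rhs(t, u):
        s = t - alpha - c * u[0]             # state-dependent delayed argument
        return [-gamma * u[0] - kappa * phi(s)]

    def breaking(t, u):                      # zero when the delayed argument hits 0
        return t - alpha - c * u[0]
    breaking.terminal = True
    breaking.direction = 1

    sol = solve_ivp(rhs, (0.0, 10.0), [phi(0.0)], events=breaking,
                    dense_output=True, rtol=1e-10, atol=1e-12)
    delta = sol.t_events[0][0]
    print("implicit step length delta ~", delta)
    print("u(delta) ~", sol.y[0, -1])

In the validated setting, this event-based solve is replaced by the zero-finding problem of Section 2.2, so that both the solution segment $\psi$ and the terminal time $\delta$ come with rigorous enclosures.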

Discussion

We have extended the classical method of steps from differential equations with constant delay to the state-dependent delay case, resulting in the implicit method of steps. The result is an ODE boundary-value problem. Solving this boundary-value problem in a rigorous way requires a mechanism to resolve the out-of-bounds evaluation problem, and we generally accomplish this by way of a Taylor extension mechanism. We then develop computer-assisted proofs to validate a numerically-computed solution

CRediT authorship contribution statement

Kevin E.M. Church: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – Original Draft, Writing – Review & Editing, Visualization, Project administration.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (34)

  • Hale, Jack K., et al.

  • Krasovskii, N.N. Stability of motion: applications of Lyapunov's second method to differential systems and equations with delay (1963).

  • Krisztin, Tibor, et al. Smoothness issues in differential equations with state-dependent delay. Rendiconti dell'Istituto di Matematica dell'Università di Trieste (2017).

  • Gedeon, Tomas, et al. Operon dynamics with state dependent transcription and/or translation delays. J Math Biol (2022).

  • De Luca, Jayme. Lorentz-equivariant flow with four delays of neutral type. J Differential Equations (2022).

  • López, Álvaro G. On an electrodynamic origin of quantum fluctuations. Nonlinear Dynam (2020).

  • Lessard, Jean-Philippe. Recent advances about the uniqueness of the slowly oscillating periodic solutions of Wright's equation. J Differential Equations (2010).