1 Introduction

For coefficient matrices \(A\in {\mathbb {C}}^{n \times {n}}\) and \(B \in {\mathbb {C}}^{m \times {m}}\), an inhomogeneity \(C\in {\mathbb {C}}^{n \times {m}}\), and an initial value \(D\in {\mathbb {C}}^{n \times {m}}\), we consider the differential matrix equation

$$\begin{aligned} \begin{aligned} {\dot{X}}(t)&= AX(t) + X(t)B + C, \\ X(t_0)&= D, \end{aligned} \end{aligned}$$
(1)

and provide formulas for the solution X with \(X(t)\in {\mathbb {C}}^{n \times {m}}\). Equation (1) is commonly known as the differential Sylvester equation (as opposed to the algebraic Sylvester equation \(AX+XB+C=0\)). In the symmetric case, when \(B=A^T\), Eq. (1) and its algebraic counterpart are called the differential and algebraic Lyapunov equations, respectively. In what follows, we will occasionally abbreviate Sylvester or Lyapunov equations by SLE.
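For later reference, we note that the algebraic counterpart can be solved directly with standard software. A minimal sketch in Python with arbitrary stand-in matrices, using SciPy's Bartels–Stewart based solver (for random A and B, the spectra of A and \(-B\) are almost surely disjoint, so a unique solution exists):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((4, 3))

# solve_sylvester solves A X + X B = Q, so A X + X B + C = 0 yields Q = -C.
X = solve_sylvester(A, B, -C)
print(np.linalg.norm(A @ X + X @ B + C))  # ~ machine precision
```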

In particular, the differential Lyapunov equation is a useful tool for stability analysis and controller design for linear time-varying systems [2]. Equilibrium points of the differential Lyapunov equation, namely solutions of the algebraic Lyapunov equation, are used to construct quadratic Lyapunov functions for asymptotically stable linear time-invariant systems [33, Thm. 7.4.7]. The controllability and observability problem for linear time-varying systems is strongly connected to the solution of the differential Lyapunov equation [9, Ch. 13–14], [19, Ch. 3–4]. Other important applications lie in model order reduction [3] or in optimal control of linear time-invariant systems on finite time horizons [30]. Despite its importance, only a few efforts have been made to solve the differential Sylvester, Lyapunov, or Riccati equations numerically; see [6, 7, 16, 21, 25, 26, 31, 36]. These algorithms are usually based on applying a time discretization and solving the resulting algebraic equations. Thus, even if the algebraic SLE are solved efficiently, the storage needed for the discrete solution at the time instances makes these direct approaches infeasible for large-scale settings. Recently, Krylov subspace based methods were proposed in [12,13,14, 24].

In an attempt to overcome this shortcoming, we revisit known solution formulas, develop alternative solution representations, and discuss their suitability for numerical approximations. We start by deriving a spectral decomposition of the Sylvester operator \({\mathcal {S}}\) which allows for a functional calculus. We obtain formulas for the operator norm \(\Vert {\mathcal {S}}\Vert \) as well as for \({\mathcal {S}}^{-1}\) and \(e^{{\mathcal {S}}}\). This recovers previously known solution formulas. It will turn out that, in terms of efficiency, this solution representation is not well suited for approximation in general but, in special cases, allows for the construction or computation of exact solutions.

As a step towards efficient solution approximation, we use Taylor series expansions to identify suitable Krylov subspaces. For the differential Lyapunov equation with stable coefficient matrix and symmetric low-rank factored inhomogeneity, it is well known that the solution of the corresponding algebraic Lyapunov equation spans a Krylov subspace. We split the solution of the differential Lyapunov equation into a constant and a time-dependent part, where the constant part is the solution of an algebraic Lyapunov equation. We approximate the time-dependent part in the subspace spanned by the solution of the algebraic Lyapunov equation. The resulting algorithm overcomes the essential problems of storage consumption. Numerical results are presented in Sect. 7 and “Appendices B and C”.

2 Preliminaries

In this section, we introduce the considered equations and the Sylvester operator, set the notation, and recall basic results.

The spectrum of a matrix \(A \in {\mathbb {C}}^{n \times {n}}\) is denoted by \(\varLambda ({A})\subseteq {\mathbb {C}}\). A matrix is called stable if its spectrum is contained in the left open half plane \({\mathbb {C}}^{-}\). The Frobenius inner product on \({\mathbb {C}}^{n \times {m}}\) is given by \(\langle A, B \rangle _F := \sum \limits _{i=1}^{n} \sum \limits _{j=1}^{m} A_{i,j} \overline{B_{i,j}}\). The Hadamard product is \( A\odot B = {\left( A_{i,j} \cdot B_{i,j} \right) }_{\begin{array}{c} i=1,\ldots ,n\\ j=1,\ldots ,m \end{array}} \in {\mathbb {C}}^{n \times {m}}\) for \(A,B \in {\mathbb {C}}^{n \times {m}}\). The Hermitian transpose, transpose, conjugate are denoted by \(A^H\), \(A^T\), \({\overline{A}}\), respectively. Further, we will refer to the Kronecker delta: \(\delta _{{ij}}={{\left\{ \begin{array}{ll}1&{}{\text {if }}i=j, \\ 0&{}{\text {if }}i\ne j \end{array}\right. }}\) and the Kronecker product: \(A \otimes B={\left( A_{i,j} \cdot B \right) }_{\begin{array}{c} i=1,\ldots ,n\\ j=1,\ldots ,m \end{array}}\). The identity matrix in \({\mathbb {C}}^{d \times {d}}\) is denoted by \(E_{d,d}\). For an interval \(I\subseteq {\mathbb {R}}\) the set of continuous matrix-valued functions with complex entries is denoted by \({\mathcal {C}}{(I,{\mathbb {C}}^{n \times {m}})}\).

We start with a well-known result on the solutions of differential Sylvester equations, which covers the more general case of the SLE where the coefficient matrices may depend on time.

Theorem 1

(Existence and Uniqueness of Solutions, [1, Thm. 1.1.1., Thm. 1.1.3., Thm. 1.1.5]) Let \(I\subseteq {\mathbb {R}}\) be an open interval with \(t_0\in I\), \(A\in {\mathcal {C}}{(I,{\mathbb {C}}^{n \times {n}})}, B\in {\mathcal {C}}{(I,{\mathbb {C}}^{m \times {m}})}, C\in {\mathcal {C}}{(I,{\mathbb {C}}^{n \times {m}})}\) and \(D\in {\mathbb {C}}^{n \times {m}}\). The differential Sylvester equation

$$\begin{aligned} {\dot{X}}(t)&= A(t)X(t) + X(t)B(t) + C(t), \\ X(t_0)&= D, \end{aligned}$$

has the unique solution \(X(t) = \varPhi _{A}(t,t_0)D \varPhi _{B^{H}}{(t,t_0)}^H + \int \limits _{t_0}^{t} \varPhi _{A}(t,s)C(s)\varPhi _{B^{H}}{(t,s)}^H \mathrm {d}s\).

Here \(\varPhi _{A}(t,t_0)\) and \(\varPhi _{B^{H}}(t,t_0)\) are the unique state-transition matrices with respect to \(t_0\in I\) defined by

$$\begin{aligned} {\dot{\varPhi }}_{A}(t,t_0)&:= \frac{\partial }{\partial t} \varPhi _{A}(t,t_0)=A(t)\varPhi _{A}(t,t_0), \\ \varPhi _{A}(t_0,t_0)&= E_{n,n}. \\ {\dot{\varPhi }}_{B^H}(t,t_0)&:= \frac{\partial }{\partial t} \varPhi _{B^H}(t,t_0)={B(t)}^H\varPhi _{B^H}(t,t_0), \\ \varPhi _{B^H}(t_0,t_0)&= E_{m,m}. \end{aligned}$$

Applying Theorem 1 to the autonomous case with constant coefficient matrices, where the state-transition matrices reduce to matrix exponentials, \(\varPhi _{A}(t,s)=e^{A(t-s)}\), yields the next result.

Theorem 2

Let \(I\subseteq {\mathbb {R}}\) be an open interval with \(t_0\in I\), \(A\in {\mathbb {C}}^{n \times {n}}, B\in {\mathbb {C}}^{m \times {m}}, C\in {\mathcal {C}}{(I,{\mathbb {C}}^{n \times {m}})}\) and \(D\in {\mathbb {C}}^{n \times {m}}\). The differential Sylvester equation

$$\begin{aligned} {\dot{X}}(t)&= AX(t) + X(t)B + C(t), \\ X(t_0)&= D, \end{aligned}$$

has the unique solution

$$\begin{aligned} X(t) = e^{A(t-t_0)}D e^{B(t-t_0)} + \int \limits _{t_0}^{t} e^{A(t-s)}C(s)e^{B(t-s)}\mathrm {d}s. \end{aligned}$$
(2)
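The solution formula (2) can be validated numerically against a direct time integration of the vectorized matrix ODE. A small sketch with stand-in matrices and, for simplicity, a constant inhomogeneity; the quadrature rule and tolerances are illustrative choices:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

rng = np.random.default_rng(1)
n, m, t0, t1 = 4, 3, 0.0, 1.5
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))  # constant inhomogeneity C(t) = C
D = rng.standard_normal((n, m))

# Integrate the matrix ODE (column-wise vectorization) as a reference.
def rhs(_, x):
    X = x.reshape(n, m, order="F")
    return (A @ X + X @ B + C).ravel(order="F")

sol = solve_ivp(rhs, (t0, t1), D.ravel(order="F"), rtol=1e-11, atol=1e-12)
X_ode = sol.y[:, -1].reshape(n, m, order="F")

# Formula (2): matrix exponentials plus quadrature for the integral term.
s = np.linspace(t0, t1, 2001)
F = np.array([expm(A * (t1 - si)) @ C @ expm(B * (t1 - si)) for si in s])
X_frm = expm(A * (t1 - t0)) @ D @ expm(B * (t1 - t0)) + trapezoid(F, s, axis=0)

print(np.linalg.norm(X_ode - X_frm))  # small; limited by the quadrature
```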

Next, we restate basic properties of the Sylvester operator \({\mathcal {S}}:{\mathbb {C}}^{n \times {m}}\rightarrow {\mathbb {C}}^{n \times {m}}\), which, for given \(A\in {\mathbb {C}}^{n \times {n}}\) and \(B\in {\mathbb {C}}^{m \times {m}}\), is defined by

$$\begin{aligned} {\mathcal {S}}(X)=A X + XB. \end{aligned}$$
(3)

The Sylvester operator \({\mathcal {S}}\) has been thoroughly studied in [10, 22, 23, 35]. Among others, it has been shown that the eigenvalues and eigenvectors of the Sylvester operator \({\mathcal {S}}\) can be expressed in terms of the eigenvalues and eigenvectors of A and B, cf. [1, Rem. 1.1.2.], [3, Ch. 6.1.1], [20].

By exploiting the lemma presented next, it is possible to express the solution in (2) in terms of the Sylvester operator.

Lemma 1

(Sylvester Operator \({\mathcal {S}}\)) For the Sylvester operator \({\mathcal {S}}:{\mathbb {C}}^{n \times {m}} \rightarrow {\mathbb {C}}^{n \times {m}}\) and its partial realizations \({\mathcal {H}}, \mathcal V:{\mathbb {C}}^{n \times {m}} \rightarrow {\mathbb {C}}^{n \times {m}}\), \({\mathcal {H}}(X)=AX\) and \({\mathcal {V}}(X) = XB\), it holds that:

  • \({\mathcal {S}} = {\mathcal {H}} + {\mathcal {V}}\) and \({\mathcal {H}} {\mathcal {V}} = {\mathcal {V}} {\mathcal {H}}\),

  • \(e^{t{\mathcal {S}}}= e^{t{\mathcal {H}}}e^{t{\mathcal {V}}}\) for all \( t\in {\mathbb {R}}\),

for any \(A\in {\mathbb {C}}^{n \times {n}} \) and \(B\in {\mathbb {C}}^{m \times {m}}\).

Proof

The first claim can be confirmed by direct computations. The second claim is a standard result for commuting linear operators. \(\square \)

By Lemma 1, formula (2) can be rewritten as

$$\begin{aligned} X(t)&= e^{(t-t_0)A}De^{(t-t_0)B} + \int \limits _{t_0}^{t} e^{(t-s)A}C(s)e^{(t-s)B}\mathrm {d}s = e^{(t-t_0){\mathcal {H}}}e^{(t-t_0){\mathcal {V}}}(D) \nonumber \\&\quad + \int \limits _{t_0}^{t} e^{(t-s){\mathcal {H}}} e^{(t-s){\mathcal {V}}}(C(s))\mathrm {d}s \nonumber \\&= e^{(t-t_0){\mathcal {S}}}(D) + \int \limits _{t_0}^{t} e^{(t-s){\mathcal {S}}}(C(s))\mathrm {d}s. \end{aligned}$$
(4)
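The identities of Lemma 1 that underlie (4) can be checked in the (column-major) vectorized form, where \({\mathcal {S}}\) corresponds to the Kronecker sum \(E_{m,m} \otimes A + B^T \otimes E_{n,n}\). A short sketch with stand-in matrices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n, m, t = 4, 3, 0.7
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
X = rng.standard_normal((n, m))

# Column-major vec: vec(AX + XB) = (E_m (x) A + B^T (x) E_n) vec(X).
S = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))

lhs = (expm(t * S) @ X.ravel(order="F")).reshape(n, m, order="F")
rhs = expm(t * A) @ X @ expm(t * B)   # = e^{tH} e^{tV} (X), cf. Lemma 1
print(np.linalg.norm(lhs - rhs))      # ~ machine precision
```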

3 Spectral decomposition of the Sylvester operator

In this section we show that the Sylvester operator \({\mathcal {S}}\), as defined in (3), is a normal operator if A and B are diagonalizable and a suitably chosen inner product on \({\mathbb {C}}^{n \times {m}}\) is considered. The inner product depends on the decompositions of A and B. Nevertheless, this approach will enable us to apply the spectral theorem and to derive a solution formula for the differential and algebraic SLE. This resembles the formulas of [11, Ch. 4.1.1], [18, 34], which were obtained by inserting the spectral decompositions into the SLE, applying suitable algebraic manipulations, and using the unrolled Kronecker representation of the SLE. Our strategy is to decompose the operator \({\mathcal {S}}\) first and then to use functional calculus to obtain formulas for \(e^{{\mathcal {S}}}\) and \({\mathcal {S}}^{-1}\). The eigenspaces of \({\mathcal {S}}\) can be constructed from the eigenspaces of A and B. The choice of the inner product ensures that the eigenvectors are orthonormal and \({\mathcal {S}}\) becomes a normal operator.

Lemma 2

(Inner product for \({\mathcal {S}}\)) Let \(A\in {\mathbb {C}}^{n \times {n}}, B \in {\mathbb {C}}^{m \times {m}}\) be diagonalizable and \(C,D\in {\mathbb {C}}^{n \times {m}}\). Furthermore let \(A=UD_{A}U^{-1}\) and \(B^H=VD_{B^H}V^{-1}\) be the spectral decompositions of A and \(B^H\) with \(U,D_A\in {\mathbb {C}}^{n \times {n}}\), \(V,D_{B^H}\in {\mathbb {C}}^{m \times {m}}\) and \(D_A\) and \(D_{B^H}\) be diagonal matrices.

The following holds:

  1. (i)

    \(\langle X, Y \rangle _{U,V}:= \langle U^{-1} X V^{-H}, U^{-1} Y V^{-H} \rangle _F\) is an inner product on \({\mathbb {C}}^{n\times m}\).

  2. (ii)

    \({(u_i {v_j}^H)}_{\begin{array}{c} i=1,\ldots ,n\\ j=1,\ldots ,m \end{array}}\) is an orthonormal basis of \({\mathbb {C}}^{n\times m}\) with respect to \(\langle \cdot , \cdot \rangle _{U,V}\).

  3. (iii)

    The adjoint operator \({\mathcal {S}}^{*}: {\mathbb {C}}^{n \times {m}} \rightarrow {\mathbb {C}}^{n \times {m}}\) with respect to \(\langle \cdot ,\cdot \rangle _{U,V}\)

    is \({\mathcal {S}}^{*}(X)=U\overline{D_A} U^{-1}X + XV^{-H} D_{B^H}V^{H}\).

  4. (iv)

    \({\mathcal {S}}\) is a normal operator with respect to \(\langle \cdot , \cdot \rangle _{U,V}\).

Proof

It is straightforward to see that \(\langle \cdot , \cdot \rangle _{U,V}\) is an inner product on \({\mathbb {C}}^{n \times {m}}\). Because of the identity

$$\begin{aligned} \langle u_i v_j^H , u_k v_l^H\rangle _{U,V} = \langle U^{-1} u_i v_j^H V^{-H}, U^{-1} u_k v_l^H V^{-H} \rangle _F = \langle e_i e_j^H , e_k e_l^H \rangle _F = \delta _{i,k}\delta _{j,l}, \end{aligned}$$

the matrices \(u_i v_j^H \in {\mathbb {C}}^{n \times {m}}\) are orthogonal with respect to \(\langle \cdot , \cdot \rangle _{U,V}\) and therefore linearly independent. As \(\dim ({\mathbb {C}}^{n \times {m}})=n\cdot m\), the tuple \({(u_i v_j^H)}_{\begin{array}{c} i=1,\ldots ,n\\ j=1,\ldots ,m \end{array}}\) is an orthonormal basis of \({\mathbb {C}}^{n \times {m}}\).

Finally we show that, in this inner product, the adjoint of \({\mathcal {S}}\) is given by \({\mathcal {S}}^{*}(X)=U\overline{D_A} U^{-1}X + XV^{-H} D_{B^H}V^{H}\) and that \({\mathcal {S}}{\mathcal {S}}^*={\mathcal {S}}^*{\mathcal {S}}\), meaning that \({\mathcal {S}}\) is normal. In fact, for \(X,Y\in {\mathbb {C}}^{n \times {m}}\),

$$\begin{aligned} \langle {\mathcal {S}}(X), Y \rangle _{U,V}&= \langle AX + XB , Y \rangle _{U,V} = \langle AX, Y \rangle _{U,V} + \langle XB , Y \rangle _{U,V} \\&= \langle U^{-1}AXV^{-H}, U^{-1}YV^{-H} \rangle _{F} + \langle U^{-1}XBV^{-H} , U^{-1}YV^{-H} \rangle _{F} \\&= \langle D_A U^{-1}XV^{-H}, U^{-1}YV^{-H} \rangle _{F} + \langle U^{-1}XV^{-H}\overline{D_{B^H}} , U^{-1}YV^{-H} \rangle _{F} \\&= \langle U^{-1}XV^{-H}, \overline{D_A} U^{-1}YV^{-H}+ U^{-1}YV^{-H} D_{B^H} \rangle _{F} \\&= \langle U^{-1}XV^{-H},U^{-1} \left( U\overline{D_A} U^{-1}Y + YV^{-H} D_{B^H}V^{H} \right) V^{-H} \rangle _{F} \\&= \langle X,\left( U\overline{D_A} U^{-1}Y + YV^{-H} D_{B^H}V^{H} \right) \rangle _{U,V} = \langle X, {\mathcal {S}}^{*}(Y)\rangle _{U,V} \end{aligned}$$

and

$$\begin{aligned} {\mathcal {S}} {\mathcal {S}}^{*}(X)&= {\mathcal {S}} (U\overline{D_A} U^{-1}X + XV^{-H} D_{B^H}V^{H}) = {\mathcal {S}} (U\overline{D_A} U^{-1}X) + {\mathcal {S}}(XV^{-H} D_{B^H}V^{H}) \\&= U D_A \overline{D_A}U^{-1} X + U\overline{D_A}U^{-1}XV^{-H}\overline{D_{B^H}}V^H \\&\quad + U D_A U^{-1} X V^{-H} D_{B^H} V^{H} + X V^{-H}D_{B^H}\overline{D_{B^H}} V^{H} \\&= U \overline{D_A} D_A U^{-1} X + U\overline{D_A}U^{-1}XV^{-H}\overline{D_{B^H}}V^H \\&\quad + U D_A U^{-1} X V^{-H} D_{B^H} V^{H} + X V^{-H}\overline{D_{B^H}} D_{B^H} V^{H} \\&= {\mathcal {S}}^{*} (UD_A U^{-1}X) + {\mathcal {S}}^{*}(XV^{-H} \overline{D_{B^H}}V^{H}) = {\mathcal {S}}^{*}{\mathcal {S}}(X). \end{aligned}$$

This means \({\mathcal {S}}\) and \({\mathcal {S}}^{*}\) commute and, therefore, by definition, \({\mathcal {S}}\) is normal. \(\square \)
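For small stand-in matrices, properties (iii) and (iv) of Lemma 2 can be verified numerically; in the following sketch, the helper ip implements \(\langle \cdot ,\cdot \rangle _{U,V}\):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 3
A = rng.standard_normal((n, n))   # random matrices are a.s. diagonalizable
B = rng.standard_normal((m, m))
_, U = np.linalg.eig(A)
_, V = np.linalg.eig(B.conj().T)
Ui, Vi = np.linalg.inv(U), np.linalg.inv(V)

def ip(X, Y):
    """<X, Y>_{U,V} = <U^{-1} X V^{-H}, U^{-1} Y V^{-H}>_F"""
    Xt, Yt = Ui @ X @ Vi.conj().T, Ui @ Y @ Vi.conj().T
    return np.sum(Xt * Yt.conj())

DA = Ui @ A @ U                    # = D_A (diagonal up to round-off)
DBH = Vi @ B.conj().T @ V          # = D_{B^H}
S = lambda X: A @ X + X @ B
Sadj = lambda X: U @ DA.conj() @ Ui @ X + X @ Vi.conj().T @ DBH @ V.conj().T

X = rng.standard_normal((n, m))
Y = rng.standard_normal((n, m))
print(abs(ip(S(X), Y) - ip(X, Sadj(Y))))        # adjoint identity (iii), ~1e-15
print(np.linalg.norm(S(Sadj(X)) - Sadj(S(X))))  # normality (iv), ~1e-15
```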

Now that we have an inner product for which \({\mathcal {S}}\) is normal, the second step is to compute the spectral decomposition of \({\mathcal {S}}\). The spectral decomposition allows for a functional calculus, and we get formulas for \({\mathcal {S}}^{-1}\) and \(e^{t{\mathcal {S}}}\). Since for normal operators the operator norm equals the spectral radius, we also directly get a formula for the induced operator norm of \({\mathcal {S}}\). We remark that, in the case that A and B are unitarily diagonalizable, Lemma 2 also holds when \(\langle \cdot ,\cdot \rangle _{U,V}\) is replaced by the standard Frobenius inner product \(\langle \cdot ,\cdot \rangle _F\).

Lemma 3

(Spectral Decomposition of \({\mathcal {S}}\)) Let the assumptions of Lemma 2 hold. Moreover let \(\alpha _1,\ldots , \alpha _n \in {\mathbb {C}}\) and \(\overline{\beta _1},\ldots ,\overline{\beta _m} \in {\mathbb {C}}\) be the diagonal entries of \(D_A\) and \(D_{B^H}\), respectively. Then we have

  1. (i)

    \({\mathcal {S}}(X) = \sum \limits _{i=1}^{n}\sum \limits _{j=1}^{m} (\alpha _i + \beta _{j}) \langle X, u_{i}v_{j}^H \rangle _{U,V} u_{i}v_{j}^H = U\left( {\left( \alpha _i + \beta _j \right) }_{\begin{array}{c} i=1,\ldots ,n \\ j =1,\ldots ,m \end{array}} \odot U^{-1} X V^{-H} \right) V^{H} \),

  2. (ii)

    \(\Vert {\mathcal {S}}\Vert =\max \limits _{X\in {\mathbb {C}}^{n \times {m}}\setminus \{0\}} \frac{\Vert \mathcal S(X)\Vert _{U,V}}{\Vert X\Vert _{U,V}}=\max \limits _{i,j}|\alpha _i + \beta _j|\), where \(\Vert X\Vert _{U,V} = \sqrt{\langle X,X\rangle _{U,V}}\),

  3. (iii)

    \(e^{t {\mathcal {S}}}(X) = U \left( {\left( e^{t(\alpha _i + \beta _j)}\right) }_{\begin{array}{c} i=1,\ldots ,n \\ j=1,\ldots ,m \end{array}}\odot U^{-1}XV^{-H} \right) V^{H}\).

    If in addition \(\alpha _i + \beta _j \ne 0\) for \(i=1,\ldots ,n\) and \(j=1,\ldots ,m\), then \({\mathcal {S}}^{-1}(X) =U {\left( {\left( \frac{1}{\alpha _i + \beta _j}\right) }_{\begin{array}{c} i=1,\ldots ,n \\ j=1,\ldots ,m \end{array}}\odot U^{-1}XV^{-H} \right) }V^{H}\).

Proof

 

  1. (i)

    From \(AU=UD_A\) and \(B^{H}V=VD_{B^{H}}\), we deduce \({\mathcal {S}}(u_k v_l^H) = A u_k v_l^H + u_k v_l^H B = (\alpha _k + \beta _l) u_k v_l^H\). Representing \({\mathcal {S}}(X)\in {\mathbb {C}}^{n \times {m}}\) as well as \(X\in {\mathbb {C}}^{n \times {m}}\) with respect to the orthonormal basis \(\left( u_i v_j^H \right) \) and exploiting linearity of \({\mathcal {S}}\) and \(\langle \cdot , \cdot \rangle _{U,V}\) yield

    $$\begin{aligned} {\mathcal {S}}(X)&= \sum \limits _{i=1}^{n} \sum \limits _{j=1}^{m} \langle {\mathcal {S}}(X),u_i v_j^H \rangle _{U,V} u_i v_j^H \\&= \sum \limits _{i=1}^{n} \sum \limits _{j=1}^{m} \langle {\mathcal {S}}(\sum \limits _{k=1}^{n}\sum \limits _{l=1}^{m} \langle X ,u_k v_l^H\rangle _{U,V} u_k v_l^H), u_i v_j^H \rangle _{U,V}u_i v_j^H \\&= \sum \limits _{i,k=1}^{n} \sum \limits _{j,l=1}^{m} \langle X, u_k v_l^H \rangle _{U,V} \langle {\mathcal {S}}(u_k v_l^H), u_i v_j^H \rangle _{U,V} u_i v_j^H \\&= \sum \limits _{i,k=1}^{n} \sum \limits _{j,l=1}^{m} (\alpha _k + \beta _l) \langle X,u_k v_l^H \rangle _{U,V} \langle u_k v_l^H ,u_i v_j^H\rangle _{U,V} u_i v_j^H \\&= \sum \limits _{i=1}^{n} \sum \limits _{j=1}^{m} (\alpha _i + \beta _j) \langle X,u_i v_j^H \rangle _{U,V} u_i v_j^H\\&= U {\left( (\alpha _i + \beta _j) \langle X,u_i v_j^H \rangle _{U,V} \right) }_{\begin{array}{c} i=1,\ldots ,n \\ j=1,\ldots ,m \end{array}} V^{H} \\&= U \left( {\left( \alpha _i + \beta _j\right) }_{\begin{array}{c} i=1,\ldots ,n \\ j=1,\ldots ,m \end{array}} \odot {\left( \langle U^{-1}XV^{-H},e_i e_j^H \rangle _{F} \right) }_{\begin{array}{c} i=1,\ldots ,n \\ j=1,\ldots ,m \end{array}} \right) V^{H} \\&= U \left( {\left( \alpha _i + \beta _j\right) }_{\begin{array}{c} i=1,\ldots ,n \\ j=1,\ldots ,m \end{array}} \odot U^{-1}XV^{-H} \right) V^{H}. \end{aligned}$$
  2. (ii)

    The claim about the norm follows from a direct application of the fundamental functional analytical result on compact normal operators, see, e.g., [37, Thm. VI.3.2].

  3. (iii)

With the spectral decomposition of \({\mathcal {S}}\) one can resort to functional calculus, cf. [32, Cor. 9.3.38], [37, Kor. IX.3.8], and obtain the formula for \(e^{t{\mathcal {S}}}\) and, under the additional assumption that \(\alpha _i + \beta _j \ne 0\), the formula for \({\mathcal {S}}^{-1}\).

\(\square \)

Using the spectral decomposition and functional calculus we find that, under the assumptions of Lemma 2, the solution of the differential Sylvester equation

$$\begin{aligned} {\dot{X}}(t)&= AX(t) +X(t)B + C = {\mathcal {S}}(X(t)) + C, \\ X(0)&= D, \end{aligned}$$

has the form

$$\begin{aligned} X(t)&= e^{t{\mathcal {S}}}(D) + \int \limits _{0}^{t}e^{(t-s){\mathcal {S}}}(C) \mathrm {d}s \nonumber \\&= U \left( {\left( e^{t(\alpha _i + \beta _j)}\right) }_{\begin{array}{c} i=1,\ldots ,n \\ j=1,\ldots ,m \end{array}}\odot U^{-1}DV^{-H} \right) V^{H} \nonumber \\&\quad + \int \limits _{0}^{t} U \left( {\left( e^{(t-s)(\alpha _i + \beta _j)}\right) }_{\begin{array}{c} i=1,\ldots ,n \\ j=1,\ldots ,m \end{array}}\odot U^{-1}CV^{-H} \right) V^{H} \mathrm {d}s \nonumber \\&= U \left( {\left( e^{t(\alpha _i + \beta _j)}\right) }_{\begin{array}{c} i=1,\ldots ,n \\ j=1,\ldots ,m \end{array}} \odot U^{-1}DV^{-H} \nonumber \right. \\&\left. \quad + {\left( \int \limits _{0}^{t} e^{(t-s)(\alpha _i + \beta _j)} \mathrm {d}s\right) }_{\begin{array}{c} i=1,\ldots ,n \\ j=1,\ldots ,m \end{array}}\odot U^{-1}CV^{-H} \right) V^{H}, \end{aligned}$$
(5)

with the involved scalar integrals given explicitly as:

$$\begin{aligned} \int \limits _{0}^{t} e^{(t-s){(\alpha _i +\beta _j)}}\mathrm {d}s = {\left\{ \begin{array}{ll} \frac{e^{t(\alpha _i + \beta _j)}-1}{\alpha _i + \beta _j} &{} \quad \text{ if } \alpha _i + \beta _j\ne 0, \\ t &{}\quad \text{ if } \alpha _i + \beta _j= 0 \end{array}\right. }. \end{aligned}$$
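A direct implementation of the solution formula (5) for a constant inhomogeneity reads as follows; dse_spectral is a hypothetical helper name, and the dense eigendecompositions restrict this sketch to small problems:

```python
import numpy as np

def dse_spectral(A, B, C, D, t):
    """Evaluate formula (5) for constant C via the spectral decompositions
    A = U D_A U^{-1} and B^H = V D_{B^H} V^{-1} (dense; illustration only)."""
    alpha, U = np.linalg.eig(A)
    beta_bar, V = np.linalg.eig(B.conj().T)
    beta = beta_bar.conj()
    Ui, Vi = np.linalg.inv(U), np.linalg.inv(V)
    L = alpha[:, None] + beta[None, :]          # alpha_i + beta_j
    # Scalar integrals: (e^{tL} - 1)/L, with the limit value t where L = 0.
    Ls = np.where(np.abs(L) < 1e-14, 1.0, L)
    G = np.where(np.abs(L) < 1e-14, t, (np.exp(t * L) - 1.0) / Ls)
    core = np.exp(t * L) * (Ui @ D @ Vi.conj().T) + G * (Ui @ C @ Vi.conj().T)
    return U @ core @ V.conj().T
```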

4 Variation of constants

The application of the variation of constants formula leads to yet another solution formula for the SLE (1).

Lemma 4

(Variation of Constants, [9, Ch. 13]) Let \(A\in {\mathbb {C}}^{n \times {n}}, B \in {\mathbb {C}}^{m \times {m}}, C\in {\mathbb {C}}^{n \times {m}}, D\in {\mathbb {C}}^{n \times {m}}\) with \(\varLambda ({A})\cap \varLambda ({-B})=\emptyset \). The differential Sylvester equation

$$\begin{aligned} {\dot{X}}(t)&= AX(t) + X(t)B + C= {\mathcal {S}}(X(t)) + C, \\ X(0)&=D, \end{aligned}$$

has the solution

$$\begin{aligned} X(t) = e^{t{\mathcal {S}}}(D) + {\mathcal {S}}^{-1}(-C) - e^{t\mathcal S}{\mathcal {S}}^{-1}(-C). \end{aligned}$$
(6)

Proof

Because of \(\varLambda ({A})\cap \varLambda ({-B})= \emptyset \), the inverse \({\mathcal {S}}^{-1}\) exists and we can rewrite the solution formula (4) as

$$\begin{aligned} X(t)&=e^{t{\mathcal {S}}}(D) + \int \limits _{0}^{t}e^{(t-s)\mathcal S}(C) \mathrm {d}s =e^{t{\mathcal {S}}}(D) + {\left[ -{\mathcal {S}}^{-1}e^{(t-s){\mathcal {S}}}(C)\right] }_{s=0}^{s=t} \\&=e^{t{\mathcal {S}}}(D) + {\mathcal {S}}^{-1}(-C) - \mathcal S^{-1}e^{t{\mathcal {S}}}(-C) \end{aligned}$$

and confirm that \( X(0) = D + {\mathcal {S}}^{-1}(-C) - \mathcal S^{-1}(-C)= D\). \(\square \)

From formula (6), we see that the solution splits into the solution \({\mathcal {S}}^{-1}(-C)\) of the algebraic Sylvester equation \(AX+XB+C=0\) and a time-dependent part. We will make use of this fact in the numerical scheme that we propose in Sect. 6.
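In code, formula (6) amounts to one algebraic Sylvester solve and two matrix exponentials per time instance; a minimal sketch (dse_voc is a hypothetical helper name):

```python
import numpy as np
from scipy.linalg import expm, solve_sylvester

def dse_voc(A, B, C, D, t):
    """Formula (6): X(t) = e^{tS}(D) + S^{-1}(-C) - e^{tS}(S^{-1}(-C)),
    assuming spec(A) and spec(-B) are disjoint so that S^{-1} exists."""
    Y = solve_sylvester(A, B, -C)     # Y = S^{-1}(-C), i.e. A Y + Y B = -C
    eA, eB = expm(t * A), expm(t * B)
    # e^{tS}(Z) = e^{tA} Z e^{tB} by Lemma 1
    return eA @ D @ eB + Y - eA @ Y @ eB
```

Note that \(X(0)=D\) by construction and that, for stable A and B, \(X(t)\rightarrow Y={\mathcal {S}}^{-1}(-C)\), i.e., the solution of the algebraic Sylvester equation.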

5 Solution as Taylor series

In this section we use Taylor series to derive a solution formula. From this we can read off suitable Krylov subspaces for our projection approach in the next section.

Lemma 5

(Taylor series solution formula) Let \(A\in {\mathbb {C}}^{n \times {n}}, B \in {\mathbb {C}}^{m \times {m}}, C\in {\mathbb {C}}^{n \times {m}}, D\in {\mathbb {C}}^{n \times {m}}\). The differential Sylvester equation

$$\begin{aligned} {\dot{X}}(t)&= AX(t) + X(t)B + C ={\mathcal {S}}(X(t)) + C, \\ X(0)&=D, \end{aligned}$$

has the unique solution

$$\begin{aligned} X(t) = D + \sum \limits _{k=1}^{\infty }\frac{t^k}{k!}(\mathcal S^k(D)+{\mathcal {S}}^{k-1}(C)). \end{aligned}$$
(7)

Proof

First observe that

$$\begin{aligned} || D || + \sum \limits _{k=1}^{\infty }|| \frac{t^k}{k!}({\mathcal {S}}^k(D)+\mathcal S^{k-1}(C)) ||&\le || D || + \sum \limits _{k=1}^{\infty }\frac{|t|^k}{k!}(|| \mathcal S^k(D) ||+|| {\mathcal {S}}^{k-1}(C) ||) \\&\le || D || + \sum \limits _{k=1}^{\infty }\frac{|t|^k}{k!}( || {\mathcal {S}} ||^k|| D || + || {\mathcal {S}} ||^{k-1}|| C ||) \\&= || D ||e^{|t||| {\mathcal {S}} ||} + || C ||\sum \limits _{k=0}^{\infty }\frac{|t|^{k+1}}{(k+1)!} || \mathcal S ||^{k} \\&\le || D ||e^{|t||| {\mathcal {S}} ||} + |t||| C ||e^{|t||| {\mathcal {S}} ||}. \end{aligned}$$

The series converges absolutely, and since \(({\mathbb {C}}^{n\times m},|| \cdot ||)\) is a Banach space, it converges for every \(t \in {\mathbb {R}}\). Hence, the radius of convergence is infinite, X is continuously differentiable, and the series can be differentiated term-wise. Since, furthermore,

$$\begin{aligned} X(0)&=D \end{aligned}$$

and

$$\begin{aligned} \dot{X}(t)&= \sum \limits _{k=1}^{\infty }\frac{t^{k-1}}{(k-1)!}({\mathcal {S}}^{k}(D)+ {\mathcal {S}}^{k-1}(C)) \\&= \sum \limits _{k=0}^{\infty }\frac{t^{k}}{k!}({\mathcal {S}}^{k+1}(D)+ {\mathcal {S}}^{k}(C)) = {\mathcal {S}}(D) + \sum \limits _{k=1}^{\infty }\frac{t^{k}}{k!}({\mathcal {S}}^{k+1}(D)+ {\mathcal {S}}^{k}(C)) +C \\&= {\mathcal {S}}(D + \sum \limits _{k=1}^{\infty }\frac{t^{k}}{k!}({\mathcal {S}}^{k}(D)+ {\mathcal {S}}^{k-1}(C))) +C = {\mathcal {S}}(X(t)) + C, \end{aligned}$$

X(t) is the unique solution. \(\square \)
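The truncated series can be evaluated with two recurrences that apply \({\mathcal {S}}\) repeatedly; a sketch (dse_taylor is a hypothetical helper name; the truncation is only accurate locally around the expansion point, cf. Sect. 6):

```python
import numpy as np

def dse_taylor(A, B, C, D, t, kmax=50):
    """Truncate the series (7) after kmax terms; accurate only while
    |t| * ||S|| is sufficiently small compared to kmax."""
    X = D.copy()
    SkD, SkC = D.copy(), C.copy()     # hold S^k(D) and S^{k-1}(C)
    coeff = 1.0
    for k in range(1, kmax + 1):
        SkD = A @ SkD + SkD @ B       # S^k(D)
        coeff *= t / k                # t^k / k!
        X = X + coeff * (SkD + SkC)   # SkC currently holds S^{k-1}(C)
        SkC = A @ SkC + SkC @ B       # update to S^k(C) for the next term
    return X
```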

If we assume that D and C are given in factored form, then we can exploit this to rewrite the truncated series in closed form as a matrix product.

Remark 1

Let \(D=D_1D_2^H\) and \(C=C_1C_2^H\) with \(D_1 \in {\mathbb {C}}^{n \times {d}}, D_2 \in {\mathbb {C}}^{m \times {d}},C_1 \in {\mathbb {C}}^{n \times {c}}\) and \(C_2 \in {\mathbb {C}}^{m \times {c}}\). Then, having truncated the two parts of the series (7) after \(m_1\) and \(m_2\) summands, respectively, we can rewrite the solution approximation as

$$\begin{aligned} X_{m_1,m_2}(t)&= \sum \limits _{k=0}^{m_1} \frac{t^k}{k!}{\mathcal {S}} ^k (D) + \sum \limits _{k=1}^{m_2} \frac{t^k}{k!} \mathcal S^{k-1}(C) \\&= \sum \limits _{k=0}^{m_1} \frac{t^k}{k!}{\mathcal {S}} ^k (D_1D_2^H) + \sum \limits _{k=1}^{m_2} \frac{t^k}{k!} {\mathcal {S}}^{k-1}(C_1C_2^H) \\&= \sum \limits _{k=0}^{m_1} \frac{t^k}{k!}{({\mathcal {H}} + {\mathcal {V}})}^k (D_1D_2^H) + \sum \limits _{k=1}^{m_2} \frac{t^k}{k!} {({\mathcal {H}} + {\mathcal {V}})}^{k-1}(C_1C_2^H) \\&= \sum \limits _{k=0}^{m_1}\sum \limits _{i=0}^{k}\frac{t^k}{k!}\left( {\begin{array}{c}k\\ i\end{array}}\right) \mathcal H^{k-i}{\mathcal {V}} ^{i}(D_1D_2^H) + \sum \limits _{k=1}^{m_2}\sum \limits _{i=0}^{k-1}\frac{t^k}{k!}\left( {\begin{array}{c}k-1\\ i\end{array}}\right) \mathcal H^{k-1-i}{\mathcal {V}} ^{i}(C_1 C_2^H) \\&= \sum \limits _{k=0}^{m_1}\sum \limits _{i=0}^{k} \frac{t^k}{k!}\left( {\begin{array}{c}k\\ i\end{array}}\right) A^{k-i}D_1D_2^{H}B^i + \sum \limits _{k=1}^{m_2}\sum \limits _{i=0}^{k-1}\frac{t^k}{k!}\left( {\begin{array}{c}k-1\\ i\end{array}}\right) A^{k-1-i}C_1 C_2^{H}B^i, \end{aligned}$$

with the explicit representation of the sums

$$\begin{aligned} \sum \limits _{k=0}^{m_1}\sum \limits _{i=0}^{k} \frac{t^k}{k!}\left( {\begin{array}{c}k\\ i\end{array}}\right) A^{k-i}D_1D_2^{H}B^i = \left[ D_1, AD_1, \ldots , A^{m_1}D_1 \right] \left( \varGamma _{m_1}(t) \otimes E_{d,d} \right) {\left[ D_2, B^{H}D_2, \ldots , {(B^{H})}^{m_1}D_2 \right] }^{H} \end{aligned}$$

and

$$\begin{aligned} \sum \limits _{k=1}^{m_2}\sum \limits _{i=0}^{k-1} \frac{t^k}{k!}\left( {\begin{array}{c}k-1\\ i\end{array}}\right) A^{k-1-i}C_1C_2^{H}B^i = \left[ C_1, AC_1, \ldots , A^{m_2-1}C_1 \right] \left( \varXi _{m_2}(t) \otimes E_{c,c} \right) {\left[ C_2, B^{H}C_2, \ldots , {(B^{H})}^{m_2-1}C_2 \right] }^{H}, \end{aligned}$$

(8)

where the coefficient matrices have the entries \({\left( \varGamma _{m_1}(t)\right) }_{j+1,i+1} = \frac{t^{i+j}}{i!\,j!}\) if \(i+j\le m_1\) and \({\left( \varXi _{m_2}(t)\right) }_{j+1,i+1} = \frac{t^{i+j+1}}{(i+j+1)\,i!\,j!}\) if \(i+j\le m_2-1\), and zero otherwise. In particular, for every t, the columns of \(X_{m_1,m_2}(t)\) lie in the block Krylov space spanned by \(\left[ D_1, AD_1, \ldots , A^{m_1}D_1, C_1, AC_1, \ldots , A^{m_2-1}C_1 \right] \).
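The first of these identities can be checked numerically against the double sum from Remark 1; a sketch for the D-part with real stand-in data (so that \(B^H=B^T\)); the C-part is analogous:

```python
import numpy as np
from math import comb, factorial

rng = np.random.default_rng(4)
n, m, d, t, m1 = 5, 4, 2, 0.3, 6
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))    # real, so B^H = B^T here
D1 = rng.standard_normal((n, d))
D2 = rng.standard_normal((m, d))

# Block Krylov factors [D1, A D1, ..., A^{m1} D1] and the B^T analogue.
KA = np.hstack([np.linalg.matrix_power(A, j) @ D1 for j in range(m1 + 1)])
KB = np.hstack([np.linalg.matrix_power(B.T, i) @ D2 for i in range(m1 + 1)])

# Coefficient matrix with entries t^{i+j}/(i! j!) for i + j <= m1.
Gam = np.array([[t**(i + j) / (factorial(i) * factorial(j)) if i + j <= m1
                 else 0.0 for i in range(m1 + 1)] for j in range(m1 + 1)])

lhs = KA @ np.kron(Gam, np.eye(d)) @ KB.T
rhs = sum((t**k / factorial(k)) * comb(k, i)
          * (np.linalg.matrix_power(A, k - i) @ D1 @ D2.T
             @ np.linalg.matrix_power(B, i))
          for k in range(m1 + 1) for i in range(k + 1))
print(np.linalg.norm(lhs - rhs))   # ~ machine precision
```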

6 Feasible numerical solution approaches

In this section, we briefly note that, for various reasons, none of the presented solution representations is feasible for a straightforward numerical approximation, in particular in a large-scale sparse setting.

A common reason is that none of the formulas supports a sparse representation of the solution, so that exorbitant amounts of memory would be required.

Limitations in memory doubly affect the solution representation through the spectral decomposition (5), since the basis matrices U and V are generically dense. Apart from that, computing a spectral decomposition is typically expensive and can be ill-conditioned. Nonetheless, the spectral decomposition formula is useful for constructing exact solutions for coefficients with known spectral decompositions.

Another issue is the infeasible computation of the full matrix exponential in all variants (2), (4), and (6) of the variation of constants formula. A possible remedy is approximating the action of the matrix exponential on a low-rank matrix in a Krylov subspace.

The Taylor series approach (see Sect. 5) seems best suited for the large-scale case since, at least in the symmetric case, the formulas of Remark 1 allow for a solution representation in factored form with the original coefficients. One problem here is that the truncated Taylor series only gives good approximations locally around the expansion point.

We will, however, exploit and combine certain parts of the solution representations to propose an algorithm for fast and memory efficient solution approximations.

We consider the stable, linear time-invariant case, i.e. we assume that \(A\in {\mathbb {R}}^{n \times {n}}\), \(B\in {\mathbb {R}}^{n \times {p}}\), and \(\varLambda ({A})\subseteq \mathbb {C}^{-}\). We consider the differential Lyapunov equation

$$\begin{aligned} {\dot{X}}(t)&= A^T X(t) + X(t)A + BB^T, \end{aligned}$$
(9a)
$$\begin{aligned} X(0)&= 0. \end{aligned}$$
(9b)

By Lemma 4, the solution splits into a constant and a time-dependent part. From Remark 1, we infer that the solution is contained in a Krylov subspace. We combine both observations in the following numerical solution approach:

1. Factors of the time constant part as Krylov space basis

With A stable, the associated algebraic Lyapunov equation \(A^T X + X A + BB^T=0\) has a unique symmetric positive semi-definite solution \(X_{\infty }\) that can be written in factored form \(X_\infty = Z_{\infty }Z_{\infty }^T\), with \(Z_{\infty } \in {\mathbb {R}}^{n \times {q}}\) and \({{\,\mathrm{rank}\,}}(Z_{\infty })=q\le n\). Moreover, since A is stable, it holds (see [9, Ch. 13]) that

$$\begin{aligned} {{\,\mathrm{range}\,}}(Z_{\infty })={{\,\mathrm{range}\,}}(X_{\infty })={{\,\mathrm{range}\,}}\left( \left[ B,A^{T}B, \ldots , {(A^T)}^{n-1} B \right] \right) . \end{aligned}$$

2. Factors of the time dependent part evolve in the same Krylov space

With \(X(t)= X_{\infty } + {\tilde{X}}(t)\), we obtain that

$$\begin{aligned} \dot{{\tilde{X}}}(t)&= A^T {\tilde{X}}(t) + {\tilde{X}} (t)A, \\ {\tilde{X}}(0)&= -X_{\infty }, \end{aligned}$$

where \({\tilde{X}}(t)\) is given by \({\tilde{X}}(t)= -e^{tA^T}X_{\infty } e^{tA} =-e^{tA^T}Z_{\infty } Z_{\infty }^T e^{tA}=:-{\tilde{Z}}(t){\tilde{Z}}{(t)}^T\). Since, by (8), X(t) evolves in the same Krylov space, so does the difference \({\tilde{X}}(t)\) and, thus, the factor \({\tilde{Z}}(t)\).

3. Orthogonalize the basis and compute the time dependent factors

By means of a (reduced) Singular Value Decomposition of \(Z_{\infty }\), we obtain \(Q_{\infty }\in {\mathbb {R}}^{n \times {q}}\) with orthonormal columns and an orthogonal \(V_{\infty }\in {\mathbb {R}}^{q \times {q}}\) with \({{\,\mathrm{range}\,}}(Q_{\infty }) = {{\,\mathrm{range}\,}}(Z_{\infty })\) and \(Z_{\infty } = Q_{\infty }S_{\infty } V_{\infty }^T\), where \(S_{\infty }\in {\mathbb {R}}^{q \times {q}}\) is diagonal. Like the columns of \(Z_\infty \), the columns of \(Q_{\infty }\) span an \(A^T\)-invariant subspace, and with \(Q_{\infty }^T Q_{\infty }=E_{q,q}\) it holds that

$$\begin{aligned} A^T Q_{\infty }&= Q_{\infty } Q_{\infty }^T A^T Q_{\infty }, \\ e^{tA^T} Q_{\infty }&= Q_{\infty } e^{tQ_{\infty }^T A^T Q_{\infty }}, \end{aligned}$$

from which we deduce that

$$\begin{aligned} {\tilde{Z}}(t){{\tilde{Z}}(t)}^T&= \left( e^{tA^T}Z_{\infty }\right) {\left( e^{tA^T}Z_{\infty }\right) }^T = \left( e^{tA^T}Q_{\infty } S_{\infty }\right) {\left( e^{tA^T}Q_{\infty } S_{\infty }\right) }^T \\&= \left( Q_{\infty }e^{tQ_{\infty }^{T}A^{T}Q_{\infty }} S_{\infty }\right) {\left( Q_{\infty }e^{tQ_{\infty }^{T}A^{T}Q_{\infty }} S_{\infty }\right) }^T. \end{aligned}$$

We define \({z}(t)=e^{tQ_{\infty }^{T}A^{T}Q_{\infty }} S_{\infty }\in {\mathbb {R}}^{q \times {q}}\) and find that z can be obtained by solving

$$\begin{aligned} \dot{{z}}(t)&= Q_{\infty }^{T}A^{T}Q_{\infty } {z}(t) \end{aligned}$$
(10a)
$$\begin{aligned} {z}(0)&= S_{\infty }, \end{aligned}$$
(10b)

which is a matrix-valued ODE that can be solved column-wise or by computing the matrix exponential \(e^{tQ_{\infty }^{T}A^{T}Q_{\infty }}\).

The solution of the differential Lyapunov equation is, thus, given by \(X(t) = Z_{\infty } Z_{\infty }^T - Q_{\infty }{z}(t){z}{(t)}^T Q_{\infty }^T\).

Remark 2

The differential equation for z is of size \(q\times q\), where q can be much smaller than n if the solution \(X_{\infty }=Z_{\infty }Z_{\infty }^T\) of the algebraic Lyapunov equation has low rank. Moreover, the orthogonalization of the basis allows for the detection of the numerical rank and for a compression of \(Z_\infty \) by truncating singular values that are smaller than a certain threshold.

We further note that, with minor adjustments, all arguments also hold for the generalized differential Lyapunov equation

$$\begin{aligned} M^T{\dot{X}}(t)M&= A^T X(t) M + {M}^T X(t)A + BB^T, \\ X(0)&= 0, \end{aligned}$$

with \(M\in {\mathbb {R}}^{n \times {n}}\) nonsingular and \(AM^{-1}\) stable, which can accommodate, e.g., a mass matrix from a finite element discretization. We have included the derivation of the relevant formulas, and an example that illustrates a fundamental difference when M is symmetric positive definite, in “Appendix A”.

In summary, the proposed approach reads as written down in Algorithm 1.

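A dense proof-of-concept sketch of Algorithm 1, following the three steps above; the dense Lyapunov solver and the eigendecomposition-based truncation stand in for the low-rank ADI solver and the SVD-based compression used in the actual implementation, and dle_lowrank is a hypothetical helper name:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

def dle_lowrank(A, B, ts, tol=1e-12):
    """Sketch of Algorithm 1 for (9): X'(t) = A^T X + X A + B B^T, X(0) = 0,
    with A stable."""
    # Constant part: X_inf solves A^T X + X A + B B^T = 0.
    Xinf = solve_continuous_lyapunov(A.T, -B @ B.T)
    # Orthogonalize and truncate the factor (numerical rank detection).
    w, Q = np.linalg.eigh(Xinf)            # Xinf = Q diag(w) Q^T, w >= 0
    keep = w > tol * w.max()
    Q, S = Q[:, keep], np.diag(np.sqrt(w[keep]))
    Zinf = Q @ S                           # Xinf = Zinf Zinf^T
    # Time-dependent part: small q-by-q problem (10), z(t) = e^{t Q^T A^T Q} S.
    Ared = Q.T @ A.T @ Q
    zs = [expm(t * Ared) @ S for t in ts]
    # Assemble X(t) = Zinf Zinf^T - Q z(t) z(t)^T Q^T.
    return [Zinf @ Zinf.T - Q @ z @ z.T @ Q.T for z in zs]
```

Note that only the n-by-q factors \(Z_{\infty }\) and \(Q_{\infty }\) and the q-by-q matrices z(t) need to be stored.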

7 Numerical results

7.1 Setup

To quantify and illustrate the performance of Algorithm 1, we consider differential Lyapunov equations that are used to define optimal controls for a finite element discretization of a heat equation; see [8] for the model description. Namely, we solve the differential Lyapunov equations:

$$\begin{aligned} M {\dot{X}}(t) M^T = A X(t) M^T + M X(t) A^T + BB^T, \quad X(0) = 0. \end{aligned}$$
(DLE-1)
$$\begin{aligned} M^T {\dot{X}}(t) M = A^T X(t) M + M^T X(t) A + C^T C, \quad X(0) = 0. \end{aligned}$$
(DLE-2)

These are defined through symmetric matrices M, \(A\in {\mathbb {R}}^{n \times {n}}\), where M is positive definite and A is stable, as well as \(B\in {\mathbb {R}}^{n \times {7}}\) and \(C\in {\mathbb {R}}^{6 \times {n}}\). For computing the error, we precomputed the spectral decomposition of the pencil \((A, M)\) and constructed the exact solution by means of formula (5). The memory-consuming computation of the spectral decomposition was done on a compute server with 4 \(\times \) Xeon® CPU E7-8837 @ 2.67 GHz with 8 cores each and 1 TB RAM using MATLAB® 2015b. All other computations were carried out on a machine with 2 \(\times \) Xeon® CPU E5-2640 v3 @ 2.60 GHz with 8 cores each and 64 GB RAM using MATLAB 2017a. We used the low-rank ADI iteration implemented in MEX-M.E.S.S. [4] to solve the algebraic Lyapunov equations, as required in Step 2 of Algorithm 1.

We solve the resulting projected ODE system column-wise using the MATLAB ODE solvers ode45, ode23, ode113, ode15s, ode23s, ode23t, and ode23tb. The solvers ode45, ode23, and ode113 are non-stiff solvers, whereas the others are stiff solvers. We used the following parameters for the \(\texttt {odeset}\) function:

  • RelTol: \(1\cdot 10^{-9}\)

  • AbsTol: \(1\cdot 10^{-10}\)

  • Stats: off

  • NormControl: off

  • BDF: on

  • Jacobian: \(A_F\)

  • JPattern: logical (\(A_F\))

  • Mass: \(M_F\)

  • MassSingular: no

  • MStateDependence: none

As the time interval, we consider [0, 4500].

7.2 Projection approach

The initial step of Algorithm 1 requires the solutions of the associated algebraic Lyapunov equations. For this task, we call MEX-M.E.S.S., which iteratively computes the solutions up to the following absolute and relative residuals

$$\begin{aligned} || A Z_{\infty } Z_{\infty }^T M^T + M Z_{\infty } Z_{\infty }^T A^T + BB^T ||_2\ \text {or}\ || A^T Z_{\infty } Z_{\infty }^T M + M^T Z_{\infty } Z_{\infty }^T A + C^{T}C ||_2 \end{aligned}$$

and

$$\begin{aligned} \frac{|| A Z_{\infty } Z_{\infty }^T M^T + M Z_{\infty } Z_{\infty }^T A^T + BB^T ||_2}{|| BB^T ||_2}\ \text {or}\ \frac{|| A^T Z_{\infty } Z_{\infty }^T M + M^T Z_{\infty } Z_{\infty }^T A + C^{T}C ||_2}{|| C^{T}C ||_2}. \end{aligned}$$

The achieved values for the different test setups, as well as the number of columns of the corresponding \(Z_{\infty }\) after truncation (see Step 7 of Algorithm 1), which defines the dimension of the reduced model for the time-dependent part, are listed in Tables 1 and 2.

Table 1 Residuals for \(A X M^{T} + M X A^{T} + BB^{T} = 0\)
Table 2 Residuals for \(A^{T} X M + M^{T} X A + C^{T} C = 0\)

We report the absolute and relative errors

$$\begin{aligned} || X(t) - X_{ref}(t) ||_2\quad \text {and}\quad \frac{|| X(t) - X_{ref}(t) ||_2}{|| X_{ref}(t) ||_2}, \end{aligned}$$

where X is the numerical solution obtained from Algorithm 1 with the various ODE solvers and the reference solution \(X_{ref}\) was obtained from the spectral decomposition of the pencil \((A, M)\). The spectral decomposition of \((A, M)\) is \(AV=MVD\), where V contains the eigenvectors and D has the corresponding eigenvalues of \(M^{-1}A\) on its diagonal. According to Eq. (5), the reference solution for (DLE-1) can be formed as

$$\begin{aligned} X_{ref}(t) = V \left( {\left( \int \limits _{0}^{t} e^{(t-s)\left( D_{i,i}+\overline{D_{j,j}}\right) }\mathrm {d}s\right) }_{\begin{array}{c} i=1,\ldots ,n \\ j=1,\ldots ,n \end{array}} \odot \left( V^{-1}M^{-1} B\right) \left( V^{-1}M^{-1} B\right) ^H\right) V^H. \end{aligned}$$

As the matrices A and M are real symmetric and M is positive definite, the eigenproblem for \((A, M)\) is a generalized symmetric-definite one. Therefore, all eigenvalues are real and the corresponding system of eigenvectors can be chosen real and orthogonal with respect to M, that is, \(V^T M V = E_{n,n}\). The representation of the reference solution then simplifies as follows:

$$\begin{aligned} X_{ref}(t) = V \left( {\left( \int \limits _{0}^{t} e^{(t-s)\left( D_{i,i}+ D_{j,j}\right) }\mathrm {d}s\right) }_{\begin{array}{c} i=1,\ldots ,n \\ j=1,\ldots ,n \end{array}} \odot \left( V^{T} B\right) \left( V^{T} B\right) ^T\right) V^T. \end{aligned}$$
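This simplified representation can be evaluated directly with a generalized symmetric-definite eigensolver; a sketch (dle1_reference is a hypothetical helper name; SciPy's eigh returns M-orthonormal eigenvectors):

```python
import numpy as np
from scipy.linalg import eigh

def dle1_reference(A, M, B, t):
    """Reference solution for (DLE-1) at time t via the generalized
    symmetric-definite eigenproblem A V = M V D with V^T M V = E_{n,n}."""
    d, V = eigh(A, M)                   # d real for symmetric A, spd M
    L = d[:, None] + d[None, :]         # D_ii + D_jj
    Ls = np.where(np.abs(L) < 1e-14, 1.0, L)
    G = np.where(np.abs(L) < 1e-14, t, (np.exp(t * L) - 1.0) / Ls)
    W = V.T @ B                         # V^T B
    return V @ (G * (W @ W.T)) @ V.T
```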

We plot the numerical errors and \(|| X_{ref}(t) ||_2\) on the initial short time interval [0, 10], where most of the evolution happens, and on the full time interval [0, 4500] in “Appendix B”, Figs. 4a–d, 5a–d, 6a–d, 7a–d, 8a–d and 9a–d.

In view of the performance of the different ODE solvers, we interpret the presented numbers and plots as follows: the solver \(\texttt {ode15s}\), a stiff solver of variable order, performs best in both time and accuracy. Due to the stiffness, the error oscillates for the solvers ode45, ode23, and ode113. Note that the discrete Laplacian encoded in the coefficient matrix A becomes stiffer with a finer space discretization, i.e., for larger n. Accordingly, the computational times for the non-stiff solvers grow with n at a higher rate than those of the stiff solvers; see Fig. 2.

The solution of (DLE-2) itself is large in norm, which makes the relative error stagnate around the prescribed tolerance and the absolute error comparatively large; see the plots in “Appendix B.2”. In particular, the non-stiff solvers (and ode15s) achieve this error level, and the oscillations caused by the stiffness are dominated by the approximation error. This might be the reason why, for (DLE-2), despite the coefficient matrices still being stiff, the non-stiff solvers perform better than the stiff solvers (except for ode15s). Nonetheless, the computation times of the non-stiff solvers again grow at a higher rate with the increasing stiffness that comes with increasing n; see Fig. 2.

Because \(X(t) \rightarrow 0\) for \(t \searrow 0\), the plots for the relative error spread out for small times.

Except for ode15s, there is no general rule as to which solver performs better in terms of computational time; see Fig. 2. The timings may change when different relative and absolute error tolerances are used in the MATLAB odeset function.

Finally, we make the following remark: instead of integrating the projected ODE (10), which is linear with constant coefficients of moderate dimension, one may consider using the Schur–Parlett algorithm [17, Ch. 10] to compute the matrix exponential. The initialization effort of the Schur–Parlett algorithm pays off if the matrix exponential has to be evaluated for many different values of t. Also, asymptotically, the storage requirements for z(t) are lifted, since the matrix exponential for a given t can be computed on demand. Nonetheless, we used the ODE approach, which we consider more efficient because of the sophisticated MATLAB ODE solver implementations that come, e.g., with integrated step-size selection.

The code of the implementation, together with the precomputed spectral decompositions needed to construct the exact solution, is available as stated in Fig. 1.

Fig. 1 Link to code and data

7.3 Computational time

7.4 Comparison with backward differentiation formulas

To benchmark our method, we ran comparison tests with the MATLAB implementation of the Backward Differentiation Formulas / Alternating Direction Implicit (BDF/ADI) scheme developed in [29] (see “Appendix C” for a short summary of the algorithm). The numerical experiments were conducted on the same machine, with the same MATLAB version, and with the same model as described in Sect. 7.1.

In contrast to Algorithm 1, which needs to solve only a single algebraic Lyapunov equation for its initialization, the BDF/ADI approach solves a Lyapunov equation in every time step. Moreover, the numerical solution is stored in \(LDL^T\)-format, i.e., in terms of the factors of \(X(t_k) \approx L_k D_k L_k^T\) with \(L_k \in {\mathbb {R}}^{n \times {l_k}}\), which grow at least linearly with n and the number of time steps. For this reason, we had to restrict our numerical experiments with BDF/ADI to the interval [0, 100] and to the model of the smallest size \(n=1357\). As in the tests of Algorithm 1 in “Appendix B”, for computing the error, we used an exact solution \(X_{ref}\) based on the spectral decomposition of the pencil \((A, M)\).

Fig. 2 Timings of the different MATLAB ODE solvers for the solution of the projected time-dependent factors as defined in Eq. (10)

We compared various BDF methods, abbreviated as BDF1, BDF2, BDF3, BDF4, BDF5, and BDF6, where the number denotes the order s of the method. We used constant time step sizes \(\tau _k := h \in \{2^{-4}, 2^{-6}, 2^{-8}\}\) for our computations. In “Appendices C.1 and C.2”, we plot the relative and absolute errors compared to the solution obtained via the spectral decomposition, cf. Figs. 10a–f and 11a–f. For comparison, we also plot the error of the numerical solution obtained by Algorithm 1 with the MATLAB ODE solver ode15s. The computational times for the BDF/ADI based computations are listed in Sect. 7.3.

As the actual solution X converges towards the solution of the algebraic Lyapunov equation, the numerical errors also decay towards a certain error level. Decreasing the step size lowers this level accordingly. This effect is visible only for the methods of lower order (\(s\le 2\)). For higher orders, the error level stagnates and the error rather shows oscillations, which are likely due to errors in the solutions of the Lyapunov equations. Since all BDF methods are stiff solvers, no oscillations of higher frequency are observed. The error levels are comparable to the level reached by our approach with ode15s.

The computational timings for the BDF/ADI solvers are nearly the same for the same step sizes; cf. Fig. 3. Accordingly, the higher-order methods clearly outperform the low-order methods.

As for the comparison with our approach, we note that the reported timings refer to the longer time interval [0, 4500]. However, since the transient behavior of the solution is confined to a short initial phase and since the ODE solvers employ step-size control, a restriction to a shorter interval would not change the timings by much.

For the test case with (DLE-1), the BDF/ADI schemes achieved the same accuracy as our approach already with the coarsest step size; see Fig. 10a–f. Comparing the timings in the leftmost columns of Fig. 2 (\(n=1357\)) and Fig. 3 (\(h=2^{-4}\)), we conclude that our approach with the best-performing ODE solver is about 25 times faster.

For the test case with (DLE-2), the BDF/ADI schemes reached the same accuracy only for the finest chosen step size; see Fig. 11a–f. Comparing the rightmost column of Fig. 3 (\(h=2^{-8}\)) to the left column in the right plot of Fig. 2 (\(n=1357\)), we find that our Algorithm 1 is, again, faster, this time by a factor of 30.

Fig. 3 Timings of the different BDF/ADI solvers

To conclude the comparison, we add the following remarks. Since it has to solve a Lyapunov equation in every time step, the BDF/ADI scheme, apart from its increased memory requirements, also becomes significantly more costly for larger system sizes. Unlike our Algorithm 1, however, the BDF/ADI scheme also applies to the time-varying case as well as to nonzero initial conditions. Moreover, as ode15s in fact uses the BDF formulas, but with variable order s and variable time step, one may consider integrating similar error-control mechanisms into the BDF/ADI approach to improve its performance.

8 Conclusions

We presented several solution formulas for the differential Sylvester and Lyapunov equations. For the autonomous stable differential Lyapunov equation, we proposed a numerical algorithm that combines certain aspects of the derived solution representations. The main feature of the algorithm is the projection of the time-dependent part onto a suitable subspace of lower dimension. This is what makes the numerical solution feasible in terms of memory requirements. As for computational time, the projected system can be solved directly with optimized ODE solvers such as the ode-suite in MATLAB. Moreover, its structure allows for a column-wise computation of the factors and, thus, for straightforward parallelization.

We illustrated the performance of the algorithm in an example that was derived from a finite-element discretization of a heat equation. The achieved accuracy is fully satisfactory. The greatest benefit, also in contrast to existing numerical schemes, is the low memory requirement.

The possible extension to the unstable or unsymmetric case, as well as to the non-autonomous case, is not straightforward, since the space in which the solution evolves has not been found to span a suitable invariant Krylov subspace and is possibly of high dimension. Moreover, the solution of \(AX=C\) (which is the special case of \(AX+XB=C\) with \(B=0\)) is unlikely to span an A-invariant subspace at all, meaning that the span of the solution of the algebraic Sylvester equation is not suited for the differential equation. Thus, for the general case, a different ansatz is needed. The same is true for time-dependent coefficients and nonzero initial conditions. Here, a remedy might be structured approaches for the case that the time dependence comes, e.g., from a low-rank update.

In the unstable case, apart from the fact that the algebraic Lyapunov equation may not have a unique solution, there can be modes that grow exponentially in time. For this case it may be worth investigating whether a projector that identifies the stable part of the underlying mathematical model can be efficiently incorporated in the proposed solution approach.