1 Introduction

We are concerned with the evaluation of \(x = f({\mathcal {M}}) v\), where f(z) is a Stieltjes function, that is, a function that can be expressed in the integral form

$$\begin{aligned} f(z) = \int _0^\infty g(t, z) \mu (t)\ dt, \qquad g(t, z) \in \left\{ e^{-tz}, \frac{1}{t + z} \right\} . \end{aligned}$$
(1)

The two choices for g(t, z) define Laplace–Stieltjes and Cauchy–Stieltjes functions, respectively [8, 31]. The former class is a superset of the latter, and coincides with the set of completely monotonic functions, whose derivatives satisfy \((-1)^j f^{(j)} \geqslant 0\) over \({\mathbb {R}}_+\) for any \(j \in {\mathbb {N}}\).

We are interested in two instances of this problem; first, we consider the case \({\mathcal {M}} := A\), where \(A\in {\mathbb {C}}^{n\times n}\) is Hermitian positive definite with spectrum contained in [a, b], \(v\in {\mathbb {C}}^{n \times s}\) is a generic (block) vector, and a rational Krylov method [18] is used to approximate \(x = f({\mathcal {M}})v\). In this case, we want to estimate the Euclidean norm of the error \(\Vert x - x_\ell \Vert _2\), where \(x_\ell \) is the approximation returned by \(\ell \) steps of the method. Second, we consider

$$\begin{aligned} {\mathcal {M}} := I \otimes A - B^T \otimes I\in {\mathbb {C}}^{n^2\times n^2}, \end{aligned}$$
(2)

where \(A,-B\in {\mathbb {C}}^{n\times n}\) are Hermitian positive definite with spectra contained in [a, b], \(v = \mathrm {vec}(F)\in {\mathbb {C}}^{n^2}\) is the vectorization of a low-rank matrix \(F = U_F V_F^T\in {\mathbb {C}}^{n\times n}\), and a tensorized rational Krylov method [8] is used for computing \(\mathrm {vec}(X) = f({\mathcal {M}}) \mathrm {vec}(F)\). This problem is a generalization of the solution of a Sylvester equation with a low-rank right hand side, which corresponds to evaluating the function \(f(z) = z^{-1}\). Here, we are concerned with estimating the quantity \(\Vert X - X_\ell \Vert _2\), where \(X_\ell \) is the approximation obtained after \(\ell \) steps.

1.1 Main contributions

This paper discusses the connection between rational Krylov evaluation of Stieltjes matrix functions and the parameter dependent rational approximation (with the given poles) of the kernel functions \(e^{-tz}\) and \(\frac{1}{t+z}\).

The contributions of this work are the following:

  1.

    Corollary 3 provides a choice of poles for the rational Krylov approximation of \(f({\mathcal {M}})v\), where f(z) is Laplace–Stieltjes, with an explicit error bound depending on the spectrum of A.

  2.

    Similarly, for Cauchy–Stieltjes functions, we show (in Corollary 4) how leveraging an approach proposed in [14] allows us to recover a result previously given in [4] using different theoretical tools.

  3.

    In Sect. 3.5, we obtain new nested sequences of poles by applying the approach of equidistributed sequences to the results in Corollaries 3 and 4.

  4.

    In the particular case where \({\mathcal {M}} := I \otimes A - B^T \otimes I\), we extend the analysis recently proposed in [8] to rational Krylov subspaces. Also in this setting, we provide explicit choices for the poles and explicit convergence bounds. For Laplace–Stieltjes functions, a direct consequence of the analysis mentioned above leads to Corollary 5; in the Cauchy case, we describe a choice of poles that enables the simultaneous solution of a set of parameter dependent Sylvester equations. This results in a practical choice of poles and an explicit error bound, given in Corollary 7.

  5.

    Finally, we give results predicting the decay in the singular values of X, where \(\mathrm {vec}(X) = f({\mathcal {M}}) \mathrm {vec}(F)\), F is a low-rank matrix, and f(z) is either Laplace–Stieltjes (Theorem 6) or Cauchy–Stieltjes (Theorem 7). This generalizes the well-known low-rank approximability of the solutions of Sylvester equations with low-rank right hand sides [5]. The result for Laplace–Stieltjes follows from the error bound for the rational Krylov method and an Eckart–Young argument. The one for Cauchy–Stieltjes requires combining the integral representation with the ADI approximant for the solution of matrix equations.

The error bounds obtained are summarized in Table 1.

We recall that completely monotonic functions are well approximated by exponential sums [11]. Another consequence of our results in the Laplace–Stieltjes case is a constructive proof that they are also well approximated by rational functions.

Table 1 Summary of the convergence rates for rational Krylov methods with the proposed poles

1.2 Motivating problems

Computing the action of a matrix function on a vector is a classical task in numerical analysis, and finds applications in several fields, such as complex networks [7], signal processing [29], numerical solution of ODEs [20], and many others.

Matrices with the Kronecker sum structure as in (2) often arise from the discretization of differential operators on tensorized 2D grids. Applying the inverse of such matrices to a vector is equivalent to solving a matrix equation. When the right hand side is a smooth function or has small support, the vector v is the vectorization of a numerically low-rank matrix. The latter property has been exploited to develop several efficient solution methods, see [28] and the references therein. Variants of these approaches have been proposed under weaker assumptions, such as when the smoothness is only available far from the diagonal \(x = y\), as it happens with kernel functions [23, 25].

In recent years, there has been an increasing interest in models involving fractional derivatives. For 2D problems on rectangular grids, discretizations by finite differences or finite elements lead to linear systems that can be recast as the solution of matrix equations with particularly structured coefficients [12, 24]. However, a promising formulation which simplifies the design of boundary conditions relies on first discretizing the 2D Laplacian on the chosen domain, and then considering the action of the matrix function \(z^{-\alpha }\) (with the Laplacian as argument) on the right hand side. This is known in the literature as the matrix transform method [32]. In this framework, one has \(0< \alpha < 1\), and therefore \(z^{-\alpha }\) is a Cauchy–Stieltjes function, a property that has been previously exploited for designing fast and stable restarted polynomial Krylov methods for its evaluation [27]. The algorithm proposed in this paper makes it possible to exploit the Kronecker structure of the 2D Laplacian on rectangular domains in the evaluation of the matrix function.

Another motivation for our analysis stems from the study of exponential integrators, where it is required to evaluate the \(\varphi _j(z)\) functions [20], which belong to the Laplace–Stieltjes class. This has been the subject of in-depth studies concerning (restarted) polynomial and rational Krylov methods [17, 27]. However, to the best of our knowledge the Kronecker structure, and the associated low-rank preservation, has not been exploited in these approaches, despite often being present in discretizations of differential operators [30].

The paper is organized as follows. In Sect. 2 we recall the definitions and main properties of Stieltjes functions. Then, in Sect. 3 we recall the rational Krylov method and analyze the simultaneous approximation of parameter dependent exponentials and resolvents; this leads to the choice of poles and convergence bounds for Stieltjes functions given in Sect. 3.4. In Sect. 4 we provide an analysis of the convergence of the method proposed in [8] when rational Krylov subspaces are employed. In particular, in Sect. 4.4 we provide decay bounds for the singular values of X such that \(\mathrm {vec}(X) = f({\mathcal {M}}) \mathrm {vec}(F)\). We give some concluding remarks and an outlook in Sect. 5.

2 Laplace–Stieltjes and Cauchy–Stieltjes functions

We recall the definition and the properties of Laplace–Stieltjes and Cauchy–Stieltjes functions that are relevant for our analysis. Functions expressed as Stieltjes integrals admit a representation of the form:

$$\begin{aligned} f(z) = \int _{0}^\infty g(t, z) \mu (t) \ dt, \end{aligned}$$
(3)

where \(\mu (t) dt\) is a (non-negative) measure on \([0, \infty ]\), and g(t, z) is integrable with respect to that measure. The choice of g(t, z) determines the particular class of Stieltjes functions under consideration (Laplace–Stieltjes or Cauchy–Stieltjes), and \(\mu (t)\) is called the density of f(z). \(\mu (t)\) can be a proper function, or a distribution, e.g. a Dirac delta. In particular, we can restrict the domain of integration to a subset of \((0,\infty )\) by imposing that \(\mu (t)=0\) elsewhere. We refer the reader to [31] for further details.

2.1 Laplace–Stieltjes functions

Laplace–Stieltjes functions are obtained by setting \(g(t,z) = e^{-tz}\) in (3).

Definition 1

Let f(z) be a function defined on \((0, +\infty )\). Then, f(z) is a Laplace–Stieltjes function if there is a positive measure \(\mu (t)dt\) on \({\mathbb {R}}_+\) such that

$$\begin{aligned} f(z) = \int _0^{\infty } e^{-tz} \mu (t)\ dt. \end{aligned}$$
(4)

Examples of Laplace–Stieltjes functions include:

$$\begin{aligned} f(z)&= z^{-1} = \int _0^\infty e^{-tz}\ dt, \qquad f(z) = \frac{e^{-z}}{z}=\int _1^\infty e^{-tz}\ dt, \\ f(z)&= (1-e^{-z})/z = \int _0^\infty e^{-tz}\mu (t)\ dt, \qquad \mu (t):={\left\{ \begin{array}{ll} 1 &{} t\in [0,1]\\ 0&{}t> 1 \end{array}\right. }. \end{aligned}$$

The last example is an instance of a particularly relevant class of Laplace–Stieltjes functions, with applications to exponential integrators. These are often denoted by \(\varphi _j(z)\), and can be defined as follows:

$$\begin{aligned} \varphi _j(z) := \int _0^\infty e^{-tz} \frac{\left[ \max \{1-t, 0\}\right] ^{j-1}}{(j-1)!}\ dt, \qquad j \geqslant 1. \end{aligned}$$
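For \(j = 1\) the density reduces to the indicator function of [0, 1], recovering the last example above; a quick numerical sanity check (ours, using generic quadrature):

```python
# Check that the j = 1 Laplace-Stieltjes density mu(t) = 1 on [0, 1]
# reproduces phi_1(z) = (1 - e^{-z}) / z at a sample point.
import numpy as np
from scipy.integrate import quad

z = 2.7
val, _ = quad(lambda t: np.exp(-t * z), 0.0, 1.0)   # int_0^1 e^{-tz} dt
assert abs(val - (1 - np.exp(-z)) / z) < 1e-12
```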

A famous theorem of Bernstein states the equality between the set of Laplace–Stieltjes functions and that of completely monotonic functions on \((0, \infty )\) [10], that is, functions such that \((-1)^j f^{(j)}(z) \geqslant 0\) over \((0, \infty )\) for any \(j\in {\mathbb {N}}\).

From the algorithmic point of view, the explicit knowledge of the Laplace density \(\mu (t)\) will not play any role. Therefore, for the applications of the algorithms and projection methods described here, it is only relevant to know that a function is in this class.

2.2 Cauchy–Stieltjes functions

Cauchy–Stieltjes functions form a subclass of Laplace–Stieltjes functions, and are obtained from (3) by setting \(g(t,z) = (t+z)^{-1}\).

Definition 2

Let f(z) be a function defined on \({\mathbb {C}} \setminus {\mathbb {R}}_-\). Then, f(z) is a Cauchy–Stieltjes function if there is a positive measure \(\mu (t)dt\) on \({\mathbb {R}}_+\) such that

$$\begin{aligned} f(z) = \int _0^{\infty } \frac{\mu (t)}{t+z}\ dt. \end{aligned}$$
(5)

A few examples of Cauchy–Stieltjes functions are:

$$\begin{aligned} f(z)&= \frac{\log (1 + z)}{z} = \int _1^\infty \frac{t^{-1}}{t + z}\ dt, \qquad f(z) = \sum _{j = 1}^h \frac{\alpha _j}{z - \beta _j}, \qquad \alpha _j>0,\quad \beta _j < 0, \\ f(z)&= z^{-\alpha } = \frac{\sin (\alpha \pi )}{\pi } \int _0^\infty \frac{t^{-\alpha }}{t + z}\ dt, \qquad \alpha \in (0, 1). \end{aligned}$$

The rational functions with poles on the negative real semi-axis do not belong to this class if one requires \(\mu (t)\) to be a function, but they can be obtained by setting \(\mu (t) = \sum _{j = 1}^h \alpha _j \delta (t + \beta _j)\), where \(\delta (\cdot )\) is the Dirac delta with unit mass at 0. For instance, \(z^{-1}\) is obtained by setting \(\mu (t) := \delta (t)\).

Since Cauchy–Stieltjes functions are also completely monotonic on \((0,\infty )\) [9], the set of Cauchy–Stieltjes functions is contained in that of Laplace–Stieltjes functions. Indeed, assuming that f(z) is a Cauchy–Stieltjes function with density \(\mu _C(t)\), one can construct a Laplace–Stieltjes representation as follows:

$$\begin{aligned} f(z) = \int _0^\infty \frac{\mu _C(t)}{t+z}\ dt = \int _0^\infty \int _0^\infty e^{-s(t+z)} \mu _C(t)\ ds\ dt = \int _0^\infty e^{-sz} \underbrace{\int _0^\infty e^{-st} \mu _C(t)\ dt}_{\mu _L(s)}\ ds, \end{aligned}$$

where \(\mu _L(s)\) defines the Laplace–Stieltjes density. In particular, note that if \(\mu _C(t)\) is positive, so is \(\mu _L(s)\). For a more detailed analysis of the relation between Cauchy- and Laplace–Stieltjes functions we refer the reader to [31, Section 8.4].

As in the Laplace case, the explicit knowledge of \(\mu (t)\) is not crucial for the analysis and is not used in the algorithm.

3 Rational Krylov for evaluating Stieltjes functions

Projection schemes for the evaluation of the quantity f(A)v work as follows: an orthonormal basis W for a (small) subspace \({\mathcal {W}}\subseteq {\mathbb {C}}^{n}\) is computed, together with the projections \(A_{{\mathcal {W}}}:=W^* A W\) and \(v_{{\mathcal {W}}} := W^* v\). Then, the action of f(A) on v is approximated by:

$$\begin{aligned} f(A) v \approx x_{{\mathcal {W}}}:=W f(A_{{\mathcal {W}}}) v_{{\mathcal {W}}}. \end{aligned}$$

Intuitively, the choice of the subspace \({\mathcal {W}}\) is crucial for the quality of the approximation. Usually, one is interested in providing a sequence of subspaces \({\mathcal {W}}_1\subset {\mathcal {W}}_2\subset {\mathcal {W}}_3\subset \dots \) and in studying the convergence of \(x_{{\mathcal {W}}_j}\) to f(A)v as j increases. A common choice for the space \({\mathcal {W}}_j\) is a Krylov subspace.
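For concreteness, here is a minimal Python sketch of the projection scheme above (our own naming, assuming A Hermitian and a single vector v); f is applied to the small projected matrix through its eigendecomposition:

```python
# Minimal sketch of the projection scheme x_W = W f(W* A W) (W* v),
# for Hermitian A and a single vector v; f acts elementwise on the
# eigenvalues of the projected matrix.
import numpy as np

def project_and_evaluate(f, A, v, W):
    A_W = W.conj().T @ A @ W
    v_W = W.conj().T @ v
    d, Q = np.linalg.eigh(A_W)          # A_W = Q diag(d) Q*
    return W @ (Q @ (f(d) * (Q.conj().T @ v_W)))
```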

3.1 Krylov subspaces

Several functions can be accurately approximated by polynomials. The idea behind the standard Krylov method is to generate a subspace that contains all the quantities of the form p(A)v for every polynomial p(z) of bounded degree.

Definition 3

Let A be an \(n\times n\) matrix, and \(v \in {\mathbb {C}}^{n \times s}\) be a (block) vector. The Krylov subspace of order \(\ell \) generated by A and v is defined as

$$\begin{aligned} {\mathcal {K}}_\ell (A, v) := \mathrm {span}\{ v, Av, \ldots , A^{\ell } v \}= \{p(A)v:\ \deg (p)\leqslant \ell \}. \end{aligned}$$

Projection on Krylov subspaces is closely related to polynomial approximation. Indeed, if f(z) is well approximated by p(z), then p(A)v is a good approximation of f(A)v, in the sense that \(\Vert f(A)v - p(A)v \Vert _2 \leqslant \max _{z \in [a,b]} |f(z) - p(z)| \cdot \Vert v \Vert _2\).

Rational Krylov subspaces are their rational analogue, and can be defined as follows.

Definition 4

Let A be an \(n \times n\) matrix, \(v \in {\mathbb {C}}^{n \times s}\) be a (block) vector and \(\varPsi = (\psi _1, \ldots , \psi _\ell )\), with \(\psi _j \in \overline{{\mathbb {C}}}:={\mathbb {C}} \cup \{ \infty \}\). The rational Krylov subspace with poles \(\varPsi \) generated by A and v is defined as

$$\begin{aligned} \mathcal {RK}_{\ell }(A, v, \varPsi ) = q_{\ell }(A)^{-1} {\mathcal {K}}_{\ell }(A, v) = \mathrm {span}\{ q_{\ell }(A)^{-1} v, q_{\ell }(A)^{-1} A v, \ldots , q_{\ell }(A)^{-1} A^{\ell } v \}, \end{aligned}$$

where \(q_\ell (z) = \prod _{j = 1}^\ell (z - \psi _j)\) and if \(\psi _j = \infty \), then we omit the factor \((z - \psi _j)\) from \(q_\ell (z)\).

Note that a Krylov subspace is a particular rational Krylov subspace where all poles are chosen equal to \(\infty \): \(\mathcal {RK}_\ell (A, v, (\infty , \ldots , \infty )) = {\mathcal {K}}_{\ell }(A, v)\). A common strategy of pole selection consists in alternating 0 and \(\infty \). The resulting vector space is known in the literature as the extended Krylov subspace [13].
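A simplified sketch of how such a basis can be built (ours; a production implementation would use a rational Arnoldi recurrence with proper continuation vectors, while here each shifted system is simply applied to the last basis vector, which generically spans the same space):

```python
# Build an orthonormal basis of RK_l(A, v, Psi) for Hermitian A and a
# single starting vector v; poles equal to inf give polynomial steps.
import numpy as np

def rational_krylov_basis(A, v, poles):
    n = A.shape[0]
    W = np.zeros((n, len(poles) + 1), dtype=complex)
    W[:, 0] = v / np.linalg.norm(v)
    for j, psi in enumerate(poles):
        if np.isinf(psi):                       # psi = inf: multiply by A
            w = A @ W[:, j]
        else:                                   # solve (A - psi I) w = w_j
            w = np.linalg.solve(A - psi * np.eye(n), W[:, j])
        for i in range(j + 1):                  # modified Gram-Schmidt
            w = w - (W[:, i].conj() @ w) * W[:, i]
        W[:, j + 1] = w / np.linalg.norm(w)
    return W
```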

We denote by \({\mathcal {P}}_\ell \) the set of polynomials of degree at most \(\ell \), and by \({\mathcal {R}}_{\ell ,\ell }\) the set of rational functions g(z)/l(z) with \(g(z),l(z)\in {\mathcal {P}}_{\ell }\). Given \(\varPsi =\{\psi _1, \ldots , \psi _\ell \}\subset \overline{{\mathbb {C}}}\), we indicate with \(\frac{{\mathcal {P}}_\ell }{\varPsi }\) the set of rational functions of the form g(z)/l(z), with \(g(z) \in {\mathcal {P}}_\ell \) and \(l(z):=\prod _{\psi _j\in \varPsi \setminus \{\infty \}}(z-\psi _j)\).

It is well-known that Krylov subspaces contain the action of related rational matrix functions of A on the (block) vector v, if the poles of the rational functions are a subset of the poles used to construct the approximation space.

Lemma 1

(Exactness property) Let A be an \(n \times n\) matrix, \(v \in {\mathbb {C}}^{n \times s}\) be a (block) vector and \(\varPsi = \{\psi _1, \ldots , \psi _\ell \}\subset \overline{{\mathbb {C}}}\). If \(U_{{\mathcal {P}}},U_{{\mathcal {R}}}\) are orthonormal bases of \({\mathcal {K}}_{\ell }(A, v)\) and \(\mathcal {RK}_{\ell }(A, v, \varPsi )\), respectively, then:

  1.

    \(f(z)\in {\mathcal {P}}_{\ell }\quad \Longrightarrow \quad f(A)v=U_{{\mathcal {P}}}f(A_{\ell })(U_{{\mathcal {P}}}^*v)\in {\mathcal {K}}_{\ell }(A, v),\quad A_\ell =U^*_{{\mathcal {P}}}AU_{{\mathcal {P}}} \),

  2.

    \(f(z)\in \frac{{\mathcal {P}}_{\ell }}{\varPsi } \quad \Longrightarrow \quad f(A)v=U_{{\mathcal {R}}}f(A_{\ell })(U_{{\mathcal {R}}}^*v)\in \mathcal {RK}_{\ell }(A, v,\varPsi )\), \(\quad A_\ell =U^*_{{\mathcal {R}}}AU_{{\mathcal {R}}} \).

Lemma 1 enables us to prove the quasi-optimality of the Galerkin projection described in Sect. 3. Indeed, if \({\mathcal {W}} := \mathcal {RK}_\ell (A,v,\varPsi )\), then [18]

$$\begin{aligned} \Vert x_{{\mathcal {W}}} - x \Vert _2 \leqslant 2 \cdot \Vert v \Vert _2 \cdot \min _{r(z)\in \frac{{\mathcal {P}}_\ell }{\varPsi }} \max _{z \in [a,b]} |f(z) - r(z)|. \end{aligned}$$
(6)

The optimal choice of poles for generating the rational Krylov subspaces is problem dependent and linked to the rational approximation of the function f(z) on [ab]. We investigate how to perform this task when f is either a Laplace–Stieltjes or Cauchy–Stieltjes function.

3.2 Simultaneous approximation of resolvents and matrix exponentials

The integral expression (1) reads as

$$\begin{aligned} f(A) v = \int _0^\infty g(t, A) \mu (t)\ dt, \qquad g(t,A) \in \{ e^{-tA}, (tI + A)^{-1} \} \end{aligned}$$

when evaluated at a matrix argument. Since the projection is a linear operation we have

$$\begin{aligned} x_{{\mathcal {W}}}=W f(A_{{\mathcal {W}}}) v_{{\mathcal {W}}}=\int _0^\infty Wg(t,A_{{\mathcal {W}}}) v_{{\mathcal {W}}}\ \mu (t)dt. \end{aligned}$$

This suggests looking for a space that approximates uniformly well, with respect to the parameter t, the matrix exponentials and resolvents, respectively. A result concerning the approximation error in the \(L^2\) norm for \(t \in {\mathbf {i}}{\mathbb {R}}\) is given in [14, Lemma 4.1]. The proof is obtained by exploiting some results on the skeleton approximation of \(\frac{1}{t+\lambda }\) [26]. We provide a pointwise error bound, which can be obtained by following the same steps as in the proof of [14, Lemma 4.1]. We include the proof for completeness.

Theorem 1

Let A be Hermitian positive definite with spectrum contained in [ab] and U be an orthonormal basis of \(\mathcal {U_R}=\mathcal {RK}_\ell (A,v,\varPsi )\). Then, \(\forall t\in [0,\infty )\), we have the following inequality:

$$\begin{aligned} \Vert (tI+A)^{-1}v-U(tI+A_\ell )^{-1} v_\ell \Vert _2\leqslant \frac{2}{t+a} \Vert v \Vert _2 \min _{r(z)\in \frac{{\mathcal {P}}_{\ell }}{\varPsi }} \frac{\max _{z\in [a,b]}|r(z)|}{\min _{z\in (-\infty ,0]}|r(z)|} \end{aligned}$$
(7)

where \(A_\ell =U^*AU\) and \(v_\ell =U^*v\).

Proof

Following the construction in [26], we consider the function \(f_{\mathrm {skel}}(t, \lambda )\) defined by

$$\begin{aligned} f_{\mathrm {skel}}(t,\lambda ) := \begin{bmatrix} \frac{1}{t_1 + \lambda }&\cdots&\frac{1}{t_\ell + \lambda } \end{bmatrix} M^{-1} \begin{bmatrix} \frac{1}{t+\lambda _1} \\ \vdots \\ \frac{1}{t+\lambda _\ell } \end{bmatrix}, \qquad M_{ij} = \frac{1}{t_j + \lambda _i}\in {\mathbb {C}}^{\ell \times \ell } , \end{aligned}$$

where \(M_{ij}\) are the entries of M and \((t_j, \lambda _i)\) is an \(\ell \times \ell \) grid of interpolation nodes. The function \(f_{\mathrm {skel}}(t,\lambda )\) is usually called the skeleton approximation and it is practical for approximating \(\frac{1}{t+\lambda }\); indeed, its relative error takes the explicit form \(1 - (t+\lambda )f_{\mathrm {skel}}(t,\lambda ) = \frac{r(\lambda )}{r(-t)}\), with \(r(z) = \prod _{j = 1}^\ell \frac{z - \lambda _j}{t_j + z}\). If this ratio of rational functions is small, then \(f_{\mathrm {skel}}(t,\lambda )\) is a good approximation of \(\frac{1}{t+\lambda }\) and—consequently—\(f_{\mathrm {skel}}(t,A)\) is a good approximation of \((tI+A)^{-1}\). Note that, for every fixed t, \(f_{\mathrm {skel}}(t,\lambda )\) is a rational function in \(\lambda \) with poles \(-t_1, \ldots , -t_\ell \). Therefore, using the poles \(\psi _j=-t_j\), \(j=1,\ldots ,\ell \), for the projection we may write, thanks to (6):

$$\begin{aligned} \Vert (tI + A)^{-1} v - U(tI + A_\ell )^{-1} v_\ell \Vert _2 \leqslant \frac{2}{t + a} \Vert v \Vert _2 \frac{\max _{z \in [a, b]} |r(z)|}{\min _{z \in (-\infty , 0]} |r(z)|}. \end{aligned}$$

Taking the minimum over the possible choices of the parameters \(\lambda _j\) we get (7). \(\square \)
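The relative-error identity for the skeleton approximation used in this proof is easy to verify numerically; a sketch (ours, with arbitrarily chosen nodes):

```python
# Check the identity 1 - (t + lam) f_skel(t, lam) = r(lam) / r(-t),
# with r(z) = prod_j (z - lam_j) / (t_j + z), at a sample point.
import numpy as np

ell = 4
rng = np.random.default_rng(0)
t_nodes = rng.uniform(1.0, 5.0, ell)                # nodes t_j > 0
lam_nodes = rng.uniform(1.0, 5.0, ell)              # nodes lam_i
M = 1.0 / (t_nodes[None, :] + lam_nodes[:, None])   # M_ij = 1/(t_j + lam_i)

def f_skel(t, lam):
    row = 1.0 / (t_nodes + lam)                     # [1/(t_j + lam)]_j
    col = 1.0 / (t + lam_nodes)                     # [1/(t + lam_i)]_i
    return row @ np.linalg.solve(M, col)

r = lambda z: np.prod((z - lam_nodes) / (t_nodes + z))

t, lam = 0.7, 2.3
assert abs(1 - (t + lam) * f_skel(t, lam) - r(lam) / r(-t)) < 1e-8
```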

Concerning the rational approximation of the (parameter dependent) exponential, the idea is to rely on its Laplace transform that involves the resolvent:

$$\begin{aligned} e^{-tA} = \frac{1}{2\pi {\mathbf {i}}}\lim _{T \rightarrow \infty } \int _{-{\mathbf {i}}T}^{{\mathbf {i}}T} e^{st} (sI+A)^{-1}\ ds. \end{aligned}$$
(8)

In this formulation, it is possible to exploit the skeleton approximation of \(\frac{1}{s+\lambda }\) in order to find a good choice of poles, independently of the parameter t. To prove the main result we need the following technical lemma, whose proof is given in Appendix 2.

Lemma 2

Let \({\mathcal {L}}^{-1}[{\widehat{r}}(s)]\) be the inverse Laplace transform of \({\widehat{r}}(s) = \frac{1}{s}\frac{p(s)}{p(-s)}\), where p(s) is a polynomial of degree \(\ell \) with positive real zeros contained in [a, b]. Then,

$$\begin{aligned} \Vert {\mathcal {L}}^{-1}[{\widehat{r}}(s)] \Vert _{L^\infty ({\mathbb {R}}_+)} \leqslant \gamma _{\ell ,\kappa },\qquad \gamma _{\ell ,\kappa }:=2.23 + \frac{2}{\pi } \log \left( 4\ell \cdot \sqrt{\frac{\kappa }{\pi }} \right) , \end{aligned}$$

where \(\kappa =\frac{b}{a}\).

Theorem 2

Let A be Hermitian positive definite with spectrum contained in [ab] and U be an orthonormal basis of \(\mathcal {U_R}=\mathcal {RK}_\ell (A,v,\varPsi )\), where \(\varPsi =\{\psi _1,\dots ,\psi _\ell \} \subseteq [-b,-a]\). Then, \(\forall t\in [0,\infty )\), we have the following inequality:

$$\begin{aligned} \Vert e^{-tA}v-Ue^{-tA_\ell }v_\ell \Vert _2\leqslant 4\gamma _{\ell ,\kappa } \Vert v \Vert _2 \max _{z\in [a,b]}|r_\varPsi (z)|, \end{aligned}$$
(9)

where \(A_\ell =U^*AU\), \(v_\ell =U^*v\), \(\kappa := \frac{b}{a}\), \(r_\varPsi (z)\in {\mathcal {R}}_{\ell ,\ell }\) is the rational function defined by \(r_\varPsi (z) := \prod _{j = 1}^\ell \frac{z + \psi _j}{z - \psi _j}\), and \(\gamma _{\ell ,\kappa }\) is the constant defined in Lemma 2.

Proof

We consider the skeleton approximation of \(\frac{1}{s+\lambda }\) obtained by restricting the choice of poles in both variables to \(\varPsi \):

$$\begin{aligned} f_{\mathrm {skel}}(s,\lambda ) := \begin{bmatrix} \frac{1}{\lambda - \psi _1}&\cdots&\frac{1}{\lambda - \psi _\ell } \end{bmatrix} M^{-1} \begin{bmatrix} \frac{1}{s-\psi _1} \\ \vdots \\ \frac{1}{s-\psi _\ell } \end{bmatrix},\qquad M_{ij}=-\frac{1}{\psi _i+\psi _j}, \end{aligned}$$

where \(M_{ij}\) denote the entries of M. Then, by using (8) for A and \(A_{\ell }\) we get

$$\begin{aligned} e^{-tA}v-Ue^{-tA_\ell }v_\ell = \frac{1}{2\pi {\mathbf {i}}}\lim _{T \rightarrow \infty } \int _{-{\mathbf {i}}T}^{{\mathbf {i}}T} e^{st} (sI+A)^{-1}v-e^{st} U(sI+A_\ell )^{-1}v_\ell \ ds. \end{aligned}$$

Adding and removing the term \(e^{st}f_{\mathrm {skel}}(s,A)v = e^{st}Uf_{\mathrm {skel}}(s,A_\ell )U^*v\) inside the integral (the equality holds thanks to Lemma 1) we obtain the error expression

$$\begin{aligned} e^{-tA} v - U e^{-tA_\ell } v_\ell&= \frac{1}{2\pi {\mathbf {i}}} \lim _{T \rightarrow \infty } \int _{-{\mathbf {i}}T}^{{\mathbf {i}}T} e^{st} \left[ (sI+A)^{-1} v - U (sI+A_\ell )^{-1} v_\ell \right] \ ds \\&= \frac{1}{2\pi {\mathbf {i}}} \lim _{T \rightarrow \infty } \int _{-{\mathbf {i}}T}^{{\mathbf {i}}T} e^{st} (sI+A)^{-1} \left[ I -(sI+A) f_{\mathrm {skel}}(s,A) \right] v\ ds \\&\quad - \frac{1}{2\pi {\mathbf {i}}} \lim _{T \rightarrow \infty } U \int _{-{\mathbf {i}}T}^{{\mathbf {i}}T} e^{st} (sI+A_\ell )^{-1} \left[ I - (sI+A_\ell )f_{\mathrm {skel}}(s,A_\ell ) \right] v_\ell \ ds\\&= \frac{1}{2\pi {\mathbf {i}}} \lim _{T \rightarrow \infty } \int _{-{\mathbf {i}}T}^{{\mathbf {i}}T} e^{st} (sI+A)^{-1} r_\varPsi (A)r_\varPsi (-s)^{-1} v\ ds \\&\quad - \frac{1}{2\pi {\mathbf {i}}} \lim _{T \rightarrow \infty } U \int _{-{\mathbf {i}}T}^{{\mathbf {i}}T} e^{st} (sI+A_\ell )^{-1} r_\varPsi (A_\ell )r_\varPsi (-s)^{-1} v_\ell \ ds. \end{aligned}$$

Since A and \(A_\ell \) are normal, the above integrals can be controlled by the maximum of the corresponding scalar functions on the spectrum of A (and \(A_\ell \)), which yields the bound

$$\begin{aligned}&\Vert e^{-tA} v - U e^{-tA_\ell } v_\ell \Vert _2 \leqslant 2 \max _{\lambda \in [a, b]} |h(t,\lambda )|, \\&\qquad h(t,\lambda ) := \frac{1}{2\pi {\mathbf {i}}} \lim _{T \rightarrow \infty } \int _{-{\mathbf {i}}T}^{{\mathbf {i}}T} e^{st} \frac{1}{s+\lambda } \frac{r_\varPsi (\lambda )}{r_\varPsi (-s)}\ ds. \end{aligned}$$

We note that \(r_\varPsi (\lambda )\) can be pulled out of the integral, since it does not depend on s, and thus the above can be rewritten as

$$\begin{aligned} h(t,\lambda )&= r_\varPsi (\lambda ) \cdot {\mathcal {L}}^{-1}\left[ \frac{1}{\lambda + s} \frac{p(s)}{p(-s)} \right] (t) \\&=r_\varPsi (\lambda ) \cdot {\mathcal {L}}^{-1}\left[ \frac{s}{s+\lambda }\right] \star {\mathcal {L}}^{-1}\left[ \frac{1}{s} \frac{p(s)}{p(-s)} \right] (t)\\&=r_\varPsi (\lambda ) \cdot (\delta (t)-\lambda e^{-\lambda t})\star {\mathcal {L}}^{-1}\left[ \frac{1}{s} \frac{p(s)}{p(-s)} \right] (t), \end{aligned}$$

where p(s) is as in Lemma 2 and \(\delta (t)\) indicates the Dirac delta function. Since the 1-norm of \(\delta (t)-\lambda e^{-t\lambda }\) is equal to 2, using Young’s inequality we can bound \(\Vert h(t,\lambda ) \Vert _{\infty } \leqslant 2 \Vert {\mathcal {L}}^{-1}\left[ \frac{1}{s} \frac{p(s)}{p(-s)} \right] \Vert _\infty \). Therefore, we need to estimate the infinity norm of \({\mathcal {L}}^{-1}\left[ \frac{1}{s} \frac{p(s)}{p(-s)} \right] (t)\). This inverse Laplace transform can be uniformly bounded in t by using Lemma 2, with a constant that only depends on \(\ell \) and b/a:

$$\begin{aligned} |h(t, \lambda )| \leqslant 2\gamma _{\ell ,\kappa }|r_\varPsi (\lambda )|. \end{aligned}$$

This completes the proof. \(\square \)

Remark 1

The constant provided by Lemma 2 is likely not optimal. Indeed, it seems to hold experimentally that \(\gamma _{\ell ,\kappa } = 1\) for any choice of poles on the negative real axis—not necessarily contained in \([-b,-a]\)—and this has been verified in many examples. If this were proved, the statement of Theorem 2 could be made sharper by removing the factor \(\gamma _{\ell ,\kappa }\).

3.3 Bounds for the rational approximation problems

Theorems 1 and 2 show the connection between the error norm and certain rational approximation problems. In this section we discuss the optimal values of such problems in the cases of interest.

Definition 5

Let \(\varPsi \subset \overline{{\mathbb {C}}}\) be a finite set, and \(I_1, I_2\) closed subsets of \(\overline{{\mathbb {C}}}\). Then, we define

$$\begin{aligned} \theta _\ell (I_1, I_2, \varPsi ) := \min _{r(z)\in \frac{{\mathcal {P}}_\ell }{\varPsi }} \frac{\max _{I_1}|r(z)|}{\min _{I_2}|r(z)|}. \end{aligned}$$

The \(\theta _\ell \) functions enjoy some invariance and inclusion properties, which we report here, and will be extensively used in the rest of the paper.

Lemma 3

Let \(I_1, I_2\) be subsets of the complex plane, and \(\varPsi \subset \overline{{\mathbb {C}}}\). Then, the map \(\theta _\ell \) satisfies the following properties:

  (i)

    (shift invariance) For any \(t \in {\mathbb {C}}\), it holds \(\theta _\ell (I_1 + t, I_2 + t, \varPsi +t) = \theta _\ell (I_1, I_2, \varPsi )\).

  (ii)

    (monotonicity) \(\theta _\ell (I_1, I_2, \varPsi )\) is monotonic with respect to the inclusion on the parameters \(I_1\) and \(I_2\):

    $$\begin{aligned} I_1 \subseteq I_1', I_2 \subseteq I_2' \implies \theta _\ell (I_1, I_2, \varPsi ) \leqslant \theta _\ell (I_1', I_2', \varPsi ). \end{aligned}$$
  (iii)

    (Möbius invariance) If M(z) is a Möbius transform, that is a rational function \(M(z) = (\alpha z + \beta ) / (\gamma z + \delta )\) with \(\alpha \delta \ne \beta \gamma \), then

    $$\begin{aligned} \theta _\ell (I_1, I_2, \varPsi ) = \theta _\ell (M(I_1), M(I_2), M(\varPsi )). \end{aligned}$$

Proof

Property (i) follows from (iii) because applying a shift is a particular Möbius transformation. Note that, generically, when we compose a rational function \(r(z) = \frac{p(z)}{h(z)} \in \frac{{\mathcal {P}}_\ell }{\varPsi }\) with \(M^{-1}(z)\) we obtain another rational function of (at most) the same degree and with poles \(M(\varPsi )\). Hence, we obtain

$$\begin{aligned} \theta _\ell (I_1, I_2, \varPsi )&= \min _{r(z)\in \frac{{\mathcal {P}}_\ell }{\varPsi }} \frac{\max _{I_1}|r(z)|}{\min _{I_2}|r(z)|} = \min _{r(z)\in \frac{{\mathcal {P}}_\ell }{\varPsi }} \frac{\max _{M(I_1)}|r(M^{-1}(z))|}{\min _{M(I_2)}|r(M^{-1}(z))|}\\ {}&=\min _{r(z)\in \frac{{\mathcal {P}}_\ell }{M(\varPsi )}} \frac{\max _{M(I_1)}|r(z)|}{\min _{M(I_2)}|r(z)|} = \theta _\ell (M(I_1), M(I_2), M(\varPsi )). \end{aligned}$$

Property (ii) follows easily from the fact that the maximum taken on a larger set is larger, and the minimum taken on a larger set is smaller.\(\square \)

Now, we consider the related optimization problem, obtained by allowing \(\varPsi \) to vary:

$$\begin{aligned} \min _{\begin{array}{c} \varPsi \subset \overline{{\mathbb {C}}}, |\varPsi |=\ell \end{array} }\theta _\ell (I_1,I_2,\varPsi ) = \min _{r(z) \in {\mathcal {R}}_{\ell ,\ell }} \frac{\max _{z \in I_1} |r(z)|}{\min _{z \in I_2} |r(z)|}. \end{aligned}$$
(10)

The latter was posed and studied by Zolotarev in 1877 [33], and it is commonly known as the third Zolotarev problem. We refer to [3] for a modern reference where the theory is used to recover bounds on the convergence of rational Krylov methods and ADI iterations for solving Sylvester equations.

In the case \(I_1=-I_2=[a,b]\), (10) simplifies to

$$\begin{aligned} \min _{r(z) \in {\mathcal {R}}_{\ell ,\ell }} \frac{\max _{z \in [a,b]} |r(z)|}{\min _{z \in [a,b]} |r(-z)|} \end{aligned}$$

which admits the following explicit estimate.

Theorem 3

(Zolotarev) Let \(I = [a, b]\), with \(0<a<b\). Then

$$\begin{aligned} \min _{\varPsi \subset \overline{{\mathbb {C}}}, \ |\varPsi |=\ell }\theta _\ell (I,-I,\varPsi ) \leqslant 4\rho _{[a,b]}^{\ell },\qquad \rho _{[a,b]}:=\exp \left( -\frac{\pi ^2}{\log \left( 4 \kappa \right) }\right) ,\qquad \kappa =\frac{b}{a}. \end{aligned}$$

In addition, the optimal rational function \(r^{[a,b]}_\ell (z)\) that realizes the minimum has the form

$$\begin{aligned} r^{[a,b]}_\ell (z) := \frac{p^{[a,b]}_\ell (z)}{p^{[a,b]}_\ell (-z)}, \qquad p^{[a,b]}_\ell (z) := \prod _{j = 1}^\ell (z + \psi ^{[a,b]}_{j,\ell }), \qquad \psi ^{[a,b]}_{j,\ell } \in -I. \end{aligned}$$

We denote by \(\varPsi _{\ell }^{[a,b]} := \{ \psi _{1,\ell }^{[a,b]}, \dots , \psi _{\ell ,\ell }^{[a,b]} \}\) the set of poles of \(r^{[a,b]}_\ell (z)\).

Explicit expressions for the elements of \(\varPsi _\ell ^{[a,b]}\) are available in terms of elliptic functions, see [14, Theorem 4.2].

Remark 2

The original version of Zolotarev’s result involves \(\mathrm {exp}(-\frac{\pi ^2}{\mu (\kappa ^{-1})})\) in place of \(\rho _{[a, b]}\), where \(\mu (\cdot )\) is the Grötzsch ring function. For simplicity, in this paper we prefer the slightly suboptimal form involving the logarithm. We remark that for large \(\kappa \) (which is usually the case when considering rational Krylov methods) the difference is negligible [1, Section 17.3].

We use Theorem 3 and the Möbius invariance property as building blocks for bounding (7). The idea is to map the set \([-\infty , 0] \cup [a, b]\) into \([-1, -{\widehat{a}}]\cup [{\widehat{a}}, 1]\)—for some \({\widehat{a}}\in (0,1)\)—with a Möbius transformation; then make use of Theorem 3 and Lemma 3(iii) to provide a convenient choice of \(\varPsi \) for the original problem.

Lemma 4

The Möbius transformation

$$\begin{aligned} T_C(z):=\frac{\varDelta + z - b}{\varDelta - z + b}, \qquad \varDelta := \sqrt{b^2 - ab}, \end{aligned}$$

maps \([-\infty ,0]\cup [a,b]\) into \([-1, -{\widehat{a}}]\cup [{\widehat{a}}, 1]\), with \({\widehat{a}} := \frac{\varDelta + a - b}{\varDelta - a + b}=\frac{b-\varDelta }{\varDelta +b}\). The inverse map \(T_C^{-1}(z)\) is given by:

$$\begin{aligned} T_C^{-1}(z):= \frac{(b+\varDelta )z+b-\varDelta }{1+z}. \end{aligned}$$

Moreover, for any \(0<a<b\) it holds \({\widehat{a}}^{-1} \leqslant \frac{4b}{a}\), and therefore \(\rho _{[{\widehat{a}},1]}\leqslant \rho _{[a,4b]}\).

Proof

By direct substitution, we have \(T_C(-\infty ) = -1\), and \(T_C(b) = 1\); moreover, again by direct computation one verifies that \(T_C(0) + T_C(a) = 0\), which implies that \(T_C([-\infty , 0]) = [-1, -{\widehat{a}}]\) and \(T_C([a, b]) = [{\widehat{a}}, 1]\). Then, we have

$$\begin{aligned} {\widehat{a}}^{-1} = \frac{\varDelta + b}{b - \varDelta }, \qquad \varDelta = b \sqrt{1 - a/b}. \end{aligned}$$

Using the relation \(\sqrt{1 - t} \leqslant 1 - \frac{t}{2}\) for any \(0 \leqslant t \leqslant 1\), we obtain that \({\widehat{a}}^{-1} \leqslant \frac{2b - \frac{a}{2}}{\frac{a}{2}} \leqslant 4\frac{b}{a},\) which concludes the proof. \(\square \)
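The maps of Lemma 4 are straightforward to implement; the following sketch (formulas exactly as in the statement, together with the rate \(\rho \) from Theorem 3) also checks the bound \({\widehat{a}}^{-1} \leqslant 4b/a\) numerically:

```python
# Implement T_C, its inverse, and hat{a} from Lemma 4, plus the rate
# rho_{[a,b]} from Theorem 3 (kappa = b/a); then verify the mapping
# properties and the bound hat{a}^{-1} <= 4 b / a on an example.
import numpy as np

def mobius_maps(a, b):
    delta = np.sqrt(b**2 - a * b)
    T = lambda z: (delta + z - b) / (delta - z + b)           # T_C
    Tinv = lambda z: ((b + delta) * z + b - delta) / (1 + z)  # T_C^{-1}
    a_hat = (b - delta) / (b + delta)
    return T, Tinv, a_hat

rho = lambda kappa: np.exp(-np.pi**2 / np.log(4 * kappa))     # Theorem 3

a, b = 1e-2, 1e2
T, Tinv, a_hat = mobius_maps(a, b)
assert np.isclose(T(a), a_hat) and np.isclose(T(b), 1.0)
assert 1 / a_hat <= 4 * b / a     # hence rho([a_hat, 1]) <= rho([a, 4b])
```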

Remark 3

We note that the estimate \(\rho _{[{\widehat{a}}, 1]} \leqslant \rho _{[a,4b]}\) is asymptotically tight, that is, \(\rho _{[{\widehat{a}}, 1]} / \rho _{[a,4b]} \rightarrow 1\) as \(b/a \rightarrow \infty \). For instance, if \(b/a = 10\) then the relative error between the two quantities is about \(2\cdot 10^{-2}\), and for \(b/a = 1000\) about \(5\cdot 10^{-5}\). Since the interest in this approach lies in dealing with matrices that are not well-conditioned, we consider the error negligible in practice.

In light of Theorem 3 and Lemma 4, we consider the choice

$$\begin{aligned} \varPsi _{C,\ell }^{[a,b]} := T_C^{-1}(\varPsi _\ell ^{[{\widehat{a}},1]}) \end{aligned}$$
(11)

in Theorem 1. This yields the following estimate.

Corollary 1

Let A be Hermitian positive definite with spectrum contained in [ab] and U be an orthonormal basis of \(\mathcal {U_R}=\mathcal {RK}_\ell (A,v,\varPsi _{C,\ell }^{[a,b]})\). Then, \(\forall t\in [0,\infty )\)

$$\begin{aligned} \Vert (tI+A)^{-1}v-U(tI+A_\ell )^{-1}v_\ell \Vert _2\leqslant \frac{8}{t+a} \Vert v \Vert _2 \rho _{[a,4b]}^\ell , \end{aligned}$$
(12)

where \(A_\ell =U^*AU\) and \(v_\ell =U^*v\).

When considering Laplace–Stieltjes functions, we may choose as poles \(\varPsi _\ell ^{[a,b]}\), the optimal Zolotarev poles on the interval [a, b]. This enables us to prove the following result, which builds on Theorem 2.

Corollary 2

Let A be Hermitian positive definite with spectrum contained in [ab] and U be an orthonormal basis of \(\ \mathcal {U_R}=\mathcal {RK}_\ell (A,v,\varPsi _{\ell }^{[a,b]})\). Then, \(\forall t\in [0,\infty )\)

$$\begin{aligned} \Vert e^{-tA}v-Ue^{-tA_\ell }v_\ell \Vert _2\leqslant 8 \gamma _{\ell ,\kappa } \Vert v \Vert _2 \rho _{[a,b]}^{\frac{\ell }{2}}, \end{aligned}$$
(13)

where \(A_\ell =U^*AU\) and \(v_\ell =U^*v\).

Proof

The proof relies on the fact that the optimal Zolotarev function evaluated on the interval [a, b] can be bounded by \(2 \rho _{[a,b]}^{\frac{\ell }{2}}\) [5, Theorem 3.3]. Since its zeros and poles are real and symmetric with respect to the imaginary axis, we can apply Theorem 2 to obtain (13). \(\square \)

3.4 Convergence bounds for Stieltjes functions

Let f(z) be a Stieltjes function of the general form (1). Then the error of the rational Krylov method for approximating f(A)v can be bounded as

$$\begin{aligned} \Vert f(A)v - Uf(A_\ell )v_\ell \Vert _2&= \left\Vert \int _0^\infty \left[ g(t,A)v - Ug(t,A_{\ell }) v_\ell \right] \mu (t) \ dt \right\Vert _2\\&\leqslant \int _0^\infty \left\Vert g(t,A)v - Ug(t,A_{\ell }) v_\ell \right\Vert _2\mu (t) \ dt \end{aligned}$$

where g(t, A) is either a parameter dependent exponential or a resolvent. Therefore, Corollaries 1 and 2 provide all the ingredients to study the error of the rational Krylov projection, when the suggested pole selection strategy is adopted.

Corollary 3

Let f(z) be a Laplace–Stieltjes function, A be Hermitian positive definite with spectrum contained in [ab], U be an orthonormal basis of \(\mathcal {U_R}=\mathcal {RK}_\ell (A,v,\varPsi _{\ell }^{[a,b]})\) and \(x_\ell =Uf(A_\ell )v_\ell \) with \(A_\ell =U^*AU\) and \(v_\ell =U^*v\). Then

$$\begin{aligned} \Vert f(A)v-x_\ell \Vert _2\leqslant 8 \gamma _{\ell ,\kappa } f(0^+) \Vert v \Vert _2 \rho _{[a,b]}^{\frac{\ell }{2}}, \end{aligned}$$
(14)

where \(\gamma _{\ell ,\kappa }\) is defined as in Theorem 2, and \(f(0^+) := \lim _{z \rightarrow 0^+} f(z)\).

Proof

Since f(z) is a Laplace–Stieltjes function, we can express the error as follows:

$$\begin{aligned} \Vert f(A)v - x_\ell \Vert _2&\leqslant \int _0^\infty \left\Vert e^{-tA}v - Ue^{-tA_{\ell }} U^* v \right\Vert _2\mu (t) \ dt\\&\leqslant 8\gamma _{\ell ,\kappa }\int _0^\infty \mu (t) \ dt \Vert v \Vert _2\rho _{[a,b]}^{\frac{\ell }{2}}\\&=8 \gamma _{\ell ,\kappa }f(0^+) \Vert v \Vert _2 \rho _{[a,b]}^{\frac{\ell }{2}}, \end{aligned}$$

where we applied (6) and Corollary 2 to obtain the second inequality.\(\square \)

Remark 4

In order to be meaningful, Corollary 3 requires the function f(z) to be finite over \([0, \infty )\), which might not be the case in general (consider for instance \(z^{-\alpha }\), which is both Cauchy– and Laplace–Stieltjes). Nevertheless, the result can be applied to \(f(z + \eta )\), which is always completely monotonic for a positive \(\eta \), by taking \(0< \eta < a\). A value of \(\eta \) closer to a gives a slower decay rate, but a smaller constant \(f(0^+)\). Similarly, if f(z) happens to be completely monotonic on an interval larger than \([0, \infty )\), then bounds with a faster asymptotic convergence rate (but a larger constant) can be obtained considering \(\eta < 0\).

Corollary 1 allows us to state the corresponding bound for Cauchy–Stieltjes functions. The proof is analogous to that of Corollary 3.

Corollary 4

Let f(z) be a Cauchy–Stieltjes function, A be Hermitian positive definite with spectrum contained in [ab], U be an orthonormal basis of \(\mathcal {U_R}=\mathcal {RK}_\ell (A,v,\varPsi _{C,\ell }^{[a,b]})\) with \(\varPsi _{C,\ell }^{[a,b]}\) as in (11) and \(x_\ell =Uf(A_\ell )v_\ell \) with \(A_\ell =U^*AU\) and \(v_\ell =U^*v\). Then

$$\begin{aligned} \Vert f(A)v-x_\ell \Vert _2\leqslant 8f(a) \Vert v \Vert _2 \rho _{[a,4b]}^\ell . \end{aligned}$$
(15)

3.5 Nested sequences of poles

From the computational perspective, it is more convenient to have a nested sequence of subspaces \({\mathcal {W}}_{1} \subseteq \cdots \subseteq {\mathcal {W}}_j \subseteq {\mathcal {W}}_{j+1} \subseteq \cdots \), so that \(\ell \) can be chosen adaptively. For example, in [19] the authors propose a greedy algorithm for the selection of the poles tailored to the evaluation of Cauchy–Stieltjes matrix functions. See [15, 16] for greedy pole selection strategies to be applied in different—although closely related—contexts.

The choices of poles proposed in the previous sections require determining a priori the degree \(\ell \) of the approximant \(x_\ell \). Given a target accuracy, one can use the convergence bounds in Corollaries 3 and 4 to determine \(\ell \). Unfortunately, this is likely to overestimate the minimum value of \(\ell \) that provides the sought accuracy.

One option that overcomes this limitation is to rely on the method of equidistributed sequences (EDS), as described in [14, Section 4]. The latter exploits the limit—as \(\ell \rightarrow \infty \)—of the measures generated by the sets of points \(\varPsi _\ell ^{[a,b]},\varPsi _{C,\ell }^{[a,b]}\) to return infinite sequences of poles that are guaranteed to provide the same asymptotic rate of convergence. More specifically, the EDS \(\{{{\widetilde{\sigma }}}_j\}_{j\in {\mathbb {N}}}\) associated with \(\varPsi _{\ell }^{[a,1]}\) is obtained with the following steps:

  (i)

    Select \(\zeta \in {\mathbb {R}}^+\setminus {\mathbb {Q}}\) and generate the sequence \(\{s_j\}_{j\in {\mathbb {N}}}:=\{0,\ \zeta -\lfloor \zeta \rfloor ,\ 2\zeta -\lfloor 2\zeta \rfloor ,\ 3\zeta -\lfloor 3\zeta \rfloor ,\ \dots \}\), where \(\lfloor \cdot \rfloor \) indicates the greatest integer less than or equal to the argument; this sequence has as asymptotic distribution (in the sense of EDS) the Lebesgue measure on [0, 1].

  (ii)

    Compute the sequence \(\{t_j\}_{j\in {\mathbb {N}}}\) such that \(g(t_j)=s_j\) where

    $$\begin{aligned} g(t):=\frac{1}{2M}\int _{a^2}^t\frac{dy}{\sqrt{(y-a^2)y(1-y)}},\qquad M:=\int _0^1\frac{dy}{\sqrt{(1-y^2)(1-(1-a^2)y^2)}}, \end{aligned}$$
  (iii)

    Define \({{\widetilde{\sigma }}}_j:=\sqrt{t_j}\).

More generally, the EDS associated with \(\varPsi _\ell ^{[a,b]},\varPsi _{C,\ell }^{[a,b]}\) are obtained by applying either a scaling or the Möbius transformation (11) to the EDS for \(\varPsi _{\ell }^{[a,1]}\).

In our implementation, only the finite portion \(\{{{\widetilde{\sigma }}}_j\}_{j=0,\dots , \ell -1}\) is—incrementally—generated for computing \(x_\ell \). As the starting irrational number we select \(\zeta =\frac{1}{\sqrt{2}}\), and each quantity \(t_j\) is approximated by applying Newton's method to the equation \(g(t_j)-s_j=0\). The initialization of the Newton iteration is done by approximating \({\hat{t}} \mapsto g(e^{{\hat{t}}}) - s_j\) with a linear function on the domain of interest, and then using the exponential of its only root as starting point. This is done beforehand, selecting \(t = a^2\) and \(t = a\) as interpolation points; in our experience, with such a starting point Newton's method converges in a few steps.
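A sketch of this construction for the reference interval [a, 1] (ours; generic quadrature and a bracketing root solver stand in for the tailored Newton initialization described above):

```python
# Generate the first ell EDS points for the interval [a, 1], following
# steps (i)-(iii): equidistributed s_j, inversion of g, and sqrt.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def eds_poles(a, ell):
    M, _ = quad(lambda y: 1 / np.sqrt((1 - y**2) * (1 - (1 - a**2) * y**2)),
                0.0, 1.0)
    def g(t):  # g maps [a^2, 1] onto [0, 1]
        val, _ = quad(lambda y: 1 / np.sqrt((y - a**2) * y * (1 - y)), a**2, t)
        return val / (2 * M)
    zeta = 1 / np.sqrt(2)                      # irrational seed, as in the text
    s = [(j * zeta) % 1 for j in range(ell)]   # equidistributed in [0, 1]
    t = [a**2 if sj < 1e-12 else
         brentq(lambda x: g(x) - sj, a**2 + 1e-12, 1 - 1e-12) for sj in s]
    return np.sqrt(t)                          # sigma_j = sqrt(t_j)
```

The poles for a general interval [a, b] and the Cauchy poles \(\varPsi _{C,\ell }^{[a,b]}\) then follow by scaling and by composing with \(T_C^{-1}\) as in (11).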

3.6 Some numerical tests

3.6.1 Laplace–Stieltjes functions

Let us consider the 1D diffusion problem over [0, 1] with zero Dirichlet boundary conditions

$$\begin{aligned} \frac{\partial u}{\partial t} = \epsilon \frac{\partial ^2 u}{\partial x^2} + f(x), \qquad u(x, 0) \equiv 0, \qquad \epsilon = 10^{-2}, \end{aligned}$$

discretized using central finite differences in space with 50,000 points, and integrated by means of the exponential Euler method with time step \(\varDelta t = 0.1\). This requires evaluating the action of the Laplace–Stieltjes matrix function \(\varphi _1(\frac{\epsilon }{h^2} \varDelta t A) v\), where A is the tridiagonal matrix \(A = \mathrm {tridiag}(-1, 2, -1)\). We test the convergence rates of various choices of poles by measuring the absolute error when using a random vector v. Figure 1 (left) reports the results associated with: the poles from Corollary 2, the corresponding EDS computed as described in Sect. 3.5, and the extended Krylov method. The three approximations display the same convergence rate, although the poles from Corollary 2 and the EDS perform slightly better. We mention that, since A is ill-conditioned, polynomial Krylov performs poorly on this example.

We keep the same settings and test the convergence rates for the Laplace–Stieltjes function \(z^{-\frac{3}{2}}W(z)\), where W(z) is the Lambert W function [22]. The plot in Fig. 1 (right) shows that after about 10 iterations the convergence rate of the extended Krylov method deteriorates, while the poles from Corollary 2 and the EDS provide the best convergence rate.

Fig. 1 Convergence history of the different projection spaces for the evaluation of \(\varphi _1(A)v\) and \(A^{-\frac{3}{2}}W(A)v\) for a matrix argument of size \(50{,}000 \times 50{,}000\). The methods tested are extended Krylov (EK), rational Krylov with the poles from Corollary 2, and rational Krylov with nested poles obtained as in Sect. 3.5. The bound in the left figure is obtained directly from Corollary 2; the bound in the right figure has been obtained as in Remark 4

3.6.2 Inverse square root

Let us test the pole selection strategies for Cauchy–Stieltjes functions by considering the evaluation of \(f(z) = z^{-\frac{1}{2}}\) on the \(n\times n\) matrix \(\mathrm {tridiag}(-1, 2, -1)\), for \(n=10^4,5\cdot 10^4, 10^5\). The list of methods that we consider includes: the poles \(\varPsi _{C,\ell }^{[a,b]}\) from Corollary 1, the associated EDS, the extended Krylov method, and the adaptive strategy proposed in [19] for Cauchy–Stieltjes functions. The latter is implemented in the markovfunmv package available at http://guettel.com/markovfunmv/, which we used to produce the results reported in Fig. 2. The poles from Corollary 1 and the extended Krylov method yield the best and the worst convergence history, respectively, for all values of n. The EDS and markovfunm perform similarly for \(n=10^4\), but as n increases the decay rate of markovfunm deteriorates significantly.

We consider a second numerical experiment which keeps the same settings apart from the size of the matrix argument, which is fixed to \(n=10^5\). Then, we measure the number of iterations and the computational time needed by the methods using nested sequences of poles, i.e. EK, EDS, markovfunm, to reach different target values for the relative error \(\frac{\Vert x-x_{\ell } \Vert _2}{\Vert x \Vert _2}\). The EK method has the cheapest iteration cost because it exploits the pre-computation of the Cholesky factor of the matrix A for the computation of the orthogonal basis. However, as shown by the results in Table 2, the high number of iterations makes EK competitive only for the high relative error \(10^{-1}\). The iteration costs of EDS and markovfunm are essentially the same, since they only differ in the computation of the poles, which requires a negligible portion of the computational time. Therefore, the comparison between EDS and markovfunm reduces to the number of iterations, which makes the former more efficient. We remark that in situations where precomputing the Cholesky factor gives a larger computational benefit, and memory is not an issue, EK may be competitive again.

Fig. 2 Convergence history of the different projection spaces for the evaluation of \(A^{-\frac{1}{2}}v\), with \(A=\mathrm {trid}(-1, 2, -1)\), for different sizes n of the matrix argument. The methods tested are extended Krylov (EK), rational Krylov with the poles from Corollary 1, rational Krylov with nested poles obtained as in Sect. 3.5 (EDS), and rational Krylov with the poles of markovfunm. The bound is obtained from Corollary 1

We conclude the numerical experiments on the inverse square root by considering matrix arguments with different distributions of the eigenvalues. More precisely, we set A as the diagonal matrix of dimension \(n=5\cdot 10^4\) with the following spectrum configurations:

(i) Equispaced values in the interval \([\frac{1}{n}, 1]\),

(ii) Eigenvalues of \(\mathrm {trid}(-1,2+10^{-3},-1)\) (shifted Laplacian),

(iii) 20 Chebyshev points in \([10^{-3}, 10^{-1}]\) and \(n-20\) Chebyshev points in \([10, 10^3]\).

The convergence histories of the different projection spaces are reported in Fig. 3. For all the eigenvalue configurations, EDS and markovfunm provide comparable performances. The poles from Corollary 1 perform as well as EDS and markovfunm on (ii) and slightly better on (i) and (iii). Once again, EK provides the slowest convergence rate on all examples.

Table 2 Comparison of the time and number of iterations required for computing \(A^{-\frac{1}{2}}v\) with different relative tolerances using markovfunm, EDS, and extended Krylov

3.6.3 Other Cauchy–Stieltjes functions

Finally, we test the convergence rate of the different pole selection strategies for the Cauchy–Stieltjes functions \(\frac{1-e^{-\sqrt{z}}}{z}, z^{-0.2},z^{-0.8}\) and the matrix argument \(A=\mathrm {trid}(-1,2,-1)\).

The results reported in Fig. 4 show that in all cases the poles from Corollary 1 and the extended Krylov method provide the best and the worst convergence rates, respectively. The EDS converges faster than markovfunm apart from the case of \(z^{-0.2}\) where the two strategies perform similarly.

4 Evaluating Stieltjes functions of matrices with Kronecker structure

We consider the task of computing \(f({\mathcal {M}}) v\) where \({\mathcal {M}} = I \otimes A - B^T \otimes I\). This problem often stems from the discretization of 2D differential equations, such as the matrix transfer method used for fractional diffusion equations [32].

We assume that \(v = \mathrm {vec}(F)\), where \(F = U_F V_F^T\) with \(U_F\) and \(V_F\) tall and skinny matrices. For instance, when \(f(z) = z^{-1}\), this is equivalent to solving the matrix equation \(AX - XB = F\). It is well-known that, if the spectra of A and B are separated, then the low-rank property is numerically inherited by X [5]. For functions more general than \(z^{-1}\), a projection scheme that preserves the Kronecker structure has been proposed in [8] using polynomial Krylov methods. We briefly review it in Sect. 4.1. The method proposed in [8] uses tensorized polynomial Krylov subspaces, so it is not well-suited when A and B are ill-conditioned, as often happens when discretizing differential operators. Therefore, we propose to replace the latter with a tensor product of rational Krylov subspaces, and we provide a strategy for the pole selection. This enables a faster convergence and an effective scheme for the approximation of the action of such matrix functions in a low-rank format.

The case of Laplace–Stieltjes functions, described in Sect. 4.2, follows easily by the analysis performed for the pole selection with a generic matrix A. The error analysis for Cauchy–Stieltjes functions, presented in Sect. 4.3, requires more care and builds on the theory for the solution of Sylvester equations.

4.1 Projection methods that preserve Kronecker structure

If A, B are \(n \times n\) matrices, applying the projection scheme described in Sect. 3 requires building an orthonormal basis W for a (low-dimensional) subspace \({\mathcal {W}}\subseteq {\mathbb {C}}^{n^2}\), together with the projections \(H = W^* {\mathcal {M}} W\) and \(v_{{\mathcal {W}}} = W^* v\). Then, the action of \(f({\mathcal {M}})\) on v is approximated by:

$$\begin{aligned} f({\mathcal {M}}) v \approx W f(H) v_{{\mathcal {W}}}. \end{aligned}$$

The trick at the core of the projection scheme proposed in [8] consists in choosing a tensorized subspace of the form \({\mathcal {W}} := {\mathcal {U}} \otimes {\mathcal {V}}\), spanned by an orthonormal basis of the form \(W = U \otimes V\), where U and V are orthonormal bases of \({\mathcal {U}}\subseteq {\mathbb {C}}^n\) and \({\mathcal {V}}\subseteq {\mathbb {C}}^n\), respectively. With this choice, the projection of \({\mathcal {M}}\) onto \({\mathcal {U}} \otimes {\mathcal {V}}\) retains the same structure, that is

$$\begin{aligned} (U \otimes V)^* {\mathcal {M}} (U \otimes V) = I \otimes A_{{\mathcal {U}}} - B_{{\mathcal {V}}}^T \otimes I, \end{aligned}$$

where \(A_{{\mathcal {U}}} = U^* A U\) and \(B_{{\mathcal {V}}} = V^* B V\).

Fig. 3 Convergence history of the different projection spaces for the evaluation of \(A^{-\frac{1}{2}}v\) for a diagonal matrix argument of size \(50{,}000 \times 50{,}000\) with different eigenvalue distributions. The methods tested are extended Krylov (EK), rational Krylov with the poles from Corollary 1, rational Krylov with nested poles obtained as in Sect. 3.5 (EDS), and rational Krylov with the poles of markovfunm. The bound is obtained from Corollary 1

Fig. 4 Convergence history of the different projection spaces for the evaluation of f(A)v for different Cauchy–Stieltjes functions f(z) and the matrix argument \(A=\mathrm {trid}(-1, 2, -1)\) of size \(50{,}000 \times 50{,}000\). The methods tested are extended Krylov (EK), rational Krylov with the poles from Corollary 1, rational Krylov with nested poles obtained as in Sect. 3.5 (EDS), and rational Krylov with the poles of markovfunm. The bound is obtained from Corollary 1

Since in our case \(v = \mathrm {vec}(F)\) and \(F = U_F V_F^T\), this makes it possible to exploit the low-rank structure as well. Indeed, the projection of F onto \({\mathcal {U}} \otimes {\mathcal {V}}\) can be written as \(v_{{\mathcal {W}}} = \mathrm {vec}(F_{{\mathcal {W}}})=\mathrm {vec}((U^* U_F) ( V_F^TV))\). The high-level structure of the procedure is sketched in Algorithm 1.

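A minimal Python sketch of Algorithm 1 as described above (our own naming; rank-one F for simplicity, rational_krylov_basis is the sketch from Sect. 3.1, fun_kron is the Algorithm 2 sketch given below, and the choice of poles is left to the caller):

```python
# Sketch of the tensorized projection scheme: build bases for U and V,
# project the coefficients and the factored right-hand side, evaluate
# f on the small Kronecker-structured argument, and expand.
import numpy as np

def tensorized_fun(f, A, B, UF, VF, poles):
    U = rational_krylov_basis(A, UF[:, 0], poles)           # basis of U
    V = rational_krylov_basis(B.conj().T, VF[:, 0], poles)  # basis of V
    A_U = U.conj().T @ A @ U                 # projected coefficients
    B_V = V.conj().T @ B @ V
    F_W = (U.conj().T @ UF) @ (VF.T @ V)     # projected right-hand side
    Y = fun_kron(f, A_U, B_V, F_W)           # f(I x A_U - B_V^T x I) part
    return U @ Y @ V.conj().T                # low-rank approximation of X
```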

At the core of Algorithm 1 is the evaluation of the matrix function on the projected matrix \(I \otimes A_{{\mathcal {U}}} - B_{{\mathcal {V}}}^T \otimes I\). Even when U, V have a low dimension \(k\ll n\), this matrix is \(k^2 \times k^2\), so it is undesirable to build it explicitly and then evaluate \(f(\cdot )\) on it.

When \(f(z) = z^{-1}\), it is well-known that such an evaluation can be performed in \({\mathcal {O}}(k^3)\) flops by the Bartels–Stewart algorithm [2], in contrast to the \({\mathcal {O}}(k^6)\) complexity that would be required by a generic dense solver for the system defined by \(I \otimes A_{{\mathcal {U}}} - B_{{\mathcal {V}}}^T \otimes I\). For a more general function, we can still design an \({\mathcal {O}}(k^3)\) procedure for the evaluation of \(f(\cdot )\) in our case. Indeed, since \(A_{{\mathcal {U}}}\) and \(B_{{\mathcal {V}}}\) are Hermitian, we may diagonalize them using a unitary transformation as follows:

$$\begin{aligned} Q_A^* A_{{\mathcal {U}}} Q_A = D_A, \qquad Q_B^* B_{{\mathcal {V}}} Q_B = D_B. \end{aligned}$$

Then, the evaluation of the matrix function f(z) with argument \(I\otimes A_{{\mathcal {U}}} - B_{{\mathcal {V}}}^T\otimes I\) can be recast to a scalar problem by setting

$$\begin{aligned} f(I\otimes A_{{\mathcal {U}}} - B_{{\mathcal {V}}}^T\otimes I) \mathrm {vec}(U^*F V) = \left( {\overline{Q}}_B \otimes {Q}_A \right) f({\mathcal {D}}) \left( Q_B^T \otimes {Q}_A^*\right) \mathrm {vec}(U^*F V), \end{aligned}$$

where \({\mathcal {D}} := I\otimes D_A - D_B \otimes I\). If we denote by \(X = \mathrm {vec}^{-1}(f({\mathcal {M}}) \mathrm {vec}(F))\) and by D the matrix defined by \(D_{ij} = (D_A)_{ii} - (D_B)_{jj}\), then

$$\begin{aligned} X = Q_A \left[ f^\circ ( D ) \circ (Q_A^* U^*F V Q_B) \right] Q_B^*, \end{aligned}$$

where \(\circ \) denotes the Hadamard product and \(f^\circ (\cdot )\) the function \(f(\cdot )\) applied component-wise to the entries of D: \([f^\circ (D)]_{ij} = f(D_{ij})\).

Assuming that the matrices \(Q_A, Q_B\), and the corresponding diagonal matrices \(D_A, D_B\), are available, this step requires \(k^2\) scalar function evaluations, plus 4 matrix-matrix multiplications, for a total computational cost bounded by \({\mathcal {O}}(c_f \cdot k^2 + k^3)\), where \(c_f\) denotes the cost of a single function evaluation. The procedure is described in Algorithm 2.

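A minimal sketch of Algorithm 2, implementing the Hadamard-product formula above (f must act elementwise on a matrix of scalar arguments):

```python
# Evaluate Y = vec^{-1}( f(I x A_U - B_V^T x I) vec(F_W) ) via the
# eigendecompositions of the small Hermitian projected matrices.
import numpy as np

def fun_kron(f, A_U, B_V, F_W):
    dA, QA = np.linalg.eigh(A_U)              # A_U = QA diag(dA) QA*
    dB, QB = np.linalg.eigh(B_V)              # B_V = QB diag(dB) QB*
    D = dA[:, None] - dB[None, :]             # D_ij = (D_A)_ii - (D_B)_jj
    C = QA.conj().T @ F_W @ QB                # change of basis
    return QA @ (f(D) * C) @ QB.conj().T      # Hadamard product with f(D)
```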

4.2 Convergence bounds for Laplace–Stieltjes functions of matrices with Kronecker structure

The study of approximation methods for Laplace–Stieltjes functions is made easier by the following property of the matrix exponential: whenever M and N commute, \(e^{M + N} = e^M e^N\). Since the matrices \(B^T \otimes I\) and \(I \otimes A\) commute, we have

$$\begin{aligned} x = \mathrm {vec}(X)=f({\mathcal {M}}) v=\int _0^{\infty }e^{-t{\mathcal {M}}} v\mu (t)\ dt= \mathrm {vec}\left( \int _0^\infty e^{-tA}U_FV_F^Te^{t B}\mu (t)\ dt\right) . \end{aligned}$$
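This splitting is easy to verify on a small random example (a sketch, ours; vec denotes column-stacking):

```python
# Check: expm(-t M) vec(F) = vec(expm(-t A) F expm(t B)) for the
# Kronecker sum M = I x A - B^T x I, using order='F' for vec.
import numpy as np
from scipy.linalg import expm

n, t = 5, 0.3
rng = np.random.default_rng(1)
R = rng.standard_normal((n, n)); A = R + R.T + 4 * n * np.eye(n)     # A SPD
S = rng.standard_normal((n, n)); B = -(S + S.T + 4 * n * np.eye(n))  # -B SPD
F = rng.standard_normal((n, n))

M = np.kron(np.eye(n), A) - np.kron(B.T, np.eye(n))
lhs = expm(-t * M) @ F.flatten(order='F')
rhs = (expm(-t * A) @ F @ expm(t * B)).flatten(order='F')
assert np.allclose(lhs, rhs)
```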

Consider projecting the matrix \({\mathcal {M}}\) onto a tensorized subspace spanned by the Kronecker product of unitary matrices \(U \otimes V\). This, combined with Algorithm 1, yields an approximation whose accuracy is closely connected with that of approximating \(e^{-tA}\) by projecting with U, and \(e^{tB}\) by projecting with V. As discussed in Sect. 3, there exists a choice of poles that approximates the matrix exponential uniformly well, and this can be leveraged here as well.

Corollary 5

Let f(z) be a Laplace–Stieltjes function, \(A,-B\) be Hermitian positive definite with spectrum contained in [ab] and \(X_\ell \) be the approximation of \(X = \mathrm {vec}^{-1}(f({\mathcal {M}})\mathrm {vec}(F))\) obtained using Algorithm 1 with \(U\otimes V\) orthonormal basis of \(\mathcal {U_R}\otimes \mathcal {V_R}=\mathcal {RK}_\ell (A,U_F,\varPsi _{\ell }^{[a,b]})\otimes \mathcal {RK}_\ell (B^T,V_F,\varPsi _{\ell }^{[a,b]})\). Then,

$$\begin{aligned} \Vert X - X_\ell \Vert _2 \leqslant 16\gamma _{\ell ,\kappa }f(0^+)\rho _{[a,b]}^{\frac{\ell }{2}} \Vert F \Vert _2. \end{aligned}$$

Proof

If f(z) is a Laplace–Stieltjes function, we may express the error matrix \(X - X_\ell \) as follows:

$$\begin{aligned} X - X_\ell = \int _0^\infty \left[ e^{-tA} F e^{tB} - Ue^{-tA_{\ell }} U^* F V e^{tB_{\ell }} V^* \right] \mu (t) \ dt, \end{aligned}$$

where \(A_{\ell }=U^*AU\) and \(B_{\ell }=V^*BV\). Adding and subtracting the quantity \(Ue^{-tA_{\ell }} U^*F e^{tB}\) yields the following inequalities:

$$\begin{aligned} \begin{aligned} \Vert X-X_\ell \Vert _2&\leqslant \int _0^\infty \Vert e^{-tA}F-Ue^{-tA_{\ell }}(U^*F) \Vert _2 \Vert e^{tB} \Vert _2 \mu (t)\ dt\\&\quad + \int _0^\infty \Vert e^{tB^T}F^T-Ve^{tB_{\ell }^T}(V^*F^T) \Vert _2 \Vert e^{-tA_{\ell }} \Vert _2\mu (t)\ dt\\&\leqslant 16\gamma _{\ell ,\kappa } \int _0^\infty \mu (t)\ dt \cdot \rho _{[a,b]}^{\frac{\ell }{2}} \Vert F \Vert _2 \end{aligned} \end{aligned}$$

where in the last step we used Corollary 2 for both summands, together with \(\Vert e^{tB} \Vert _2, \Vert e^{-tA_{\ell }} \Vert _2\leqslant 1\) and \(\int _0^\infty \mu (t)\ dt = f(0^+)\). \(\square \)

Example 1

To test the proposed projection spaces we consider the same matrix A as in Example 3.6.1, and we evaluate the function \(\varphi _1\) at \({\mathcal {M}} = I \otimes A + A \otimes I\), applied to a vector \(v = \mathrm {vec}(F)\), where F is a random rank-1 matrix generated by taking the outer product of two unit vectors with normally distributed entries. The results are reported in Fig. 5.

Fig. 5

Convergence history of the different projection spaces for the evaluation of \(\varphi _1({\mathcal {M}})v\) with the Kronecker structured matrix \({\mathcal {M}} = I \otimes A + A \otimes I\), where A is of size \(1000 \times 1000\) and has condition number about \(5 \cdot 10^5\). The singular values of the true solution X are reported as well

4.3 Convergence bounds for Cauchy–Stieltjes functions of matrices with Kronecker structure

As already pointed out in Sect. 3, evaluating a Cauchy–Stieltjes function requires a space which approximates uniformly well the shifted inverses of the matrix argument under consideration. When considering a Kronecker-structured matrix \({\mathcal {M}} = I \otimes A - B^T \otimes I\), this requirement takes a particular form.

In fact, relying on the integral representation (3) of f(z) we obtain:

$$\begin{aligned} f({\mathcal {M}}) v = \int _0^\infty \mu (t) (tI + {\mathcal {M}})^{-1} \mathrm {vec}(F) \ dt = \int _0^\infty \mu (t) X_t\ dt, \end{aligned}$$

where \(X_t := \mathrm {vec}^{-1}((tI + {\mathcal {M}})^{-1} \mathrm {vec}(F))\) solves the matrix equation

$$\begin{aligned} \left( t I + A\right) X_t - X_t B = F. \end{aligned}$$
(16)

Therefore, to determine a good projection space for the function evaluation, we should aim at a space in which these parameter-dependent Sylvester equations can be solved with uniform accuracy. We note that, unlike in the Laplace–Stieltjes case, the evaluation of the resolvent does not split into evaluations of shifted inverses of the factors, so Theorem 1 cannot be applied to the factors A and B.

A possible strategy for determining an approximation space is to use the polynomial Krylov subspaces \({\mathcal {K}}_m(t I + A, U_F) \otimes {\mathcal {K}}_m(B^T, V_F)\) for solving (16) at a given point t. Thanks to the shift invariance of polynomial Krylov subspaces, all these subspaces coincide with \({\mathcal {U}}_P \otimes {\mathcal {V}}_P={\mathcal {K}}_m(A, U_F) \otimes {\mathcal {K}}_m(B^T, V_F)\). This observation is at the core of the strategy proposed in [8], which makes use of \({\mathcal {U}}_P \otimes {\mathcal {V}}_P\) in Algorithm 1, and makes it possible to use the convergence theory for linear matrix equations to provide error bounds in the Cauchy–Stieltjes case; see [8, Section 6.2].

Since rational Krylov subspaces are usually more effective for the solution of Sylvester equations, it is natural to consider their use in place of \({\mathcal {U}}_P \otimes {\mathcal {V}}_P\). However, they are not shift invariant, which makes the analysis less straightforward. Throughout this section, we denote by \(U\otimes V\) the orthonormal basis of the tensorized rational Krylov subspace

$$\begin{aligned} {\mathcal {U}}_R\otimes {\mathcal {V}}_R := \mathcal {RK}_{\ell }(A, U_F, \varPsi ) \otimes \mathcal {RK}_{\ell }(B^T, V_F, \varXi ) \end{aligned}$$
(17)

where \(\varPsi :=\{ \psi _1,\dots ,\psi _\ell \}\) and \(\varXi :=\{ \xi _1,\dots ,\xi _\ell \}\) are the prescribed poles. We define the following polynomials of degree (at most) \(\ell \):

$$\begin{aligned} p(z):=\prod _{j=1,\psi _j\ne \infty }^\ell (z-\psi _j), \qquad q(z):=\prod _{j=1,\xi _j\ne \infty }^\ell (z-\xi _j) \end{aligned}$$
(18)

and we denote by \(A_{\ell }=U^*AU\),  \(B_{\ell }=V^*BV\) the projected \((\ell k\times \ell k)\)-matrices, where k is the number of columns of \(U_F\) and \(V_F\).
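For illustration, here is one elementary way to realize such a block rational Krylov basis, assuming finite, pairwise distinct poles: by partial fractions, the space is spanned by \(U_F\) together with the solutions of the \(\ell \) shifted linear systems. The helper below is only a sketch; a robust implementation would use the rational Arnoldi recurrence with reorthogonalization.

```python
import numpy as np

def rational_krylov_basis(A, U0, poles):
    """Orthonormal basis of RK_l(A, U0, poles) for finite, pairwise distinct
    poles: the space equals span{U0, (A - psi_1 I)^{-1} U0, ...}."""
    n = A.shape[0]
    blocks = [U0] + [np.linalg.solve(A - psi * np.eye(n), U0) for psi in poles]
    Q, _ = np.linalg.qr(np.hstack(blocks))  # orthonormalize all directions at once
    return Q
```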

In Sect. 4.3.1, we recall and slightly extend some results about rational Krylov methods for Sylvester equations, i.e., the case \(f(z) = z^{-1}\). This will be the building block for the convergence analysis of the approximation of Cauchy–Stieltjes functions in Sect. 4.3.2.

4.3.1 Convergence results for Sylvester equations

Algorithm 1 applied to \(f(z) = z^{-1}\) coincides with the Galerkin projection method for Sylvester equations [28], whose error analysis can be found in [3]; the results in that paper relate the Frobenius norm of the residual to a rational approximation problem. We state a slightly modified version of Theorem 2.1 in [3], which bounds the residual in the Euclidean norm. The proof is reported in Appendix 1.

Theorem 4

Let \(A,-B\) be Hermitian positive definite with spectrum contained in [a, b] and \(X_\ell \) be the approximate solution returned by Algorithm 1 using \(f(z)=z^{-1}\) and the orthonormal basis \(U\otimes V\) of \({\mathcal {U}}_R\otimes {\mathcal {V}}_R=\mathcal {RK}_{\ell }(A, U_F, \varPsi ) \otimes \mathcal {RK}_{\ell }(B^T, V_F, \varXi )\), then

$$\begin{aligned} \Vert AX_\ell - X_\ell B - F \Vert _2\leqslant (1+ \kappa ) \max \{\theta _\ell (I_A,I_B, \varPsi ),\ \theta _\ell (I_B,I_A, \varXi )\}\Vert F \Vert _2. \end{aligned}$$

Remark 5

Using the mixed norm inequality \(\Vert AB \Vert _F \leqslant \Vert A \Vert _F \Vert B \Vert _2\), one can state the bound in the Frobenius norm as well:

$$\begin{aligned} \Vert AX_{G}^{(\ell )} - X_{G}^{(\ell )} B - F \Vert _F \leqslant (1+ \kappa ) \sqrt{ \theta _\ell ^2(I_A,I_B, \varPsi ) + \theta _\ell ^2(I_B,I_A, \varXi ) } \cdot \Vert F \Vert _F, \end{aligned}$$

which is tighter than the one in [3].

For our analysis, it is more natural to bound the approximation error with respect to the exact solution X, instead of the norm of the residual. Since the residual is closely related to the backward error of the underlying linear system, bounding the forward error \(\Vert X - X_\ell \Vert _2\) introduces an additional condition-number factor.

Corollary 6

If \(X_\ell \) is the approximate solution of the linear matrix equation \(AX - XB = F\) returned by Algorithm 1 as in Theorem 4, then

$$\begin{aligned} \Vert X_\ell - X \Vert _2 \leqslant \frac{a +b}{2a^2} \max \{\theta _\ell (I_A,I_B, \varPsi ),\ \theta _\ell (I_B,I_A, \varXi )\}\Vert F \Vert _2. \end{aligned}$$

Proof

We note that \(X_\ell - X\) solves the Sylvester equation \(A(X_\ell - X) - (X_\ell - X) B = R,\) where \(R := AX_\ell - X_\ell B - F\) satisfies \(\Vert R \Vert _2 \leqslant \left( 1+\kappa \right) \max \{\theta _\ell (I_A,I_B, \varPsi ),\ \theta _\ell (I_B,I_A, \varXi )\}\Vert F \Vert _2\), thanks to Theorem 4. In view of [21, Theorem 2.1], \(\Vert X_\ell - X \Vert _2\) is bounded by \(\frac{1}{2a} \Vert R \Vert _2\). \(\square \)
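In code, the Galerkin method analyzed in Theorem 4 and Corollary 6 is a three-line procedure once the bases are available; the sketch below uses SciPy's Sylvester solver on the projected equation (the helper name galerkin_sylvester and the dense data are illustrative).

```python
import numpy as np
from scipy.linalg import solve_sylvester

def galerkin_sylvester(A, B, F, U, V):
    """Galerkin approximation of AX - XB = F on the spaces spanned by the
    orthonormal (e.g., rational Krylov) bases U and V."""
    A_l = U.conj().T @ A @ U                            # projected coefficients
    B_l = V.conj().T @ B @ V
    Y = solve_sylvester(A_l, -B_l, U.conj().T @ F @ V)  # A_l Y - Y B_l = U* F V
    return U @ Y @ V.conj().T                           # lift back: X_l = U Y V*
```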

4.3.2 Error analysis for Cauchy–Stieltjes functions

In view of (16), the evaluation of a Cauchy–Stieltjes function is closely related to solving, in a uniformly accurate way, parameter-dependent Sylvester equations. This connection is clarified by the following result.

Theorem 5

Let f(z) be a Cauchy–Stieltjes function, \(A,-B\) be Hermitian positive definite with spectrum contained in [a, b] and \(X_\ell \) be the approximate evaluation of f(z) returned by Algorithm 1 using the orthonormal basis \(U\otimes V\) of the subspace \({\mathcal {U}}_R\otimes {\mathcal {V}}_R=\mathcal {RK}_{\ell }(A, U_F, \varPsi ) \otimes \mathcal {RK}_{\ell }(B^T, V_F, \varXi )\). Then,

$$\begin{aligned} \Vert X -X_\ell \Vert _2 \leqslant f(2a) \cdot (1+\kappa )\cdot \Vert F \Vert _2 \cdot \max _{t \geqslant 0} \left[ \max \Big \{\theta _\ell ( I_A, I_B-t,\varPsi ),\ \theta _\ell (I_B,I_A+t,\varXi )\Big \}\right] , \end{aligned}$$
(19)

where \(\kappa =\frac{b}{a}\) and \(\theta _\ell (\cdot ,\cdot ,\cdot )\) is as in Definition 5.

Proof

Applying the definition of \(f({\mathcal {M}})\) we have \( f({\mathcal {M}}) \mathrm {vec}{(F)} = \int _0^\infty (tI + {\mathcal {M}})^{-1} \mathrm {vec}{(F)}\mu (t)\ dt.\) We note that, for any \(t \geqslant 0\), the vector \(\mathrm {vec}(X_t) := (tI + {\mathcal {M}} )^{-1} \mathrm {vec}(F)\) is such that \(X_t\) solves the Sylvester equation \((tI + A) X_t - X_t B = F\). Then, we can write X as \(X = \int _0^\infty X_t\mu (t)\ dt.\)

Let us consider the approximation \(U Y_t V^*\) to \(X_t\) obtained by solving the projected Sylvester equation \((tI + U^* A U) Y_t - Y_t (V^* B V ) = U^* F V\), and set \(Y = \int _0^\infty Y_t\mu (t)\ dt\), so that \(X_\ell = UYV^*\). We remark that \(\mathcal {RK}_\ell (A,U_F,\varPsi )=\mathcal {RK}_\ell (tI + A,U_F,\varPsi +t)\).

Then, relying on Corollary 6, we can bound the error \(R_t := \Vert X_t - UY_tV^* \Vert _2\) with

$$\begin{aligned} R_t \leqslant C(t) \cdot \max \left\{ \theta _\ell \left( I_A+t, I_B, \varPsi +t\right) , \theta _\ell \left( I_B, I_A+t, \varXi \right) \right\} \Vert F \Vert _2, \end{aligned}$$

where \(C(t) := \frac{2(t+a+b)}{(t+2a)^2}\). Making use of Lemma 3(i) we get:

$$\begin{aligned} R_t&\leqslant C(t) \cdot \underbrace{\max \left\{ \theta _\ell \left( I_A, I_B-t, \varPsi \right) , \theta _\ell \left( I_B, I_A+t, \varXi \right) \right\} }_{:= \varTheta _\ell (t)}\Vert F \Vert _2. \end{aligned}$$

An estimate for the error on X is obtained by integrating \(R_t\):

$$\begin{aligned} \Vert X- X_\ell \Vert _2&\leqslant \int _0^\infty \mu (t) \frac{2(t+a+b)\Vert F \Vert _2}{(t+2a)^2}\varTheta _\ell (t) dt \\&\leqslant (1+\kappa )\Vert F \Vert _2\int _{0}^\infty \frac{\mu (t)}{t+2a} \varTheta _\ell (t) dt \\&\leqslant f(2a) \cdot (1+\kappa )\cdot \Vert F \Vert _2 \cdot \max _{t\geqslant 0}\varTheta _\ell (t), \end{aligned}$$

where we used that the function \(\frac{2(t+a+b)}{t+2a}\) attains its maximum over \([0, \infty )\) at \(t=0\), where it equals \(\frac{2(a+b)}{2a} = 1+\kappa \). \(\square \)

Inspired by Theorem 4, we look at the construction of rational functions that make the quantities \(\theta _\ell (I_A,I_B-t,\varPsi )\) and \(\theta _\ell (I_B,I_A+t,\varXi )\) small. If we choose \(\varXi =-\varPsi \) then (19) simplifies to

$$\begin{aligned} \Vert X -X_\ell \Vert _2 \leqslant f(2a) \cdot (1+\kappa )\cdot \Vert F \Vert _2 \cdot \max _{t \geqslant 0} \theta _\ell ( I_A, I_B-t,\varPsi ), \end{aligned}$$
(20)

because \(\theta _\ell ( I_A, I_B-t,\varPsi )=\theta _\ell ( I_A, -I_A-t,\varPsi )=\theta _\ell ( -I_A, I_A+t,-\varPsi )=\theta _\ell ( I_B, I_A+t,-\varPsi )\), in view of Lemma 3(iii).

Similarly to the analysis carried out for Cauchy–Stieltjes functions of a generic matrix A, we may consider a Möbius transformation that maps the Zolotarev problem involving the point at infinity into a more familiar form. More precisely, we aim at mapping the set \([-\infty , -a] \cup [a, b]\) into \([-1, -{\widetilde{a}}]\cup [{\widetilde{a}}, 1]\) for some \({\widetilde{a}}\in (0,1)\). Then, we make use of Theorem 3 and Lemma 3(iii) to provide a choice of \(\varPsi \) that makes the quantity \(\theta _\ell (I_A, I_B-t, \varPsi )\) small, independently of t.

Lemma 5

The Möbius transformation

$$\begin{aligned} T(z):=\frac{\varDelta + z - b}{\varDelta - z + b}, \qquad \varDelta := \sqrt{b^2 - a^2}, \end{aligned}$$

maps \([-\infty ,-a]\cup [a,b]\) into \([-1, -{\widetilde{a}}]\cup [{\widetilde{a}}, 1]\), with \({{\widetilde{a}}} := \frac{\varDelta + a - b}{\varDelta - a + b}\). The inverse map \(T^{-1}(z)\) is:

$$\begin{aligned} T^{-1}(z):= \frac{(b+\varDelta )z+b-\varDelta }{1+z}. \end{aligned}$$

In addition, we have \({{\widetilde{a}}}^{-1} \leqslant 2b/a\), and therefore \(\rho _{[{{\widetilde{a}}}, 1]} \leqslant \rho _{[a, 2b]}\).

Proof

The proof follows the same steps as that of Lemma 4. As in that case, the overestimate introduced by the inequality \(\rho _{[{{\widetilde{a}}}, 1]} \leqslant \rho _{[a, 2b]}\) is negligible in practice (see Remark 3). \(\square \)
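The properties stated in Lemma 5 are easy to verify numerically; the snippet below checks the endpoint mapping, the round trip through \(T^{-1}\), and the inequality \({\widetilde{a}}^{-1} \leqslant 2b/a\) for the illustrative values \(a = 1\), \(b = 100\).

```python
import numpy as np

a, b = 1.0, 100.0                                # illustrative spectral interval
Delta = np.sqrt(b**2 - a**2)
T    = lambda z: (Delta + z - b) / (Delta - z + b)
Tinv = lambda z: ((b + Delta) * z + b - Delta) / (1 + z)
a_t  = (Delta + a - b) / (Delta - a + b)         # the quantity ã of Lemma 5

print(T(a), T(b), T(-a))                         # ã, 1.0, -ã: endpoints map as claimed
print(Tinv(T(42.0)))                             # 42.0: round trip through the map
print(1 / a_t <= 2 * b / a)                      # True: ã^{-1} <= 2b/a
print(np.exp(-np.pi**2 / np.log(8 * b / a)))     # the rate rho_{[a,2b]} of Corollary 7
```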

In light of the previous result, we consider Theorem 5 with the choice of poles

$$\begin{aligned} \varPsi = \varPsi _{C_2,\ell }^{[a,b]} := T^{-1}(\varPsi _\ell ^{[{\widetilde{a}},1]}), \qquad \varXi =-\varPsi _{C_2,\ell }^{[a,b]}, \end{aligned}$$
(21)

where \(\varPsi _\ell ^{[{\widetilde{a}},1]}\) indicates the set of optimal poles and zeros—provided by Theorem 3—for the domain \([-1,-{\widetilde{a}}] \cup [{\widetilde{a}},1]\). This yields the following.

Corollary 7

Let f(z) be a Cauchy–Stieltjes function with density \(\mu (t)\), \(A,-B\) be Hermitian positive definite with spectrum contained in [a, b] and \(X_\ell \) be the approximate evaluation of f(z) returned by Algorithm 1 using the orthonormal basis \(U\otimes V\) of the subspace \(\mathcal {RK}_{\ell }(A, U_F, \varPsi _{C_2,\ell }^{[a,b]}) \otimes \mathcal {RK}_{\ell }(B^T, V_F, - \varPsi _{C_2,\ell }^{[a,b]})\), where \(\varPsi _{C_2, \ell }^{[a,b]}\) is as in (21). Then,

$$\begin{aligned} \Vert X - X_\ell \Vert _2 \leqslant 4 \cdot f(2a) \cdot (1+\kappa )\cdot \Vert F \Vert _2 \cdot \rho _{[a, 2b]}^\ell , \qquad \rho _{[a, 2b]} := \exp \left( -\frac{\pi ^2}{\log \left( \frac{8b}{a}\right) }\right) . \end{aligned}$$

Proof

By setting \(I_A = I\), \(I_B = -I\) in the statement of Theorem 5 we get (20), so that we just need the bound

$$\begin{aligned}&\theta _\ell (I_A, I_B-t,T^{-1}(\varPsi _\ell ^{[{\widetilde{a}},1]})) \\&\quad = \theta _\ell (I_A, -I_A-t,T^{-1}(\varPsi _\ell ^{[{\widetilde{a}},1]}))\leqslant \theta _\ell (I_A, [-\infty , -a],T^{-1}(\varPsi _\ell ^{[{\widetilde{a}},1]}))\\&\quad = \theta _\ell ([{\widetilde{a}},1], [-1,-{\widetilde{a}}],\varPsi _\ell ^{[{\widetilde{a}},1]})\leqslant 4\rho _{[{\widetilde{a}},1]}^\ell , \end{aligned}$$

where the first inequality follows from Lemma 3(ii), the last equality from Lemma 3(iii) applied with the map T(z), and the final bound from Theorem 3. The claim follows by combining this bound with the inequality \(\rho _{[{{\widetilde{a}}}, 1]} \leqslant \rho _{[a,2b]}\) from Lemma 5. \(\square \)

Example 2

We consider the same matrix A as in Example 3.6.2, and we evaluate the inverse square root of \({\mathcal {M}} = I \otimes A + A \otimes I\), applied to a vector \(v = \mathrm {vec}(F)\), where F is a random rank-1 matrix generated by taking the outer product of two unit vectors with normally distributed entries.

Fig. 6

Convergence history of the different projection spaces for the evaluation of \({\mathcal {M}}^{-\frac{1}{2}}v\) with the Kronecker structured matrix \({\mathcal {M}} = I \otimes A + A \otimes I\), where A is of size \(1000 \times 1000\) and has condition number about \(5 \cdot 10^5\). The singular values of the true solution X and the bound given in Theorem 7 are reported as well

We note that, in Fig. 6, the bound from Corollary 7 accurately predicts the asymptotic convergence rate, even though it is off by a constant factor; we believe that this is due to the factor \((1 + \kappa )\) introduced by the Galerkin projection bound, which is usually very pessimistic in practice [3].

4.4 Low-rank approximability of X

The Kronecker-structured rational Krylov method that we have discussed provides a practical way to evaluate the matrix function under consideration. However, it can also be used theoretically to predict the decay of the singular values of the computed matrix X, and therefore to describe how well X can be approximated in a low-rank format.

4.4.1 Laplace–Stieltjes functions

In the Laplace–Stieltjes case, we may employ Corollary 5 directly to provide an estimate for the decay in the singular values.

Theorem 6

Let f(z) be a Laplace–Stieltjes function and \({\mathcal {M}} = I \otimes A - B^T \otimes I\) where \(A,-B\) are Hermitian positive definite with spectra contained in [a, b]. Then, if \(\mathrm {vec}(X) = f({\mathcal {M}}) \mathrm {vec}(F)\), with \(F = U_F V_F^T\) of rank k, we have

$$\begin{aligned} \sigma _{1 + \ell k}(X) \leqslant 16 \gamma _{\ell ,\kappa } f(0^+)\rho _{[a,b]}^{\frac{\ell }{2}} \Vert F \Vert _2. \end{aligned}$$

Proof

We note that the approximation \(X_\ell \) obtained using the rational Krylov method with the poles given by Corollary 5 has rank (at most) \(\ell k\), and \(\Vert X- X_\ell \Vert _2 \leqslant 16 \gamma _{\ell ,\kappa } f(0^+)\rho _{[a,b]}^{\frac{\ell }{2}} \Vert F \Vert _2\). The claim follows by applying the Eckart–Young theorem. \(\square \)
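A quick numerical experiment illustrates the predicted decay; it reuses the fun_kron_diag sketch from Sect. 4.1, with an illustrative diagonal A, the Laplace–Stieltjes function \(f(z) = (1-e^{-z})/z = \int _0^1 e^{-tz}\, dt\), and a random rank-1 F (none of these choices come from the paper's experiments).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
A = np.diag(np.linspace(1.0, 1e3, n))          # HPD with spectrum in [1, 1e3]
B = -A                                         # -B HPD with the same spectrum
u = rng.standard_normal((n, 1)); u /= np.linalg.norm(u)
v = rng.standard_normal((n, 1)); v /= np.linalg.norm(v)
F = u @ v.T                                    # rank-1 right-hand side

f = lambda z: (1.0 - np.exp(-z)) / z           # Laplace-Stieltjes: int_0^1 e^{-tz} dt
X = fun_kron_diag(f, A, B, F)                  # vec(X) = f(I kron A - B^T kron I) vec(F)
print(np.linalg.svd(X, compute_uv=False)[:8])  # rapidly decaying singular values
```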

4.4.2 Cauchy–Stieltjes functions

In the case of Cauchy–Stieltjes functions, the error estimate in Corollary 7 would provide a result completely analogous to Theorem 6. However, the bound obtained this way involves the multiplicative factor \(1+\kappa \); this can be avoided by relying on an alternative strategy.

The idea is to consider the close connection between the rational approximation problem (10) and the approximate solution returned by the factored Alternating Direction Implicit method (fADI) [3, 5, 6]. More specifically, for \(t\geqslant 0\) let us denote by \(X_t\) the solution of the shifted Sylvester equation

$$\begin{aligned} (tI + A)X_t-X_tB=U_FV_F^*. \end{aligned}$$
(22)

In view of (16), \(X_t\) is such that \(X=\int _0^\infty X_t\mu (t)dt\). Running fADI for \(\ell \) iterations, with shift parameters \(T^{-1}(\varPsi _\ell ^{[{\widetilde{a}},1]})=\{\alpha _1,\dots ,\alpha _\ell \}\) and \(T^{-1}(-\varPsi _\ell ^{[{\widetilde{a}},1]})=\{\beta _1,\dots ,\beta _\ell \}\), provides an approximate solution \(X^{ADI}_{\ell }(t)\) of (22) whose column and row spans belong to the spaces

$$\begin{aligned} {\mathcal {U}}_{\ell }^{\text {ADI}}(t)&= \mathcal {RK}(A, U_F, \{ \alpha _1 - t, \ldots , \alpha _{\ell } - t \}),&{\mathcal {V}}_{\ell }^{\text {ADI}}&= \mathcal {RK}(B^T, V_F, \{ \beta _1, \ldots , \beta _{\ell } \}). \end{aligned}$$

Note that the space \({\mathcal {V}}_{\ell }^{\text {ADI}}\) does not depend on t, because the coefficient B and the right-hand side of (22) do not depend on t. If we denote by \(U_{\ell }^{\text {ADI}}(t)\) and \(V_{\ell }^{\text {ADI}}\) orthonormal bases for these spaces, we have \(X^{\text {ADI}}_{\ell }(t) = U_{\ell }^{\text {ADI}}(t) Y_{\ell }^{\text {ADI}}(t) (V_{\ell }^{\text {ADI}})^*\), and using the ADI error representation [3, 5] we obtain \(\Vert X_t - X^{\text {ADI}}_{\ell }(t) \Vert _2 \leqslant 4 \Vert X_t \Vert _2 \rho _{[a, 2b]}^{\ell }.\)

In particular, \(X^{ADI}_{\ell }(t)\) is a uniformly good approximation of \(X_t\): it has rank (at most) \(\ell k\), and its low-rank factorization has the same right factor for all \(t\geqslant 0\).
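For reference, the sketch below shows a dense, non-factored ADI sweep for \(AX - XB = F\), with the \(\alpha \)-shifts applied in the A-side solves and the \(\beta \)-shifts in the B-side solves, matching the spaces displayed above. fADI produces the same iterate while maintaining X in factored low-rank form, so this is only meant to convey the structure of the iteration.

```python
import numpy as np

def adi_sylvester(A, B, F, alphas, betas):
    """Dense ADI for AX - XB = F; the exact solution is a fixed point of each
    half step, and the error is multiplied by a rational function of A and B."""
    n, m = F.shape
    X = np.zeros((n, m))
    for al, be in zip(alphas, betas):
        # half step: (A - alpha I) X_half = X (B - alpha I) + F
        X_half = np.linalg.solve(A - al * np.eye(n), X @ (B - al * np.eye(m)) + F)
        # full step: X_new (B - beta I) = (A - beta I) X_half - F
        rhs = (A - be * np.eye(n)) @ X_half - F
        X = np.linalg.solve((B - be * np.eye(m)).conj().T, rhs.conj().T).conj().T
    return X
```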

Theorem 7

Let f(z) be a Cauchy–Stieltjes function and \(X = \mathrm {vec}^{-1}(f({\mathcal {M}}) \mathrm {vec}(F))\), with \({\mathcal {M}} := I \otimes A - B^T\otimes I\), where \(A,-B\) are Hermitian positive definite with spectra contained in [a, b]. Then the singular values of the matrix X satisfy:

$$\begin{aligned} \sigma _{1+\ell k}(X) \leqslant 4 f(2a) \rho _{[a,2b]}^\ell \Vert F \Vert _2. \end{aligned}$$

Proof

Let us define \({\widehat{X}}_{\ell }:=\int _0^\infty X^{\text {ADI}}_{\ell }(t)\mu (t)dt=\int _0^\infty U^{\text {ADI}}_{\ell }(t)Y^{\text {ADI}}_{\ell }(t)\mu (t)dt\cdot (V^{\text {ADI}}_{\ell })^*\). Since \(V^{\text {ADI}}_{\ell }\) does not depend on t, we can take it out of the integral, and therefore \({\widehat{X}}_{\ell }\) has rank bounded by \(\ell k\). Then, applying the Eckart–Young theorem we have the inequality

$$\begin{aligned} \sigma _{1+\ell k}(X)&\leqslant \Vert X-{\widehat{X}}_{\ell } \Vert _2 \leqslant \int _0^\infty \Vert X_t-X^{\text {ADI}}_{\ell }(t) \Vert _2\mu (t)dt \leqslant 4 \int _0^\infty \rho _{[a,2b]}^\ell \Vert X_t \Vert _2\mu (t)dt\\&\leqslant 4\int _0^\infty \frac{\mu (t)}{(t+2a)}dt\ \rho _{[a,2b]}^\ell \Vert F \Vert _2 =4f(2a)\rho _{[a,2b]}^\ell \Vert F \Vert _2, \end{aligned}$$

where we used that \(\Vert X_t \Vert _2 \leqslant \frac{\Vert F \Vert _2}{t+2a}\), which follows from [21, Theorem 2.1].

\(\square \)

5 Conclusions, possible extensions and open problems

We have presented a pole selection strategy for rational Krylov methods approximating the action of Laplace–Stieltjes and Cauchy–Stieltjes matrix functions on a vector. The poles have been shown to provide a fast convergence rate, and explicit error bounds have been established. The theory of equidistributed sequences has been used to obtain nested sequences of poles with the same asymptotic convergence rate. Then, the approach presented in [8], which addresses the case of a matrix argument with a Kronecker sum structure, has been extended to rational Krylov subspaces. We have proposed a pole selection strategy that ensures a good exponential rate of convergence of the error norm. From the theoretical perspective, we have established decay bounds for the singular values of \(\mathrm {vec}^{-1}(f(I\otimes A-B^T\otimes I)\mathrm {vec}(F))\) when F is low-rank. This generalizes the well-known low-rank approximability property of the solutions of Sylvester equations with low-rank right hand sides. Also in the Kronecker-structured case, relying on equidistributed sequences has been shown to be an effective practical choice.

There are some research lines that naturally stem from this work. For instance, we have assumed for simplicity that we work with Hermitian positive definite matrices. This assumption might be relaxed by considering non-normal matrices whose field of values is included in the right half-plane. Designing an optimal pole selection for such problems would require the solution of Zolotarev problems on more general domains, and deserves further study. In addition, since the projected problem is also non-normal, the fast diagonalization approach for the evaluation proposed in Sect. 4.1 might not be applicable or stable, and an alternative approach would need to be investigated.