1 Introduction

The use of graph models to represent complex structures is extremely widespread, ranging from real and digital social networks, to transportation networks, to networks of chemical reactions and many others. It is often of interest in applications to study diffusion processes taking place on a network, that evolve in time according to the distribution of edges in the underlying graph. Such a process can be described in terms of a system of ordinary differential equations in time, with the Laplacian matrix L of the graph as the coefficient matrix, and its solution at time t can be written as \(\varvec{u}(t) = \exp (-t L^T) \varvec{u}(0)\). An expression of this form can be computed efficiently using a polynomial Krylov method.
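For illustration, a minimal Python sketch (ours, not part of the original discussion) of such a computation on a toy digraph; SciPy's expm_multiply applies the action of the matrix exponential to a vector without forming \(\exp (-t L^T)\) explicitly (via a truncated Taylor scheme rather than a Krylov method):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

# Directed 3-cycle; L = D - A with D the diagonal matrix of out-degrees
A = sp.csr_matrix(np.array([[0, 1, 0],
                            [0, 0, 1],
                            [1, 0, 0]], dtype=float))
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A
u0 = np.array([1.0, 0.0, 0.0])           # initial concentration vector
u1 = expm_multiply(-1.0 * L.T, u0)       # u(t) = exp(-t L^T) u(0) at t = 1
print(u1, u1.sum())                      # total mass is conserved
```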

While a diffusion process is a local phenomenon, there are certain phenomena that allow long-range interactions and are non-local in nature: in the continuous setting, phenomena of this kind have been effectively modelled using fractional powers of the Laplace differential operator, that is \((-\varDelta )^{\alpha }\) for \(\alpha \in {[1/2, 1)}\) (see, e.g., [21, Definition 2]). Analogously, in the context of directed graphs, these phenomena can be well described with a fractional diffusion model, which employs a fractional power of the graph Laplacian instead of the Laplacian itself. Unlike for the continuous Laplace operator, in the discrete case there is no need to restrict the values of \(\alpha \) to the interval [1/2, 1), and we can use any exponent \(\alpha \in (0,1)\). This modelling approach has been recently studied in [29] in the undirected case, and in [2] for more general directed graphs; we refer to the book [26] for more information on fractional dynamics. We mention that there are alternative approaches for modelling nonlocal phenomena on graphs, e.g. the one based on the transformed k-path Laplacian presented in [14, 15]: this model shares some properties with fractional diffusion, such as superdiffusive behaviour on infinite one-dimensional graphs.

In this paper we focus on the computational aspect of fractional dynamics, in particular on the efficient computation of the solution to the fractional diffusion equation on both undirected and directed graphs using rational Krylov methods. The solution at time \(t \ge 0\) can be expressed as \(\varvec{u}(t) = f(L^T) \varvec{u}(0)\), where \(f(z) = \exp (-t z^\alpha )\) for \(\alpha \in (0,1)\). The function f has a branch cut on the negative real axis \((-\infty , 0]\), and hence it is not analytic in a neighbourhood of the spectrum of the graph Laplacian L, which is a singular M-matrix. The lack of a region of analyticity around the spectrum of L causes the error bounds for Krylov methods based on the spectrum and the field of values to be unusable, and in practice this also negatively affects the actual convergence rate of these methods. Moreover, since f is not analytic, it is preferable to approximate \(f(L^T) \varvec{u}(0)\) using rational Krylov methods instead of polynomial methods. In our experiments, in addition to the polynomial Krylov method, we investigate the performance of the Shift-and-Invert Krylov method with two different shifts, and of a rational Krylov method with asymptotically optimal poles presented in [24] for Laplace–Stieltjes functions.

To improve convergence, we propose to remove the singularity of the graph Laplacian with either a rank-one shift or a projection onto a subspace. This enables us to use any rational Krylov method on the (now nonsingular) modified Laplacian, which exhibits faster convergence, and then recover the solution to the original problem at little additional cost. We also show that the improved convergence of the projection method can be achieved without explicitly projecting the graph Laplacian, by suitably manipulating the initial vector (see Sect. 5.2.2). These techniques can be applied with no preliminary computations to any undirected graph, while for a general digraph they only require the solution of the singular linear system \(L^T \varvec{z} = \varvec{0}\).

The paper is organized as follows. First, in Sect. 2 we provide the necessary background and notation on graphs, and we introduce the graph Laplacian L. In Sect. 3 we define the fractional powers \(L^\alpha \), \(\alpha \in (0,1)\), and we briefly discuss the properties of the fractional Laplacian. In Sect. 4 we introduce rational Krylov methods for the computation of matrix functions, and in Sect. 5 we present some techniques to remove the singularity of the graph Laplacian. In Sect. 6 we conduct some numerical experiments on real-world networks to compare the performance of different Krylov methods and demonstrate the effectiveness of the desingularization techniques proposed in Sect. 5.

2 Background and notation regarding graphs

A directed graph, or digraph, is a pair \(G = (V,E)\), where \(V = \{ v_1, \dots , v_n \}\) is a set of n nodes or vertices, and \(E \subseteq V \times V\) is the set of edges. A digraph can be represented with its adjacency matrix A, an \(n \times n\) matrix whose entries are

$$\begin{aligned} A_{ij} = {\left\{ \begin{array}{ll} 1 &{} \text {if} \,\, (i,j) \in E, \\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$

The out-degree \(d_i\) of a node i is defined as the number of edges going out from i, i.e. edges of the form \((i, \ell ) \in E\) for some node \(\ell \in V\). The vector d of out-degrees can be computed as \(\varvec{d} = A \varvec{1}\), where \(\varvec{1}\) denotes the all-ones vector of size n. If we denote by \(D = {{\,\mathrm{diag}\,}}(\varvec{d})\) the diagonal matrix of out-degrees, the (out-degree) graph Laplacian of G is defined as \(L = D - A\).
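These definitions translate directly into code; a small NumPy sketch (ours, for illustration):

```python
import numpy as np

def out_degree_laplacian(A):
    """Out-degree graph Laplacian L = D - A of a 0-1 adjacency matrix A."""
    d = A @ np.ones(A.shape[0])    # vector of out-degrees, d = A 1
    return np.diag(d) - A          # D = diag(d)
```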

Note that one can also define the vector of in-degrees as \(\varvec{d}_\text {in} = A^T \varvec{1}\), and the in-degree graph Laplacian as \(L_\text {in} = {{\,\mathrm{diag}\,}}(\varvec{d}_\text {in}) - A\). Here, we focus solely on the out-degree graph Laplacian L, and we refer to it as the graph Laplacian whenever there is no ambiguity. Most of the properties of L also hold for \(L_\text {in}\), and what we say for the out-degree graph Laplacian can be extended to the case of the in-degree Laplacian with only minor adjustments.

One can also consider weighted graphs, where each edge \((i,j) \in E\) is associated with a positive weight \(w_{ij}\), representing the strength of the connection between nodes i and j; if \((i,j) \notin E\), we write \(w_{ij} = 0\). The matrix \(W = (w_{ij})\) is a weighted adjacency matrix associated with G, and it can be used to define a weighted vector of out-degrees \(\varvec{d}_W = W \varvec{1}\) and a weighted graph Laplacian \(L_W = {{\,\mathrm{diag}\,}}(\varvec{d}_W) - W\). In this paper we only consider unweighted graphs for simplicity, but the techniques that we propose can be applied in the same way to weighted graphs. We also mainly focus on strongly connected graphs, i.e., graphs in which there exists a directed path from node i to node j for any pair of nodes \((i,j)\). Recall that a graph is strongly connected if and only if its adjacency matrix A (and hence also L) is irreducible, i.e., there exists no permutation matrix P such that \(P^T A P\) is block triangular.

2.1 Properties of the graph Laplacian

Here we briefly discuss some properties of the graph Laplacian L, which later will be used in the definition of its fractional powers. We also introduce the classical diffusion equation on graphs.

It follows from its definition that the graph Laplacian L is a singular matrix: indeed, it holds that \(L \varvec{1}= \varvec{0}\). More specifically, the graph Laplacian is a singular M-matrix.

Definition 2.1

(M-matrix [6]) A matrix \(A \in {\mathbb {R}}^{n \times n}\) is an M-matrix if it holds \(A = s I - B\) for some nonnegative matrix B, where \(s \ge \rho (B)\), the spectral radius of B. It is a singular M-matrix if \(s = \rho (B)\).

One can easily prove the following basic result.

Proposition 2.1

The graph Laplacian L of a digraph G has the following properties:

(a) L is a singular M-matrix.

(b) The nonzero eigenvalues of L have positive real part.

(c) 0 is a semisimple eigenvalue of L, i.e. its algebraic multiplicity and geometric multiplicity are the same.

These properties are fundamental for being able to define fractional powers of L, as we will see shortly.

The graph Laplacian is used as the coefficient matrix in the diffusion equation on the graph G. Denote by \(\varvec{u}(t) \in {\mathbb {R}}^n\) a vector of concentrations at time t of a substance that is diffusing on the graph. Up to normalization, we can assume that \(\varvec{u}(t)\) is a probability vector, i.e. that \(\varvec{u}(t) \ge 0\) and \(\varvec{u}(t)^T \varvec{1}= 1\). The diffusion equation on a directed graph reads

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d}{dt}\varvec{u}(t)^T = - \varvec{u}(t)^T L,&{} \quad t \in (0, T],\\ \varvec{u}(0) = \varvec{u}_0 \ge 0,&{} \quad \varvec{u}_0^T \varvec{1}= 1, \end{array}\right. } \end{aligned}$$
(2.1)

and the solution to this system of ordinary differential equations can be explicitly stated in terms of the matrix exponential,

$$\begin{aligned} \varvec{u}(t)^T = \varvec{u}_0^T e^{-t L}. \end{aligned}$$

Using properties of M-matrices, one can easily prove that \(e^{-tL}\) is a stochastic matrix, i.e. that it has nonnegative entries and \(e^{-tL} \varvec{1}= \varvec{1}\), and hence the solution \(\varvec{u}(t)\) is a probability vector at all times \(t \ge 0\). Note that this property would not be preserved if we used column vectors instead of row vectors in (2.1): see, e.g., the discussion in [9].
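The stochasticity of \(e^{-tL}\) is easy to verify numerically; a small sketch on a toy digraph (our example, not from the original):

```python
import numpy as np
from scipy.linalg import expm

# Directed 3-cycle with an extra edge; L = D - A as defined above
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
P = expm(-1.0 * L)                                # t = 1
print(P.min() >= 0.0)                             # entries are nonnegative
print(np.allclose(P @ np.ones(3), np.ones(3)))    # row sums: exp(-tL) 1 = 1
```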

3 The fractional graph Laplacian

In this section, we recall the general definition of a matrix function in terms of the Jordan canonical form, following [19, Section 1.2], and we use it to define the fractional graph Laplacian and the related fractional diffusion process.

Recall that any matrix \(A \in {\mathbb {C}}^{n \times n}\) can be expressed in the Jordan canonical form as

$$\begin{aligned} \begin{aligned} Z^{-1} A Z&= J = {{\,\mathrm{diag}\,}}(J_1, \dots , J_p), \\ J_k =\begin{bmatrix} \lambda _k \\ \end{bmatrix} \quad \text {or} \quad J_k&= \begin{bmatrix} \lambda _k &{} 1 &{} &{} \\ &{} \lambda _k &{} \ddots &{} \\ &{} &{} \ddots &{} 1 \\ &{} &{} &{} \lambda _k \end{bmatrix} \in {\mathbb {C}}^{m_k \times m_k}, \end{aligned} \end{aligned}$$
(3.1)

where Z is nonsingular and \(m_1 + m_2 + \dots + m_p = n\). An eigenvalue \(\lambda \) is semisimple if and only if all the Jordan blocks associated with \(\lambda \) are \(1 \times 1\). We have the following definition.

Definition 3.1

The function f is said to be defined on the spectrum of A if the values

$$\begin{aligned} f^{(j)}(\lambda _i), \qquad j = 0, \dots , m_i-1, \quad i = 1, \dots , p, \end{aligned}$$

exist, where \(f^{(0)} = f\) and \(f^{(j)}\) is the j-th derivative of f.

Provided that f is defined on the spectrum of A, the matrix function f(A) can be defined for any matrix using the Jordan canonical form.

Definition 3.2

Let f be defined on the spectrum of \(A \in {\mathbb {C}}^{n \times n}\), and let A have the Jordan canonical form (3.1). Then we define

$$\begin{aligned} f(A) = Z f(J) Z^{-1} = Z {{\,\mathrm{diag}\,}}\big (f(J_1), \dots , f(J_p)\big )Z^{-1}, \end{aligned}$$

where

$$\begin{aligned} f(J_k) = \begin{bmatrix} f(\lambda _k) \end{bmatrix} \quad \text {or} \quad f(J_k) = \begin{bmatrix} f(\lambda _k) &{} f'(\lambda _k) &{} \dots &{} \dfrac{f^{(m_k-1)}(\lambda _k)}{(m_k-1)!} \\ &{} f(\lambda _k) &{} \ddots &{} \vdots \\ &{} &{} \ddots &{} f'(\lambda _k) \\ &{} &{} &{} f(\lambda _k) \end{bmatrix}. \end{aligned}$$

By Proposition 2.1, the function \(f(z) = z^\alpha \), \(\alpha \in (0,1)\) is defined on the spectrum of the graph Laplacian L, since the eigenvalue 0 is semisimple and all other eigenvalues are in the right half-plane. Here we denote by \(z^\alpha \) the branch of the fractional power with a cut on the negative real axis \((-\infty , 0]\), i.e. if \(z = \rho e^{i \theta }\), with \(\rho > 0\) and \(\theta \in (-\pi , \pi )\), then \(z^\alpha = \rho ^\alpha e^{i \alpha \theta }\).

With the above definition, the fractional Laplacian \(L^\alpha \) is still an M-matrix. Indeed, we have the following result.

Theorem 3.1

([2]) If A is a singular M-matrix with a semisimple 0 eigenvalue, then \(A^\alpha \) is a singular M-matrix for every \(\alpha \in (0,1]\).

Moreover, since \(L^\alpha \varvec{1}= \varvec{0}\), we can interpret the fractional graph Laplacian as the Laplacian of a weighted graph on the same set of nodes as G, and we can use it to define a fractional diffusion process on G, with a system of differential equations analogous to (2.1):

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d}{dt} \varvec{u}(t)^T = -\varvec{u}(t)^T L^\alpha ,&{} \quad t \in (0, T],\\ \varvec{u}(0) = \varvec{u}_0 \ge 0,&{} \quad \varvec{u}_0^T \varvec{1}= 1. \end{array}\right. } \end{aligned}$$
(3.2)

The solution of this system can be explicitly written in the form

$$\begin{aligned} \varvec{u}(t)^T = \varvec{u}_0^T e^{-t L^\alpha }, \qquad t \ge 0. \end{aligned}$$
(3.3)

As in the case of classical diffusion, the solution \(\varvec{u}(t)\) to (3.2) is a probability vector at all times \(t \ge 0\).
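Both Theorem 3.1 and the identity \(L^\alpha \varvec{1}= \varvec{0}\) can be checked numerically on a small example. A sketch for an undirected graph, where \(L^\alpha \) can be computed through the eigendecomposition (the toy graph is ours, for illustration):

```python
import numpy as np

alpha = 0.5
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)  # path graph
L = np.diag(A.sum(axis=1)) - A
w, U = np.linalg.eigh(L)                        # L is symmetric here
La = (U * np.clip(w, 0.0, None)**alpha) @ U.T   # L^alpha, with 0^alpha = 0
print(np.allclose(La @ np.ones(4), 0.0))        # L^alpha 1 = 0
off = La - np.diag(np.diag(La))
print(off.max() <= 1e-12)                       # off-diagonal entries are <= 0
```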

4 Rational Krylov methods

In this section we briefly introduce rational Krylov methods for the computation of expressions of the form \(f(A) \varvec{b}\), with the goal of applying them to the computation of the solution to the fractional diffusion equation (3.2), which can be expressed in the form \(\varvec{u}(t) = f(L^T) \varvec{u}_0\), where \(f(z) = e^{-t z^\alpha }\). For a more extensive treatment of rational Krylov methods, including the problem of the selection of poles, we refer, e.g., to [17, 18].

In many applications it is required to compute the product \(f(A) \varvec{b}\), where A is a large and sparse matrix. In these cases, computing the whole matrix f(A) and then forming the product \(f(A) \varvec{b}\) is extremely expensive and often unfeasible; moreover, the matrix function f(A) is generally dense even when the original matrix A is sparse, making the full computation of f(A) costly also in terms of storage for large-scale problems. A rational Krylov method overcomes these difficulties by directly approximating the product \(f(A) \varvec{b}\) using a low-dimensional search space, without explicitly computing f(A). Each iteration of a rational Krylov method requires the solution of a shifted linear system involving A, making the iterations more expensive than those of a polynomial Krylov method, which only requires a matrix-vector product with A at each iteration. However, the increased cost per iteration of a rational method is offset by the often superior approximation properties of rational functions, compared to polynomial approximation, especially for functions that are not analytic.

For any \(k \ge 1\), define the rational Krylov subspace of order k associated with A and \(\varvec{b}\) as

$$\begin{aligned} {\mathcal {Q}}_k(A, \varvec{b}) = q_{k-1}(A)^{-1} {{\,\mathrm{span}\,}}\{ \varvec{b}, A \varvec{b}, \dots , A^{k-1} \varvec{b} \}, \end{aligned}$$

where \(q_{k-1}(z) = \displaystyle \prod _{j = 1}^{k-1} (1 - z/\xi _j)\) is a polynomial identified by the poles \((\xi _k)_{k \ge 1}\), which are located in \(({\mathbb {C}}\cup \{\infty \})\setminus (\sigma (A) \cup \{0\})\). If all poles are equal to \(\infty \), the rational Krylov subspace \({\mathcal {Q}}_k(A, \varvec{b})\) reduces to the polynomial Krylov subspace

$$\begin{aligned} {\mathcal {P}}_k(A, \varvec{b}) = {{\,\mathrm{span}\,}}\{ \varvec{b}, A \varvec{b}, \dots , A^{k-1}\varvec{b}\} = \{ p_{k-1}(A) \varvec{b} \,:\, p \in \varPi _{k-1} \}, \end{aligned}$$

where \(\varPi _{k-1}\) denotes the set of polynomials of degree \(\le k-1\).

The rational Krylov subspaces \({\mathcal {Q}}_k(A,\varvec{b})\) form a sequence of nested subspaces, each of dimension k, as long as \(k \le K\), where K is the invariance index of the sequence, i.e. the smallest index such that \({\mathcal {Q}}_K(A, \varvec{b}) = {\mathcal {Q}}_{K+1}(A,\varvec{b})={\mathcal {Q}}_{K+j}(A,\varvec{b})\) for all \(j \ge 0\).

Generally it is assumed that \(k \le K\). If this is the case, we can compute an orthonormal basis of \({\mathcal {Q}}_k(A, \varvec{b})\) using Ruhe’s rational Arnoldi algorithm [30], which is summarized in Algorithm 1. The first basis vector is chosen as \(\varvec{v}_1 = \varvec{b}/{\Vert } \varvec{b}{\Vert }_2\). After j iterations, the columns of the matrix \(V_j = [\varvec{v}_1, \dots , \varvec{v}_j] \in {\mathbb {C}}^{n\times j}\) form an orthonormal basis of \({\mathcal {Q}}_j(A, \varvec{b})\). To construct a basis of \({\mathcal {Q}}_{j+1}(A, \varvec{b})\), the Arnoldi algorithm makes use of a continuation vector \(\varvec{w}_j \in {\mathcal {Q}}_{j}(A, \varvec{b})\) such that \((A-\xi _{j}I)^{-1} A \varvec{w}_j \in {\mathcal {Q}}_{j+1}(A, \varvec{b}) \setminus {\mathcal {Q}}_j(A, \varvec{b})\). As is customary, we assume that the last computed basis vector \( \varvec{v}_j\) is a continuation vector, and we compute \( \varvec{v}_{j+1}\) by orthonormalizing \(\varvec{w}_j = (I - A/\xi _j)^{-1}A \varvec{v}_j\) against all the previous basis vectors.

Algorithm 1: Rational Arnoldi algorithm (Ruhe [30]) for computing an orthonormal basis of \({\mathcal {Q}}_k(A, \varvec{b})\)
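Since the original pseudocode is not reproduced here, the following Python sketch (ours, with hypothetical function names) conveys the structure of Algorithm 1 for a sparse matrix A and finite real poles; np.inf encodes a polynomial step:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def rational_arnoldi(A, b, poles):
    """Build an orthonormal basis V of Q_k(A, b), with k = len(poles) + 1.
    poles are the shifts xi_1, ..., xi_{k-1}; np.inf gives a polynomial step."""
    n = A.shape[0]
    V = np.zeros((n, len(poles) + 1))
    V[:, 0] = b / np.linalg.norm(b)
    for j, xi in enumerate(poles):
        if np.isinf(xi):
            w = A @ V[:, j]                      # pole at infinity
        else:
            F = splu(sp.csc_matrix(A - xi * sp.eye(n)))
            w = F.solve(A @ V[:, j])             # w = (A - xi I)^{-1} A v_j
        for i in range(j + 1):                   # modified Gram-Schmidt
            w = w - (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V
```

For the Shift-and-Invert subspaces discussed below all poles coincide, so the factorization of \((A - \xi I)\) should be computed once outside the loop and reused.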

An approximation to \(\varvec{y} = f(A) \varvec{b}\) in the subspace \({\mathcal {Q}}_k(A,\varvec{b})\) can be computed as

$$\begin{aligned} \bar{\varvec{y}}_k = V_k f(V_k^* A V_k) V_k^* \varvec{b}. \end{aligned}$$
(4.1)

For the solution of the fractional diffusion problem, we are going to use real poles (in particular, located on \((-\infty , 0)\)), so the matrices \(V_k\) are going to be real.
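Given a basis \(V_k\), for instance from the rational_arnoldi sketch above, the approximation (4.1) only requires evaluating f on the small projected matrix; a sketch, assuming \(V_k^T A V_k\) is nonsingular so that the fractional power is defined:

```python
import numpy as np
from scipy.linalg import expm, fractional_matrix_power

def krylov_approx(A, b, V, t, alpha):
    """Approximation (4.1) with f(z) = exp(-t z^alpha): V f(V^T A V) V^T b."""
    Ak = V.T @ (A @ V)                 # small k x k projected matrix
    fAk = expm(-t * fractional_matrix_power(Ak, alpha))
    return V @ (fAk @ (V.T @ b))
```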

When the sequence of poles \((\xi _k)_{k \ge 1}\) consists of a single repeated pole \(\xi \), the rational Krylov subspaces that are generated are known as Shift-and-Invert Krylov subspaces, and they can be written more simply as

$$\begin{aligned} {\mathcal {S}}_k(A,\varvec{b}) := (A - \xi I)^{-k+1} {{\,\mathrm{span}\,}}\{ \varvec{b}, \dots , A^{k-1} \varvec{b} \} = {\mathcal {P}}_k((A - \xi I)^{-1}, \varvec{b}), \end{aligned}$$

i.e. each of them corresponds to a polynomial Krylov subspace relative to the matrix \((A - \xi I)^{-1}\).

The Shift-and-Invert method was first investigated for the approximation of matrix functions in [13, 27]. Even though this type of Krylov subspace sacrifices some flexibility in the choice of the poles, it is appealing because at each iteration we are required to solve a linear system with the same matrix \((A - \xi I)\); this allows us, for instance, to compute an LU factorization of \((A - \xi I)\) only once, and then we can apply it at each iteration to solve the linear systems at a reduced cost. Therefore, although a Shift-and-Invert method will typically require more iterations to converge compared to a rational Krylov method with a carefully chosen sequence of poles, it can still be competitive in terms of execution time.

The effectiveness of the approximation to \(f(A) \varvec{b}\) given by (4.1) can be related to the problem of approximating f with rational functions. Denote by \({\mathbb {W}}(A)\) the field of values of A [20, Definition 1.1.1], i.e. the set

$$\begin{aligned} {\mathbb {W}}(A) = \{ \varvec{x}^*A \varvec{x} : \varvec{x} \in {\mathbb {C}}^n,\, {\Vert }\varvec{x}{\Vert }_2 = 1 \}. \end{aligned}$$

The field of values, also known as numerical range, is a convex and compact set which contains the spectrum \(\sigma (A)\), and it reduces to the convex hull of \(\sigma (A)\) when A is a normal matrix.
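The boundary of \({\mathbb {W}}(A)\) can be sampled numerically with the standard rotation technique (Johnson's method; the sketch below is ours, not part of the original): for each angle \(\theta \), the top eigenvector of the Hermitian part of \(e^{i\theta } A\) yields a boundary point \(\varvec{x}^* A \varvec{x}\). This is useful, e.g., to check whether \({\mathbb {W}}(L)\) intersects the negative real axis, an issue discussed at the end of this section.

```python
import numpy as np

def fov_boundary(A, m=180):
    """Sample m boundary points of the field of values W(A)."""
    pts = []
    for theta in np.linspace(0.0, 2.0 * np.pi, m, endpoint=False):
        R = np.exp(1j * theta) * A
        H = 0.5 * (R + R.conj().T)        # Hermitian part of e^{i theta} A
        w, V = np.linalg.eigh(H)
        x = V[:, -1]                      # eigenvector of the largest eigenvalue
        pts.append(x.conj() @ A @ x)      # supporting point of W(A)
    return np.array(pts)
```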

The following theorem by Crouzeix and Palencia provides a bound for the norm of a matrix function using the field of values \({\mathbb {W}}(A)\).

Theorem 4.1

([10, 11]) Assume that f is analytic in a neighbourhood of \({\mathbb {W}}(A)\). Then it holds

$$\begin{aligned} {\Vert }f(A){\Vert }_2 \le C {\Vert }f{\Vert }_{{\mathbb {W}}(A)}, \qquad C \le 1 + \sqrt{2}, \end{aligned}$$

where \({\Vert }f{\Vert }_E = \sup _{z \in E} {|}f(z){|}\) for any set \(E \subset {\mathbb {C}}\).

A conjecture by Crouzeix states that the inequality \({\Vert }f(A){\Vert }_2 \le C {\Vert }f{\Vert }_{{\mathbb {W}}(A)}\) holds with \(C = 2\) for any matrix A.

With Theorem 4.1 one can prove the following.

Proposition 4.1

([18, Corollary 3.4]) Let f be analytic in a neighbourhood of \({\mathbb {W}}(A)\). Then the rational Krylov approximation \(\bar{\varvec{y}}_k\) defined in (4.1) satisfies

$$\begin{aligned} {\Vert }f(A) \varvec{b} - \bar{\varvec{y}}_k{\Vert }_2 \le 2C {\Vert }\varvec{b}{\Vert }_2 \min _{p_{k-1} \in \varPi _{k-1}} {\Vert }f - p_{k-1}/q_{k-1}{\Vert }_{{\mathbb {W}}(A)}, \end{aligned}$$
(4.2)

where \(\varPi _{k-1}\) denotes the set of polynomials of degree \(\le k-1\).

The bound given by Proposition 4.1 decays rapidly to zero when f is an entire function (e.g., \(f(z) = e^z\)), or if it has a large region of analyticity surrounding the field of values of A. Unfortunately, in the case of fractional diffusion on graphs, the 0 eigenvalue of the Laplacian is located on the boundary of the region of analyticity of f. Moreover, for most directed graphs the field of values of the Laplacian intersects the negative real axis \((-\infty , 0)\), preventing us from using convergence results based on the field of values. The presence of an eigenvalue at 0 can also be detrimental in practice for the convergence of Krylov methods. With this motivation in mind, in Sect. 5 we propose some techniques to remove the singularity of the graph Laplacian, in order to work with nonsingular matrices and improve the convergence of Krylov methods.

4.1 Laplace–Stieltjes functions

The problem of the selection of poles for rational Krylov methods is a highly active area of research, and many different choices have been proposed in the literature, depending on the function f and on the spectrum of A (see, for instance, [18]). Most of the existing analysis deals with real symmetric matrices, since in that case the field of values reduces to an interval on the real line, and hence the minimization problem (4.2) becomes easier to handle.

A pole selection strategy for the evaluation of \(f(A) \varvec{b}\) was recently proposed in [24] for the case of a Hermitian positive definite matrix A and a Cauchy-Stieltjes or Laplace–Stieltjes function f. For a matrix A with spectrum contained in the positive interval [ab], the choice of poles described in [24] gives after k iterations an error

$$\begin{aligned} {\Vert }f(A)\varvec{b} - \bar{\varvec{y}}_k{\Vert }_2 \sim O\big (\rho _{[a,b]}^{\frac{k}{2}}\big ), \qquad \text {where} \quad \rho _{[a,b]} = \exp \big (-\pi ^2/\log (\tfrac{4b}{a}) \big ). \end{aligned}$$
(4.3)

However, the poles that satisfy the error bound for iteration \(k+1\) are not obtained by adding a new pole to the ones of iteration k, so in order to effectively use this pole selection strategy one would need to decide a priori the number of iterations to be performed. In order to overcome this drawback, in [24, Section 3.5] the authors use the method of equidistributed sequences (EDS) to construct an infinite sequence of poles with the same asymptotic rate of convergence, that can be more easily used in practice. For the details on the construction of this pole sequence, we refer to the discussion in [24, Section 3.5].

In this section we observe that the function \(f(z) = e^{-t z^\alpha }\) is a Laplace–Stieltjes function, and hence we can use the pole sequence proposed in [24] for the fractional diffusion problem (3.2). Even though there are no guarantees on the effectiveness of this pole sequence for general matrices, in our numerical experiments we observed that it provides a good convergence rate even when A is the (singular and nonsymmetric) Laplacian of a directed graph.

Definition 4.1

A function \(f: (0, \infty ) \rightarrow {\mathbb {R}}\) is a Laplace–Stieltjes function if it can be expressed in the form

$$\begin{aligned} f(z) = \int _0^\infty e^{-t z} d\mu (t), \end{aligned}$$

where \(\mu \) is a positive measure on \([0, \infty )\).

The class of Laplace–Stieltjes functions coincides with the class of completely monotonic functions, i.e. infinitely differentiable functions defined on \((0,\infty )\) such that

$$\begin{aligned} (-1)^k f^{(k)}(z) \ge 0 \qquad \forall \, z > 0 \quad \text {and} \quad k \ge 0. \end{aligned}$$
(4.4)

The equivalence between these two classes of functions is known as Bernstein’s theorem [31, Theorem 1.4].

A class of functions which is closely related to completely monotonic functions is the class of Bernstein functions, that consists of all functions \(f : (0, \infty ) \rightarrow {\mathbb {R}}\) of class \(C^\infty \) such that

$$\begin{aligned} f(z) \ge 0 \quad \text {and} \quad (-1)^{k-1}f^{(k)} (z) \ge 0, \qquad \forall \, k \ge 1 \quad \text {and} \quad \forall \, z > 0. \end{aligned}$$
(4.5)

Observe that a nonnegative function \(f : (0,\infty ) \rightarrow {\mathbb {R}}\) is a Bernstein function if and only if \(f'\) is a completely monotonic function. The fractional power \(f(z) = z^\alpha \), for \(\alpha \in (0,1)\), is an example of a Bernstein function. By [31, Theorem 3.7], if f is a positive Bernstein function, then the function \(g(z) = e^{-t f(z)}\) is completely monotonic for all \(t > 0\). This proves that \(g(z) = e^{-t z^\alpha }\) is a completely monotonic (equivalently, Laplace–Stieltjes) function for all \(t > 0\) and \(\alpha \in (0,1)\). This fact can also be easily proven directly by computing the derivatives of g and checking that condition (4.4) is verified.
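The direct verification mentioned above can be automated symbolically; a small sympy spot-check of condition (4.4) at a sample point (illustrative only, not a proof; the parameter values are arbitrary):

```python
import sympy as sp

z = sp.symbols('z', positive=True)
t, alpha = 2, sp.Rational(1, 2)
g = sp.exp(-t * z**alpha)               # g(z) = exp(-t z^alpha)
for k in range(6):
    dk = sp.diff(g, z, k)               # k-th derivative of g
    val = float(dk.subs(z, sp.Rational(3, 2)))
    print(k, (-1)**k * val >= 0)        # (4.4): prints True for every k
```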

5 Dealing with the singularity

As we have discussed previously, the functions \(f(z) = z^\alpha \) and \(g(z) = e^{-t z^\alpha }\) that are involved in fractional dynamics are not analytic at \(z=0\). Since the graph Laplacian L always has a zero eigenvalue, the convergence of rational Krylov methods for the computation of \(f(L^T) \varvec{b}\) and \(g(L^T) \varvec{b}\) may be hindered by the fact that the function has no region of analyticity surrounding the spectrum of L.

In this section we propose a rank-one shift and a subspace projection that can be used to transform the graph Laplacian into a nonsingular matrix, and we provide simple formulas that link f(L) and \(f(L^T)\varvec{b}\) with functions of the transformed matrix. We are also going to show that Krylov methods directly applied to the singular graph Laplacian can inherit the improved convergence of the projection approach, at least in exact arithmetic, provided that the initial vector \(\varvec{b}\) is suitably modified.

We mention that the idea of removing the singularity when working with a singular Laplacian has already appeared in previous literature. For instance, in [7] the authors consider finite element approximations of the pure Neumann problem, and they use projections to restrict the problem to the subspace of functions that are orthogonal to constants, where the continuous Laplace operator is nonsingular.

We present these techniques in detail for strongly connected directed graphs. Recall that in this case the eigenvalue 0 of the Laplacian L is simple.

5.1 Rank-one shift

Recall that \(L \varvec{1}= \varvec{0}\), and let \(\varvec{z} > \varvec{0}\) be such that \(\varvec{z}^T L = \varvec{0}^T\) and \(\varvec{z}^T \varvec{1}= 1\) (the positivity of \(\varvec{z}\) is a consequence of the Perron-Frobenius Theorem [25]). The vector \(\varvec{z}\) is uniquely defined by the above identities if the graph G is strongly connected.

The right and left eigenvectors \(\varvec{1}\) and \(\varvec{z}\) can be respectively completed to a right and left Jordan basis for L with two matrices \(R,\, S \in {\mathbb {C}}^{n \times (n-1)}\), so that we have

$$\begin{aligned} Z^{-1} L Z = J = \begin{bmatrix} 0 &{} 0 \\ 0 &{} J_1 \end{bmatrix}, \qquad \text {where } Z = \begin{bmatrix} \varvec{1}&{} R \\ \end{bmatrix} \quad \text {and} \quad Z^{-1} = \begin{bmatrix} \varvec{z}^T \\ S^T \end{bmatrix}. \end{aligned}$$

The matrix \(J_1 \in {\mathbb {C}}^{(n-1) \times (n-1)}\) contains all the other Jordan blocks of L, which correspond to nonzero eigenvalues.

Now, denoting by \(\varvec{e}_1 \in {\mathbb {R}}^n\) the first vector of the canonical basis, observe that \(\varvec{1}\varvec{z}^T = Z \varvec{e}_1 \varvec{e}_1^T Z^{-1}\), and hence for all \(\theta \in {\mathbb {R}}\) we have

$$\begin{aligned} L + \theta \varvec{1}\varvec{z}^T = Z \begin{bmatrix} \theta &{} 0 \\ 0 &{} J_1 \end{bmatrix}Z^{-1}, \end{aligned}$$

i.e. the matrix \(L + \theta \varvec{1}\varvec{z}^T\) has the same spectrum as L except for the eigenvalue 0, which is replaced by \(\theta \). Therefore, using basic properties of matrix functions, for any function f defined on the spectra of L and \(L + \theta \varvec{1}\varvec{z}^T\) it holds

$$\begin{aligned} f(L + \theta \varvec{1}\varvec{z}^T) = Z \begin{bmatrix} f(\theta ) &{} 0 \\ 0 &{} f(J_1) \end{bmatrix} Z^{-1} = f(L) + [f(\theta ) - f(0)] \varvec{1}\varvec{z}^T. \end{aligned}$$
(5.1)

Identity (5.1) allows us to compute \(f(L + \theta \varvec{1}\varvec{z}^T)\) instead of f(L) and then recover the latter at minimal cost. For any \(\theta > 0\) (e.g., \(\theta = 1\)), the matrix \(L + \theta \varvec{1}\varvec{z}^T\) is nonsingular and all its eigenvalues have strictly positive real part, so we expect Krylov methods to converge faster when f has a branch cut on \((-\infty , 0]\), e.g. for \(f(z) = z^{\alpha }\).

In particular, for fractional diffusion the objective is the computation of \(f(L^T)\varvec{b}\) for a probability vector \(\varvec{b}\) and \(f(z) = e^{-t z^\alpha }\), and identity (5.1) becomes

$$\begin{aligned} f(L^T)\varvec{b} = f(L^T + \theta \varvec{z} \varvec{1}^T)\varvec{b} + [f(0) - f(\theta )]\varvec{z}. \end{aligned}$$
(5.2)
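A quick numerical check of (5.2) on a toy undirected graph, where \(\varvec{z} = \frac{1}{n}\varvec{1}\) and all matrices involved are symmetric, so that f can be evaluated through an eigendecomposition (the graph and the helper function are ours, for illustration):

```python
import numpy as np

t, alpha, theta = 1.0, 0.5, 1.0
def f_sym(M):                          # f(M) = expm(-t M^alpha), M symmetric psd
    w, U = np.linalg.eigh(M)
    return (U * np.exp(-t * np.clip(w, 0.0, None)**alpha)) @ U.T

# 4-cycle: L is symmetric and z = (1/n) * ones
A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
n = L.shape[0]
z = np.ones(n) / n
b = np.array([1.0, 0.0, 0.0, 0.0])     # probability vector
shifted = L.T + theta * np.outer(z, np.ones(n))
lhs = f_sym(L.T) @ b
rhs = f_sym(shifted) @ b + (1.0 - np.exp(-t * theta**alpha)) * z  # f(0) - f(theta)
print(np.linalg.norm(lhs - rhs))       # agreement up to round-off
```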

When the matrix L is large and sparse and a rational Krylov method is used to approximate \(f(L^T + \theta \varvec{z} \varvec{1}^T)\), it would be preferable to solve shifted linear systems involving the dense matrix \(L^T + \theta \varvec{z} \varvec{1}^T\) without explicitly forming it. This can be done by using the Sherman-Morrison formula: for an invertible matrix A and two vectors \(\varvec{u}\), \(\varvec{v}\) such that \(1 + \varvec{v}^T A^{-1} \varvec{u} \ne 0\), the matrix \(A + \varvec{u} \varvec{v}^T\) is invertible and it holds

$$\begin{aligned} (A + \varvec{u} \varvec{v}^T)^{-1} = A^{-1} - \frac{A^{-1} \varvec{u} \varvec{v}^T A^{-1}}{1 + \varvec{v}^T A^{-1} \varvec{u}}. \end{aligned}$$
(5.3)

In our setting, for a pole \(\xi \in (-\infty , 0)\) the invertibility condition \(1 + \varvec{1}^T (L^T - \xi I)^{-1} \varvec{z} \ne 0\) is always satisfied, since \((L^T - \xi I)^{-1} \ge 0\), being the inverse of a nonsingular M-matrix. By applying the identity (5.3) with \(A = L^T - \xi I\), and observing that \((L^T - \xi I)^{-1} \varvec{z} = -\xi ^{-1} \varvec{z}\) and \(\varvec{1}^T(L^T - \xi I)^{-1} = -\xi ^{-1} \varvec{1}^T\), we obtain

$$\begin{aligned} (L^T + \theta \varvec{z} \varvec{1}^T -\xi I)^{-1} &= (L^T - \xi I)^{-1} - \frac{\xi ^{-2}\theta }{1-\xi ^{-1}\theta } \varvec{z} \varvec{1}^T \\ &= (L^T - \xi I)^{-1} + \frac{\theta }{\xi (\theta -\xi )} \varvec{z} \varvec{1}^T. \end{aligned}$$
(5.4)

Remark 5.1

We mention that the Sherman-Morrison formula has already been applied in the literature in the context of rational Krylov methods. For instance, in [32, Section 3.1] the authors use the Sherman-Morrison-Woodbury formula in the construction of an “augmented” Krylov subspace associated with a singular matrix, arising in connection with the solution of a constrained Sylvester equation.

Remark 5.2

Recalling the Jordan canonical form of L, we have

$$\begin{aligned} Z^{-1} \xi (L - \xi I)^{-1} Z = \begin{bmatrix} -1 &{} 0 \\ 0 &{} \xi (J_1 - \xi I)^{-1} \end{bmatrix}, \end{aligned}$$

where \((J_1 - \xi I)\) is invertible for all \(\xi \le 0\), since all the eigenvalues of \(J_1\) have positive real part. By taking the limit for \(\xi \rightarrow 0^{-}\) in the above expression we get

$$\begin{aligned} \lim _{\xi \rightarrow 0^{-}} \xi (L^T -\xi I)^{-1} = - \varvec{z} \varvec{1}^T, \end{aligned}$$
(5.5)

and hence for small \(\xi < 0\) and any vector \(\varvec{w}\) the identity (5.4) becomes

$$\begin{aligned} (L^T + \theta \varvec{z} \varvec{1}^T -\xi I)^{-1} \varvec{w} &= (L^T - \xi I)^{-1}\varvec{w} + \frac{\theta }{\xi (\theta -\xi )} (\varvec{1}^T \varvec{w}) \varvec{z} \\ &\approx -\xi ^{-1} (\varvec{1}^T \varvec{w}) \varvec{z} + \frac{\theta }{\xi (\theta -\xi )} (\varvec{1}^T \varvec{w}) \varvec{z}, \end{aligned}$$
(5.6)

i.e. we are very close to subtracting two multiples of the vector \(\varvec{z}\) of approximately the same length. Hence formula (5.6) may suffer from severe numerical cancellation when the pole \(\xi \) is very close to the origin, and therefore its use is not advised in that situation. Indeed, in our numerical experiments we observed a large loss of precision when solving linear systems with formula (5.6) for poles \(\xi < 0\) of magnitude around \(10^{-6}\).

In order to address the issue mentioned in Remark 5.2, we now derive an alternative way to compute the solution of the shifted linear system \((L^T - \xi I ) \varvec{\phi } = \varvec{w}\), in order to avoid the cancellation in (5.6) when \(\xi \) is small. We have

$$\begin{aligned} \varvec{1}^T \varvec{w} = \varvec{1}^T (L^T - \xi I) \varvec{\phi } = -\xi \, \varvec{1}^T \varvec{\phi } \,\implies \, \varvec{1}^T \varvec{\phi } = -\xi ^{-1} \varvec{1}^T \varvec{w}, \end{aligned}$$

so we can define \(\varvec{\psi } := \varvec{\phi } + \xi ^{-1} (\varvec{1}^T \varvec{w}) \varvec{z}\), and it holds by construction that \(\varvec{1}^T \varvec{\psi } = 0\). It is also straightforward to verify that \(\varvec{\psi }\) is the solution to the linear system

$$\begin{aligned} (L^T - \xi I) \varvec{\psi } = \varvec{w} - (\varvec{1}^T \varvec{w}) \varvec{z}, \end{aligned}$$

and that the vector \(\varvec{w} - (\varvec{1}^T \varvec{w}) \varvec{z}\) is orthogonal to \(\varvec{1}\). Hence, we can compute \(\varvec{\phi }\) as

$$\begin{aligned} \varvec{\phi } = \varvec{\psi } - \xi ^{-1} (\varvec{1}^T \varvec{w}) \varvec{z}, \qquad \text {where} \quad (L^T - \xi I) \varvec{\psi } = \varvec{w} - (\varvec{1}^T \varvec{w})\varvec{z}. \end{aligned}$$
(5.7)

With formula (5.7), we have explicitly separated a component of the solution that is proportional to \(\xi ^{-1} \varvec{z}\). By substituting (5.7) in (5.6), we obtain

$$\begin{aligned} (L^T + \theta \varvec{z} \varvec{1}^T - \xi I)^{-1} \varvec{w} &= \varvec{\phi } + \frac{\theta }{\xi (\theta - \xi )} (\varvec{1}^T \varvec{w}) \varvec{z} \\ &= \varvec{\psi } - \xi ^{-1} (\varvec{1}^T \varvec{w}) \varvec{z} + \frac{\theta }{\xi (\theta - \xi )} (\varvec{1}^T \varvec{w}) \varvec{z} \\ &= \varvec{\psi } + \frac{\varvec{1}^T \varvec{w}}{\theta - \xi } \, \varvec{z}. \end{aligned}$$
(5.8)

Observe that cancellation is avoided when using (5.8), because the subtraction of the two close multiples of the vector \(\varvec{z}\) is performed analytically. Moreover, because of (5.5) and since \((\varvec{w} - (\varvec{1}^T \varvec{w})\varvec{z}) \perp \varvec{1}\), we have

$$\begin{aligned} \lim _{\xi \rightarrow 0^-} \xi (L^T - \xi I)^{-1} (\varvec{w} - (\varvec{1}^T \varvec{w})\varvec{z}) = \varvec{0}, \end{aligned}$$

so for small \(\xi < 0\) we do not expect \(\varvec{\psi }\) to have a component of order \(\xi ^{-1}\) along the vector \(\varvec{z}\) (note that, in general, this argument fails for \(\varvec{\phi }\)). Our numerical experiments confirm that the use of formula (5.8) fixes the problem of cancellation.
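For concreteness, a minimal sketch of a solve based on (5.8); the helper name solve_shifted is ours and stands for any cached solver for \((L^T - \xi I)\), e.g. a precomputed sparse LU factorization:

```python
import numpy as np

def desingularized_solve(solve_shifted, z, theta, xi, w):
    """Cancellation-free evaluation of (5.8):
    returns (L^T + theta z 1^T - xi I)^{-1} w.
    solve_shifted(r) must return (L^T - xi I)^{-1} r;
    z is the left null vector of L^T, normalized so that 1^T z = 1."""
    s = w.sum()                              # 1^T w
    psi = solve_shifted(w - s * z)           # right-hand side is orthogonal to 1
    return psi + (s / (theta - xi)) * z
```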

Remark 5.3

Note that if the graph is undirected, or more generally if it is balanced (i.e. each node has equal in- and out-degree), we also have \(\varvec{z} = \varvec{1}\) up to a normalization factor, so no preliminary computation is needed to use this approach with the rank-one shift. On the other hand, for a general digraph it is first required to compute a nonzero vector \(\varvec{z}\) such that \(L^T \varvec{z} = \varvec{0}\).

The problem of solving this linear system was recently discussed, for instance, in [3]. One possible approach is to compute an LDU factorization of the transpose of the graph Laplacian, \(L^T = {\mathcal {L}}{\mathcal {D}}{\mathcal {U}}\), where \({\mathcal {L}}\) is unit lower triangular, \({\mathcal {U}}\) is unit upper triangular, and \({\mathcal {D}}\) is diagonal with \({\mathcal {D}}_{ii} > 0\) for \(i = 1, \dots , n-1\) and \({\mathcal {D}}_{nn} = 0\). Such a factorization always exists since L is an irreducible singular M-matrix [6], and it can be computed in a stable way using Gaussian elimination, with no pivoting required [16]. The original linear system \(L^T \varvec{z} = \varvec{0}\) is thus equivalent to \({\mathcal {D}} {\mathcal {U}} \varvec{z} = \varvec{0}\), which can be solved by fixing \(z_n = 1\) and solving the upper triangular linear system \({\mathcal {U}} \varvec{z} = \varvec{e}_n\) via backward substitution. We remark that \({\mathcal {U}}^{-1} \ge 0\), so the vector \(\varvec{z}\) is nonnegative, and it can indeed be normalized so that \(\varvec{z}^T \varvec{1}= 1\). We also mention that when L is sparse this method can be improved by computing the LDU factorization of \(P^T L^T P\) instead of \(L^T\), where P is a permutation matrix suitably chosen to reduce the fill-in in the factors \({\mathcal {L}}\) and \({\mathcal {U}}\). For example, the Matlab routines amd and symrcm can be used for this purpose.
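A dense sketch of this computation (ours), using SciPy's pivoted LU in place of the pivot-free LDU described above; row permutations do not change the null space, and for an irreducible singular M-matrix the zero pivot lands in the last position:

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

def left_null_vector(L):
    """Compute z >= 0 with L^T z = 0 and 1^T z = 1 from an LU factorization."""
    p, l, u = lu(L.T)                   # L^T = p @ l @ u, so ker(L^T) = ker(u)
    n = L.shape[0]
    z = np.empty(n)
    z[-1] = 1.0                         # u[-1, -1] is (numerically) zero
    z[:-1] = solve_triangular(u[:-1, :-1], -u[:-1, -1])
    return z / z.sum()                  # normalize so that 1^T z = 1

# toy directed graph: 3-cycle plus an extra edge
A = np.array([[0, 1, 0], [0, 0, 1], [1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
z = left_null_vector(L)
print(z, np.linalg.norm(L.T @ z))       # z > 0 and L^T z ~ 0
```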

Alternatively, when L is very large and sparse, the linear system \(L^T \varvec{z} = \varvec{0}\) can be solved iteratively, e.g. with a preconditioned GMRES method (see [3] and references therein). Of course, if L is large and we choose to solve \(L^T \varvec{z} = \varvec{0}\) iteratively, we should also use an iterative method for the solution of the shifted linear systems at each step of the rational Krylov iteration. However, in this paper we do not address this specific subproblem, and we instead focus on the case where it is feasible to solve the linear systems with a direct method.

5.2 Projected Krylov methods

Another way to obtain a nonsingular matrix from the graph Laplacian is to project L on the \(n-1\) dimensional subspace \({\mathcal {S}} = {{\,\mathrm{span}\,}}\{ \varvec{1}\}^{\perp }\). We remark that the approach we present here is similar to the one described in [8, Section 4], where the authors separate the eigenvalue 0 from the rest of the spectrum of a symmetric positive semidefinite matrix A, to compute \(f(A) \varvec{b}\) with an integral on a contour surrounding \(\sigma (A) \setminus \{ 0 \}\). See also [22, 23] for a discussion of more general spectral splitting methods for symmetric matrices.

Let \(\{ \varvec{q}_1, \dots , \varvec{q}_{n-1}\}\) be an orthonormal basis of \({\mathcal {S}}\), and define the \(n \times (n-1)\) matrix \(Q := \begin{bmatrix}\varvec{q}_1&\!\dots \!&\varvec{q}_{n-1} \end{bmatrix}\). The matrix \({{\tilde{Q}}} := \begin{bmatrix} Q &{} \frac{1}{\sqrt{n}}\varvec{1}\\ \end{bmatrix}\) is orthogonal, and we have \(Q^T Q = I_{n-1}\) and \(QQ^T = I_n - \frac{1}{n}\varvec{1}\varvec{1}^T\). Here we denoted by \(I_k\) the identity matrix of size \(k \times k\) in order to stress that the two matrices have a different size; in the sequel, we will drop the subscript when there is no ambiguity. Observe that the matrix \(Q^T L Q\) is nonsingular, since \({{\,\mathrm{range}\,}}Q = {{\,\mathrm{span}\,}}\{\varvec{1}\}^\perp \), \(\ker Q^T = \ker L = {{\,\mathrm{span}\,}}\{\varvec{1}\}\) and \({{\,\mathrm{range}\,}}L = {{\,\mathrm{span}\,}}\{ \varvec{z}\}^\perp \).

We are going to rewrite f(L) in terms of \(f(Q^T L Q)\) by using some properties of matrix functions. Recalling that \(L \varvec{1}= \varvec{0}\) and that \(\varvec{1}^T Q = \varvec{0}^T\), we have:

$$\begin{aligned} {{\tilde{Q}}}^T L {{\tilde{Q}}} = \begin{bmatrix} Q^T \\ \frac{1}{\sqrt{n}}\varvec{1}^T \end{bmatrix} L \begin{bmatrix} Q&\frac{1}{\sqrt{n}}\varvec{1}\end{bmatrix} = \begin{bmatrix} Q^T L Q &{} \varvec{0}\\ \frac{1}{\sqrt{n}}\varvec{1}^T L Q &{} 0 \end{bmatrix}. \end{aligned}$$

Now, using well known properties of matrix functions, we have

$$\begin{aligned} {{\tilde{Q}}}^T f(L) {{\tilde{Q}}} = f({{\tilde{Q}}}^T L {{\tilde{Q}}}) = \begin{bmatrix} f(Q^T L Q) &{} \varvec{0}\\ \varvec{\varphi }^T &{} f(0) \end{bmatrix}, \end{aligned}$$
(5.9)

for some \(\varvec{\varphi } \in {\mathbb {R}}^{n-1}\). The vector \(\varvec{\varphi }\) can be expressed in closed form (see, e.g., [19, Theorem 1.21]), but this is not necessary for our purposes.

Let us assume at first that our goal is to compute \(f(L^T)\varvec{v}\) for a vector \(\varvec{v}\) such that \(\varvec{1}^T \varvec{v} = 0\). Using (5.9), we get

$$\begin{aligned} \nonumber f(L^T)\varvec{v}&= {{\tilde{Q}}} f({{\tilde{Q}}}^T L {{\tilde{Q}}})^T {{\tilde{Q}}}^T \varvec{v} \\ \nonumber&= \begin{bmatrix} Q&\frac{1}{\sqrt{n}}\varvec{1}\end{bmatrix} \begin{bmatrix} f(Q^T L^T Q) &{} \varvec{\varphi } \\ \varvec{0}^T &{} f(0) \end{bmatrix} \begin{bmatrix} Q^T \varvec{v} \\ 0 \end{bmatrix}\\&= Q f(Q^T L^T Q) Q^T \varvec{v}. \end{aligned}$$
(5.10)

Now, consider the computation of \(f(L^T)\varvec{b}\) for a generic vector \(\varvec{b}\). If \(\varvec{1}^T \varvec{b} = \beta \ne 0\), we can always write \(\varvec{b} = \varvec{v} + \beta \varvec{z}\) for some vector \(\varvec{v} \perp \varvec{1}\) (recall that \(\varvec{z}\) satisfies \(L^T \varvec{z} = \varvec{0}\) and \(\varvec{1}^T \varvec{z} = 1\)). Hence, using (5.10) we have

$$\begin{aligned} \nonumber f(L^T) \varvec{b}&= f(L^T) \varvec{v} + \beta f(L^T) \varvec{z} \\&= Q f(Q^T L^T Q) Q^T \varvec{v} + \beta f(0) \varvec{z}. \end{aligned}$$
(5.11)

Using (5.11), we can compute \(f(L^T)\varvec{b}\) by applying a rational Krylov method to the nonsingular projected matrix \(Q^T L^T Q\). As with the rank-one shift, this requires knowledge of \(\varvec{z}\), the left 0-eigenvector of the graph Laplacian, which must be computed beforehand by solving the singular linear system \(L^T \varvec{z} = \varvec{0}\). In order to make this approach viable, we need to be able to compute matrix-vector products with Q efficiently: we address this problem in Sect. 5.2.1. We are also going to show that in exact arithmetic the Krylov methods for \(f(L^T)\varvec{v}\) and \(Q f(Q^T L^T Q) Q^T \varvec{v}\) construct precisely the same approximations after an equal number of iterations, so it is actually not necessary to perform the projection explicitly.
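A numerical check of (5.11) on a small undirected example, with Q built explicitly here via scipy.linalg.null_space (the O(n) construction of Sect. 5.2.1 below avoids forming Q); the toy graph and the helper function are ours:

```python
import numpy as np
from scipy.linalg import null_space

t, alpha = 1.0, 0.5
def f_sym(M):                          # f(M) = expm(-t M^alpha), M symmetric psd
    w, U = np.linalg.eigh(M)
    return (U * np.exp(-t * np.clip(w, 0.0, None)**alpha)) @ U.T

A = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A         # toy undirected graph
n = L.shape[0]
Q = null_space(np.ones((1, n)))        # orthonormal basis of span{1}^perp
z = np.ones(n) / n                     # left null vector of L^T, 1^T z = 1
b = np.array([1.0, 0.0, 0.0, 0.0])
beta = b.sum()
v = b - beta * z                       # v is orthogonal to the ones vector
lhs = f_sym(L.T) @ b
rhs = Q @ (f_sym(Q.T @ L.T @ Q) @ (Q.T @ v)) + beta * z   # f(0) = 1 in (5.11)
print(np.linalg.norm(lhs - rhs))       # agreement up to round-off
```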

5.2.1 Fast matrix-vector products with Q

In this part we show how to construct a matrix Q with orthonormal columns spanning the subspace \({\mathcal {S}} = {{\,\mathrm{span}\,}}\{\varvec{1}\}^\perp \), such that matrix-vector products of the form \(Q \varvec{u}\) and \(Q^T \varvec{v}\) can be performed with cost O(n).

Let us define the orthogonal matrix \({{\tilde{Q}}} = \begin{bmatrix} Q &{} \frac{1}{\sqrt{n}}\varvec{1}\\ \end{bmatrix}\) as

$$\begin{aligned} {{\tilde{Q}}} = \begin{bmatrix} 1/\sqrt{n} &{} \cdots &{} \cdots &{} 1/\sqrt{n} &{} 1/\sqrt{n} \\ r &{} s &{} \dots &{} s &{} 1/\sqrt{n} \\ s &{} r &{} \ddots &{} \vdots &{} \vdots \\ \vdots &{} \ddots &{} \ddots &{} s &{} \vdots \\ s &{} \dots &{} s &{} r &{} 1/\sqrt{n} \\ \end{bmatrix}, \qquad \begin{aligned} \\ \text {where} \qquad s&= \frac{1}{1-n}\Big (1 + \frac{1}{\sqrt{n}}\Big ), \\ r&= s+1 \end{aligned} \end{aligned}$$

so that we have

$$\begin{aligned} Q = \frac{1}{\sqrt{n}} \begin{bmatrix} 1 \\ \varvec{0}_{n-1} \end{bmatrix} \varvec{1}_{n-1}^T + s \begin{bmatrix} 0 \\ \varvec{1}_{n-1} \end{bmatrix} \varvec{1}_{n-1}^T + \begin{bmatrix} \varvec{0}_{n-1}^T \\ I_{n-1} \end{bmatrix}, \end{aligned}$$
(5.12)

where we denoted by \(\varvec{0}_{n-1}\), \(\varvec{1}_{n-1}\) and \(I_{n-1}\) respectively the all-zeroes vector, the all-ones vector, and the identity matrix of size \(n-1\). It is straightforward to see that with the above definition \({{\tilde{Q}}}\) is indeed an orthogonal matrix. Now, for any vector \(\varvec{v} \in {\mathbb {R}}^n\) and \(\varvec{u} \in {\mathbb {R}}^{n-1}\) we have

$$\begin{aligned} \nonumber Q \varvec{u}&= \frac{1}{\sqrt{n}} (\varvec{1}_{n-1}^T \varvec{u}) \begin{bmatrix} 1 \\ \varvec{0}_{n-1} \end{bmatrix} + s\, (\varvec{1}_{n-1}^T \varvec{u}) \begin{bmatrix} 0 \\ \varvec{1}_{n-1} \end{bmatrix} + \begin{bmatrix} 0 \\ \varvec{u} \end{bmatrix}, \\ Q^T \varvec{v}&= \frac{1}{\sqrt{n}} v_1 \varvec{1}_{n-1} + s \Big (\sum _{j = 2}^n v_j\Big ) \varvec{1}_{n-1} + \begin{bmatrix} v_2 \\ \vdots \\ v_n \end{bmatrix}, \\ \nonumber (Q^T L^T Q - \xi I_{n-1})^{-1} \varvec{u}&= Q^T (L^T - \xi I)^{-1} Q \varvec{u}. \end{aligned}$$
(5.13)

The last equality in (5.13) follows from Lemma 5.1(b) in Sect. 5.2.2 below.

Hence the matrix-vector products \(Q \varvec{u}\) and \(Q^T \varvec{v}\) can be computed with cost O(n), and the solution of a shifted linear system with \(Q^T L^T Q\) can be reduced with cost O(n) to the solution of a shifted linear system with \(L^T\).
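A sketch of these O(n) products, following (5.12) and (5.13) (the function names are ours):

```python
import numpy as np

def Q_matvec(u):
    """O(n) product Q @ u for the Q in (5.12); u has length n-1."""
    n = u.size + 1
    s = -(1.0 + 1.0 / np.sqrt(n)) / (n - 1.0)
    sigma = u.sum()                       # 1_{n-1}^T u
    out = np.empty(n)
    out[0] = sigma / np.sqrt(n)
    out[1:] = s * sigma + u
    return out

def Qt_matvec(v):
    """O(n) product Q.T @ v; v has length n."""
    n = v.size
    s = -(1.0 + 1.0 / np.sqrt(n)) / (n - 1.0)
    return v[0] / np.sqrt(n) + s * v[1:].sum() + v[1:]

# quick check: Q has orthonormal columns, all orthogonal to the ones vector
n = 6
Q = np.column_stack([Q_matvec(col) for col in np.eye(n - 1)])
print(np.allclose(Q.T @ Q, np.eye(n - 1)), np.allclose(Q.T @ np.ones(n), 0.0))
```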

5.2.2 Implicit projection

In the following part, we are going to examine how rational Krylov methods for the computation of \(f(L^T) \varvec{b}\) are related to their projected counterpart, i.e. to methods that first approximate \(f(Q^T L^TQ) Q^T \varvec{b}\) using rational Krylov subspaces and then use (5.10) to compute \(f(L^T) \varvec{b}\), in the case of an initial vector \(\varvec{b} \perp \varvec{1}\). Note that the assumption \(\varvec{b} \perp \varvec{1}\) is not satisfied when computing the solution to the fractional diffusion equation, since in that case the initial vector \(\varvec{u}_0\) is a probability vector, and hence \(\varvec{1}^T \varvec{u}_0 = 1\). However, with the same procedure used in identity (5.11), the results of this section can be used with minor modifications for any initial vector \(\varvec{b}\).

We are going to prove our result in a slightly more general scenario: let \(A \in {\mathbb {R}}^{n \times n}\), and let \(\varvec{x}\), with \({\Vert }\varvec{x}{\Vert }_2 = 1\), be a left eigenvector of A such that \(\varvec{x}^T A = \lambda \varvec{x}^T\). The specific case of the graph Laplacian will then correspond to \(A = L^T\), \(\varvec{x} = \frac{1}{\sqrt{n}} \varvec{1}\) and \(\lambda = 0\). Let Q be an \(n \times (n-1)\) matrix whose columns are an orthonormal basis of \({{\,\mathrm{span}\,}}\{\varvec{x}\}^\perp \). If \(\varvec{b} \perp \varvec{x}\), the same argument used in the proof of (5.10) gives us

$$\begin{aligned} f(A) \varvec{b} = Q f(Q^TAQ) Q^T \varvec{b}. \end{aligned}$$
(5.14)

Recall that the usual rational Arnoldi algorithm for \(f(A)\varvec{b}\) computes an orthonormal sequence \(\left\{ \varvec{v}_k \right\} _{k \ge 1}\) such that \(\varvec{v}_1 = \varvec{b}/{\Vert }\varvec{b}{\Vert }_2\) and \({{\,\mathrm{span}\,}}\left\{ \varvec{v}_1, \dots , \varvec{v}_k\right\} = {\mathcal {Q}}_k(A, \varvec{b})\). If we define \(V_k = \left[ \varvec{v}_1\, \dots \, \varvec{v}_k\right] \), and \(B_k = V_k^T A V_k\), a rational Krylov method then yields the approximation

$$\begin{aligned} \bar{\varvec{y}}_k := V_k f(B_k) V_k^T \varvec{b}. \end{aligned}$$
(5.15)

Alternatively, if we work with the right hand side of (5.14), after k iterations the rational Arnoldi algorithm constructs the matrix \(U_k = \left[ \varvec{u}_1 \, \dots \, \varvec{u}_k\right] \in {\mathbb {R}}^{(n-1) \times k}\), whose columns \(\{ \varvec{u}_1, \dots , \varvec{u}_k \}\) are an orthonormal basis for \({\mathcal {Q}}_k(Q^T A Q, Q^T \varvec{b})\). Then the vector \(f(Q^T A Q) Q^T \varvec{b}\) can be approximated by

$$\begin{aligned} U_k f(C_k) U_k^T Q^T \varvec{b}, \end{aligned}$$

where \(C_k = U_k^T (Q^TAQ) U_k\). Applying now (5.14), we have the following approximation to \(f(A)\varvec{b}\):

$$\begin{aligned} \bar{\bar{\varvec{y}}}_k = Q U_k f(C_k) U_k^T Q^T \varvec{b}. \end{aligned}$$
(5.16)

We will refer to the method described by equation (5.16) as a projected rational Krylov method.

The main result of this section is the following.

Theorem 5.1

Let \(\bar{\varvec{y}}_k\) and \(\bar{\bar{\varvec{y}}}_k\) be the approximations to \(f(A) \varvec{b}\) defined in (5.15) and (5.16), respectively, where \(\varvec{b} \perp \varvec{x}\). Then, in exact arithmetic it holds that \(\bar{\varvec{y}}_k = \bar{\bar{\varvec{y}}}_k\).

We start by proving a few basic properties that we will use repeatedly in the following discussion.

Lemma 5.1

Using the same notation as above, the following properties hold:

(a) \({\mathcal {Q}}_k(A, \varvec{b}) \perp \varvec{x}\) and \(QQ^T \varvec{v} = \varvec{v}\) for all \(\varvec{v} \in {\mathcal {Q}}_k(A, \varvec{b})\).

(b) \((Q^T A Q - \xi I)^{-1} = Q^T (A - \xi I)^{-1} Q\) for any \(\xi \in {\mathbb {C}}\) such that the inverses exist.

(c) \({\mathcal {Q}}_k(Q^T A Q, Q^T \varvec{b}) = Q^T {\mathcal {Q}}_k(A, \varvec{b})\) and \( Q {\mathcal {Q}}_k(Q^T A Q, Q^T \varvec{b}) = {\mathcal {Q}}_k(A, \varvec{b})\).

(d) Let \(\varvec{w}_k \in {\mathcal {Q}}_k(A, \varvec{b})\) and \(\varvec{z}_k = Q^T \varvec{w}_k \in {\mathcal {Q}}_k(Q^TAQ, Q^T \varvec{b})\). It holds

$$\begin{aligned} (A - \xi _k I)^{-1}A \varvec{w}_k \in {\mathcal {Q}}_{k}(A, \varvec{b}) \iff (Q^TAQ - \xi _k I)^{-1}Q^TAQ \varvec{z}_k \in {\mathcal {Q}}_{k}(Q^TAQ,Q^T \varvec{b}). \end{aligned}$$

This implies that \(\varvec{w}_k\) is a continuation vector for \({\mathcal {Q}}_k(A, \varvec{b})\) if and only if \(\varvec{z}_k\) is a continuation vector for \({\mathcal {Q}}_k(Q^T A Q, Q^T \varvec{b})\).

Proof

(a) Let \(\varvec{v} \in {\mathcal {Q}}_k(A, \varvec{b})\), i.e. \(\varvec{v} = q_{k-1}(A)^{-1}p(A) \varvec{b}\). Since it holds that \(\varvec{x}^T A = \lambda \varvec{x}^T\), we also have \(\varvec{x}^T \varvec{v} = \varvec{x}^T q_{k-1}(A)^{-1}p(A) \varvec{b} = q_{k-1}(\lambda )^{-1}p(\lambda ) \varvec{x}^T \varvec{b} = 0\), since \(\varvec{b} \perp \varvec{x}\). For the second part of the statement, we have \(QQ^T \varvec{v} = \varvec{v} - \varvec{x} \varvec{x}^T\varvec{v} = \varvec{v}\) since \(\varvec{v} \perp \varvec{x}\).

(b) Let us show that \((Q^TAQ - \xi _k I)Q^T (A - \xi _k I)^{-1}Q = I\). We have:

$$\begin{aligned} (Q^TAQ - \xi _k I)Q^T (A - \xi _k I)^{-1} Q&= Q^T(A-\xi _k I) QQ^T (A-\xi _k I)^{-1}Q \\&= Q^T(A-\xi _k I) (I - \varvec{x} \varvec{x}^T) (A - \xi _k I)^{-1}Q \\&= I - Q^T(A-\xi _k I) \varvec{x} (\lambda -\xi _k)^{-1} \varvec{x}^T Q = I, \end{aligned}$$

where in the last two equalities we used \(\varvec{x}^T(A - \xi _k I)^{-1} = (\lambda - \xi _k)^{-1} \varvec{x}^T\) and \(\varvec{x}^T Q = \varvec{0}^T\).

(c) Let \(\varvec{v} \in {\mathcal {Q}}_k(Q^T A Q, Q^T \varvec{b})\), so \(\varvec{v} = q_{k-1}(Q^T A Q)^{-1}p(Q^TAQ) Q^T \varvec{b}\), for some polynomial p with \(\deg p \le k-1\). Using that \(QQ^T = I - \varvec{x} \varvec{x}^T\) and \(\varvec{b} \perp \varvec{x}\), we can prove that \(p(Q^T A Q) Q^T \varvec{b} = Q^T p(A) \varvec{b}\). Because of (b), we also have

$$\begin{aligned} (Q^T A Q - \xi _{k-1} I)^{-1} Q^T p(A) \varvec{b}&= Q^T (A - \xi _{k-1}I)^{-1} (I - \varvec{x} \varvec{x}^T) p(A) \varvec{b} \\&= Q^T (A - \xi _{k-1}I)^{-1} p(A) \varvec{b}, \end{aligned}$$

since \(\varvec{x}^T p(A) \varvec{b} = p(\lambda ) \varvec{x}^T \varvec{b} = 0\). Using similar operations on the other factors of \(q_{k-1}\), we finally obtain

$$\begin{aligned} \varvec{v} = q_{k-1}(Q^T A Q)^{-1}p(Q^TAQ) Q^T \varvec{b} = Q^T q_{k-1}(A)^{-1} p(A) \varvec{b} \in Q^T {\mathcal {Q}}_k(A, \varvec{b}). \end{aligned}$$

The second identity follows from the fact that \(QQ^T \varvec{v} = \varvec{v}\) for all \(\varvec{v} \in {\mathcal {Q}}_k(A, \varvec{b})\).

(d) Using (b) and \({\mathcal {Q}}_k(A, \varvec{b}) \perp \varvec{x}\), we have the following:

$$\begin{aligned} (Q^TAQ - \xi _k I)^{-1}Q^TAQ \varvec{z}_k = Q^T(A - \xi _k I)^{-1} Q Q^T A Q Q^T \varvec{w}_k = Q^T (A - \xi _k I)^{-1} A \varvec{w}_k. \end{aligned}$$

Because of (c), it follows that \((A - \xi _k I)^{-1} A \varvec{w}_k \in {\mathcal {Q}}_{k}(A, \varvec{b})\) if and only if \(Q^T (A - \xi _k I)^{-1} A \varvec{w}_k \in Q^T {\mathcal {Q}}_{k}(A, \varvec{b}) = {\mathcal {Q}}_k(Q^T A Q, Q^T \varvec{b})\). The second statement follows from the fact that \((A - \xi _kI)^{-1} A \varvec{v} \in {\mathcal {Q}}_{k+1}(A, \varvec{b})\) for all \(\varvec{v} \in {\mathcal {Q}}_k(A, \varvec{b})\), recalling that \(\varvec{v} \in {\mathcal {Q}}_k(A, \varvec{b})\) is a continuation vector if and only if \((A-\xi _k I)^{-1}A \varvec{v} \in {\mathcal {Q}}_{k+1}(A, \varvec{b}) \setminus {\mathcal {Q}}_k(A, \varvec{b})\). \(\square \)

Having established some basic facts about the projected Krylov subspace, we are now ready to prove Theorem 5.1.

Proof

(Proof of Theorem 5.1) We start by showing that there exists a \(k \times k\) diagonal and orthogonal matrix \(D_k\) such that \(U_k = Q^T V_k D_k\). Since \(\varvec{u}_1 = Q^T \varvec{v}_1\) by definition, it is enough to prove that \(\varvec{u}_{k+1} = \pm Q^T \varvec{v}_{k+1}\), with the assumption that \(U_{k} = Q^T V_{k} D_k\), for \(k \ge 1\). If we denote by \(\varvec{z}_k\) a continuation vector for \({\mathcal {Q}}_k(Q^T A Q, Q^T \varvec{b})\), then by Lemma 5.1(d) we have that \(\varvec{w}_k = Q \varvec{z}_k\) is a continuation vector for \({\mathcal {Q}}_k(A, \varvec{b})\). We have the following chain of equalities, using Lemma 5.1(b) for the second equality:

$$\begin{aligned} {{\,\mathrm{span}\,}}\left\{ U_k,\, \varvec{u}_{k+1}\right\}&= {{\,\mathrm{span}\,}}\left\{ U_{k},\, (Q^TAQ - \xi _k I)^{-1} Q^TAQ \varvec{z}_k\right\} \\&= {{\,\mathrm{span}\,}}\left\{ Q^T V_{k} D_k,\, Q^T(A-\xi _kI)^{-1} A \varvec{w}_k \right\} \\&= Q^T {{\,\mathrm{span}\,}}\left\{ V_{k},\, (A-\xi _kI)^{-1} A \varvec{w}_k \right\} \\&= Q^T {{\,\mathrm{span}\,}}\left\{ V_{k},\, \varvec{v}_{k+1} \right\} = {{\,\mathrm{span}\,}}\left\{ U_k,\, Q^T \varvec{v}_{k+1} \right\} . \end{aligned}$$

The vector \(Q^T \varvec{v}_{k+1}\) is orthogonal to the columns of \(U_k = Q^T V_k D_k\), since

$$\begin{aligned} \varvec{v}_{k+1}^TQQ^TV_{k} = \varvec{v}_{k+1}^T (I - \varvec{x} \varvec{x}^T) V_{k} = \varvec{0}^T. \end{aligned}$$

So we have \(\varvec{u}_{k+1} = \pm Q^T \varvec{v}_{k+1}\), since both vectors are in \({\mathcal {Q}}_k(Q^TAQ, Q^T \varvec{b})\) and they are orthogonal to the columns of \(U_k\). Hence we have obtained \(U_k = Q^TV_k D_k\) and \(Q U_k = QQ^T V_k D_k = V_k D_k\), where \(D_k\) is a diagonal matrix whose diagonal elements are equal to \(\pm 1\).

Now it is straightforward to see that \(C_k = D_k B_k D_k\) and that \(\bar{\varvec{y}}_k = \bar{\bar{\varvec{y}}}_k\). Indeed we have

$$\begin{aligned} \begin{aligned} \bar{\bar{\varvec{y}}}_k&= Q U_k f(C_k) U_k^T Q^T \varvec{b} \\&= V_k D_k f(U_k^T Q^T AQ U_k)D_k V_k^T \varvec{b} \\&= V_k D_k f(D_k V_k^T A V_k D_k) D_k V_k^T \varvec{b} \\&= V_k f(B_k) V_k^T \varvec{b} = \bar{\varvec{y}}_k. \end{aligned} \end{aligned}$$

\(\square \)

The result of Theorem 5.1 can also be seen as a consequence of the implicit Q theorem for rational Arnoldi decompositions [5, Theorem 3.2].

Although Theorem 5.1 is guaranteed to hold only in exact arithmetic, we observed from our experiments that the error curves given by the two approximations (5.15) and (5.16) are almost always overlapping, so the implicit projection is a valid (and cheaper) alternative to (5.16).

Remark 5.4

The approach described in this section is similar to the deflation and augmentation strategies used in the solution of linear systems with Krylov methods: the aim of these techniques is to include exact or approximate spectral information on the matrix in order to speed up the convergence. This can be done by either adding a few known eigenvectors to the Krylov subspace, or by directly solving a deflated problem constructed using the spectral information on the matrix. Since the implicit projection method constructs a Krylov subspace that is orthogonal to the vector \(\varvec{x}\) (see Lemma 5.1(a)), it can be interpreted as an implicit way to construct an augmented Krylov subspace, also containing the eigenvector \(\varvec{x}\). For additional details on deflation and augmentation techniques used in Krylov methods for the solution of linear systems, we refer to the review article [33, Section 9] and to the references cited therein.

6 Numerical experiments

In this section we test and compare the performance of the various methods for the computation of \(f(A) \varvec{b}\) that we presented earlier, using them to approximate the solution to the fractional diffusion equation (3.2) on real-world networks, both undirected and directed. Recall that the solution to (3.2) at time t can be expressed in the form

$$\begin{aligned} \varvec{u}(t) = f(L^T) \varvec{u}_0, \qquad f(z) = e^{-t z^\alpha }, \quad t \ge 0, \quad \alpha \in (0,1). \end{aligned}$$
(6.1)

Since the graphs we consider are strongly connected, the eigenvalues \(\lambda _1, \dots , \lambda _n\) of the graph Laplacian can be ordered in a way such that \(0 = \lambda _1 < {|}\lambda _2{|} \le \dots \le {|}\lambda _n{|}\).

We use the result of Theorem 5.1 to compute \(f(L^T) \varvec{u}_0\) via (5.11) in the following way: letting \(\beta = \varvec{1}^T \varvec{u}_0 > 0\), there exists \(\varvec{w} \perp \varvec{1}\) such that \(\varvec{u}_0 = \varvec{w} + \beta \varvec{z}\) (recall that \(L^T \varvec{z} = \varvec{0}\)), and thus we can compute \(f(L^T) \varvec{u}_0\) as

$$\begin{aligned} f(L^T) \varvec{u}_0 = f(L^T) \varvec{w} + \beta f(L^T) \varvec{z} = f(L^T) \varvec{w} + \beta \varvec{z}. \end{aligned}$$
(6.2)

Theorem 5.1 guarantees that, at least in exact arithmetic, a rational Krylov method for \(f(L^T) \varvec{w}\) yields the same approximate solution as the same Krylov method applied to the projected problem \(f(Q^T L^T Q) Q^T \varvec{w}\). We refer to the method obtained by using (6.2) and approximating \(f(L^T)\varvec{w}\) with a Krylov method as an implicitly projected Krylov method.
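For concreteness, the computation based on (6.2) can be sketched in a few lines of Matlab. In this sketch, z is assumed to satisfy \(L^T \varvec{z} = \varvec{0}\) with \(\varvec{1}^T \varvec{z} = 1\), and krylov_fAb is a hypothetical placeholder for any of the (polynomial or rational) Krylov routines discussed above; it is a minimal illustration, not the code used in the experiments.

% Minimal sketch of the implicitly projected method based on (6.2).
% z satisfies L'*z = 0 with sum(z) = 1; krylov_fAb is a hypothetical
% routine approximating f(A)*b with a Krylov method.
f    = @(s) exp(-t * s.^alpha);           % f(z) = exp(-t z^alpha)
beta = sum(u0);                           % beta = 1' * u0 > 0
w    = u0 - beta * z;                     % splitting u0 = w + beta*z, with 1'*w = 0
y    = krylov_fAb(L', w, f) + beta * z;   % f(L') u0 = f(L') w + beta * z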

As we mentioned earlier, we use poles located on the negative real axis \((-\infty , 0)\). For the Shift-and-Invert Krylov method, we compare two different choices of pole. Recall that for a symmetric positive definite matrix A, if \(a>0\) is a lower bound for the smallest eigenvalue of A and \(b>0\) is an upper bound for its largest eigenvalue, so that \(\sigma (A)\subset [a, b]\), an effective pole for the Shift-and-Invert Krylov method is \(\xi = - \sqrt{ab}\) (see, e.g., [1, Section 6]). In analogy with this choice, we use the pole \(\xi = -\sqrt{{|}\lambda _2 \lambda _n{|}}\): when the graph is undirected and the rank-one shift approach (5.2) is used with \(\theta \ge {|}\lambda _2{|}\), this pole corresponds exactly to the optimal choice \(\xi = -\sqrt{\lambda _\text {min}\lambda _\text {max}}\) for symmetric positive definite matrices; the same choice appears to be reasonable also for the singular graph Laplacian and in the directed case. Indeed, our experiments show that this choice always provides a reliable convergence rate.

As a second possible choice for the Shift-and-Invert Krylov method, we use the pole \(\xi = -t^{-2/\alpha }\) proposed in [28]; this choice is based on an integral bound for the error of the Shift-and-Invert Krylov method, obtained using an integral expression for the function f. The choice \(\xi = -t^{-2/\alpha }\) was proposed for specific functions that arise in the context of fractional differential equations, like \(f(z) = e^{-tz^\alpha }\) or \(f(z) = (1 + tz^\alpha )^{-1}\), with \(\alpha \in (0,1)\) and \(t > 0\). This pole choice is particularly effective when t is large, but it is more sensitive to changes in the parameters.

For the rational Krylov method based on the equidistributed sequence (EDS) described in Sect. 4.1, we computed the asymptotically optimal poles using the spectral interval \([0.99 \cdot {|}\lambda _2{|}, \,1.01\cdot {|}\lambda _n{|}]\), once again ignoring the presence of the eigenvalue of the Laplacian at 0. The resulting poles are located on the negative real axis \((-\infty , 0)\). Similarly to the choice \(\xi = -\sqrt{{|}\lambda _2 \lambda _n{|}}\) for the Shift-and-Invert Krylov method, this pole sequence is guaranteed to have the asymptotic rate of convergence (4.3) on the rank-one shifted matrix \(L^T + \theta \varvec{z} \varvec{1}^T\) when the graph is undirected (for \(\theta \ge {|}\lambda _2{|}\)). The experiments that we performed suggest that the same choice is still very effective also in the directed case and even when applied directly to the singular graph Laplacian. Note that this method, as well as the Shift-and-Invert method with \(\xi = -\sqrt{{|}\lambda _2 \lambda _n{|}}\), requires the knowledge of the largest and smallest nonzero eigenvalues of the graph Laplacian L, that have to be computed beforehand.
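As an illustration, the two extreme nonzero eigenvalues can be estimated with Matlab's eigs, after which both pole choices and the EDS spectral interval follow directly. The sketch below is a plausible recipe under our assumptions, not necessarily the code used in the experiments; the small negative shift keeps the shift-invert factorization inside eigs away from the exactly singular Laplacian.

% Estimate |lambda_2| and |lambda_n| of the sparse (possibly nonsymmetric)
% graph Laplacian L, then form the poles and the EDS spectral interval.
ev      = eigs(L, 2, -1e-8);                 % two eigenvalues closest to -1e-8: 0 and lambda_2
lambda2 = max(abs(ev));                      % smallest nonzero eigenvalue in modulus
lambdan = abs(eigs(L, 1, 'largestabs'));     % largest eigenvalue in modulus
xi_si   = -sqrt(lambda2 * lambdan);          % pole for the Shift-and-Invert method
xi_t    = -t^(-2/alpha);                     % parameter-dependent pole from [28]
spec_iv = [0.99 * lambda2, 1.01 * lambdan];  % spectral interval for the EDS poles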

All the experiments were performed in Matlab R2019b on a laptop running Ubuntu 20.04, with 8 GB of RAM and an Intel Core i5-3337U CPU running at 1.8 GHz, using the rat_krylov function in the Rational Krylov toolbox [4]. The shifted linear systems in the Shift-and-Invert Krylov method were solved by computing beforehand an LU decomposition of the permuted matrix \(P^T L^T P - \xi I\), where P is a fill-in reducing permutation matrix obtained using the amd Matlab function. In the rank-one shifted and in the projected version of the methods, we used the modified Sherman-Morrison formula (5.8), with the vector \(\varvec{\psi }\) defined in (5.7), and the identities (5.13) to avoid explicitly forming the dense matrices \(L^T + \theta \varvec{z} \varvec{1}^T\) and \(Q^T L^T Q\). The use of (5.8) over (5.4) allowed us to avoid the cancellation in (5.6) for poles close to zero, as discussed after Remark 5.2. We set \(\theta = 1\) in (5.2) and we used the matrix Q defined by (5.12).
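A possible realization of this precomputation is sketched below, assuming L is the sparse graph Laplacian and xi the chosen (negative) pole; the permuted factors are computed once and then reused for every shifted system in the Shift-and-Invert iteration.

% Factorize the shifted, amd-permuted matrix once; reuse it at each iteration.
n = size(L, 1);
A = L' - xi * speye(n);
p = amd(A);                    % fill-in reducing ordering
[Lf, Uf] = lu(A(p, p));        % sparse LU factorization of the permuted matrix
% Solving (L' - xi*I) * x = b then amounts to two (permuted) triangular solves:
x    = zeros(n, 1);
x(p) = Uf \ (Lf \ b(p));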

The error that we display is the relative error in the 2-norm, \({\Vert }\varvec{y} - \bar{\varvec{y}}_k{\Vert }_2 / {\Vert }\varvec{y}{\Vert }_2\), where \(\varvec{y}\) is the solution to (3.2) at a certain time t, or an accurate approximation computed with a Krylov method when the size of the graph is large. In all our experiments, we first extracted the largest strongly connected component (LCC) of a graph and restricted our problem to that component. Information on the number of nodes and edges of these components, as well as the maximum and minimum nonzero eigenvalues of the corresponding graph Laplacians, is reported in Table 1. The real-world networks that we used are available in the Sparse Matrix Collection [12].

Table 1 Some information on the graphs used in the experiments. A denotes the \(n \times n\) adjacency matrix of the LCC of each graph

As we observed in Sect. 3, the solution \(\varvec{u}(t)\) to the fractional diffusion equation (3.2) is a probability vector for all \(t \ge 0\), and hence it is desirable for the approximations computed with Krylov methods to have the same property. In our experiments, we observed that this is indeed the case for the Krylov methods applied to the shifted matrix \(L^T + \varvec{z} \varvec{1}^T\), as well as for the projected and implicitly projected Krylov methods. On the other hand, the approximate solutions obtained by working directly with the singular graph Laplacian \(L^T\) in general do not have entries that sum to 1, and often exhibit a wildly oscillating error; moreover, upon closer inspection, we observed that a significant portion of the error lay along the left null-eigenvector \(\varvec{z}\) of the graph Laplacian L. By subtracting a multiple of \(\varvec{z}\) from the approximate solution to enforce \(\varvec{1}^T \bar{\varvec{y}}_k = 1\), we were able to “correct” this oscillating component of the error; specifically, for each k we replaced the approximate solution \(\bar{\varvec{y}}_k\) obtained after k iterations of a Krylov method with the corrected approximation

$$\begin{aligned} \bar{\varvec{y}}_k - (\varvec{1}^T \bar{\varvec{y}}_k - 1) \varvec{z}, \end{aligned}$$
(6.3)

whose entries now sum up to 1. We found that this correction greatly reduced the error both in the undirected and in the directed case, and hence we always applied it to the standard version of Krylov methods in all our experiments. On the other hand, the error correction (6.3) was never needed for the shifted, projected and implicitly projected variants of the methods. The error with and without the correction (6.3) is illustrated for the directed graph Roget in Fig. 1.
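In Matlab, the correction (6.3) is a one-liner; here y_k denotes the Krylov approximation after k iterations, and z is the left null vector, normalized so that \(\varvec{1}^T \varvec{z} = 1\).

% Correction (6.3): remove the error component along the left null vector z.
y_corr = y_k - (sum(y_k) - 1) * z;     % entries of y_corr now sum to 1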

Fig. 1 Convergence of Krylov methods for the computation of the solution to the fractional diffusion equation (3.2) with \(\alpha = 0.5\) and \(t = 1\) on the LCC of the directed graph Roget, with and without the error correction (6.3). Note the logarithmic scale on the vertical axis

Fig. 2 Convergence of Krylov methods for the computation of the solution to the fractional diffusion equation (3.2) on the LCC of the undirected graph minnesota, for different values of \(\alpha \) and t. Note the logarithmic scale on the vertical axis

Fig. 3 Convergence of Krylov methods for the computation of the solution to the fractional diffusion equation (3.2) on the LCC of the directed graph wiki-Vote, for different values of \(\alpha \) and t. Note the logarithmic scale on the vertical axis

Fig. 4 Convergence of Krylov methods for the computation of the solution to the fractional diffusion equation (3.2) on the LCC of the undirected graphs Oregon-1, ca-HepPh and as-22july06, for different values of \(\alpha \) and t. Note the logarithmic scale on the vertical axis

Fig. 5 Convergence of Krylov methods for the computation of the solution to the fractional diffusion equation (3.2) on the LCC of the directed graphs enron, p2p-Gnutella and hvdc1, for different values of \(\alpha \) and t. Note the logarithmic scale on the vertical axis

The first set of experiments is performed on graphs of moderate size (between one and three thousand nodes), where the solution (6.1) can still be computed directly in a reasonable amount of time via an eigendecomposition of the graph Laplacian. In these experiments, we used the LCC of the undirected graph minnesota (2640 nodes) and the LCC of the directed graph wiki-Vote (1300 nodes). The results for different values of t and \(\alpha \) are shown in Figs. 2 and 3. We can see that the EDS method always converges in the smallest number of iterations, with a rate that is always equal to or better than the one predicted by the bound (4.3), even in the nonsymmetric case. The Shift-and-Invert (S&I) Krylov method with \(\xi = -\sqrt{{|}\lambda _2 \lambda _n{|}}\) has a reliable convergence rate for all choices of the parameters, while the one with \(\xi = -t^{-2/\alpha }\) is more effective for large t (see, e.g., Fig. 3c); however, it is more sensitive to changes in the parameters, and sometimes converges slowly (Fig. 2a). As expected, the polynomial Krylov method usually converges very slowly, except in the case \(\alpha = 1\), corresponding to the matrix exponential (Fig. 2c), for which polynomial Krylov methods are known to be effective. Note that in Fig. 2b we have \(t^{-2/\alpha } = 10^{-4}\), and observe that the rank-one shifted S&I method with \(\xi = -t^{-2/\alpha }\) attains the same accuracy as the other methods, as a result of formula (5.8); because of the cancellation discussed in Remark 5.2, the same accuracy could not be attained by using (5.4) in place of (5.8). The error curves of the desingularized methods always overlap each other (showing, in particular, that the result of Theorem 5.1 also holds in finite precision arithmetic), and they often represent an improvement over the standard version of Krylov methods. Note that the desingularization techniques appear to be consistently effective for the polynomial Krylov method, reducing the error by at least one or two orders of magnitude compared to the standard version. We also point out that in certain cases the desingularized methods manage to attain a higher final accuracy: this is most apparent in Fig. 3c for the S&I method with \(\xi = -\sqrt{{|}\lambda _2\lambda _n{|}}\).

The second set of experiments deals with larger graphs, with roughly 8000 to 25,000 nodes, for which the computation of an eigendecomposition of the graph Laplacian would be very expensive. Based on the results of the experiments on smaller graphs, in this case we compute the error using, as reference solution, an approximation to \(f(L^T) \varvec{u}_0\) computed with the EDS rational Krylov method with implicit projection, stopping the iterations when the 2-norm of the difference between two consecutive iterates is of the order of machine precision. To avoid bias in the error curves, we chose a different starting point for the EDS of the reference solution, thus producing a sequence of poles different from the one used to plot the error of the EDS method. We used the LCC of the undirected graphs Oregon-1 (11174 nodes), ca-HepPh (11204 nodes) and as-22july06 (22963 nodes), and of the directed graphs enron (8271 nodes), p2p-Gnutella30 (8490 nodes) and hvdc1 (24836 nodes). The results for different values of \(\alpha \) and t are shown in Figs. 4 and 5. The use of desingularization is again shown to be beneficial, often leading to faster convergence and attaining a better maximum accuracy. In Figs. 4b and 5a we can observe that the S&I method with \(\xi = -t^{-2/\alpha }\) does not suffer from cancellation by virtue of (5.8), despite the presence of poles close to zero. These experiments also show that the polynomial Krylov method can have a variable and unpredictable convergence rate, depending on the graph: there are situations in which convergence is fast (Fig. 5b) or moderate (Fig. 4a), but more often than not this method converges very slowly and the error practically stagnates (see, e.g., Fig. 4b).

We have reported in Table 2 the execution times required by each method to reach a relative accuracy of \(10^{-10}\), using the same graphs and parameters \(\alpha \), t as in Figs. 4 and 5. A “-” indicates that the polynomial Krylov method failed to attain the requested accuracy after 300 iterations. For each graph, the first line in Table 2 lists the execution times for the standard Krylov methods, and the second line reports the average of the times for the 3 different methods with desingularization. For a fixed choice of poles, the execution times of the desingularized methods are extremely similar, since these methods require the same number of iterations and roughly the same amount of work per iteration, with the explicit projection method being slightly more expensive. Observe that for the graph p2p-Gnutella30 the polynomial Krylov method is faster than the rational methods in terms of execution time, even though it requires more iterations. However, in all the other experiments its convergence is too slow for it to be competitive.

In our implementation, the linear systems in each iteration of a rational Krylov method are solved with a direct method. This strategy suffers from severe fill-in when the network has a small-world structure: this is evident in the execution times of the EDS rational Krylov method for the directed graphs enron and p2p-Gnutella30, where a different pole is used at each iteration, and hence a new factorization has to be computed. On the other hand, this effect is less pronounced for the Shift-and-Invert method, since the factorization is computed only once and reused in each iteration. As an example, if L is the Laplacian of the graph enron with nodes permuted with amd, the lower triangular factor in the LU factorization of \(L + I\) has 1.26 million nonzero entries. In contrast, for the graph hvdc1 the lower triangular factor of \(L + I\) has only 120,714 nonzero elements.
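This kind of fill-in is easy to measure directly; a sketch along the following lines reproduces the comparison above (the exact counts may depend on the Matlab version and on ties in the ordering).

% Fill-in of the lower triangular LU factor of L + I after amd reordering.
A = L + speye(size(L, 1));
p = amd(A);
[Lf, ~] = lu(A(p, p));
fprintf('nnz of lower triangular factor: %d\n', nnz(Lf));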

The problem of fill-in can be circumvented by using an iterative method for solving the linear systems in each iteration of a rational method: however, this strategy gives rise to a variety of questions, such as the choice of preconditioners and stopping criteria for the inner iteration, marking it as an interesting direction for future research.

Table 2 Execution times in seconds required to reach a relative accuracy of \(10^{-10}\), with reference to the plots in Figs. 4 and 5

7 Conclusions

In this work we have discussed the use of rational Krylov methods for the solution of the fractional diffusion equation on a graph. To improve the convergence speed of the methods, we have proposed three different procedures to deal with the zero eigenvalue of the graph Laplacian, namely a rank-one shift, a subspace projection, and an implicit version of this projection. The experiments we conducted show that these three procedures yield in practice the same convergence curves, and they are often faster and attain higher accuracy than the original Krylov methods. These procedures only require the computation of the left zero-eigenvector of the graph Laplacian, plus an additional cost of O(n) per iteration for the rank-one shift and projection techniques. The implicit projection approach is extremely easy to implement, since it only modifies the starting vector of the Krylov method and requires no additional computations at each iteration.

Among the Krylov methods that we tested, the one based on the EDS and the S&I method with \(\xi = -\sqrt{{|}\lambda _2 \lambda _n{|}}\) converge quickly regardless of the parameters \(\alpha \) and t; however, these methods require the computation of approximations to the eigenvalues \(\lambda _2\) and \(\lambda _n\) of the graph Laplacian. On the other hand, the S&I method with \(\xi = -t^{-2/\alpha }\) requires no previous knowledge of the spectrum of L, but its rate of convergence is more sensitive to changes in the parameters; even so, this method can sometimes outperform the others, especially when t is large.