1 Introduction

If \(\mathscr {L}\,^{(n)}\) is a matrix that discretizes a linear self-adjoint differential operator \(\mathscr {L}\), obtained by a discretization scheme like Finite Differences (FD), Finite Elements (FE), Isogeometric Galerkin Analysis (IgA), etc., several problems require that the spectrum of \(\mathscr {L}\,^{(n)}\) converges uniformly to the (point) spectrum of the operator \(\mathscr {L}\). More precisely, let us set the notation: for any fixed \(k,j \in {\mathbb {N}}\setminus \{0\}\), indicate with \(\lambda _{k}\) and \(\lambda _{-j}\) the k-th nonnegative and the j-th negative real eigenvalue of a given operator, respectively. Given a \(d_n\times d_n\) matrix \(\mathscr {L}\,^{(n)}\) with \(d_n\) real eigenvalues, denote with \(d^+_n\) the sum of the null and positive indices of inertia and with \(d^-_n\) the negative index of inertia, i.e.,

$$\begin{aligned} d^+_n:= \left| \left\{ \lambda \left( \mathscr {L}\,^{(n)}\right) \, : \, \lambda \left( \mathscr {L}\,^{(n)}\right) \ge 0 \right\} \right| , \qquad d^-_n:= \left| \left\{ \lambda \left( \mathscr {L}\,^{(n)}\right) \, : \, \lambda \left( \mathscr {L}\,^{(n)}\right) < 0 \right\} \right| . \end{aligned}$$

Clearly, \(d_n = d^-_n + d^+_n\). Finally, sort the eigenvalues in nondecreasing order, that is,

$$\begin{aligned} \ldots \le \lambda _{-j}^{(n)} \le \cdots \le \lambda _{-1}^{(n)}< 0 \le \lambda _1^{(n)} \le \cdots \le \lambda _k^{(n)} \le \ldots \end{aligned}$$

With this notation, the property of uniform spectral convergence we mentioned above translates into asking that

$$\begin{aligned} \mathscr {E}:= \limsup _{n \rightarrow \infty }\mathscr {E}_n=0, \qquad \mathscr {E}_n:=\max _{\begin{array}{c} k=-d^-_n,\ldots , d^+_n\\ k\ne 0 \end{array}}\left| \frac{\lambda _k^{(n)}}{\lambda _k}-1\right| , \end{aligned}$$
(1.1)

where n is the mesh fineness parameter, and \(\lambda _k^{(n)}\) and \(\lambda _{k}\) are the k-th eigenvalues of the discretized and the continuous operator, respectively. Typically, \(\mathscr {E}>0\): the relative error estimates for the eigenvalues and eigenfunctions are good only for the lowest modes, that is, for \(|k|=1,\ldots , K_n\), where \(K_n/d_n\rightarrow \sigma \) as \(n\rightarrow \infty \) with \(\sigma \ll 1\). In general, a large portion of the eigenvalues, the so-called higher modes, are not approximations of their continuous counterparts in any meaningful sense. This may negatively affect the solutions obtained by discrete approximations of elliptic boundary-value problems, or of parabolic and hyperbolic initial-value problems. In these cases, all modes may contribute to the solution to some extent, and inaccuracies in the approximation of the higher modes cannot always be ignored. In this regard, see for example the spectral-gap problem of the one-dimensional wave equation for the uniform observability of the control waves [8, 20, 28, 35], or structural engineering problems [27, Sects. 3–6].

In this setting, the theory of Generalized Locally Toeplitz (GLT) sequences provides the necessary tools to understand whether the methods used to discretize the operator \(\mathscr {L}\) are effective in approximating the whole spectrum. The GLT theory originated from the seminal work of P. Tilli on Locally Toeplitz sequences [48] and was later developed by S. Serra-Capizzano in [41, 42]. It was devised to compute and analyze the spectral distribution of matrices arising from the numerical discretization of integral and differential equations. Let \(\hat{\mathscr {L}}\,^{(n)}\) be the discrete operator normalized by an appropriate power of \(d_n\), which depends on the dimension of the underlying space and on the maximum order of derivatives involved. It usually happens that the sequence of matrices \(\left\{ \hat{\mathscr {L}\,}^{(n)} \right\} _{n}\) enjoys an asymptotic spectral distribution as \(n\rightarrow \infty \), i.e., as the mesh size tends to zero. More precisely, for any test function \(F(t) \in C_c({\mathbb {C}})\) it holds that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{d_n} \sum _{\begin{array}{c} k=-d^-_n\\ k\ne 0 \end{array}}^{d^+_n}F\left( \lambda _k\left( \hat{\mathscr {L}\,}^{(n)}\right) \right) = \frac{1}{m(D)}\int _D F(\omega ({\varvec{y}}))m(d{\varvec{y}}), \end{aligned}$$

where \(\lambda _k\left( \hat{\mathscr {L}\,}^{(n)}\right) , k=-d^-_n,\ldots ,-1,1,\ldots ,d^+_n\), are the eigenvalues of the normalized operator \(\hat{\mathscr {L}\,}^{(n)}\) and \(\omega : D\subset {\mathbb {R}}^d \rightarrow {\mathbb {C}}\) is referred to as the spectral symbol of the sequence \(\left\{ \hat{\mathscr {L}\,}^{(n)} \right\} _{n}\), see [49, Eq. (3.2)] in relation to Toeplitz matrices. The GLT theory allows one to compute the spectral symbol \(\omega \) related to \(\mathscr {L}\,^{(n)}\), especially if the numerical method employed to produce \(\mathscr {L}\,^{(n)}\) belongs to the family of so-called local methods, such as FD methods, FE methods and collocation methods with locally supported basis functions.

In several recent papers it has been suggested to approximate the spectrum of the discrete matrix operators \(\hat{\mathscr {L}\,}^{(n)}\) and \(\mathscr {L}\,^{(n)}\) by sampling the spectral symbol. Unfortunately, this approach is not successful in general, as we will show with an example. The main reference is the paper by T. J. R. Hughes and coauthors [24], which reviews the state of the art of the symbol-based analysis of the eigenvalue distribution carried out in the framework of the isogeometric Galerkin approximation (IgA). Inspired by the latter work [24], we show that the symbol \(\omega \) can measure the maximum spectral relative error \(\mathscr {E}\) defined in (1.1), and that it suggests a suitable (non-uniform) grid which, if coupled with an increasing order of accuracy of the scheme, guarantees \(\mathscr {E}=0\).

The paper is organized as follows.

  • In Sect. 2, the spectral symbol \(\omega \) and the monotone rearrangement \(\omega ^*\) are introduced.

  • In Sect. 3 we present an asymptotic result, Theorem 3.1, which connects the eigenvalue distribution of a matrix sequence and the monotone rearrangement of its spectral symbol. It is the main tool for the results in Sect. 4. In particular, under suitable regularity assumptions on \(\omega \), thanks to Theorem 3.2 and Theorem 3.3 we prove that

    $$\begin{aligned} \mathscr {E}=\sup _{\begin{array}{c} x\in (0,1)\\ x:\zeta ^{*}(x)\ne 0 \end{array}}\left| \frac{\omega ^{*}(x)}{\zeta ^{*}(x)} -1\right| , \end{aligned}$$
    (1.2)

    where \(\zeta \) is the (normalized) Weyl distribution function of the eigenvalues of \(\mathscr {L}\,\) and \(\omega \) is the spectral symbol associated to the numerical scheme applied to discretize \(\mathscr {L}\,\).

  • Section 4 is devoted entirely to numerical experiments and applications. The validity of (1.2) is shown in Table 2 and Fig. 2. Moreover, in Sect. 4.1.1 we provide an example showing that an accurate approximation of the eigenvalues of a differential operator cannot, in general, be obtained by just uniformly sampling the spectral symbol \(\omega \). In Sects. 4.1.2 and 4.1.3 we generalize the aforementioned results to the case of central FD and IgA methods of higher order. By means of a suitable non-uniform grid, suggested by the spectral symbol and combined with an increasing order of approximation, we obtain (1.1), namely \(\mathscr {E}=0\). Finally, in Sect. 4.2, we describe a simple application to the general d-dimensional Laplacian.

  • Section 5 is devoted to drawing conclusions and setting the lines for future research.

Due to the technical nature of the results presented in Sects. 3 and 4, all the proofs are collected in the appendix in order to improve the readability. The appendix is organized as follows.

  • Section A collects the proofs of the results presented in Sect. 3.

  • Section B provides rigorous proofs for some of the numerical evidence observed in Sect. 4.

  • Section C provides the FD numerical scheme used in Sects. 4.1.1 and 4.1.2.

  • Section D lists some known results about the IgA numerical scheme used in Sect. 4.1.3.

2 Spectral symbol and monotone rearrangement

In this section we provide the definitions of spectral symbol of a sequence of matrices and its monotone rearrangement, which are the main tools we will use throughout this paper to study the asymptotic spectral distribution of sequences of discretization matrices. We will use the notation \(X^{(n)},Y^{(n)}\) to denote general \(d_n\times d_n\) square matrices, while \(T^{(n)}\) will be used for Toeplitz matrices, i.e., matrices with constant elements along their diagonals: \(\left( T^{(n)}\right) _{i,j}= t_{i-j}\) for all \(i,j=1,\ldots ,d_n\), and with

$$\begin{aligned} \mathbf{t }=\left[ t_{-d_n+1}, \ldots , t_0, \ldots , t_{d_n-1}\right] \in {\mathbb {C}}^{2d_n-1}.\end{aligned}$$

If \(t_k\) is the k-th Fourier coefficient of a complex integrable function f defined over the interval \([-\pi , \pi ]\), then \(T^{(n)}= T^{(n)}(f)\) is referred to as the Toeplitz matrix generated by f.

With the symbol \(\left\{ X^{(n)}\right\} _n\) we will denote a sequence of square matrices with increasing dimensions \(d_n\times d_n\), i.e., such that \(d_n \rightarrow \infty \) as \(n \rightarrow \infty \). When we compare the spectra of two sequences of matrices, we will assume that they have the same dimensions. The eigenvalues will be sorted in nondecreasing order, that is,

$$\begin{aligned} \lambda _{-d_n^-}^{(n)}\le \cdots \le \lambda _{-j}^{(n)} \le \cdots \le \lambda _{-1}^{(n)}< 0 \le \lambda _1^{(n)} \le \cdots \le \lambda _k^{(n)} \le \cdots \le \lambda _{d_n^+}^{(n)}, \end{aligned}$$

where \(d_n^-\) is the negative index of inertia and \(d_n^+\) is the sum of the null and positive indices of inertia. Clearly, \(d_n= d_n^- + d_n^+\). If the eigenvalues do not depend on the parameter n, then we will omit the superscript (n).

We will consider all the Euclidean spaces equipped with the usual Lebesgue measure \(m(\cdot )\). We will not specify the dimension of \(m(\cdot )\) by a subscript since it will be clear from the context.

2.1 Spectral symbol

The following definition of spectral symbol has been slightly modified in accordance with our notation and purposes.

Definition 2.1

(Spectral symbol) Let \(\left\{ X^{(n)}\right\} _n\) be a sequence of matrices and let \(\omega :D\subseteq {\mathbb {R}}^d\rightarrow {\mathbb {K}}\) (\({\mathbb {K}}={\mathbb {R}}\) or \({\mathbb {C}}\)) be a measurable function and D a measurable set with \(0<m(D)<\infty \), such that the Lebesgue integral

$$\begin{aligned} \int _{D} \omega ({{\varvec{y}}})m(d{{\varvec{y}}}) \end{aligned}$$

exists, finite or infinite. We say that \(\{X^{(n)}\}_n\) is distributed like \(\omega \) in the sense of the eigenvalues, in symbols \(\{X^{(n)}\}_n \sim _\lambda \omega \), if

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{d_n}\sum _{\begin{array}{c} k=-d^-_n\\ k\ne 0 \end{array}}^{d^+_n}F\left( \lambda _k\left( X^{(n)}\right) \right) =\frac{1}{m(D)}\int _{D} F\left( \omega ({{\varvec{y}}})\right) m(d{{\varvec{y}}}),\qquad \forall F\in C_c(\mathbb K). \end{aligned}$$
(2.1)

We will call \(\omega \) the (spectral) symbol of \(\{X^{(n)}\}_n\).

Relation (2.1) is satisfied for example by Hermitian Toeplitz matrices generated by real-valued functions \(\omega \in \text {L}^1([-\pi ,\pi ])\), i.e., \(\left\{ T^{(n)}(\omega ) \right\} _n\sim _\lambda \omega \), see [26, 49, 50]. For a general overview on Toeplitz operators and spectral symbols, see [10, 11]. It is interesting to highlight that matrices with a Toeplitz-like structure naturally arise when discretizing, over a uniform grid, problems which enjoy a translation-invariance property, such as linear differential operators with constant coefficients.
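As a minimal illustration (ours, not part of the original experiments) of \(\left\{ T^{(n)}(\omega ) \right\} _n\sim _\lambda \omega \), the following Python sketch builds \(T^{(n)}(\omega )\) from numerically computed Fourier coefficients for the illustrative choice \(\omega (\theta )=2-2\cos \theta \); all function names are ours.

```python
# Illustrative sketch (not from the paper): Hermitian Toeplitz matrix generated by
# the real-valued symbol omega(theta) = 2 - 2*cos(theta), whose Fourier coefficients
# are t_0 = 2, t_{+-1} = -1, so T^(n)(omega) is the tridiagonal second-difference matrix.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_from_symbol(omega, n, n_quad=4096):
    """Build T^(n)(omega) from the Fourier coefficients t_k of omega on [-pi, pi]."""
    theta = np.linspace(-np.pi, np.pi, n_quad, endpoint=False)
    vals = omega(theta)
    ks = np.arange(-(n - 1), n)
    # rectangle rule for t_k = (1/2pi) * int_{-pi}^{pi} omega(theta) e^{-i k theta} d(theta)
    t = np.array([np.mean(vals * np.exp(-1j * k * theta)) for k in ks])
    col = t[n - 1:]          # t_0, t_1, ..., t_{n-1}
    row = t[n - 1::-1]       # t_0, t_{-1}, ..., t_{-(n-1)}
    return toeplitz(col, row)

omega = lambda th: 2.0 - 2.0 * np.cos(th)
n = 200
eigs = np.sort(np.linalg.eigvalsh(toeplitz_from_symbol(omega, n)))

# (2.1) with F(t) = t: mean of the eigenvalues vs mean of omega over D = [-pi, pi].
print(eigs.mean(), np.mean(omega(np.linspace(-np.pi, np.pi, 10**5))))
# Uniform sampling of the symbol, cf. the asymptotic expansion recalled in Remark 1 below.
print(np.max(np.abs(eigs - np.sort(omega(np.arange(1, n + 1) * np.pi / (n + 1))))))
```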

Remark 1

If D is compact, \(\omega \) continuous, and \(\lambda _k \in R_\omega = \left[ \min \omega , \max \omega \right] \) for every \(k\in {\mathbb {N}}\), then taking \(F(t)=t\chi _{R_\omega }(t)\), with \(\chi _{R_\omega }\) a \(C^\infty \) cut-off such that \(\chi _{R_\omega }(t)\equiv 1\) on \(R_\omega \), gives

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{d_n} \sum _{\begin{array}{c} k=-d^-_n\\ k\ne 0 \end{array}}^{d^+_n}\lambda _k\left( X^{(n)}\right) = \frac{1}{m(D)}\int _{D} \omega ({{\varvec{y}}})m(d{{\varvec{y}}}). \end{aligned}$$
(2.2)

Since the Riemann sum over equispaced points converges to the integral on the right-hand side of the above formula, (2.2) could suggest that the eigenvalues \(\lambda _k\left( X^{(n)}\right) \) can be approximated by a pointwise evaluation of the symbol \(\omega ({{\varvec{y}}})\) over an equispaced grid of D, as \(n\rightarrow \infty \), except for at most \(o(d_n)\) outliers, see Definition 2.3. This is essentially the content of [22, Remark 3.2] and [24, Sect. 2.2].

We remark that in the special case of Toeplitz matrices generated by regular enough functions \(\omega \),

$$\begin{aligned} \lambda _k\left( T^{(n)}(\omega )\right) = \omega \left( \frac{k\pi }{d_n+1}\right) + O(d_n^{-1}) \qquad \text{ for } \text{ every } k=1,\ldots ,d_n, \end{aligned}$$

see [51] and [9]. In some sense, this justifies the informal meaning given above to the spectral symbol.

That said, unfortunately the discretization of a linear differential operator does not always possess a Toeplitz-like structure. Nevertheless, the GLT theory provides tools to prove the validity of (2.1) for more general matrix sequences. For a review of the GLT theory and its applications we mainly refer to [22, 23], and to [5, 6]. We conclude this subsection with two definitions.

Definition 2.2

(Essential range) Let \(\omega \) be a real valued measurable function and define the set \(R_\omega \subset {\mathbb {R}}\) as

$$\begin{aligned} t\in R_\omega \quad \Longleftrightarrow \quad m\left( \left\{ {\varvec{y}} \in D \, : \, \left| \omega ({\varvec{y}}) - t\right| < \epsilon \right\} \right)>0 \quad \forall \epsilon >0. \end{aligned}$$

We call \(R_\omega \) the essential range of \(\omega \). Notice that \(R_\omega \) is closed.

Definition 2.3

(Outliers) Given a matrix sequence \(\{X^{(n)}\}\) such that \(\left\{ X^{(n)}\right\} _{n} \sim _{\lambda } \omega \), if \(\lambda _k\left( X^{(n)}\right) \notin R_\omega \) we call it an outlier.

2.2 Monotone rearrangement

Dealing with a univariate and monotone spectral symbol \(\omega ({{\varvec{y}}}) = \omega (\theta )\) has several advantages. Unfortunately, in general \(\omega \) is multivariate or not monotone. Nevertheless, it is possible to consider a rearrangement \(\omega ^*: (0,1) \rightarrow (\inf R_\omega , \sup R_\omega )\) such that \(\omega ^*\) is univariate, monotone nondecreasing and still satisfies the limit relation (2.1). This can be achieved in the following way.

Definition 2.4

Let \(\omega : D \subset {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) be measurable. Define

$$\begin{aligned} \omega ^* : (0,1)\rightarrow R_\omega , \qquad \omega ^*(x) = \inf \left\{ t \in R_\omega \,:\, \phi _\omega (t)> x\right\} \end{aligned}$$
(2.3)

where

$$\begin{aligned} \phi _\omega : {\mathbb {R}}\rightarrow [0,1], \qquad \phi _\omega (t) := \frac{1}{m(D)}m \left( \left\{ {{\varvec{y}}}\in D \, : \, \omega ({{\varvec{y}}}) \le t \right\} \right) , \end{aligned}$$
(2.4)

and where, in case of bounded \(R_\omega \), we consider the extension \(\omega ^*: [0,1] \rightarrow R_\omega \).

In analysis, \(\omega ^*\) is called the monotone rearrangement (see [44, pg. 189]), while in probability theory it is called the (generalized) inverse distribution function (see [47, pg. 260]). For “historical” reasons we will use the analysts’ name; see [18, Definition 3.1 and Theorem 3.3] and [39], where the monotone rearrangement was first introduced in the context of spectral symbol functions. Clearly, \(\omega ^*\) is a.e. unique, univariate, and monotone increasing, which makes it a good choice for an equispaced sampling. On the other hand, \(\omega ^*\) may not have an analytical expression, or it may not be feasible to compute; therefore an approximation \(\omega ^*_r\) is often needed. In Algorithm 1 we summarize the steps presented in [22, Example 10.2] and [24, Sect. 3] in order to approximate the eigenvalues \(\lambda _k\left( X^{(n)}\right) \) by means of an equispaced sampling of the rearranged symbol \(\omega ^*\) (or its approximated version \(\omega ^*_r\)). For the sake of clarity and without loss of generality, we suppose \(D=[0,1]\times [-\pi ,\pi ]\) and \(\omega \) continuous. As a standard result on approximations of monotone rearrangements, it holds that \(\Vert \omega ^*_{r(n)} - \omega ^* \Vert _\infty \rightarrow 0\) as \(n\rightarrow \infty \), see [13, 45]. For a detailed survey about monotone rearrangements we refer to the recent work by G. Talenti [46].

Algorithm 1
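A minimal Python sketch of the procedure of Algorithm 1, under the assumptions above (sample \(\omega \) over an \(r\times r\) grid of D, sort the samples, interpolate), is the following; it is a sketch of the procedure of [22, 24], not the verbatim algorithm, and the symbol used in the example is hypothetical.

```python
# Sketch of the procedure summarized in Algorithm 1 (not the verbatim algorithm of
# [22, 24]): approximate the monotone rearrangement omega*_r of a continuous symbol
# omega on D = [0,1] x [-pi, pi] by sampling on an r x r grid, sorting the samples,
# and interpolating them piecewise linearly over an equispaced grid of [0,1].
import numpy as np

def rearrangement(omega, r):
    """Return a callable approximation omega*_r of the monotone rearrangement of omega."""
    x = np.linspace(0.0, 1.0, r)
    theta = np.linspace(-np.pi, np.pi, r)
    X, TH = np.meshgrid(x, theta, indexing="ij")
    samples = np.sort(omega(X, TH).ravel())      # nondecreasing rearrangement of the samples
    nodes = np.linspace(0.0, 1.0, samples.size)
    return lambda s: np.interp(s, nodes, samples)

# Hypothetical symbol, for illustration only.
omega = lambda x, th: (1.0 + x) * (2.0 - 2.0 * np.cos(th))
omega_star_r = rearrangement(omega, r=500)

# Equispaced sampling of omega*_r, used as a candidate approximation of the eigenvalues
# lambda_k(X^(n)) of a sequence with {X^(n)}_n ~_lambda omega (cf. Corollary 3.3).
d_n = 100
approx_eigs = omega_star_r(np.arange(1, d_n + 1) / (d_n + 1))
```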

3 Asymptotic spectral distribution

In this section we present the main results about the asymptotic spectral distribution of the eigenvalues of a matrix sequence with given spectral symbol. Since the proofs are rather technical, they are collected in Sect. A so as not to break the flow of the reading.

Theorem 3.1 connects the monotone rearrangement of the symbol function to the asymptotic eigenvalues distribution of a matrix sequence. Theorem 3.2 provides an analytic way to measure the maximum spectral relative error between two sequences of matrices, see Definition 3.1. Finally, Theorem 3.3 exploits the previous results to give an effective measure for the maximum spectral relative error that characterizes a discretization scheme, when approximating a linear self-adjoint differential operator.

3.1 Monotone rearrangement and spectral distribution

Let \(\{X^{(n)}\}_n\) be a matrix sequence such that \(\{X^{(n)}\}_n \sim _\lambda \omega \) and define the (discrete) eigenvalues counting function \(N(X^{(n)},\cdot ) : {\mathbb {R}}\rightarrow \{0,1,\ldots ,d_n\}\),

$$\begin{aligned} N\left( X^{(n)},t\right) := \left| \left\{ k=-d^-_n,\ldots ,-1, 1,\ldots ,d^+_n \, : \, \lambda _k \left( X^{(n)}\right) \le t \right\} \right| . \end{aligned}$$

Since the monotone rearrangement takes as argument values in [0, 1], it is helpful to define a new indexing \({\hat{k}}\) associated to \(X^{(n)}\) such that \({\hat{k}}/d_n \in [0,1]\). Let us set

$$\begin{aligned}&\vartheta _{X^{(n)}} : \left\{ - d^-_n\left( X^{(n)}\right) , \ldots ,-1,1,\ldots , d^+_n\left( X^{(n)}\right) \right\} \rightarrow \{1,\ldots , d_n\},\nonumber \\&{\hat{k}}=\vartheta _{X^{(n)}}(k):= {\left\{ \begin{array}{ll} k + d^-_n\left( X^{(n)}\right) +1 &{} \text{ if } k <0,\\ k + d^-_n\left( X^{(n)}\right) &{} \text{ if } k >0. \end{array}\right. } \end{aligned}$$
(3.1)

We have the following asymptotic relations.

Theorem 3.1

(Discrete Weyl’s law) Let \(\{X^{(n)}\}_n\) be a matrix sequence such that \(\{X^{(n)}\}_n \sim _\lambda \omega \). Then

$$\begin{aligned} \left\{ X^{(n)}\right\} _n \sim _\lambda \omega ^*(x), \quad x\in (0,1), \end{aligned}$$
(3.2)

and

$$\begin{aligned} \lim _{n \rightarrow \infty }\frac{N\left( X^{(n)},t\right) }{d_n} = \phi _\omega (t) \quad \text{ for } \text{ every } \text{ point } \text{ of } \text{ continuity } \text{ of } \phi _{\omega }. \end{aligned}$$
(3.3)

Suppose now that \(\phi _{\omega }\) is continuous or, equivalently, that

$$\begin{aligned} m\left( \left\{ {{\varvec{y}}}\,:\, \omega ({{\varvec{y}}})=t\right\} \right) =0 \quad \forall \, t \in {\mathbb {R}}. \end{aligned}$$

Let \({\hat{k}}={\hat{k}}(n)\) as in (3.1) be such that the sequence \(\{{\hat{k}}(n)/d_n\}_n\) converges from the left to \(x_0 \in (0,1)\), as \(n\rightarrow \infty \). Then, there exists the limit for the sequence \(\left\{ \lambda _{{\hat{k}}(n)}\left( X^{(n)}\right) \right\} _n\) and it holds that

$$\begin{aligned} \lim _{x\rightarrow x_0^-} \omega ^*\left( x\right) =\sup \left\{ t \in R_\omega \, : \, t\le \lim _{n \rightarrow \infty } \lambda _{{\hat{k}}(n)}\left( X^{(n)}\right) \right\} . \end{aligned}$$
(3.4)

In particular, if \(\omega ^*\) is continuous at \(x_0\),

$$\begin{aligned} \omega ^*\left( x_0\right) =\lim _{n \rightarrow \infty } \lambda _{{\hat{k}}(n)}\left( X^{(n)}\right) . \end{aligned}$$
(3.5)

Finally, if \(\lambda _{{\hat{k}}(n)}\left( X^{(n)}\right) \ge \inf \left( R_\omega \right) \) (\(\le \sup (R_\omega )\)) eventually, then (3.5) holds for \(x_0=0\) (\(x_0=1\)) as well.

Proof

See Sect. A. \(\square \)

In some sense, the limit relation (2.2) can be viewed as the strong law of large numbers for specially chosen sequences of dependent complex/real-valued random variables \(\left\{ \lambda _{k(n)}\left( X^{(n)}\right) \right\} _n\). See [26] for the link between the spectrum of Hermitian matrices and equally distributed sequences (in the sense of Weyl) as defined in [31, Definition 7.1], and [34] for a recent survey about equidistribution from a probabilistic point of view.

Corollary 3.1

With the same notations of Theorem 3.1 and assuming \(\phi _{\omega }\) continuous, it holds that

$$\begin{aligned} \lim _{n \rightarrow \infty }\frac{\left| \left\{ k=-d^-_n,\ldots ,-1,1,\ldots ,d^+_n \, : \, \lambda _k \left( X^{(n)}\right) \notin R_\omega \right\} \right| }{d_n} = 0, \end{aligned}$$

that is, the number of possible outliers is \(o(d_n)\). Moreover,

$$\begin{aligned} \lim _{n \rightarrow \infty }\frac{\left| \left\{ k=-d^-_n,\ldots ,-1,1,\ldots ,d^+_n \, : \, \lambda _k \left( X^{(n)}\right) \le t,\, \lambda _k \left( X^{(n)}\right) \in R_\omega \right\} \right| }{d_n} = \phi _\omega (t). \end{aligned}$$

Proof

See Sect. A. \(\square \)

Corollary 3.2

With the same notations of Theorem 3.1, assume that \(\phi _{\omega }\) is continuous and \(\omega ^*\) is absolutely continuous. Let \(\omega _n^*\) be the (pseudo) inverse function of \(\frac{N\left( X^{(n)}, \cdot \right) }{d_n}\); then \(\left\| \omega _n^* - \omega ^*\right\| _\infty \rightarrow 0\) as \(n\rightarrow \infty \). Let \(h : {\mathbb {N}}\rightarrow {\mathbb {R}}^+\) be such that

$$ \lim _{n \rightarrow \infty }h(d_n)=\infty , \qquad \lim _{n \rightarrow \infty }\frac{\left\| \omega _n^* - \omega ^*\right\| _\infty }{\frac{1}{h(d_n)}}=0. $$

Let \(g: (\inf R_\omega ,\sup R_\omega ) \rightarrow {\mathbb {R}}\) be a differentiable real function and let \(\{\hat{k}(n)\}_n\) be a sequence of integers such that

  1. (i)

    \(\frac{\hat{k}(n)}{d_n} \rightarrow x_0 \in [0,1]\);

  2. (ii)

    \(\lambda _{\hat{k}(n)+1}\left( X^{(n)}\right) > \lambda _{\hat{k}(n)}\left( X^{(n)}\right) \in (\inf R_\omega ,\sup R_\omega )\) eventually for \(n \rightarrow \infty \).

Then

$$ \lim _{n\rightarrow \infty } h(d_n)\left[ g\left( \lambda _{\hat{k}(n)+\left\lceil \frac{d_n}{h(d_n)}\right\rceil }\left( X^{(n)}\right) \right) - g\left( \lambda _{\hat{k}(n)}\left( X^{(n)}\right) \right) \right] = \lim _{x\rightarrow x_0} \left( g(\omega ^*(x))\right) ' $$

almost everywhere.

Proof

See Sect. A. \(\square \)

Corollary 3.3

Let \(\phi _{\omega }\) and \(\omega ^*\) be continuous, and suppose that \(R_\omega \) is bounded. Then, in the absence of outliers (eventually), the absolute error between a uniform sampling of \(\omega ^*\) and the eigenvalues of \(X^{(n)}\) converges to zero, namely

$$\begin{aligned} \max _{\begin{array}{c} k=-d^-_n,\ldots ,d^+_n\\ k\ne 0 \end{array}} \left\{ \left| \lambda _{k}\left( X^{(n)}\right) - \omega ^*\left( \frac{{\hat{k}}}{d_n+1}\right) \right| \right\} \rightarrow 0 \qquad \text{ as } n\rightarrow \infty . \end{aligned}$$

Proof

See Sect. A. \(\square \)

3.2 Local and maximum spectral relative errors

Exploiting the results obtained in the preceding subsection, we can now prescribe a way to measure the maximum relative error between two sequences of eigenvalues. Given two sequences of matrices \(\{X^{(n)}\}_n, \{Y^{(n)}\}_n\) of the same dimension \(d_n\), we will use the notation \({\hat{k}}\) as in (3.1) to have an index ranging over a common set, \({\hat{k}}\in \left\{ 1,\ldots , d_n \right\} \), for the eigenvalues associated with both \(X^{(n)}\) and \(Y^{(n)}\). Moreover, we define

$$\begin{aligned} \vartheta := \vartheta _{X^{(n)}}^{-1}\circ \vartheta _{Y^{(n)}}, \end{aligned}$$
(3.6)

where \(\vartheta _{(\cdot )}\) was introduced in (3.1) as well. Let us observe that, if \(\lambda _j\left( Y^{(n)}\right) \) is the j-th eigenvalue associated to \(Y^{(n)}\) and \(\lambda _i\left( X^{(n)}\right) \) is the i-th eigenvalue associated to \(X^{(n)}\), then \(\vartheta (j)=i\) if and only if \({\hat{j}}={\hat{i}}\). See Fig. 1.

Fig. 1

Example of the new index notation used to compare the eigenvalues of two matrices \(X^{(n)}\), \(Y^{(n)}\). In this case we set \(n=6\) and assume that all eigenvalues are non-null. Observe that the new labeling is made necessary by the fact that, in general, \(X^{(n)}\) and \(Y^{(n)}\) can have different indices of inertia. In this example, \(d^-_6(X^{(6)})=2\) and \(d^-_6(Y^{(6)})=4\). The function \(\vartheta \) in (3.6) allows one to pass from the original indices of \(Y^{(n)}\) to the corresponding indices of \(X^{(n)}\)

Definition 3.1

(Local and maximum spectral relative errors) Given two sequences of matrices \(\{X^{(n)}\}_n, \{Y^{(n)}\}_n\) of the same dimension \(d_n\), define the sequence

$$\begin{aligned} \mathscr {E}_n\left( X^{(n)}, Y^{(n)}\right) := \max _{{\hat{k}}=1,\ldots ,d_n}\left\{ \delta _{{\hat{k}}}^{(n)}\right\} , \end{aligned}$$

where

$$\begin{aligned} \delta _{{\hat{k}}}^{(n)}:={\left\{ \begin{array}{ll}\left| \frac{\lambda _{{\hat{k}}}\left( X^{(n)}\right) }{\lambda _{{\hat{k}}}\left( Y^{(n)}\right) }-1\right| &{} \text{ if } \lambda _{{\hat{k}}}\left( Y^{(n)}\right) \ne 0,\\ 0 &{} \text{ if } \lambda _{{\hat{k}}}\left( Y^{(n)}\right) =\lambda _{{\hat{k}}}\left( X^{(n)}\right) = 0,\\ \infty &{} \text{ if } \lambda _{{\hat{k}}}\left( X^{(n)}\right) \ne \lambda _{{\hat{k}}}\left( Y^{(n)}\right) =0. \end{array}\right. } \end{aligned}$$

We call \(\delta _{{\hat{k}}}^{(n)}\) the local spectral relative error, and we call

$$\begin{aligned} \mathscr {E}\left( X^{(n)}, Y^{(n)}\right) :=\limsup _{n\rightarrow \infty } \mathscr {E}_n\left( X^{(n)}, Y^{(n)}\right) \end{aligned}$$

the maximum spectral relative error (MSRE). When both the matrix sequences \(\{X^{(n)}\}_n\), \(\{Y^{(n)}\}_n\) are well understood from the context, with abuse of notation we will just write \(\mathscr {E}\); this applies in particular when we compare the eigenvalues of a discretized operator \(\mathscr {L}\,^{(n)}\) with the eigenvalues of the continuous operator \(\mathscr {L}\,\), whose first \(d_n\) eigenvalues can be arranged as the diagonal entries of a diagonal matrix.
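As a small computational aid (ours, not from the paper), note that the re-indexing (3.1) simply enumerates the nondecreasingly sorted spectrum from 1 to \(d_n\); a minimal Python sketch of Definition 3.1 then reads as follows.

```python
# Minimal sketch of Definition 3.1: the hat-index of (3.1) enumerates the sorted
# spectrum from 1 to d_n, so it suffices to sort both spectra and compare entrywise.
import numpy as np

def spectral_relative_errors(eig_X, eig_Y):
    """Local errors delta_k and E_n(X^(n), Y^(n)) from the (real) eigenvalues of X^(n), Y^(n)."""
    lam_X = np.sort(np.asarray(eig_X, dtype=float))   # hat-k = 1, ..., d_n
    lam_Y = np.sort(np.asarray(eig_Y, dtype=float))
    delta = np.empty_like(lam_X)
    for k, (lx, ly) in enumerate(zip(lam_X, lam_Y)):
        if ly != 0.0:
            delta[k] = abs(lx / ly - 1.0)
        else:
            delta[k] = 0.0 if lx == 0.0 else np.inf
    return delta, delta.max()

# E(X^(n), Y^(n)) is then the limsup of E_n along a sequence of increasing dimensions d_n.
```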

For brevity, let us set \(\kappa ^-:= \inf _{n}\left\{ d_n^-\left( Y^{(n)}\right) \right\} , \kappa ^+:= \sup _{n}\left\{ d_n^+\left( Y^{(n)}\right) \right\} \). Moreover, for any fixed \(x_0 \in [0,1]\) let us define

$$\begin{aligned}&F:= \left\{ \left\{ a_{m}\right\} _{m} \, : \, \exists \lim _{m \rightarrow \infty } a_{m} \; \text{ finite } \text{ or } \text{ infinite } \right\} ,\\&G_{x_0}:= \left\{ \left\{ \delta _{\hat{k}(n_j)}^{(n_j)} \right\} _{n_j} \, : \, \lim _{n_j\rightarrow \infty } \frac{\hat{k}(n_j)}{d_{n_j}}= x_0 \; \text{ eventually } \text{ by } \text{ passing } \text{ to } \text{ a } \text{ subsequence } \right\} ,\\&H_{x_0}:=\left\{ \lim _{n_j \rightarrow \infty } \delta _{k(n_j)}^{(n_j)} \, : \, \left\{ \delta _{\hat{k}(n_j)}^{(n_j)}\right\} _{n_j} \in F\cap G_{x_0} \right\} , \quad \sigma _{k}=\limsup _{n \rightarrow \infty }\left| \frac{\lambda _{\vartheta (k)}\left( X^{(n)}\right) }{\lambda _{k}\left( Y^{(n)}\right) }-1\right| . \end{aligned}$$

Theorem 3.2

Fix two sequences of matrices such that \(\{X^{(n)}\}_n \sim _\lambda \omega _1\) and \(\{Y^{(n)}\}_n \sim _{\lambda } \omega _2\). If

$$ m\left( \left\{ {{\varvec{y}}}\,:\, \omega _1({{\varvec{y}}})=t\right\} \right) = m\left( \left\{ \varvec{z}\,:\, \omega _2(\varvec{z})=t\right\} \right) = 0 \quad \forall \, t \in {\mathbb {R}}, $$

then

$$\begin{aligned} \mathscr {E}\left( X^{(n)},Y^{(n)}\right) \ge \max \left\{ \sup _{\begin{array}{c} x\in (0,1);\\ x:\omega _2^*(x)\ne 0 \end{array}} \left| \frac{\omega _1^*(x)}{\omega _2^*(x)} -1 \right| ; \sigma \right\} , \end{aligned}$$
(3.7)

where

$$\begin{aligned}&\sigma :={\left\{ \begin{array}{ll} 0 &{} \text{ if } \omega _2^*(x)\ne 0 \, \forall \, x\in [0,1],\\ \sup H_{x_0} &{} \text{ if } \exists \, x_0\in [0,1] \text{ such } \text{ that } \omega _2^*(x_0)=\omega _1^*(x_0)=0,\\ +\infty &{} \text{ if } \exists \, x_0\in [0,1] \text{ such } \text{ that } \omega _2^*(x_0)=0\ne \omega _1^*(x_0). \end{array}\right. } \end{aligned}$$

If \(\omega _2^*(x_0)=0\) and

$$\begin{aligned} \limsup _{n \rightarrow \infty }\left| \left| \frac{\lambda _{\vartheta (k(n))}\left( X^{(n)}\right) }{\lambda _{k(n)}\left( Y^{(n)}\right) }-1\right| - \sigma _{k(n)} \right| =0 \quad \text{ for } \text{ every } k(n) \text{ such } \text{ that } \lim _{n \rightarrow \infty }\frac{k(n)}{d_n}=0, \end{aligned}$$

then

$$ \sup H_{x_0}=\sup _{\begin{array}{c} k\in [\kappa ^-,\kappa ^+] \\ k\ne 0 \end{array}} \{\sigma _{k}\}. $$

Moreover, if \(\omega ^*_1, \omega ^*_2\) are continuous, \(R_{\omega _1}\) and \(R_{\omega _2}\) are bounded and both the sequences do not have outliers, then equality holds in (3.7).

Proof

See Sect. A. \(\square \)

3.3 Linear self-adjoint differential operators and eigenvalue distribution

The asymptotic distribution of the eigenvalues of partial differential operators on general manifolds has been widely studied and developed, see for example [32, 38] and the references therein. The topic is too vast to be covered properly here, therefore we will concentrate our examples on a couple of cases only: Sturm-Liouville operators for the one-dimensional case and elliptic self-adjoint operators for the multi-dimensional case, see Sect. 4.1. Nevertheless, the tools presented in this section can be applied to study how well a discretization scheme preserves the discrete spectrum of many classes of self-adjoint operators. The approach is the following: given an operator \(\mathscr {L}\,\) and its discretized version \(\mathscr {L}\,^{(n)}\), if \(\{h(d_n)\mathscr {L}\,^{(n)}\}_n \sim _{\lambda } \omega \) for a function \(h : {\mathbb {N}}\rightarrow {\mathbb {R}}\) (which depends on the dimension of the underlying space and on the highest order of derivatives involved), then study the limit of

$$\begin{aligned} \zeta _n(t)=\frac{N^{(n)}\left( \mathscr {L},t\right) }{d_n}:=\frac{\left| \left\{ k=-d^-_n,\ldots, -1, 1, \ldots, d^+_n \, : \, h(d_n)\lambda _k\left( \mathscr {L}\right) \le t\right\} \right| }{d_n}. \end{aligned}$$
(3.8)

We have the following result.

Theorem 3.3

Let \(\mathscr {L}\) be a self-adjoint linear operator and let \(\mathscr {L}\,^{(n)}\) be a \(d_n\times d_n\) matrix obtained from \(\mathscr {L}\) by a discretization scheme. Let \(d^-_n, d^+_n\) be the negative and nonnegative indices of inertia of \(\mathscr {L}\,^{(n)}\). Define

$$\begin{aligned} X^{(n)}:= \mathscr {L}\,^{(n)}, \qquad Y^{(n)}:=\mathop {\text {diag}}_{\begin{array}{c} k=-d^-_n,\ldots ,d^+_n\\ k\ne 0 \end{array}} \left\{ \lambda _{k}\left( \mathscr {L}\right) \right\} . \end{aligned}$$

Suppose that:

  1. (i)

    \(\left| \frac{\lambda _{\vartheta (k)}\left( \mathscr {L}^{(n)}\right) }{\lambda _{k}\left( \mathscr {L}\right) } -1\right| \rightarrow 0\) as \(n \rightarrow \infty \) for every \(k=k(n)\) such that \(\frac{k(n)}{d_n}\rightarrow 0\), where \(\vartheta \) is defined in (3.6);

  2. (ii)

    \(\{h(d_n)\mathscr {L}\,^{(n)}\}_n \sim _{\lambda } \omega \) for some fixed \(h : {\mathbb {N}}\rightarrow {\mathbb {R}}\);

  3. (iii)

    \(m\left( \left\{ {{\varvec{y}}}\, : \, \omega ({{\varvec{y}}})=t \right\} \right) =0\) for every \(t \in {\mathbb {R}}\), and \(R_\omega \) bounded;

  4. (iv)

    \(\lim _{n \rightarrow \infty } \zeta _n(t) = \zeta (t)\), where \(\zeta _n\) is defined in (3.8) and \(\zeta : {\mathbb {R}}\rightarrow [0,1]\) is continuous and surjective.

Then

$$\begin{aligned} \mathscr {E}=\mathscr {E}\left( X^{(n)}, Y^{(n)}\right) \ge \sup _{\begin{array}{c} x\in [0,1]\\ x\,:\, \zeta ^*(x)\ne 0 \end{array}} \left| \frac{\omega ^*(x)}{\zeta ^*(x)} -1\right| . \end{aligned}$$

Moreover, if \(\omega ^*\),\(\zeta ^*\) are continuous and \(h(d_n)\lambda _k\left( \mathscr {L}\,^{(n)} \right) \in R_\omega \) eventually, then equality holds.

Proof

See Sect. A. \(\square \)

The (normalized) Weyl function \(\zeta \) is known for many kinds of differential operators. See the next section for a couple of examples.

4 Numerical experiments

We will write \(\mathscr {L}\,^{(n,\eta )}\) to denote a \(d_n\times d_n\) square matrix which is the discretization of a linear differential operator \(\mathscr {L}\) by means of a numerical scheme of order of approximation \(\eta \). When the approximation order \(\eta \) is clear from the context, we will omit it. If the discretized operator \(\mathscr {L}\,^{(n)}\) is normalized by a constant depending on the mesh fineness parameter n, then we will denote it by \(\hat{\mathscr {L}\,}^{(n)}\). We will use the subscripts dir and BCs to denote a (discretized) linear differential operator with Dirichlet or generic boundary conditions, respectively. When it is necessary to highlight the dependency of the differential operator on a variable coefficient \(p({{\varvec{x}}})\), we will write it as a subscript. So, for example, the normalized discretization of a diffusion operator with Dirichlet BCs by means of the IgA scheme of order \(\eta \) will be denoted by

$$\begin{aligned} \hat{\mathscr {L}\,}^{(n,\eta )}_{\text {dir},p({{\varvec{x}}})}. \end{aligned}$$

All the computations are performed in MATLAB R2018b running on a desktop PC with an Intel i5-4460H @3.20 GHz CPU and 8GB of RAM.

4.1 Application to Euler-Cauchy differential operator

We begin our analysis with a toy-model example. In this subsection, the main objectives are:

  • to show numerical evidence of Theorem 3.3, i.e., that the monotone rearrangement \(\omega ^*(x)\) of the spectral symbol \(\omega ({{\varvec{y}}})\) measures the maximum spectral relative error \(\mathscr {E}\). See Sects. 4.1.1, 4.1.2 and 4.1.3;

  • to disprove that, in general, a uniform sampling of the spectral symbol \(\omega ({{\varvec{y}}})\) can provide an accurate approximation of the eigenvalues of the normalized and unnormalized discrete operators \(\hat{\mathscr {L}\,}^{(n)}\) and \(\mathscr {L}\,^{(n)}\), respectively. See Sect. 4.1.1;

  • to show that a discretization scheme can lead to \(\mathscr {E}=0\) if coupled with a suitable (possibly non-uniform) grid and an increasing order of approximation. See Sects. 4.1.2 and 4.1.3.

Let us fix \(\alpha >0\) and let us consider the following self-adjoint operator with Dirichlet BCs,

$$\begin{aligned} \mathscr {L}_{\text {dir},\alpha x^2}[u](x) := -\left( \alpha x^2 u'(x)\right) ', \qquad \text {dom}\left( \mathscr {L}_{\text {dir},\alpha x^2}\right) =W^{1,2}_0(1,\text {e}^{\sqrt{\alpha }}). \end{aligned}$$
(4.1)

The formal equation is an Euler-Cauchy differential equation and by means of the Liouville transformation

$$\begin{aligned} y(x)= \int _1^x \left( \sqrt{\alpha }t\right) ^{-1}dt, \end{aligned}$$
(4.2)

the operator (4.1) has the same spectrum as

$$\begin{aligned} \varDelta _{\text {dir},\alpha }[v](y):= -v''(y) + \frac{\alpha }{4}v(y), \qquad \text {dom}\left( \varDelta _{\text {dir},\alpha }\right) =W^{1,2}_0(0,1), \end{aligned}$$

which is a self-adjoint operator in Schroedinger form with constant potential \(V(y)\equiv \alpha /4\). For a general review, see for example [17, 21, 52]. It is clear that

$$\begin{aligned} \lambda _k \left( \mathscr {L}_{\text {dir},\alpha x^2}\right) = \lambda _k\left( \varDelta _{\text {dir},\alpha } \right) = \lambda _k\left( \varDelta _{\text {dir}} \right) +\frac{\alpha }{4} = k^2 \pi ^2 +\frac{\alpha }{4}\quad \text{ for } \text{ every } k\ge 1, \end{aligned}$$

where

$$\begin{aligned} \varDelta _{\text {dir}}[v](y):= -v''(y), \qquad \text {dom}\left( \varDelta _{\text {dir}}\right) =W^{1,2}_0(0,1). \end{aligned}$$

For later reference, notice that

$$\begin{aligned} \lim _{\alpha \rightarrow 0} \lambda _k \left( \mathscr {L}_{\text {dir},\alpha x^2}\right) = \lambda _k\left( \varDelta _{\text {dir}} \right) = k^2 \pi ^2 \quad \text{ for } \text{ every } k\ge 1, \end{aligned}$$
(4.3)

namely, the diffusion coefficient \(p(x)=\alpha x^2\) produces a constant shift by \(\alpha /4\) of the eigenvalues of the unperturbed Laplacian operator with Dirichlet BCs, i.e., \(\varDelta _{\text {dir}}\), and this shift vanishes as \(\alpha \rightarrow 0\).

We introduce the following definition.

Definition 4.1

(Numerical and analytic spectral relative errors) Let \(\mathscr {L}_{\text {dir},\alpha x^2}^{(n)}\) be the discrete differential operator obtained from (4.1) by means of a generic numerical discretization method. If

$$\begin{aligned} \left\{ (d_n+1)^{-2}\;\;\mathscr {L}_{\text {dir},\alpha x^2}^{(n)}\right\} _n \sim _{\lambda } \omega _\alpha (x,\theta ), \qquad (x,\theta ) \in [1,\text {e}^{\sqrt{\alpha }}]\times [-\pi ,\pi ], \end{aligned}$$

then fix \(n,n',r \in {\mathbb {N}}\), with \(n'\gg n\) and \(r=r(n)\ge n\) and compute the following quantities

$$\begin{aligned} \mathbf{err }_{\alpha ,k}^{(n)}= \left| \frac{\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}^{(n)}\right) }{\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}^{(n')}\right) } -1\right| , \qquad \mathbf{err }_{\alpha ,r,k}^{*,(n)}= \left| \frac{(d_n+1)^2\omega ^{*,(n)}_{\alpha ,r,k}}{\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}^{(n')}\right) } -1 \right| \end{aligned}$$

for \(k=1,\cdots , d_n\). Specifically, \(\omega ^{*,(n)}_{\alpha ,r,k}= \omega ^*_{\alpha ,r}\left( \frac{k}{d_n+1}\right) \), where \(\omega ^*_{\alpha ,r}\) is the (approximated) monotone rearrangement of the spectral symbol \(\omega _\alpha \) obtained by the procedure described in Algorithm 1. We call \(\mathbf{err }_{\alpha ,k}^{(n)}\) the numerical spectral relative error and \(\mathbf{err }_{\alpha ,r,k}^{*,(n)}\) the analytic spectral relative error. The difference with respect to Definition 3.1 is that here we use as comparison sequence the eigenvalues of the same discrete operator on a finer mesh, since in general we do not know the exact eigenvalues of the continuous operator, but we do know that \(\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}^{(n')}\right) \) converges to the exact eigenvalue \(\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}\right) \) as \(n'\rightarrow \infty \). We say that \(\omega _\alpha \) spectrally approximates the discrete differential operator \(\mathscr {L}_{\text {dir},\alpha x^2}^{(n)}\) if

$$\begin{aligned} \limsup _{n \rightarrow \infty }\; \left( \mathbf{err }_{\alpha ,r,k}^{*,(n)}\right) =0 \qquad \text{ for } \text{ every } \text{ fixed } k. \end{aligned}$$

4.1.1 Approximation by 3-points central FD method on uniform grid

In our example, if we apply the standard central 3-points FD scheme, see [43], then the sequence of the normalized discretization matrices \(\hat{\mathscr {L}}_{\text {dir},\alpha x^2}^{(n)}= (n+1)^{-2}\mathscr {L}_{\text {dir},\alpha x^2}^{(n)}\) of the operator (4.1) has spectral symbol

$$\begin{aligned} \omega _\alpha (x,\theta )= \frac{\alpha x^2}{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2}4\sin ^2\left( \frac{\theta }{2}\right) , \qquad \text{ with }\quad D=[1,\text {e}^{\sqrt{\alpha }}]\times [0,\pi ], \end{aligned}$$

see [22, Theorem 10.5]. Working with this toy-model problem using the 3-points central FD scheme provides us a further advantage, since we can analytically compute the monotone rearrangement \(\omega _\alpha ^*\), or at least a finer approximation than \(\omega ^*_{\alpha ,r}\) which does not depend on the extra parameter r and is less computationally expensive. Indeed, from Eq. (2.4) we have that

$$\begin{aligned} \phi _{\omega _\alpha } : \left[ 0,\frac{4\alpha \text {e}^{2\sqrt{\alpha }}}{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2}\right] \rightarrow \left[ 0,1\right] , \qquad \end{aligned}$$
(4.4)

where

$$\begin{aligned} \phi _{\omega _\alpha }(t) = \frac{1}{\pi \left( \text {e}^{\sqrt{\alpha }}-1\right) } m \left(\left\{ (x,\theta ) \in [1,\text {e}^{\sqrt{\alpha }}]\times [0,\pi ] \, : \, \frac{4\alpha x^2\sin ^2(\theta /2)}{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2} \le t \right\}\right) . \end{aligned}$$

After some analytical manipulation, \(\phi _{\omega _\alpha }\) can be expressed explicitly

$$\begin{aligned} \phi _{\omega _\alpha }(t) = \frac{1}{\pi \left( \text {e}^{\sqrt{\alpha }}-1\right) } \cdot {\left\{ \begin{array}{ll} \varPhi \left( t,\text {e}^{\sqrt{\alpha }}\right) - \varPhi \left( t,1\right) &{} \text{ if } t \in \left[ 0, \frac{4\alpha }{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2}\right] ,\\ \pi \left( \frac{\left( \text {e}^{\sqrt{\alpha }}-1\right) \sqrt{t}}{2\sqrt{\alpha }}-1\right) + \left[ \varPhi \left( t,\text {e}^{\sqrt{\alpha }}\right) - \varPhi \left( t,\frac{\left( \text {e}^{\sqrt{\alpha }}-1\right) \sqrt{t}}{2\sqrt{\alpha }}\right) \right] &{} \text{ if } t \in \left[ \frac{4\alpha }{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2}, \frac{4\alpha \text {e}^{2\sqrt{\alpha }}}{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2}\right] , \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} \varPhi (t,x) = \frac{\left( \text {e}^{\sqrt{\alpha }}-1\right) \sqrt{t}\log \left( 2\alpha x\left( \sqrt{1-\frac{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2}{4\alpha x^2}}\right) +1\right) }{\sqrt{\alpha }} + 2x\arcsin \left( \frac{\left( \text {e}^{\sqrt{\alpha }}-1\right) \sqrt{t}}{2\sqrt{\alpha }x}\right) . \end{aligned}$$

Since we have an analytical expression for \(\phi _{\omega _\alpha }(t)\), it is possible to compute a numerical approximation of its generalized inverse \(\omega _\alpha ^*\) over the uniform grid \(\left\{ \frac{k}{n+1} \right\} _{k=1}^n\), for example by means of a Newton method. This approximation of the monotone rearrangement does not depend on the extra parameter r: therefore, when we compute the analytic spectral relative error with respect to \(\omega _\alpha ^*\) we will write \(\mathbf{err }_{\alpha ,k}^{*,(n)}\) without the subscript r.
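The following Python sketch (ours) reproduces the kind of experiment behind Figs. 2 and 3, assuming the standard 3-points central scheme for \(-(p(x)u'(x))'\) with p evaluated at the grid midpoints (implementation details may differ from the scheme of [43] actually used here); the rearrangement is approximated by sampling and sorting, as in Algorithm 1.

```python
# Sketch (ours) of the experiment behind Figs. 2-3, assuming the standard 3-points
# central FD scheme for -(p(x) u'(x))' with p sampled at the midpoints of a uniform
# grid on [1, e^sqrt(alpha)] with Dirichlet BCs.
import numpy as np

alpha = 1.0
a, b = 1.0, np.exp(np.sqrt(alpha))
p = lambda x: alpha * x**2

n = 100
h = (b - a) / (n + 1)
x = a + h * np.arange(1, n + 1)                           # interior nodes
main = (p(x - h / 2) + p(x + h / 2)) / h**2
off = -p(x[:-1] + h / 2) / h**2
L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)    # discrete operator L^(n)
eigs = np.sort(np.linalg.eigvalsh(L))

# Approximation omega*_{alpha,r} of the monotone rearrangement by sampling and sorting.
r = 1000
X, TH = np.meshgrid(np.linspace(a, b, r), np.linspace(0.0, np.pi, r), indexing="ij")
samples = np.sort((p(X) * 4.0 * np.sin(TH / 2.0)**2 / (b - a)**2).ravel())
omega_star = lambda s: np.interp(s, np.linspace(0.0, 1.0, samples.size), samples)

k = np.arange(1, n + 1)
symbol_pred = (n + 1)**2 * omega_star(k / (n + 1))        # uniform sampling of the rearranged symbol
exact = np.pi**2 * k**2 + alpha / 4.0                     # exact eigenvalues of (4.1)

# For the normalized operator the absolute error tends to zero (Corollary 3.3), but the
# relative error at small fixed k stabilizes around c_{alpha,k} of (4.5).
print(np.abs(symbol_pred / exact - 1.0)[:5])
print(np.abs(eigs / exact - 1.0)[:5])                     # numerical relative error, for comparison
```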

Fig. 2

For \(\alpha =1\), comparison between the distribution of the first \(n=10^2\) eigenvalues of the discrete operator \(\mathscr {L}_{\text {dir},\alpha x^2}^{(n)}\) (red-dotted line) and the n equispaced samples of \((n+1)^2\omega ^*_{\alpha ,r}\) with \(r=10^3\) (blue-continuous line). The x-axis reports the quotient k/n, for \(k=1,\ldots ,n\). The superposition of the graphs is explained by Theorem 3.1 and the limit (3.5)

In Fig. 2 it is possible to check that an equispaced sampling of \((n+1)^2\omega ^*_{\alpha ,r}\) is asymptotically distributed exactly as the eigenvalues of the unnormalized discrete operator \(\mathscr {L}_{\text {dir},\alpha x^2}^{(n)}\). Indeed, \(\phi _{\omega _\alpha }\) is continuous and strictly monotone increasing, which implies that \(\omega _\alpha ^*\) is continuous: then relation (3.5) applies. Moreover, according to Eq. (4.3), we observe that

$$\begin{aligned}&\lim _{\alpha \rightarrow 0} \phi _{\omega _\alpha }(t) = \frac{2}{\pi }\arcsin \left( \frac{\sqrt{t}}{2}\right)&t\in [0,4],\\&\lim _{\alpha \rightarrow 0}\omega _\alpha ^* (x) = 4\sin ^2\left( \frac{\pi x}{2}\right)&x \in [0,1], \end{aligned}$$

which means that, as \(\alpha \rightarrow 0\), the monotone rearrangement \(\omega _\alpha ^*\) converges to the monotone rearrangement of the spectral symbol \(\omega \) which characterizes the differential operator \(\varDelta _{\text {dir}}\) discretized by means of a 3-points FD scheme. The eigenvalues of \((n+1)^{-2}\left( \varDelta _{\text {dir}}^{(n)}\right) \) are the exact sampling of \(\omega (\theta )=4\sin ^2\left( \theta /2\right) \) over the uniform grid \(\left\{ \frac{k\pi }{n+1} \right\} _{k=1}^n\), see [43, p. 154]. This asymptotic behaviour reflects what we already observed in (4.3).

All these remarks would suggest that \(\omega _\alpha ^*\), or equivalently \(\omega ^*_{\alpha ,r}\), spectrally approximates the normalized discrete operator \(\hat{\mathscr {L}}_{\text {dir},\alpha x^2}^{(n)}\). Unfortunately, this conjecture is partially disproved by Fig. 3: it shows the comparison between the graphs of the numerical spectral relative error \(\mathbf{err }_{\alpha ,k}^{(n)}\) and the analytic spectral relative error \(\mathbf{err }_{\alpha ,r,k}^{*,(n)}\), for several increasing values of the parameter r. We observe a discrepancy between the analytical prediction of the eigenvalue error \(\mathbf{err }_{\alpha ,r,k}^{*,(n)}\), for small \(k\ll n\), and the numerical relative error \(\mathbf{err }_{\alpha ,k}^{(n)}\).

Fig. 3

Comparison between the numerical spectral relative error \( \mathbf{err}_{\alpha,k}^{(n)}\) and the analytic spectral relative error \( \mathbf{err}_{\alpha,k,r}^{*,(n)}\) for increasing \(r=10^2,5\cdot 10^2,8\cdot 10^2\). The values of n and \(n'\) are fixed at \(10^2\) and \(10^4\), respectively, and \(\alpha =1\). The maximum discrepancy between the numerical relative error and the analytic relative errors is achieved for \(k=1\) in all three cases, and apparently it decreases as r increases

In particular, the maximum discrepancy is achieved at \(k=1\), for every r. The discrepancy apparently decreases as the number of grid points r increases, as was also observed in [24, Figure 48] for some test problems in the setting of Galerkin discretization by linear \(C^0\) B-splines. In that same paper, some plausible hypotheses and suggestions were advanced:

  • The discrepancy could depend on the fact that \(\omega ^*_{\alpha ,r}\) was used instead of \(\omega _{\alpha }^*\), in which case the discrepancy should tend to zero in the limit \(r\rightarrow \infty \), since \(\omega ^*_{\alpha ,r} \rightarrow \omega _{\alpha }^*\).

  • Numerical instability of the analytic relative error \(\mathbf{err }_{\alpha ,r,k}^{*,(n)}\) for small eigenvalues, [24, Remark 3.1].

  • Change the sampling grid into an “almost uniform” grid: see [24, Remark 3.2] for details.

The problem is that these hypotheses, which stem from numerical observations, cannot be validated: the descent to zero of the observed discrepancy as r increases is only apparent. Indeed, what happens is that, for every fixed k it holds

$$\begin{aligned} \left| \mathbf{err }_{\alpha ,r,k}^{*,(n)}\right| \rightarrow c_{\alpha ,k}= \frac{\frac{\alpha }{4}}{k^2\pi ^2+\frac{\alpha }{4}}>0 \quad \text{ as } r,n \rightarrow \infty , \end{aligned}$$
(4.5)

with \(c_{\alpha ,k}\) independent of n and \(n'\). This is the content of Proposition B.1 and Remark 2 in Sect. B, with \(p(x)=\alpha x^2,w(x)\equiv 1, q(x)\equiv 0\). We thus have a lower bound for the analytic spectral relative error which cannot be removed by refining the grid points. Of course, \(c_{\alpha ,k} \rightarrow 0\) as k increases. Those remarks are summarized in Table 1.

Table 1 For every fixed k and \(\alpha \), the analytic relative error \(\mathbf{err }_{\alpha ,k}^{*,(n)}\) converges to the lower bound \(c_{\alpha ,k}\) as n increases, where \(c_{\alpha ,k}\) is given in (4.5)
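The value of \(c_{\alpha ,k}\) can also be understood heuristically (the rigorous statement being Proposition B.1): from the explicit expression of \(\phi _{\omega _\alpha }\) one can check that \(\omega _\alpha ^*(x) = \pi ^2x^2 + o(x^2)\) as \(x\rightarrow 0^+\), so that, for every fixed k,

$$\begin{aligned} (n+1)^2\omega _\alpha ^*\left( \frac{k}{n+1}\right) \rightarrow k^2\pi ^2, \qquad \lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}^{(n')}\right) \rightarrow k^2\pi ^2+\frac{\alpha }{4}, \end{aligned}$$

and the analytic spectral relative error tends to \(\left| \frac{k^2\pi ^2}{k^2\pi ^2+\alpha /4}-1\right| = c_{\alpha ,k}\): the uniform sampling of the rearranged symbol recovers the principal part of the lowest eigenvalues, but not the constant shift \(\alpha /4\) coming from the potential in the Schroedinger form.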

The problem lies in the incorrect informal interpretation given to the limit relation in Definition 2.1, and suggested by Remark 1. Indeed, the asymptotic equality (2.1) tells us that

$$\begin{aligned} \left( \frac{k(n)}{n}, \lambda _{k(n)}\left( \hat{\mathscr {L}}_{\text {dir},\alpha x^2}^{(n)}\right) \right) \rightarrow \left( x, \omega ^*_\alpha (x) \right) \qquad \text{ as } n\rightarrow \infty \end{aligned}$$

for every k(n) such that \(k(n)/n\rightarrow x \in [0,1]\), see Theorem 3.1. Therefore, since \(\omega ^*_\alpha \) is continuous and \(R_\omega \) is bounded, it follows that

$$\begin{aligned} \left\| \omega ^{*,(n)}_{\alpha ,r,k}-\lambda _k\left( \hat{\mathscr {L}}\,_{\text {dir},\alpha x^2}^{(n)}\right) \right\| _\infty \rightarrow 0 \quad \text{ as } n\rightarrow \infty , \end{aligned}$$

by Corollary 3.3 and as observed for example in [22, Example 10.2 p. 198]. On the contrary, a uniform sampling of the symbol \(\omega \) does not necessarily provide an accurate approximation of the eigenvalues of the operator \(\hat{\mathscr {L}}\,_{\text {dir},\alpha x^2}^{(n)}\), in the sense of the relative error. The uniform sampling of the symbol works perfectly only for specific subclasses of discretization schemes and operators, but it fails in general.

As a last remark, an “almost uniform” grid does not exist either, not even in the asymptotic sense described in [24, Remark 3.2]. Knowing the exact sampling grid which guarantees that \(\omega ^*_\alpha \) spectrally approximates the discrete differential operator is equivalent to knowing the eigenvalue distribution of the original differential operator.

What we can do instead is to apply Theorem 3.3. With reference to (3.8), it is easy to prove that

$$\begin{aligned} \zeta _n(t)= \frac{\left| \left\{ k=1,\ldots ,n \, : \, \frac{\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}\right) }{n^2} \le t \right\} \right| }{n}\rightarrow \zeta (t)=\frac{\sqrt{t}}{\pi }, \end{aligned}$$

that is,

$$\begin{aligned} \zeta ^*(x)= \pi ^2x^2. \end{aligned}$$
(4.6)

For every fixed k it holds that \(\lambda _{k}\left( \mathscr {L}\,^{(n)}_{\text {dir},\alpha x^2}\right) \) converges to \(\lambda _{k}\left( \mathscr {L}_{\text {dir},\alpha x^2}\right) \), see [43], and by the continuity of \(\phi _{\omega _\alpha }\) and by Eq. (4.6), items (i)–(iv) of Theorem 3.3 are satisfied. Finally, since \(\hat{\mathscr {L}}_{\text {dir},\alpha x^2}^{(n)}\) does not have outliers by [40, Theorem 2.2], we conclude that

$$\begin{aligned} \mathscr {E}= \max _{x\in [0,1]} \left| \frac{\omega ^*_\alpha (x)}{x^2\pi ^2} -1\right| >0. \end{aligned}$$
(4.7)

The validity of (4.7) is checked numerically in Fig. 4 and Table 2.
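A quick numerical check of (4.7) can be carried out along the following lines (our sketch, using the sorted-sample approximation of \(\omega ^*_\alpha \) as above).

```python
# Quick numerical check of (4.7) (our sketch): approximate omega*_alpha by sampling
# and sorting, then take the maximum of |omega*_alpha(x) / (pi^2 x^2) - 1| over (0, 1].
import numpy as np

alpha, r = 1.2, 2000
a, b = 1.0, np.exp(np.sqrt(alpha))
X, TH = np.meshgrid(np.linspace(a, b, r), np.linspace(0.0, np.pi, r), indexing="ij")
samples = np.sort((alpha * X**2 * 4.0 * np.sin(TH / 2.0)**2 / (b - a)**2).ravel())
omega_star = lambda s: np.interp(s, np.linspace(0.0, 1.0, samples.size), samples)

xs = np.linspace(0.05, 1.0, 10**4)          # avoid x ~ 0, where the ratio tends to 1 anyway
rel = np.abs(omega_star(xs) / (np.pi**2 * xs**2) - 1.0)
print(xs[np.argmax(rel)], rel.max())        # location and value of the maximum, cf. Fig. 4 and Table 2
```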

Table 2 In this table we check numerically the validity of Theorem 3.3 for different values of \(\alpha \) and n
Fig. 4

For \(\alpha =1.2\) and \(n=5\cdot 10^3\), comparison between the eigenvalue distribution of the normalized discrete differential operator \(\hat{\mathscr {L}}_{\text {dir},\alpha x^2}^{(n)}\) and the exact eigenvalues of the differential operator \(\mathscr {L}_{\text {dir},\alpha x^2}\), normalized by \((n+1)^2\). The maximum spectral relative error \(\mathscr {E}\) is obtained for \(k_0\approx 3151\), which corresponds to the maximum of \(|\omega _{\alpha }^*(x)/x^2\pi ^2 -1|\), achieved at \({\bar{x}}\approx 0.6301 \approx k_0/n\). See Table 2

4.1.2 Discretization by \((2\eta +1)\)-points central FD method on non-uniform grid

Clearly, everything said in the preceding Sect. 4.1.1 remains valid even if we increase the order of accuracy of the FD method. Within this subsection we will make use of the FD scheme presented in Appendix C. The theoretical analysis of the scheme requires a proper standalone treatment, therefore we leave it to future work, and some of its properties are stated as conjectures in Conjecture C.1. Nevertheless, we introduce it here since it is helpful to the flow of our discussion and provides (numerical) evidence for our conclusions.

First of all, we can say that the spectral symbol \(\omega _{\alpha ,\eta }\) in (C.2) does not spectrally approximate the discrete differential operator \(\mathscr {L}_{\text {dir},\alpha x^2}^{(n,\eta )}\), in the sense of the relative error, for any \(\eta \ge 1\), as we have already seen in the previous subsection for \(\eta =1\).

What is interesting instead is to change the sampling grid and to increase the order of accuracy \(\eta \) of the FD discretization method. Indeed, as observed in (4.7), it is not possible to achieve \(\mathscr {E}=0\) if \(\omega _{\alpha ,\eta }^*(x) \ne x^2\pi ^2\). From (C.2), for every \(\eta \ge 1\) it is easy to check that \(\max _{(x,\theta )\in [a,b]\times [0,\pi ]}\omega _{\alpha ,\eta }(x,\theta )= \omega _{\alpha ,\eta }^*(1)\ne \pi ^2\), and so we do not obtain any improvement by just increasing the order of accuracy \(\eta \). On the other hand, observe that if we fix a new sampling grid \(\left\{ {\bar{x}}_j\right\} _{j=1}^n = \left\{ \tau (x_j)\right\} _{j=1}^n\), with \(\tau : [1,\text {e}^{\sqrt{\alpha }}]\rightarrow [1,\text {e}^{\sqrt{\alpha }}]\) a diffeomorphism, then

  • from Corollary C.1 and (C.2),

    $$\begin{aligned} \lim _{\eta \rightarrow \infty }\omega _{\alpha ,\eta }(x,\theta ) = \frac{\alpha \tau (x)^2}{\left( \tau '(x)\right) ^2\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2} \theta ^2 \qquad \text{ for } \text{ every } (x,\theta ) \in [1,\text {e}^{\sqrt{\alpha }}]\times [0,\pi ]; \end{aligned}$$
  • defining \(\tau _2(y) = \text {e}^{\sqrt{\alpha }y}\), then \(\frac{\alpha \tau _2(y)^2}{\left( \tau _2'(y)\right) ^2}\equiv 1\).

In some sense, the spectral symbol \(\omega _{\alpha ,\eta }\) suggests that we change the uniform grid \(\left\{ x_j\right\} _{j=1}^n \subset [1,\text {e}^{\sqrt{\alpha }}]\) by means of the diffeomorphism induced by the Liouville transformation. Indeed, from (4.2) we have that

$$\begin{aligned} y(x)= \frac{\log (x)}{\sqrt{\alpha }} \quad \text{ for } x \in [1,\text {e}^{\sqrt{\alpha }}], \qquad x(y)= \text {e}^{\sqrt{\alpha }y} \quad \text{ for } y \in [0,1], \end{aligned}$$

and therefore we can compose a \(C^\infty \)-diffeomorphism \(\tau : [1,\text {e}^{\sqrt{\alpha }}]\rightarrow [1,\text {e}^{\sqrt{\alpha }}]\) such that

$$\begin{aligned} \tau : [1,\text {e}^{\sqrt{\alpha }}] \xrightarrow []{\tau _1} [0,1] \xrightarrow []{\tau _2} [1,\text {e}^{\sqrt{\alpha }}], \qquad \tau _1(x) = \frac{1}{\text {e}^{\sqrt{\alpha }}-1}\left( x-1\right) , \qquad \tau _2(y)= \text {e}^{\sqrt{\alpha }y}. \end{aligned}$$
(4.8)

The new non-uniform grid is then given by

$$\begin{aligned} \left\{ {\bar{x}}_j\right\} _{j=1}^n = \left\{ \tau (x_j)\right\} _{j=1}^n, \end{aligned}$$
(4.9)

and it holds that

$$\begin{aligned} \lim _{\eta \rightarrow \infty }\omega _{\alpha ,\eta }^*(x) =\pi ^2 x^2 \qquad x \in [0,1], \end{aligned}$$

that is exactly (4.6). Since the discretization scheme is convergent, for every fixed k the local spectral relative error \(\delta _k^{(n)}\) (see Definition 3.1) converges to zero as \(n \rightarrow \infty \), and by [40, Theorem 2.2] it would hold that \(\lambda _k \left( \mathscr {L}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )}\right) \in R_{\omega _{\alpha ,\eta }}\) for every \(k=1,\ldots , n\) and for every fixed \(\eta \ge 1\). From these remarks, if Conjecture C.1 were proved to be true, then we would be able to apply Theorem 3.3 and it would follow that

$$\begin{aligned} \mathscr {E}=\mathscr {E}(\eta ) = \max _{x\in [0,1]}\left| \frac{\omega _{\alpha ,\eta }^*(x)}{\pi ^2 x^2} -1 \right| \rightarrow 0 \qquad \text{ as } \eta \rightarrow \infty . \end{aligned}$$

See Fig. 5 and Table 3.
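A minimal sketch (ours) of the grid change (4.8)–(4.9), including a numerical check of the identity \(\alpha \tau (x)^2/\tau '(x)^2 \equiv (\text {e}^{\sqrt{\alpha }}-1)^2\) that makes the limit symbol collapse to \(\theta ^2\) and hence \(\omega ^*_{\alpha ,\eta }\) to \(\pi ^2x^2\), is the following.

```python
# Sketch of the non-uniform grid (4.9) induced by the Liouville transformation (4.8),
# together with a numerical check that alpha*tau(x)^2 / tau'(x)^2 == (e^sqrt(alpha)-1)^2,
# which is what makes the limit symbol collapse to theta^2 and omega* to pi^2 x^2.
import numpy as np

alpha = 1.0
a, b = 1.0, np.exp(np.sqrt(alpha))

tau1 = lambda x: (x - a) / (b - a)                  # [1, e^sqrt(alpha)] -> [0, 1]
tau2 = lambda y: np.exp(np.sqrt(alpha) * y)         # [0, 1] -> [1, e^sqrt(alpha)]
tau  = lambda x: tau2(tau1(x))
dtau = lambda x: np.sqrt(alpha) / (b - a) * tau(x)  # tau'(x), computed analytically

n = 10
x_uniform = a + (b - a) * np.arange(1, n + 1) / (n + 1)   # uniform grid on [a, b]
x_bar = tau(x_uniform)                                     # non-uniform grid (4.9)

check = alpha * tau(x_uniform)**2 / dtau(x_uniform)**2
print(np.allclose(check, (b - a)**2))                      # True: the coefficient is constant
```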

Fig. 5

Graphical comparison between the eigenvalue distribution of the normalized discrete differential operators \(\hat{\mathscr {L}}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )}\) obtained by means of \((2\eta +1)\)-points central FD discretization on uniform (\(\tau (x)=x\)) and non-uniform grids (\(\tau \) as in (4.8)). The parameters \(\alpha \) and n are fixed, with \(\alpha =1\) and \(n=10^3\), while \(\eta \) changes. Let us observe that in Fig. 5c,d, i.e., in the case of central FD discretization on the non-uniform grid given by (4.9), the graph of the eigenvalue distribution seems to converge uniformly to the graph of the exact eigenvalues \((n+1)^{-2}\lambda _k\), as \(\eta \) increases. The same phenomenon does not happen in the case of central FD discretization on a uniform grid, as is clear from Fig. 5a,b. See Table 3 for a numerical comparison of the maximum spectral relative errors

Table 3 Comparison between the maximum spectral relative errors of the discrete differential operators \(\mathscr {L}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )}\) obtained by means of \((2\eta +1)\)-points central FD discretization on uniform (\(\tau (x)=x\)) and non-uniform grids (\(\tau \) as in (4.8)), for increasing values of n and \(\eta \)

4.1.3 IgA discretization by B-spline of degree \(\eta \) and smoothness \(C^{\eta -1}\)

In this subsection we continue our analysis in the IgA framework; see Appendix D, where we present some known results that are useful for our analysis. Here we just collect all the numerical results of the tests, which again confirm what was observed in Sects. 4.1.1 and 4.1.2. The only difference lies in the fact that we removed the largest eigenvalues of the discrete operator \(\mathscr {L}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )} \). This is because the IgA discretization suffers from a fixed number of outliers, which depends on the degree \(\eta \) and is independent of n, see [15, Chapter 5.1.2 p. 153]. So, we consider only the eigenvalues \(\lambda _k\left( \mathscr {L}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )}\right) \) which belong to \(R_{\omega _{\alpha ,\eta }}\).

In Fig. 6 we compare the graphs of the eigenvalue distributions between the discrete eigenvalues \(\lambda _k\left( \mathscr {L}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )}\right) \) and the exact eigenvalues \(\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}\right) \), for different values of \(\eta \) on uniform (\(\tau (x)=x\)) and non-uniform grids (\(\tau \) as in (4.8)). They line up with the numerics of Table 4: if the sampling grid is given by (4.9), the maximum spectral relative error \(\mathscr {E}\) decreases as the order of approximation increases.

Table 4 Comparison between the maximum spectral relative errors of the discrete differential operators \(\mathscr {L}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )}\) obtained by means of IgA discretization of order \(\eta \) on uniform (\(\tau (x)=x\)) and non-uniform grids (\(\tau \) as in (4.8)), for increasing values of \(\eta \) and n
Fig. 6

Graphical comparison between the eigenvalue distribution of the normalized discrete differential operators \(\hat{\mathscr {L}}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )}\) obtained by means of IgA discretization of order \(\eta \) on uniform (\(\tau (x)=x\)) and non-uniform grids (\(\tau \) as in (4.8)). The parameters \(\alpha \) and n are fixed, with \(\alpha =1\) and \(n=10^2\), while \(\eta \) changes. Let us observe that in Fig. 6c, d, i.e., in the case of IgA discretization on the non-uniform grid given by (4.9), the graph of the eigenvalue distribution seems to converge uniformly to the graph of the exact eigenvalues \((n+1)^{-2}\lambda _k\), as \(\eta \) increases. The same phenomenon does not happen in the case of IgA discretization on a uniform grid, as is clear from Fig. 6a, b. Let us notice that we did not take into consideration the outliers, see Definition 2.3

4.2 d-dimensional Dirichlet Laplacian

The results of Sect. 3 can be generalized to study the maximum spectral relative error between more general linear self-adjoint differential operators and the numerical schemes implemented for their discretization. Let us sketch the ideas with a simple example.

Let \(\varDelta : W_0^{1,2}\left( [0,1]^d\right) \rightarrow \text {L}^2\left( [0,1]^d\right) \) be the second order elliptic operator defined by the formal equation

$$\begin{aligned} \varDelta [u]({{\varvec{x}}}) := - \sum _{j=1}^d\frac{\partial ^2 u}{\partial x_j^2}({{\varvec{x}}}), \qquad [0,1]^d \subset {\mathbb {R}}^d. \end{aligned}$$

It is well-known that it has empty essential spectrum and that

$$\begin{aligned} \lambda _k\left( \varDelta \right)&= \lambda _{k_1\cdots k_d}\left( \varDelta \right) =\sum _{j=1}^d \pi ^2k_j^2 \quad \text{ for } k_1,\ldots ,k_d\ge 1,\\ \zeta (t)&=\lim _{n \rightarrow \infty }\frac{\left| \left\{ k=1,\ldots ,n^d \, : \, \frac{\lambda _k\left( \varDelta \right) }{(n+1)^2}\le t\right\} \right| }{n^d} = c_d\left( \frac{t}{4\pi ^2}\right) ^{\frac{d}{2}}, \quad t \in [0,4\pi ^2/(c_d)^{2/d}] , \end{aligned}$$

where \(c_d\) is the volume of the unit ball in \({\mathbb {R}}^d\), see [32]. Define

$$\begin{aligned} Y^{(n)}:= (n+1)^{-2}\mathop {\text {diag}}_{k=1,\ldots ,n^d}\left\{ \lambda _{k}\left( \varDelta \right) \right\} . \end{aligned}$$

Then it holds that \(\left\{ Y^{(n)}\right\} _n \sim _{\lambda } \zeta ^*(x) = 4\pi ^2\left( \frac{x}{c_d}\right) ^{2/d}\), \(x\in [0,1]\).

On the other hand, a discretization of the operator \(\varDelta \) by means of separation of variables and the classic equispaced \((2d+1)\)-points FD scheme leads to

$$\begin{aligned} \varDelta ^{(n)}= (n+1)^2\sum _{k=1}^d \underbrace{I^{(n)}\otimes \cdots \otimes I^{(n)}}_{k-1}\otimes T^{(n)}\left( 2-2\cos (\theta _k)\right) \otimes \underbrace{I^{(n)}\otimes \cdots \otimes I^{(n)}}_{d-k}, \end{aligned}$$

for \(\theta _k \in [0,\pi ]\), where \(I^{(n)}\) is the \(n\times n\) identity matrix and \(\otimes \) is the Kronecker product; the resulting matrix has dimension \(d_n = n^d\). It holds that

$$\begin{aligned} \left\{ (n+1)^{-2}\varDelta ^{(n)}\right\} _n=\left\{ {\hat{\varDelta }}^{(n)}\right\} _n \sim _\lambda \omega \left( {\varvec{\theta }}\right) = \sum _{k=1}^d (2-2\cos (\theta _k)) \end{aligned}$$

where

$$\begin{aligned} {\varvec{\theta }}=(\theta _1,\ldots ,\theta _d)\in [0,\pi ]^d, \end{aligned}$$

see [23, Chapter 7.3]. Since the discretization scheme is convergent, for every fixed k the local spectral relative error \(\delta _k^{(n)}\) (see Definition 3.1) converges to zero as \(n \rightarrow \infty \), and by [40, Theorem 2.2] it holds that \(\lambda _k \left( {\hat{\varDelta }}^{(n)}\right) \in R_{\omega }\) for every k and n. Moreover, it is immediate to check that both \(\omega ^*,\zeta ^*\) are continuous, therefore we can apply Theorem 3.3 to get the maximum spectral relative error:

$$\begin{aligned} \mathscr {E}=\sup _{x\in (0,1]} \left| \frac{\omega ^*(x)}{\zeta ^*(x)} -1\right| \ge \left| \frac{\omega ^*(1)}{\zeta ^*(1)} -1\right| = \left| \frac{4d}{\frac{4\pi ^2}{(c_d)^{2/d}}} -1\right| >0. \end{aligned}$$
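A small Python sketch (ours) of the Kronecker construction for \(d=2\), together with the numerical value of the lower bound above (for \(d=2\), \(c_2=\pi \)), is the following.

```python
# Sketch of the Kronecker construction of the normalized discrete Laplacian for d = 2,
# and of the lower bound |omega*(1)/zeta*(1) - 1| from Theorem 3.3 (here c_2 = pi).
import numpy as np

d, n = 2, 30
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # T^(n)(2 - 2cos(theta))
I = np.eye(n)

# hat-Delta^(n) = sum_k I x ... x T x ... x I   (T in the k-th slot)
hat_Delta = np.kron(T, I) + np.kron(I, T)
eigs = np.sort(np.linalg.eigvalsh(hat_Delta))

# The eigenvalues are exactly sums of samples of omega(theta) = sum_k (2 - 2cos(theta_k)).
th = np.arange(1, n + 1) * np.pi / (n + 1)
exact = np.sort((2 - 2 * np.cos(th))[:, None] + (2 - 2 * np.cos(th))[None, :], axis=None)
print(np.max(np.abs(eigs - exact)))                       # ~ machine precision

# Lower bound on the maximum spectral relative error from the last displayed formula.
c_d = np.pi
print(abs(4 * d / (4 * np.pi**2 / c_d**(2.0 / d)) - 1))   # |2/pi - 1| ~ 0.3634
```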

5 Conclusions

Given a differential operator \(\mathscr {L}\) discretized by means of a numerical scheme, the knowledge of the spectral symbol \(\omega \) provides a way to measure how far the spectrum of the discretized operator is from a uniform approximation of the whole spectrum of the original differential operator \(\mathscr {L}\). Moreover, the symbol itself gives insights on how to build a discretization scheme such that the discrete operator benefits from a spectrum that converges uniformly to the spectrum of the continuous operator, i.e., such that \(\mathscr {E}=0\), see (1.1). This is crucial in some engineering applications, and our results improve those in the recent review paper by T. J. R. Hughes and coauthors.

In light of the proposed analysis, it becomes a priority to devise new specific discretization schemes with mesh-dependent order of approximation which guarantee a good balance between convergence to zero of the maximum spectral relative error and computational costs.

Finally, other applications of the spectral symbol related to the computation of the maximum spectral relative error have recently been addressed in the setting of graphs, for the approximation of PDEs; see [1].