1 Introduction

In 1949, Love first investigated a mathematical model describing the capacity of a circular plane condenser consisting of two identical coaxial discs placed at a distance q and having a common radius r. In his paper [9], he proved that the capacity of each disc is given by:

$$ C= \frac{r}{\pi} {\int}_{-1}^{1} f(x) dx, $$

where f is the solution of the following integral equation of the second kind:

$$ f(y)\pm \frac{1}{\pi}{\int}_{-1}^{1} \frac{\omega^{-1}}{(x-y)^{2}+\omega^{-2}} f(x) dx=1 $$
(1.1)

with ω⁻¹ = q/r a real positive parameter. Then, he proved that (1.1) has a unique, continuous, real, and even solution which can be written analytically as:

$$ f(y)=1+\sum\limits_{j=1}^{\infty} (\mp 1)^{j} {\int}_{-1}^{1} K_{j}(x,y) dx, $$

where the iterated kernels are given by:

$$ K_{1}(x,y) = \frac{1}{\pi} \frac{\omega^{-1}}{(x-y)^{2}+\omega^{-2}}, \qquad K_{j}(x,y) = {\int}_{-1}^{1} K_{j-1}(x,s)\, K_{1}(s,y)\, ds, \quad j=2,3,\dots. $$
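For illustration, the Neumann series above can be summed numerically by the fixed-point iteration \(f \leftarrow 1 \mp \frac{1}{\pi}\int_{-1}^{1} K_{1}(x,\cdot)f(x)\,dx\), discretized with a Gauss-Legendre rule. The following Python sketch is purely illustrative (it is not the method proposed in this paper) and converges only for moderate values of ω, e.g., the classical case ω = 1:

```python
import numpy as np

def love_neumann(omega, sign=+1.0, n=64, iters=200):
    # Sum Love's Neumann series via the fixed-point iteration
    # f <- 1 -/+ (1/pi) * int_{-1}^{1} K_1(x, .) f(x) dx,
    # discretized on an n-point Gauss-Legendre grid.
    x, w = np.polynomial.legendre.leggauss(n)
    K1 = (1.0 / omega) / ((x[:, None] - x[None, :]) ** 2 + omega ** -2) / np.pi
    f = np.ones(n)
    for _ in range(iters):
        f = 1.0 - sign * K1 @ (w * f)   # sign = +1 is the "+" case in (1.1)
    return x, f

x, f = love_neumann(omega=1.0)          # classical case omega = 1
```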

From a numerical point of view, the methods [7, 8, 12, 15, 17] developed for the most interesting case (i.e., when ω⁻¹ → 0) followed the very first methods [4, 5, 16, 19, 20], as well as the more recent ones [11], proposed for the case ω = 1.

If ω⁻¹ → 0, the kernel function is “close” to being singular on the bisector x = y. Kernels of this kind belong to the class of so-called nearly singular kernels. Moreover, Phillips noted in [16] that:

$$ \frac{1}{\pi}{\int}_{-1}^{1} \frac{\omega^{-1}}{(x-y)^{2}+\omega^{-2}} f(x) dx \to f(y) \qquad \text{if} \qquad \omega^{-1} \to 0. $$
(1.2)

Hence, for ω sufficiently large, when the “−” sign is considered, the left-hand side of (1.1) becomes approximately zero, which cannot match the right-hand side of (1.1).
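This pathology is easy to observe numerically. For f ≡ 1, the integral in (1.2) has the closed form \(\frac{1}{\pi}[\arctan(\omega(1-y))+\arctan(\omega(1+y))]\), which tends to 1 for every |y| < 1 as ω⁻¹ → 0, while a standard Gauss-Legendre rule applied directly to the nearly singular integrand loses all accuracy. A minimal Python sketch, given only for illustration:

```python
import numpy as np

def exact(y, omega):
    # Closed form of (1/pi) * int_{-1}^{1} omega^{-1} / ((x-y)^2 + omega^{-2}) dx
    return (np.arctan(omega * (1 - y)) + np.arctan(omega * (1 + y))) / np.pi

def naive_gauss(y, omega, n):
    # n-point Gauss-Legendre applied directly to the nearly singular integrand
    x, w = np.polynomial.legendre.leggauss(n)
    return np.sum(w / omega / ((x - y) ** 2 + omega ** -2)) / np.pi

y = 0.3
for omega in (1e1, 1e2, 1e3, 1e4):
    e, a = exact(y, omega), naive_gauss(y, omega, 128)
    print(f"omega={omega:8.0f}  exact={e:.10f}  naive Gauss error={abs(a - e):.1e}")
```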

In [12], the authors presented a numerical approach based on a suitable transformation which moves the poles x = y ± iω⁻¹ away from the real axis. The numerical method produces very accurate results when ω⁻¹ is not too small, but the results become poor as ω⁻¹ → 0.

Then, in order to obtain satisfactory errors in this latter case too, in [15] the author proposed to dilate the integration interval and to decompose it into N subintervals. The equation was thus reduced to an equivalent system of N integral equations, and a Nyström method based on a Gauss-Legendre quadrature formula was proposed for its numerical approximation. The approach achieves a satisfactory order of convergence even if ω⁻¹ is small. However, the dimension of the structured linear system that one needs to solve grows very large as ω⁻¹ decreases.

In [7], the authors improved the results given in [15] by using the same transformation as in [8], which takes into account the behavior of the unknown function shown in (1.2). Then, they followed the approach given in [15]; i.e., they wrote the integral as the sum of m new integrals, each approximated by means of an n-point Gauss-Legendre quadrature rule. In this way, they obtained a linear system of size nm that, multiplied by suitable diagonal matrices, is equivalent to a new linear system solved by a preconditioned conjugate gradient method, the matrix of coefficients being symmetric, positive definite, and having a Toeplitz block structure.

In this paper, we consider the more general equation:

$$ f(y)+\mu {\int}_{-1}^{1} \frac{\omega^{-1}}{(x-y)^{2}+\omega^{-2}} f(x) w(x) dx=g(y), \qquad |y|<1, $$
(1.3)

where \(w(x) = v^{\alpha,\upbeta}(x)=(1-x)^{\alpha}(1+x)^{\upbeta}\), α,β > −1, f is the unknown function, g is a known right-hand side, \(\mu \in \mathbb{R}\), and \(0 < \omega \in \mathbb{R}\).

Such an equation includes (1.1) (in the case when g ≡ w ≡ 1 and \(\mu =\pm \frac {1}{\pi }\)) and, at the same time, the presence of the weight w covers the case when the unknown function has algebraic singularities at the endpoints of [−1,1]. Hence, the method we propose and the theoretical results we prove apply not only to the special case of the Love equation but also to the general one in which the involved functions are singular at ±1. This general case could be of interest in the future for other physical models.

The method we propose is a Nyström method based on a mixed quadrature formula. This is a product rule whose coefficients are computed by using a quadrature scheme. In fact, following an idea presented in [2, 15], we approximate such coefficients by using a “dilation” quadrature formula that we prove to be stable and convergent. This idea, which consists of a preliminary dilation of the domain that in some sense “relaxes” the pathological behavior of the kernel of the integral, allows us to obtain very accurate results when ω is large, better than those in [7, 15]. The proposed method, whose convergence and stability are proved in suitable weighted spaces, leads to a well-conditioned linear system whose dimension is greatly reduced w.r.t. the ones involved in [7, 15]. The size of the system we solve does not depend on the magnitude of the parameter ω, and the error is of the order of the best polynomial approximation of the unknown function, independently of the value of ω. Moreover, we underline that, mutatis mutandis, the numerical approach could be applied to different Fredholm integral equations with other types of “nearly singular” kernels.

The outline of the paper is as follows. In Section 2, we introduce the functional spaces in which we analyze our method. In Section 3, we recall some well-known quadrature rules (Section 3.1), study a “dilation” quadrature formula (Section 3.2), and propose a new mixed quadrature scheme (Section 3.3) which is used in the Nyström method presented in Section 4. Section 5 shows the efficiency of the proposed procedure by means of several numerical tests, and finally, Section 6 contains the proofs of our main results.

2 Functional spaces and notations

Let us consider the following Jacobi weight function with parameters γ,δ ≥ 0

$$ \sigma(x):=v^{\gamma,\delta}(x)=(1-x)^{\gamma} (1+x)^{\delta}, $$
(2.1)

and let us define the space of weighted continuous functions:

$$ C_{\sigma}=\left\{f\in C({(-1,1)}): \lim_{x\rightarrow \pm 1^{\mp}}(f \sigma)(x)=0\right\}, $$

in the case when γ,δ > 0. In the case γ = 0 (respectively, δ = 0), Cσ consists of all functions which are continuous on (− 1,1] (respectively, [− 1,1)) and such that \(\displaystyle \lim _{x\rightarrow -1^{+}}(f \sigma )(x)=0\) \(\left (\text{respectively, } \displaystyle \lim _{x\rightarrow 1^{-}}(f \sigma )(x)=0\right )\). Moreover, if γ = δ = 0, we set \(C_{\sigma }=C^{0}([-1,1])\). We equip the space Cσ with the weighted uniform norm

$$ \|f\|_{C_{\sigma}}=\|f\sigma\|_{\infty}=\max_{x\in [-1,1] }|(f\sigma)(x)| $$

and we remark that Cσ endowed with such a weighted norm is a Banach space.

For smoother functions, we introduce the following Sobolev-type space

$$ \mathscr{W}^{r}_{\sigma}=\left\{f \in C_{\sigma}: \left\|f^{(r)}\varphi^{r} \sigma\right\|_{\infty} <\infty\right\}, $$

where the superscript (r) denotes the r th derivative of the function f with r a positive integer and \(\varphi (x)=\sqrt {1-x^{2}}\). We equip \({\mathscr{W}}^{r}_{\sigma }\) with the norm:

$$ \|f\|_{\mathscr{W}^{r}_{\sigma}}=\|f\sigma\|_{\infty}+\left\|f^{(r)}\varphi^{r} \sigma\right\|_{\infty}. $$

Finally, for any \(p \in \mathbb {N}_{0} \cup \{ \infty \}\), we denote by Cp([− 1,1]) the set of all continuous functions having p continuous derivatives.

Throughout the whole paper, we will denote by \({\mathscr{C}}\) a positive constant which will have different meanings in different formulas. We will write \({\mathscr{C}} \neq {\mathscr{C}}(a,b,\ldots )\) in order to say that \({\mathscr{C}}\) is a positive constant independent of the parameters a,b,…, and \({\mathscr{C}} = {\mathscr{C}}(a,b,\ldots )\) to say that \({\mathscr{C}}\) depends on a,b,…. Moreover, if A,B > 0 are quantities depending on some parameters, we write \(A \sim B,\) if there exists a constant \(0<{\mathscr{C}}\neq {\mathscr{C}}(A,B)\) s.t. \(\frac {B}{{\mathscr{C}}} \leq A \leq {\mathscr{C}} B\).

3 Integration formulae

3.1 Classical quadrature formulae

In this subsection, we recall two well-known quadrature rules [10] and mention the related convergence results, which will be essential for our aims.

Let us introduce the Jacobi weight of parameters α,β> − 1, i.e.:

$$ w(x):=v^{\alpha,\upbeta}(x)=(1-x)^{\alpha}(1+x)^{\upbeta}, \quad x\in (-1,1) $$
(3.1)

and let us denote by \(\left \{p_{m}\left (w\right )\right \}_{m=0}^{\infty }\) the corresponding sequence of orthonormal polynomials with positive leading coefficients, i.e.:

$$ p_{m}(w,x)=\gamma_{m}(w)x^{m}+ \ \textrm{ terms of lower degree},\quad \gamma_{m}(w)>0. $$
(3.2)

Then, we recall the Gauss-Jacobi quadrature rule [10]:

$$ {\int}_{-1}^{1} f(x)w(x)\ dx = \sum\limits_{i=1}^{m} {\lambda_{i}^{w}} f({\xi_{i}^{w}})+ \mathscr{R}_{m}(f), $$
(3.3)

where \(\{{\lambda _{i}^{w}}\}_{i=1}^{m}\) denote the Christoffel numbers w.r.t. the weight function w, \(\{{\xi _{i}^{w}}\}_{i=1}^{m}\) are the zeros of pm(w,x) and \({\mathscr{R}}_{m}\) stands for the remainder term.
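For a quick illustration, the nodes and the Christoffel numbers of (3.3) are available in standard libraries; e.g., in Python, scipy.special.roots_jacobi returns the zeros of pm(w) together with the corresponding weights. A minimal sketch with the test integrand f(x) = eˣ:

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.integrate import quad

alpha, beta, m = 0.5, 0.5, 12
xi, lam = roots_jacobi(m, alpha, beta)   # zeros of p_m(w) and Christoffel numbers

gauss = np.sum(lam * np.exp(xi))         # rule (3.3) applied to f(x) = exp(x)
ref, _ = quad(lambda x: np.exp(x) * (1 - x)**alpha * (1 + x)**beta, -1, 1)
print(f"error = {abs(gauss - ref):.2e}") # tiny for smooth f (limited by the
                                         # tolerance of the quad reference)
```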

The latter can be estimated (see, for instance, [10]) in terms of the weighted error of best polynomial approximation of f ∈ Cσ, i.e.:

$$ \displaystyle E_{m}(f)_{\sigma}=\inf_{P\in \mathbb{P}_{m}} \|(f-P)\sigma\|_{\infty}, $$

where \(\mathbb {P}_{m}\) denotes the set of all algebraic polynomials of degree at most m. In fact, for each fCσ, if the parameters of the weights w and σ are such that 0 ≤ γ < α + 1 and 0 ≤ δ <β + 1, then

$$ \left|\mathscr{R}_{m}(f)\right| \leq \mathscr{C} E_{2m-1}(f)_{\sigma} , \qquad \mathscr{C}\neq \mathscr{C}(m,f), $$
(3.4)

whereas if fC2m([− 1,1]) then

$$ \left|\mathscr{R}_{m}(f)\right| \leq \frac{\|f^{(2m)}\|_{\infty}}{ (2m)! {\gamma^{2}_{m}}(w) }, $$
(3.5)

where γm(w) is the positive leading coefficient of pm(w,x) introduced in (3.2).

Now, we introduce a product rule in order to evaluate the integrals of the type

$$ {\int}_{-1}^{1} h(x,y) f(x) w(x) dx $$

where fCσ, h is a known kernel function and w is the weight defined in (3.1).

Such a rule reads as [10]:

$$ {\int}_{-1}^{1} h(x,y) f(x) w(x) dx= {\sum}_{j=1}^{m} A_{j}(y) f\left( {\xi_{j}^{w}}\right)+\mathscr{E}_{m}(f,y), $$
(3.6)

where \({\mathscr{E}}_{m}(f,y)\) denotes the remainder term and the coefficients \(\{A_{j}\}_{j=1}^{m}\) are defined as:

$$ A_{j}(y)={\int}_{-1}^{1} h(x,y) {\ell_{j}^{w}}(x) w(x) dx, $$
(3.7)

with

$$ {\ell_{j}^{w}}(x)=\displaystyle \frac{p_{m}(w,x)}{p^{\prime}_{m}(w,{\xi_{j}^{w}})(x -{\xi_{j}^{w}})} $$
(3.8)

the j th fundamental Lagrange polynomial.
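When the coefficients (3.7) are not available in closed form, they can in principle be approximated by a brute-force auxiliary Gauss-Jacobi rule, with the fundamental Lagrange polynomials evaluated in barycentric form. The sketch below is our illustration of this naive alternative (it is not the strategy of this paper, which instead uses the dilation formula of Section 3.2) and assumes a kernel h vectorized in its first argument and auxiliary nodes distinct from the \({\xi_{j}^{w}}\):

```python
import numpy as np
from scipy.special import roots_jacobi

def product_coeffs(h, y, alpha, beta, m, n_aux=200):
    # Brute-force approximation of A_j(y) = int h(x,y) l_j^w(x) w(x) dx, cf. (3.7)
    xi, _ = roots_jacobi(m, alpha, beta)          # product-rule nodes xi_j^w
    xa, wa = roots_jacobi(n_aux, alpha, beta)     # auxiliary Gauss-Jacobi rule
    # Barycentric weights of the nodes xi, then all l_j^w at the points xa
    bw = np.array([1.0 / np.prod(xi[j] - np.delete(xi, j)) for j in range(m)])
    diff = xa[:, None] - xi[None, :]              # assumed nonzero
    L = (bw / diff) / np.sum(bw / diff, axis=1, keepdims=True)
    return L.T @ (wa * h(xa, y))                  # A_j(y), j = 1, ..., m
```

For a nearly singular kernel such as (4.3), however, n_aux would have to grow with ω; this is precisely what the dilation formula of the next subsection avoids.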

Concerning the stability and the convergence of the above formula, one can prove, by using a well-known theorem by Nevai (see, for instance, [10, 13]), that if

$$ \underset{|y|\leq 1}{\sup}{\int}_{-1}^{1} \frac{\left|h(x,y)\right|w(x)}{\sigma(x)}\log\left( 2+\frac{\left|h(x,y)\right|w(x)}{\sigma(x)}\right) dx<+\infty, \qquad \underset{|y|\leq 1}{\sup}{\int}_{-1}^{1} \frac{\left|h(x,y)\right|\sigma(x)}{\sqrt{w(x)\varphi(x)}}\, dx <+\infty, $$
(3.9)

and the weights w and σ are such that their exponents satisfy the following inequalities:

$$ \begin{array}{@{}rcl@{}} &&\max\left\{0,\frac{\alpha}{2}-\frac{3}{4}\right\}<\gamma<\min\left\{\alpha+1,\frac{\alpha}{2}+\frac{5}{4}\right\}, \end{array} $$
(3.10)
$$ \begin{array}{@{}rcl@{}} &&\max\left\{0,\frac{\upbeta}{2}-\frac{3}{4}\right\}<\delta<\min\left\{\upbeta+1,\frac{\upbeta}{2}+\frac{5}{4}\right\} , \end{array} $$
(3.11)

then, the rule (3.6) is stable and convergent and the following convergence estimate holds true:

$$ \sup_{|y|\leq 1} |\mathscr{E}_{m}(f,y)|\le \mathscr{C} E_{m-1}(f)_{\sigma}, \quad \mathscr{C} \neq \mathscr{C}(m,f). $$
(3.12)

Remark 3.1

Let us remark that the kernel function appearing in the Love equation (1.3) satisfies (3.9). Moreover, if \(\alpha , \upbeta <\frac {3}{2}\), then the parameters γ and δ can also be chosen equal to zero.

3.2 A dilation formula

In this subsection, we present a quadrature formula in order to approximate the integrals of the type:

$$ I(F,y,\omega)={\int}_{-1}^{1} k(x,y,\omega) F(x) w(x) dx, $$
(3.13)

where F is a given function, w is as in (3.1), and k(x,y,ω) is a known kernel which is close to being singular as ω⁻¹ → 0. This is the case of the kernel function appearing in the Love equation (1.3).

In order to construct such kind of formula, we follow the approach proposed in [2, 15] for the unweighted case.

First, in order to “relax” the excessively fast variation of the kernel function when ω grows, we introduce in (3.13) the change of variables \(x=\frac {\eta }{\omega }\), \(y=\frac {\theta }{\omega }\), with η,𝜃 ∈ [−ω,ω]. In this way, (3.13) is equivalent to the following integral over the dilated domain of integration [−ω,ω]:

$$ \begin{array}{@{}rcl@{}} I(F,y,\omega)&=&\frac{1}{\omega} {\int}_{-\omega}^{\omega} k\left( \frac{\eta}{\omega},\frac{\theta}{\omega},\omega\right) F\left( \frac{\eta}{\omega}\right) w\left( \frac{\eta}{\omega}\right) d\eta \\ &=:&\frac{1}{\omega} {\int}_{-\omega}^{\omega} \kappa\left( \eta,\theta,\omega\right) F\left( \frac{\eta}{\omega}\right) w\left( \frac{\eta}{\omega}\right) d\eta = \frac{1}{\omega} {\int}_{-\omega}^{\omega} \kappa\left( \eta, \omega y,\omega\right) F\left( \frac{\eta}{\omega}\right) w\left( \frac{\eta}{\omega}\right) d\eta. \end{array} $$

Then, we split the new integration interval [−ω,ω] into S subintervals of size \(2 \leq d \in \mathbb {R}\) s.t. \(S= \frac {2 \omega }{d} \in \mathbb {N}\), namely \([-\omega , \omega ]= \displaystyle \bigcup _{i=1}^{S} [-\omega +(i-1)d, -\omega +id],\) getting

$$ I\left( F, y,\omega\right) = \frac{1}{\omega} \sum\limits_{i=1}^{S} {\int}_{-\omega+(i-1)d}^{-\omega+id} \kappa\left( \eta,\omega y ,\omega\right) F\left( \frac{\eta}{\omega}\right) w\left( \frac{\eta}{\omega}\right) d\eta. $$
(3.14)

Now, we want to remap each integral into [− 1,1]. To this end, we introduce the invertible linear maps Ψi : [−ω + (i − 1)d,−ω + id] → [− 1,1] defined as:

$$ \begin{array}{@{}rcl@{}} x={\Psi}_{i}(\eta)=\frac{2}{d}(\eta+\omega)-(2i-1) \end{array} $$

and in (3.14) we make the change of variable:

$$ \begin{array}{@{}rcl@{}} \eta={\Psi}^{-1}_{i}(x)= \left( \frac{x+1}{2} \right)d -\omega+(i-1)d. \end{array} $$
(3.15)

In this way, we get:

$$ \begin{array}{@{}rcl@{}} I\left( F,y,\omega\right)=\frac{d}{2\omega} \sum\limits_{i=1}^{S} {\int}_{-1}^{1} k_{i}(x, \omega y,\omega) F_{i}\left( x\right) u_{i}(x) dx, \end{array} $$
(3.16)

where \(F_{i}(x):=F\left (\frac {{\Psi }^{-1}_{i}(x)}{\omega }\right )\), ui are the new weight functions:

$$ \begin{array}{@{}rcl@{}} u_{i}(x):= \begin{cases} v^{0,\upbeta}(x), & i=1 \\ v^{0,0}(x), & 2 \leq i \leq S-1, \\ v^{\alpha,0}(x), & i=S \\ \end{cases} \end{array} $$
(3.17)

and ki are the new kernel functions:

$$ \begin{array}{@{}rcl@{}} k_{i}(x,\omega y ,\omega):= \begin{cases} \left( \frac{d}{2\omega}\right)^{\upbeta} \kappa\left( {\Psi}^{-1}_{i}(x), \omega y ,\omega\right) v^{\alpha,0}\left( \frac{{\Psi}^{-1}_{i}(x)}{\omega}\right), & i=1 \\ \kappa \left( {\Psi}^{-1}_{i}(x), \omega y,\omega\right) v^{\alpha,\upbeta}\left( \frac{{\Psi}^{-1}_{i}(x)}{\omega}\right), & 2 \leq i \leq S-1,\\ \left( \frac{d}{2 \omega}\right)^{\alpha} \kappa\left( {\Psi}^{-1}_{i}(x), \omega y,\omega\right) v^{0,\upbeta}\left( \frac{{\Psi}^{-1}_{i}(x)}{\omega}\right), & i=S \end{cases} \end{array} $$

or, equivalently, in terms of the original kernel k,

$$ k_{i}(x,\omega y ,\omega) := \begin{cases} \left( \frac{d}{2\omega}\right)^{\upbeta} k\left( \frac{{\Psi}^{-1}_{i}(x)}{\omega}, y, \omega\right) v^{\alpha,0}\left( \frac{{\Psi}^{-1}_{i}(x)}{\omega}\right), & i=1, \\ k\left( \frac{{\Psi}^{-1}_{i}(x)}{\omega}, y, \omega\right) v^{\alpha,\upbeta}\left( \frac{{\Psi}^{-1}_{i}(x)}{\omega}\right), & 2 \leq i \leq S-1, \\ \left( \frac{d}{2 \omega}\right)^{\alpha} k\left( \frac{{\Psi}^{-1}_{i}(x)}{\omega}, y, \omega\right) v^{0,\upbeta}\left( \frac{{\Psi}^{-1}_{i}(x)}{\omega}\right), & i=S. \end{cases} $$
(3.18)

Let us underline that the advantage of the change of variable introduced in (3.13), combined with the splitting of the dilated interval [−ω,ω] into S parts of length d, is that the computation of the integral is reduced to the computation of a sum of integrals, in each of which the kernel function ki(x,ωy,ω) has complex poles sufficiently far from the real axis. In fact, the distance of these poles from the real axis is not smaller than \(\frac {2}{d}\).
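For the Love kernel, this can be verified directly: the substitution x = η/ω gives

$$ \kappa(\eta,\theta,\omega)=k\left(\frac{\eta}{\omega},\frac{\theta}{\omega},\omega\right)= \frac{\omega^{-1}}{\left(\frac{\eta-\theta}{\omega}\right)^{2}+\omega^{-2}}=\frac{\omega}{(\eta-\theta)^{2}+1}, $$

whose poles η = 𝜃 ± i lie at distance 1 from the real axis; since each Ψi is affine with \(|{\Psi}_{i}^{\prime}(\eta)|=\frac{2}{d}\), the corresponding poles of ki lie at distance \(\frac{2}{d}\) from [−1,1].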

By approximating each integral appearing in (3.16) by means of the Gauss-Jacobi quadrature rule (3.3) with ui in place of w and kiFi instead of f, we have the following “dilation” quadrature formula:

$$ I(F, y,\omega) =\frac{d}{2 \omega} \sum\limits_{i=1}^{S} \sum\limits_{j=1}^{n} \lambda_{j}^{u_{i}} k_{i}(\xi^{u_{i}}_{j}, \omega y,\omega) F_{i}(\xi_{j}^{u_{i}}) + {\Lambda}_{n}(F,\omega y,\omega), $$
(3.19)

where Λn is the remainder term.
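The following Python sketch implements (3.19) in the unweighted case w = v^{0,0} (so that all the ui reduce to the Legendre weight and the ki carry no extra Jacobi factors) and tests it on the Love kernel against the closed form \({\int}_{-1}^{1} k(x,y,\omega)\, dx=\arctan(\omega(1-y))+\arctan(\omega(1+y))\); it is an illustration under these simplifying assumptions, not the authors' implementation:

```python
import numpy as np

def dilation_rule(F, y, omega, d=2.0, n=20):
    # Dilation formula (3.19) with w = v^{0,0}: split [-omega, omega] into
    # S = 2*omega/d subintervals and use n Gauss-Legendre points on each.
    S = int(round(2 * omega / d))
    x, lam = np.polynomial.legendre.leggauss(n)
    total = 0.0
    for i in range(1, S + 1):
        eta = (x + 1) / 2 * d - omega + (i - 1) * d   # Psi_i^{-1}(x), cf. (3.15)
        t = eta / omega                               # back in [-1, 1]
        kernel = (1.0 / omega) / ((t - y) ** 2 + omega ** -2)
        total += np.sum(lam * kernel * F(t))
    return d / (2 * omega) * total

omega, y = 1e3, 0.25
exact = np.arctan(omega * (1 - y)) + np.arctan(omega * (1 + y))
approx = dilation_rule(lambda t: np.ones_like(t), y, omega)
print(f"error = {abs(approx - exact):.2e}")           # near machine precision
```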

The next results state the stability of the previous formula and give an error estimate for Λn in the case when \(F\in {\mathscr{W}}^{r}_{\sigma }\) or F ∈ C2n([− 1,1]).

Theorem 3.1

Let FCσ be with σ as in (2.1) and let w be as in (3.1). If \(0\! \leq \! \gamma \! <\! \min \limits \{1,\alpha + 1\}\), \(0 \! \leq \! \delta <\min \limits \{1,\upbeta +1\}\) and k is s.t. \(\displaystyle \max \limits _{|y| \leq 1} \|k(\cdot ,\omega y,\omega )\|_{\infty } \! <\! +\infty \) then for the quadrature formula in (3.19) we have:

$$ \underset{|y| \leq 1}{\sup} \frac{d}{2 \omega} \left|\sum\limits_{i=1}^{S} \sum\limits_{j=1}^{n} \lambda_{j}^{u_{i}} k_{i}(\xi^{u_{i}}_{j}, \omega y,\omega) F_{i}(\xi_{j}^{u_{i}}) \right| \leq \mathscr{C} \| F\sigma\|_{\infty}, \qquad \mathscr{C}\neq \mathscr{C}(F,n). $$
(3.20)

Moreover, for any \(F\in {\mathscr{W}}^{r}_{\sigma }\), if

$$ \underset{|y| \leq 1}{\max} \underset{|x| \leq 1}{\max} \left | \frac{\partial^{r} k(x, y,\omega)}{\partial x^{r}} \varphi^{r}(x) \right | < +\infty $$
(3.21)

we have

$$ \begin{array}{@{}rcl@{}} \underset{|y| \leq 1}{\sup} |{\Lambda}_{n}(F, \omega y,\omega)| \le \frac{\mathscr{C}}{n^{r}} \left( \frac{d}{\omega}\right)^{r} \|F\|_{\mathscr{W}^{r}_{\sigma}}, \qquad \mathscr{C}\neq \mathscr{C}(F,n). \end{array} $$
(3.22)

Corollary 3.1

Let F,kC2n([− 1,1]) w.r.t. the variable x. Then

$$ \underset{|y|\leq 1}{\sup} |{\Lambda}_{n}(F, \omega y,\omega)| \leq \frac{\mathscr{C}}{n^{2n+\frac{1}{2}}} \left( \frac{d}{\omega}\right)^{2n} e^{\frac{48n^{2}+1}{24n}} 2^{n-1} \left[ \|F\|_{\infty}+\|F^{(2n)}\|_{\infty} \right] $$
(3.23)

with \( {\mathscr{C}} \neq {\mathscr{C}}(n,\omega ,d)\).

Remark 3.2

We point out that the quantity \(\displaystyle \frac {d}{\omega }\), appearing in both estimates (3.22) and (3.23), is ≪ 1 if we divide the interval [−ω,ω] into a very large number S of subintervals, since \(\displaystyle \frac {d}{\omega }=\displaystyle \frac {2}{S}\). Moreover, in accordance with what was proven in [14], our experiments show that d ∼ 2 is an optimal choice. We also remark that the r th derivative of the kernel function appearing in assumption (3.21) depends on the real parameter ω, and its asymptotic behavior (with respect to such a parameter) is of order ω⁻¹.

3.3 A mixed quadrature formula

In this subsection, we propose a mixed quadrature rule which will be essential for our method. It consists in applying an m-point product rule (3.6) in order to approximate the integral:

$$ {\int}_{-1}^{1} k(x,y,\omega) f(x) w(x) dx $$

and in approximating the coefficients Aj of such a product rule (defined in (3.7) with h(x,y) = k(x,y,ω)) by means of the n-point “dilation” quadrature formula (3.19).

Then, the mixed quadrature formula is the following:

$$ \begin{array}{@{}rcl@{}} {\int}_{-1}^{1} k(x,y,\omega) f(x) w(x) dx&= \sum\limits_{j=1}^{m} {A^{n}_{j}}(y,\omega) f\left( {\xi_{j}^{w}}\right)+\mathscr{E}^{n}_{m}(f,y,\omega) \\ &=: {K^{n}_{m}}(f,y,\omega)+\mathscr{E}^{n}_{m}(f,y,\omega) \end{array} $$
(3.24)

where \({\mathscr{E}}^{n}_{m}\) is the remainder term and:

$$ {A^{n}_{j}}(y,\omega)= \frac{d}{2\omega} \sum\limits_{i=1}^{S} \sum\limits_{\nu=1}^{n} \lambda_{\nu}^{u_{i}} k_{i}(\xi^{u_{i}}_{\nu}, \omega y,\omega) \ell^{w}_{j,i}(\xi^{u_{i}}_{\nu}), $$

with ki and ui as in (3.18) and (3.17), respectively, and \(\ell _{j,i}^{w}\left (\xi _{\nu }^{u_{i}}\right ):={\ell _{j}^{w}}\left (\frac {{\Psi }_{i}^{-1}\left (\xi _{\nu }^{u_{i}}\right )}{\omega }\right )\) being \({\ell _{j}^{w}}\) and \({\Psi }_{i}^{-1}\) defined as in (3.8) and (3.15), respectively.

The next theorem gives an error estimate for \({\mathscr{E}}^{n}_{m}\) in the case when n = m.

Theorem 3.2

Let w and σ be defined in (3.1) and (2.1), respectively with:

$$ \begin{array}{@{}rcl@{}} &&\max\left\{0,\frac{\alpha}{2}+\frac{1}{4}\right\}<\gamma<\min\left\{1,\alpha+1,\frac{\alpha}{2}+\frac{5}{4}\right\}, \end{array} $$
(3.25)
$$ \begin{array}{@{}rcl@{}} &&\max\left\{0,\frac{\upbeta}{2}+\frac{1}{4}\right\}<\delta<\min\left\{1,\upbeta+1,\frac{\upbeta}{2}+\frac{5}{4}\right\}. \end{array} $$
(3.26)

If fCσ and the kernel function k satisfies the conditions (3.9) and the assumptions given in Theorem 3.1, the following error estimate holds true:

$$ \begin{array}{@{}rcl@{}} \underset{|y|\leq 1}{\sup}|\mathscr{E}^{m}_{m}(f,y,\omega)| &\le & \mathscr{C} \left[E_{m}(f)_{\sigma}+ \left( \frac{d}{\omega}\right)^{m-1} \log{m}\, \|f \sigma\|_{\infty} \right], \qquad \mathscr{C}\neq\mathscr{C}(m,\omega). \end{array} $$

Remark 3.3

Let us remark that if \(\alpha ,\upbeta < -\frac {1}{2}\), then the parameters of the weight σ can also be chosen equal to zero. Moreover, in Theorem 3.2, for the sake of simplicity, we considered the case m = n. Nevertheless, in practice, in the numerical tests we can keep n fixed. Indeed, according to (3.23), the error decreases exponentially and, for instance, for n = 20, d = 2, and ω = 10², the quantity in front of the square brackets is of the order of 10⁻⁹⁸. Hence, the error of the mixed quadrature formula is, in practice, of the same order as the error of best approximation of f. We emphasize that the larger ω is, the faster the second term goes to zero. In any case, whatever the magnitude of ω, the first term of the estimate stated in Theorem 3.2 prevails over the second.
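The order of magnitude claimed above can be checked by summing base-10 logarithms of the factors in front of the square brackets in (3.23); a quick sketch (the constant 𝒞 is ignored):

```python
import math

n, d, omega = 20, 2.0, 1e2
log10_factor = (-(2 * n + 0.5) * math.log10(n)         # n^{-(2n + 1/2)}
                + 2 * n * math.log10(d / omega)        # (d/omega)^{2n}
                + (48 * n**2 + 1) / (24 * n) * math.log10(math.e)
                + (n - 1) * math.log10(2))             # 2^{n-1}
print(f"factor ~ 10^({log10_factor:.1f})")             # ~ 10^(-98)
```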

4 The numerical method

In this section, we propose a numerical method for the Love integral equation (1.3), which can be rewritten in operator form as:

$$ \left( I+\mu K\right)f=g, $$
(4.1)

where I is the identity operator and

$$ (Kf)(y,\omega)={\int}_{-1}^{1} k(x,y,\omega) f(x) w(x) dx $$
(4.2)

with

$$ k(x,y,\omega)=\frac{\omega^{-1}}{(x-y)^{2}+\omega^{-2}}. $$
(4.3)

The next proposition shows the mapping properties of the operator K.

Proposition 4.1

Let σ and w be defined in (2.1) and (3.1), respectively, s.t. the parameters γ, δ, α, and β satisfy 0 ≤ γ < 1 + α and 0 ≤ δ < 1 + β. Then \(K:C_{\sigma }\rightarrow C_{\sigma }\) is continuous, bounded, and compact. Moreover, for all f ∈ Cσ, \(K f\in {\mathscr{W}}^{r}_{\sigma }\) for all \(r\in \mathbb {N}\).

Remark 4.1

We remark that, according to Proposition 4.1 and by virtue of the Fredholm Alternative Theorem, under the assumption Ker{I + μK} = {0}, (4.1) admits a unique solution f ∈ Cσ.

The proposed numerical strategy is a Nyström method based on the mixed quadrature formula \({K_{m}^{n}}f\) introduced in (3.24). Then, we consider the functional equation:

$$ \left( I+\mu {K^{n}_{m}}\right){f^{n}_{m}}=g, $$
(4.4)

where \({f^{n}_{m}}\) is unknown. We multiply both sides of (4.4) by the weight function σ and we collocate each equation at the points \(\{{\xi _{i}^{w}}\}_{i=1}^{m}\). In this way, we have that the quantities \(a_{i}=({f^{n}_{m}} \sigma )({\xi _{i}^{w}}) \) are the unknowns of the following m × m linear system:

$$ \sum\limits_{j=1}^{m} \left[\delta_{ij}+\mu \sigma({\xi_{i}^{w}}) \frac{{A^{n}_{j}}({\xi_{i}^{w}},\omega)}{\sigma({\xi_{j}^{w}})}\right] a_{j}= (g \sigma )({\xi_{i}^{w}}), \qquad i=1,...,m, $$
(4.5)

where δij is the Kronecker symbol. In terms of matrices the system is \(\mathbb {A}_{m}\mathbbm {a}= \mathbbm {b},\) where \(\mathbb {A}_{m}= \mathbb {I}+ \mu \mathbb {B}_{m}\), with \(\mathbb {I}\) the identity matrix of order m and

$$ \mathbb{B}_{m}= \left[\frac{\sigma(\xi_{i}^{w})}{\sigma(\xi_{j}^{w})} A^{n}_{j}(\xi_{i}^{w},\omega)\right]_{i,j=1}^{m}, $$

while

$$ \mathbbm{b}= \left[(g \sigma)(\xi_{i}^{w})\right]_{i=1}^{m} \qquad \text{and} \qquad \mathbbm{a}=[a_{i}]_{i=1}^{m}. $$

Once (4.5) is solved, its solution \(\mathbbm {a}^{*}=[a^{*}_{i}]_{i=1}^{m}\) allows us to construct the following weighted Nyström interpolant:

$$ ({f^{n}_{m}} \sigma)(y)=(g \sigma)(y)-\mu \sigma(y)\sum\limits_{j=1}^{m} \frac{{A^{n}_{j}}(y,\omega)}{\sigma({\xi_{j}^{w}})}a^{*}_{j}, $$
(4.6)

which will approximate the unknown solution fCσ.
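To make the whole procedure concrete, here is a self-contained Python sketch of (4.5) and (4.6) for the classical Love equation, i.e., (1.3) with w ≡ 1 and σ ≡ 1, in which the coefficients \({A^{n}_{j}}\) are computed by the dilation rule (3.19). It is a sketch under these simplifying assumptions, not the authors' Matlab code:

```python
import numpy as np

def nystrom_love(g, mu, omega, m, d=2.0, n=20):
    # Nystrom method (4.4)-(4.6) for (1.3) with w = sigma = 1.
    xi, _ = np.polynomial.legendre.leggauss(m)        # collocation nodes xi_i^w
    bw = np.array([1.0 / np.prod(xi[j] - np.delete(xi, j)) for j in range(m)])

    def coeffs(y):
        # A_j^n(y, omega): dilation rule (3.19) applied to k(., y, omega) l_j^w
        S = int(round(2 * omega / d))
        x, lam = np.polynomial.legendre.leggauss(n)
        A = np.zeros(m)
        for i in range(1, S + 1):
            t = ((x + 1) / 2 * d - omega + (i - 1) * d) / omega
            kern = (1.0 / omega) / ((t - y) ** 2 + omega ** -2)
            diff = t[:, None] - xi[None, :]           # barycentric Lagrange basis
            L = (bw / diff) / np.sum(bw / diff, axis=1, keepdims=True)
            A += L.T @ (lam * kern)
        return d / (2 * omega) * A

    B = np.array([coeffs(yi) for yi in xi])           # matrix B_m of (4.5)
    a = np.linalg.solve(np.eye(m) + mu * B, g(xi))    # solve system (4.5)
    return lambda y: g(y) - mu * coeffs(y) @ a        # Nystrom interpolant (4.6)

# Example 5.1 setting: g = 1, mu = 1/pi, omega = 10^2
f_m = nystrom_love(lambda y: np.ones_like(y), 1 / np.pi, omega=1e2, m=64)
print(f_m(0.0))                                       # approximate f at y = 0
```

Note that the m × m system is dense and unstructured, exactly as discussed in Remark 4.3, and its size is independent of ω.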

The next theorem states that the Nyström method described above is stable and convergent, and that the condition number of the matrix \(\mathbb {A}_{m}\) in the infinity norm, i.e., \(cond(\mathbb {A}_{m})=\|\mathbb {A}_{m}\|_{\infty } \|\mathbb {A}^{-1}_{m}\|_{\infty }\), is bounded by a constant which does not depend on m.

Theorem 4.1

Let w and σ be defined in (3.1) and (2.1), respectively with parameters satisfying (3.25) and (3.26), and let us assume that Ker{I + μK} = {0} in Cσ. Then, if \(g\in {\mathscr{W}}^{r}_{\sigma }\), r > 1, for m sufficiently large, the operators \(\left (I+\mu {K^{m}_{m}}\right )^{-1}\) exist and are uniformly bounded. Moreover, system (4.5) is well conditioned, since \( cond(\mathbb {A}_{m})\leq {\mathscr{C}}\) with \({\mathscr{C}}\neq {\mathscr{C}}(m)\) and the following estimate holds true:

$$ \|[f-{f^{m}_{m}}]\sigma\|_{\infty}\leq \mathscr{C} \left[\frac{1}{m^{r}}+ \left( \frac{d}{\omega} \right)^{m-1} \log{m} \right] \|f\|_{\mathscr{W}^{r}_{\sigma}}, \qquad \mathscr{C} \neq \mathscr{C}(m,f). $$
(4.7)

Remark 4.2

Let us remark that the first step of our method is the natural and immediate approximation of the integral by means of the mixed quadrature formula. In doing this, we “isolate,” in some sense, the parameter ω in the computation of the coefficients of the rule. The collocation is independent of ω, which has the advantage that the size of the system does not depend on it, as instead happens in other methods available in the literature [7, 15]. Moreover, as stated in estimate (4.7), the proposed global approximation method allows us to find the solution of the equation with a convergence order which is again independent of the magnitude of ω. Indeed, the error is of the order of the best polynomial approximation of f and, being \(f \in {\mathscr{W}}^{r}_{\sigma }\), one has:

$$ E_{m}(f)_{\sigma} \leq \frac{\mathscr{C}}{m^{r}} \|f^{(r)} \varphi^{r} \sigma\|_{\infty}. $$

However, the function f naturally depends on ω, being the solution of an equation in which such a parameter appears (see also the analytical expression of the solution recalled in the Introduction). Consequently, the norm of the r th derivative (appearing on the right-hand side of the above estimate) depends on ω and becomes large for increasing values of ω. This is the only reason why, for a large value of ω, we need to increase the dimension of the system in order to get high precision (see Table 1).

Table 1 Numerical results for Example 5.1

Remark 4.3

Let us underline that the numerical method we propose leads to an unstructured linear system of order m in which each element of the matrix involves S × n evaluations of a polynomial of degree m − 1. However, the fact that the matrix is unstructured does not cause any problem. In fact, as we can see in Section 5, the method produces very accurate results for small values of m and, as stated in the previous theorem, the matrix is also well-conditioned. This is also the reason why we do not propose a more specialized and better-performing numerical method for solving the linear system.

5 Numerical tests

In this section, we show, by means of some numerical tests, the performance of the method described in the previous section. Specifically, we first test the proposed approach on the classical Love integral equation (Example 5.1) and then we show its effectiveness on two other generalized Love equations (Examples 5.2 and 5.3). In all the numerical tests, the solution f is very smooth and we expect fast convergence according to estimate (4.7).

We approximate the solutions of the test equations by means of the Nyström interpolants given by (4.6) and we compute the absolute errors:

$$ err^{n}_{M,m}(x) = | ({f^{n}_{M}}(x)- {f^{n}_{m}}(x))\sigma(x)|, $$
(5.1)

at different points x ∈ [− 1,1]. In (5.1), \({f^{n}_{M}}\) is the solution assumed to be exact, obtained with the fixed value m = M.

Moreover, in the tables, the average CPU time of the complete procedure (construction and solution of the linear system + construction and evaluation of the Nyström interpolant) is given. The numerical evidence is that the needed CPU time depends linearly on m and ω. Indeed, when m is doubled, so is the CPU time. Looking at the CPU times for ω = 10², 10³, 10⁴, it is clear that, for the same value of m, if t is the time spent in the case ω = 10², then about 10t is needed for ω = 10³ and 100t for ω = 10⁴.

All the numerical experiments were performed in double precision arithmetic on an Intel Core i7 system (4 cores), running the macOS operating system and using Matlab R2018a.

Example 5.1

Let us consider the classical Love integral equation, i.e., (1.3) in the space Cσ with σ ≡ 1 and \(\mu =\frac {1}{\pi }\). In Table 1, we report the results we get for different choices of ω. By comparing them with those presented in [7, Tables 1, 3, and 5], we can see that, in the case when ω = 10², by solving a square system of m = 256 equations we get an error of the order of the machine precision (M.P.), instead of 10⁻⁵ as shown in [7, Table 1]. If ω = 10³, by solving a system of order 700, we reach the machine precision, an accuracy that in [7, Table 3] is attained with a system of 16384 equations. Similarly, the method gives accurate results also in the case when ω = 10⁴. The first graph of Fig. 1 shows the approximated solution \(f^{20}_{700}\) with ω = 10².

Fig. 1 From left to right: the approximated solution \(f_{m}^{20}\) of Example 5.1 with m = 700 and ω = 10²; \(f_{512}^{20}v^{\frac 12, \frac 12}\) of Example 5.2; and \(f_{512}^{20}v^{\frac 14, \frac 14}\) of Example 5.3

Example 5.2

Let us test our method on the equation:

$$ f(y)+\frac{1}{\pi}{\int}_{-1}^{1} \frac{10^{-2}}{(x-y)^{2}+10^{-4}} f(x) v^{\frac{1}{2},\frac{1}{2}}(x) dx=e^{y}, $$

namely, a generalized Love integral equation with ω = 10² and \(\mu =\frac {1}{\pi }\). Table 2 shows the errors (5.1) that we get with \(\sigma (x)=v^{\frac {1}{2},\frac {1}{2}}(x)\), n = 20, and M = 350 for increasing values of m. As we can see, by solving a linear system of order m = 256, we reach the machine precision (M.P.). The second graph of Fig. 1 displays the approximate solution \(f^{20}_{512}\sigma \).

Table 2 Numerical results for Example 5.2

Example 5.3

Let us consider the following generalized Love integral equation with ω = 10³:

$$ f(y)+\frac{1}{\pi}{\int}_{-1}^{1} \frac{10^{-3}}{(x-y)^{2}+10^{-6}} f(x) v^{-\frac{1}{2},-\frac{1}{2}}(x) dx=\frac{1+y^{3}}{y^{2}+9} $$

in the space Cσ with \(\sigma (x)=v^{\frac {1}{4},\frac {1}{4}}(x)\) and \(\mu =\frac {1}{\pi }\). Table 3 contains the accurate results we get in this case too, and the last graph of Fig. 1 shows the approximated solution \(f^{20}_{512}\sigma \).

Table 3 Numerical results for Example 5.3

6 Proofs

Proof of Theorem 3.1

First, let us prove the stability of the formula, i.e., estimate (3.20). We can write:

$$ \frac{d}{2 \omega} \left| \sum\limits_{i=1}^{S} \sum\limits_{j=1}^{n} \frac{\lambda_{j}^{u_{i}}}{\sigma\left( \xi_{j}^{u_{i}}\right)} \left( F_{i} \sigma\right)\left( \xi_{j}^{u_{i}}\right) k_{i}\left( \xi^{u_{i}}_{j},\omega y,\omega\right) \right| \leq \frac{d}{2 \omega } \|F \sigma\|_{\infty} \sum\limits_{i=1}^{S} \|k_{i}\left( \cdot,\omega y,\omega\right)\|_{\infty} \sum\limits_{j=1}^{n} \frac{\lambda_{j}^{u_{i}}}{\sigma\left( \xi_{j}^{u_{i}}\right)}. $$

Then, (3.20) follows by taking into account the definition of ki given in (3.18) and the first assumption on the kernel, and by considering that, by virtue of the assumptions on the parameters of the weights, we have:

$$ \sum\limits_{j=1}^{n} \frac{\lambda_{j}^{u_{i}}}{\sigma(\xi_{j}^{u_{i}})} \leq {\int}_{-1}^{1} \frac{u_{i}(x)}{\sigma(x)} dx \leq \mathscr{C}. $$
(6.1)

In order to prove (3.22), we note that, by (3.4), we have:

$$ \begin{array}{@{}rcl@{}} |{\Lambda}_{n}(F,\omega y,\omega)| \leq \sum\limits_{i=1}^{S} |\mathscr{R}_{n}(F_{i} k_{i},\omega y,\omega)| \leq \mathscr{C} \sum\limits_{i=1}^{S} E_{2n-1}(F_{i} k_{i})_{\sigma}, \end{array} $$

so that by using the well-known estimate [10]:

$$E_{2n-1}(h_{1} h_{2})_{\sigma} \leq \|h_{1} \sigma\|_{\infty} E_{[\frac{2n-1}{2}]}(h_{2})+ 2 \|h_{2}\|_{\infty} E_{[\frac{2n-1}{2}]}(h_{1})_{\sigma},$$

we can write:

$$ \begin{array}{@{}rcl@{}} |{\Lambda}_{n}(F,\omega y,\omega)| \leq \mathscr{C} \sum\limits_{i=1}^{S} \left( \|F_{i} \sigma\|_{\infty} E_{[\frac{2n-1}{2}]}(k_{i}) +\|k_{i}(\cdot,\omega y,\omega)\|_{\infty} E_{[\frac{2n-1}{2}]}(F_{i})_{\sigma} \right). \end{array} $$

Then, taking into account that, by the assumptions, \(\|k_{i}(\cdot ,\omega y,\omega )\|_{\infty }< {\mathscr{C}}\), and applying the Favard inequality [10]:

$$ E_{n}(h)_{v}\leq \frac{\mathscr{C}}{{n^{r}}} \left\|h^{(r)}\varphi^{r} v\right\|_{\infty}, \qquad \mathscr{C}\neq \mathscr{C}(n,h), \qquad \forall h \in \mathscr{W}^{r}_{v}, $$
(6.2)

first with the Jacobi weight v = 1 and then with v = σ, we deduce:

$$ |{\Lambda}_{n}(F,\omega y,\omega)| \leq \frac{\mathscr{C}}{n^{r}} \sum\limits_{i=1}^{S} \left( \|F_{i} \sigma\|_{\infty} \sup_{|x| \leq 1}\left| \frac{\partial^{r}}{\partial x^{r}} k_{i}(x,\omega y,\omega)\, \varphi^{r}(x)\right| + \|F_{i}^{(r)} \varphi^{r} \sigma\|_{\infty} \right). $$

Now, let us note that the functions ki defined in (3.18) can be rewritten as:

$$ k_{i}(x,\omega y,\omega)= k\left( \frac{{\Psi}^{-1}_{i}(x)}{\omega }, y,\omega\right) U_{i}(x,\omega,d) $$

where the functions Ui, defined as:

$$ \begin{array}{@{}rcl@{}} U_{i}(x,\omega,d):= \begin{cases} \left( \frac{d}{2\omega}\right)^{\upbeta} v^{\alpha,0}\left( \frac{{\Psi}^{-1}_{i}(x)}{\omega}\right), & i=1 \\ v^{\alpha,\upbeta}\left( \frac{{\Psi}^{-1}_{i}(x)}{\omega}\right), & 2 \leq i \leq S-1, \\ \left( \frac{d}{2 \omega}\right)^{\alpha} v^{0,\upbeta}\left( \frac{{\Psi}^{-1}_{i}(x)}{\omega}\right) , & i=S \end{cases} \end{array} $$
(6.3)

are bounded functions such that:

$$ \underset{|x| \leq 1}{\sup} |U^{(r)}_{i}(x,\omega,d) \varphi^{r}(x) | \leq \mathscr{C} \left( \frac{d}{2 \omega}\right)^{r}, \qquad \forall i=1, ... S. $$

Hence, since for each \(i=1, \dots , S\)

$$ \begin{array}{@{}rcl@{}} \left|\frac{\partial^{r} k_{i}(x,\omega y,\omega)}{\partial x^{r}} \varphi^{r}(x) \right| = \left|\sum\limits_{j=0}^{r} \binom{r}{j} \frac{\partial^{j}}{\partial x^{j}} k \left( \frac{{\Psi}^{-1}_{i}(x)}{\omega }, y,\omega\right) \varphi^{j}(x) U^{(r-j)}_{i}(x,\omega,d) \varphi^{r-j}(x) \right|, \end{array} $$

by using (3.21), we get:

$$ \underset{|y| \leq 1}{\sup}\underset{|x| \leq 1}{\sup}\left|\frac{\partial^{r} k_{i}(x,\omega y,\omega)}{\partial x^{r}} \varphi^{r}(x) \right| \leq \mathscr{C} \sum\limits_{j=0}^{r} \binom{r}{j} \left( \frac{d}{2 \omega}\right)^{r-j} \left( \frac{d}{2 \omega} \right)^{j} = \mathscr{C} \left( \frac{d}{\omega}\right)^{r} $$

and therefore

$$ \begin{array}{@{}rcl@{}} \underset{|y| \leq 1}{\sup} |{\Lambda}_{n}(F,\omega y,\omega)| \leq \frac{\mathscr{C}}{n^{r}} \left( \frac{d}{\omega}\right)^{r} \|F\|_{\mathscr{W}^{r}_{\sigma}}. \end{array} $$

Proof of Corollary 3.1

Taking into account the error estimate (3.5), we have:

$$ \begin{array}{@{}rcl@{}} |{\Lambda}_{n}(F,\omega y,\omega)| & \leq & \sum\limits_{i=1}^{S} |\mathscr{R}_{n}(F_{i} k_{i},\omega y,\omega)|\\ &\leq & \frac{1}{(2n)! \gamma_{n}(w)} \sum\limits_{i=1}^{S} \sup_{|x|\leq 1} \left |\frac{\partial^{2n}}{\partial x^{2n}}[F_{i}(x) k_{i}(x, \omega y, \omega)] \right|. \end{array} $$

Then, by applying the Leibniz rule, we get:

$$ \begin{array}{@{}rcl@{}} \left |\frac{\partial^{2n}}{\partial x^{2n}}[F_{i}(x) k_{i}(x, \omega y, \omega)] \right| & \leq& \sum\limits_{j=0}^{2n} \binom{2n}{j} \left|\left[F\left( \frac{{\Psi}_{i}^{-1}(x)}{\omega}\right)\right]^{(2n-j)}\right| \left| \frac{\partial^{j}}{\partial x^{j}} k_{i} \left( x,\omega y,\omega\right)\right| \\ & \leq& \sum\limits_{j=0}^{2n} \binom{2n}{j} \left( \frac{d}{2 \omega}\right)^{2n-j} \|F^{(2n-j)}\|_{\infty} \sup_{|x| \leq 1} \left| \frac{\partial^{j}}{\partial x^{j}} k_{i} \left( x,\omega y,\omega\right)\right|, \end{array} $$

from which, since [3] \(\|F^{(2n-j)}\|_{\infty } \leq {\mathscr{C}}\left [\frac {\|F\|_{\infty }}{2^{2n-j}}+2^{j} \|F^{(2n)}\|_{\infty }\right ]\), we get:

$$ \begin{array}{@{}rcl@{}} \left |\frac{\partial^{2n}}{\partial x^{2n}}[F_{i}(x) k_{i}(x, \omega y, \omega)] \right| & \leq& \|F\|_{\infty} \sum\limits_{j=0}^{2n} \binom{2n}{j} \left( \frac{d}{4 \omega}\right)^{2n-j} \left| \frac{\partial^{j}}{\partial x^{j}} k_{i} \left( x,\omega y,\omega\right)\right| \\ &&+ \|F^{(2n)}\|_{\infty} \sum\limits_{j=0}^{2n} \binom{2n}{j} \left( \frac{d}{2 \omega}\right)^{2n-j} 2^{j} \left| \frac{\partial^{j}}{\partial x^{j}} k_{i} \left( x,\omega y,\omega\right)\right|.\\ \end{array} $$
(6.4)

By the definitions (3.18) of the kernels ki and taking into account the form of the functions Ui given in (6.3), we can write:

$$ \begin{array}{@{}rcl@{}} \left| \frac{\partial^{j}}{\partial x^{j}} k_{i} \left( x,\omega y,\omega\right)\right| & \leq& \sum\limits_{\ell=0}^{j} \binom{j}{\ell} \left| \frac{\partial^{j-\ell}}{\partial x^{j-\ell}} \left[k \left( \frac{{\Psi}^{-1}_{i}(x)}{\omega }, y,\omega\right)\right] \right| \left| [U_{i}(x,\omega,d)]^{(\ell)} \right| \\ & \leq& \mathscr{C} \sum\limits_{\ell=0}^{j} \binom{j}{\ell} \left( \frac{d}{2 \omega}\right)^{j-\ell} \left( \frac{d}{2 \omega}\right)^{\ell} \sup_{|x| \leq 1} \left| \frac{\partial^{j-\ell}}{\partial x^{j-\ell}} k \left( \frac{{\Psi}^{-1}_{i}(x)}{\omega }, y,\omega\right) \right| \end{array} $$

and since [3]

$$ \begin{array}{@{}rcl@{}} \underset{|x| \leq 1}{\sup} \left| \frac{\partial^{j-\ell}}{\partial x^{j-\ell}} k \left( \frac{{\Psi}^{-1}_{i}(x)}{\omega }, y,\omega\right) \right| &\leq& \mathscr{C} \left[ \left( \frac{1}{2}\right)^{j-\ell}\underset{|x| \leq 1}{\sup} \left| k \left( \frac{{\Psi}^{-1}_{i}(x)}{\omega }, y,\omega\right) \right| \right. \\ && \left.+2^{2n-j+\ell} \underset{|x| \leq 1}{\sup} \left| \frac{\partial^{2n}}{\partial x^{2n}} k \left( \frac{{\Psi}^{-1}_{i}(x)}{\omega }, y,\omega\right) \right|\right], \end{array} $$

by virtue of the assumptions on the kernel k, we have:

$$ \begin{array}{@{}rcl@{}} \left| \frac{\partial^{j}}{\partial x^{j}} k_{i} \left( x,\omega y,\omega\right)\right| & \leq \mathscr{C} 2^{2n} \left( \frac{3d}{4 \omega} \right)^{j}. \end{array} $$

Thus, by replacing the above estimate in (6.4), we have:

$$ \begin{array}{@{}rcl@{}} \left |\frac{\partial^{2n}}{\partial x^{2n}}[F_{i}(x) k_{i}(x, \omega y, \omega)] \right| & \leq \mathscr{C} \left( \frac{2d}{\omega}\right)^{2n} \left[\|F\|_{\infty} + \|F^{(2n)}\|_{\infty} \right], \end{array} $$

from which we deduce

$$ |{\Lambda}_{n}(F,\omega y,\omega)| \leq \frac{1}{(2n)! \gamma_{n}(w)} \left( \frac{2d}{\omega} \right)^{2n} \left[ \|F\|_{\infty}+\|F^{(2n)}\|_{\infty} \right]. $$

Therefore, by using the well-known Stirling formula:

$$ \left( \frac{n}{e} \right)^{n} \sqrt{2 \pi n} e^{-\frac{1}{12 n}} \leq n! \leq \left( \frac{n}{e} \right)^{n} \sqrt{2 \pi n} e^{-\frac{1}{12 n+1}}, $$

and, taking into account that [10] \(\gamma _{n}(w) \sim 2^{n}\), we get the assertion. □

Proof of Theorem 3.2

By (3.24), we can write:

$$ \begin{array}{@{}rcl@{}} |\mathscr{E}_{m}^{m}(f,y,\omega)|& \leq& \left|{\int}_{-1}^{1} k(x,y,\omega) (fw)(x) dx - \sum\limits_{j=1}^{m} A_{j}(y) f({\xi_{j}^{w}})\right| + \left| \sum\limits_{j=1}^{m} \left( (A_{j}-{A_{j}^{m}})(y)\right) f({\xi_{j}^{w}})\right| \\ & \leq& |\mathscr{E}_{m}(f,y,\omega)| + \|f \sigma\|_{\infty} \sum\limits_{j=1}^{m} \frac{|{\Lambda}_{m}({\ell_{j}^{w}},\omega y,\omega)|}{\sigma({\xi_{j}^{w}})}. \end{array} $$

The first term can be estimated by using (3.12), since (3.25) and (3.26) include (3.10) and (3.11). Let us now estimate the last one. By using (3.22) with r = m − 1, we have:

$$ |{\Lambda}_{m}({\ell_{j}^{w}},\omega y,\omega)| \leq \frac{\mathscr{C}}{m^{m-1}} \left( \frac{d}{\omega}\right)^{m-1} \|{\ell^{w}_{j}}\|_{\mathscr{W}^{r}_{\sigma}} $$

and thus, by applying the weighted Bernstein inequality (see, for instance, [10, p. 170]), which yields \(\|{\ell ^{w}_{j}}\|_{{\mathscr{W}}^{r}_{\sigma }} \leq {\mathscr{C}} m^{m-1} \|{\ell ^{w}_{j}} \sigma \|_{\infty }\), we get:

$$ |{\Lambda}_{m}({\ell_{j}^{w}},\omega y,\omega)| \leq \mathscr{C} \left( \frac{d}{\omega}\right)^{m-1} \|{\ell^{w}_{j}} \sigma\|_{\infty}. $$

Therefore,

$$ \sum\limits_{j=1}^{m} \frac{|{\Lambda}_{m}({\ell_{j}^{w}},\omega y,\omega)|}{\sigma({\xi_{j}^{w}})} \leq \left( \frac{d}{\omega}\right)^{m-1} \sum\limits_{j=1}^{m} \frac{\|{\ell^{w}_{j}} \sigma\|_{\infty}}{\sigma({\xi_{j}^{w}})} \leq \mathscr{C} \left( \frac{d}{\omega}\right)^{m-1} \log m $$

since [10], by virtue of (3.10) and (3.11):

$$ \underset{|x| \leq 1}{\max}\sum\limits_{j=1}^{m} \frac{|{\ell^{w}_{j}}(x)|}{\sigma({\xi_{j}^{w}})} \sigma(x) \simeq \log m, $$
(6.5)

and the proof is completed. □

Proof of Proposition 4.1

First, let us note that the kernel k given in (4.3) satisfies the following conditions:

$$ \underset{|x|\leq 1}{\max}\left\|k(x,\cdot,\omega)\sigma\right\|_{\infty}<\infty, \quad \quad \underset{|x|\leq 1}{\max} \left\|\frac{\partial^{r}}{\partial y^{r}}k(x,\cdot,\omega)\varphi^{r} \sigma\right\|_{\infty}<\infty, \qquad r\geq 1. $$
(6.6)

By the definition (4.2), and taking into account the conditions on the parameters of the weights, we have:

$$ \begin{array}{@{}rcl@{}} \left|(Kf)(y)\sigma(y)\right|& \leq &\left\|f\sigma\right\|_{\infty} {\int}_{-1}^{1} \left|k(x,y,\omega)\sigma(y)\right| \frac{w(x)}{\sigma(x)} dx\\ &\leq & \mathscr{C} \left\|f\sigma\right\|_{\infty} \max_{|x|\leq 1}\left\|k(x,\cdot,\omega)\sigma\right\|_{\infty} \end{array} $$

from which, by using (6.6), we can deduce that the operator K is continuous and bounded. In order to prove its compactness, we recall that [18], if K satisfies the following condition:

$$ \underset{m\rightarrow \infty}{\lim} \underset{\left\|f\sigma\right\|_{\infty}=1}{\sup} E_{m}(Kf)_{\sigma}=0 $$
(6.7)

then K is compact. We note that:

$$ \left|(Kf)^{(r)}(y) (\varphi^{r} \sigma)(y)\right| \leq \left\|f\sigma\right\|_{\infty} \max_{|x|\leq 1} \left\|\frac{\partial^{r}}{\partial y^{r}}k(x,\cdot,\omega)\varphi^{r}\sigma\right\|_{\infty}{\int}_{-1}^{1} \frac{w(x)}{\sigma(x)} dx. $$

Hence, \(Kf\in {\mathscr{W}}^{r}_{\sigma }\) for each f ∈ Cσ and, by using the Favard inequality (6.2) with m instead of n, Kf in place of h, and σ in place of v, we deduce (6.7). □

Proof of Theorem 4.1

The goal is to prove that:

  1. \(\|(K-{K^{m}_{m}})f \sigma \|_{\infty}\) tends to zero for any f ∈ Cσ;

  2. the set of the operators \(\{{K^{m}_{m}}\}_{m}\) is collectively compact.

In fact, by condition 1 and by virtue of the principle of uniform boundedness, we can deduce that \(\displaystyle \sup _{m} \|{K^{m}_{m}}\|< \infty \) and, by condition 2, we can deduce that \(\|(K-{K^{m}_{m}}){K^{m}_{m}}\|\) tends to zero [1, Lemma 4.1.2]. Consequently, under all these conditions, we can claim that, for m sufficiently large, the operator \((I+\mu {K^{m}_{m}})^{-1}\) exists and is uniformly bounded since:

$$ \begin{array}{@{}rcl@{}} \|(I+\mu {K^{m}_{m}})^{-1}\| \leq \frac{1+\|(I+\mu K)^{-1}\| \|{K^{m}_{m}}\|}{1-\|(I+\mu K)^{-1}\| \|(K-{K^{m}_{m}}){K^{m}_{m}}\|}, \end{array} $$

i.e., the method is stable.

Condition 1 follows by Theorem 3.2. Condition 2 can be deduced from [6, Theorem 12.8] in the case γ = δ = 0. Concerning the general case, it is sufficient to prove that [18]:

$$ \underset{m\rightarrow \infty}{\lim} \underset{\left\|f\sigma\right\|_{\infty}=1}{\sup} E_{m}({K_{m}^{m}}f)_{\sigma}=0. $$
(6.8)

To this end, let us introduce S polynomials qm,i(x,y) with i = 1,...,S of degree m in each variable, and for any fCσ, let us define the univariate polynomial:

$$ (Q_{m} f)(y,\omega)=\frac{d}{2\omega} \sum\limits_{j=1}^{m} \sum\limits_{i=1}^{S} \sum\limits_{\nu=1}^{m} \lambda_{\nu}^{u_{i}} \ell_{j,i}^{w}\left( \xi_{\nu}^{u_{i}}\right) q_{m,i}(\xi_{\nu}^{u_{i}},y)f({\xi_{j}^{w}}), $$

where we recall that \(\ell _{j,i}^{w}\left (\xi _{\nu }^{u_{i}}\right ):={\ell _{j}^{w}}\left (\frac {{\Psi }_{i}^{-1}\left (\xi _{\nu }^{u_{i}}\right )}{\omega }\right )\).

Then, by virtue of the definition (3.24), we can write:

$$ \begin{array}{@{}rcl@{}} &&\left|[({K_{m}^{m}}f-Q_{m}f)(y,\omega)]\sigma(y)\right|\\ &\leq & \frac{d}{2\omega} \left\|f\sigma\right\|_{\infty}\sum\limits_{j=1}^{m} \sum\limits_{i=1}^{S} \sum\limits_{\nu=1}^{m} \frac{\lambda_{\nu}^{u_{i}}}{\sigma\left( \xi_{\nu}^{u_{i}}\right)}\\ &&\times\left| \frac{\ell_{j,i}^{w}(\xi_{\nu}^{u_{i}})}{\sigma({\xi_{j}^{w}})} \sigma\left( \xi_{\nu}^{u_{i}}\right) \left( k_{i}\left( \xi_{\nu}^{u_{i}},\omega y,\omega\right)-q_{m,i}\left( \xi_{\nu}^{u_{i}},y\right)\right)\sigma(y)\right|\\ &\leq & \frac{d}{2\omega} \left\|f\sigma\right\|_{\infty}\sum\limits_{j=1}^{m} \max_{|x|\leq 1} \left| \frac{{\ell_{j}^{w}}(x)}{\sigma({\xi_{j}^{w}})}\right| \sigma(x) \sum\limits_{i=1}^{S} \max_{|x|\leq 1} \left|\left( k_{i}\left( x,\omega y,\omega\right)-q_{m,i}\left( x,y\right)\right)\sigma(y)\right|\\ &&\times\sum\limits_{\nu=1}^{m} \frac{\lambda_{\nu}^{u_{i}}}{\sigma\left( \xi_{\nu}^{u_{i}}\right)} , \end{array} $$

from which, by applying (6.1) and (6.5) and taking into account the assumptions on the parameters of the weights, we get:

$$ \left|[({K_{m}^{m}}f-Q_{m}f)(y,\omega)]\sigma(y)\right| \leq \mathscr{C} \frac{d}{2\omega} \log m \left\|f\sigma\right\|_{\infty} {\sum}_{i=1}^{S} \max_{|x|\leq 1} E_{m}\left( k_{i}(x,\cdot,\omega)\right)_{\sigma}. $$

The only point remaining is to estimate the quantity \(E_{m}\left (k_{i}(x,\cdot ,\omega )\right )_{\sigma }\). To this end, taking into account the definition of ki given in (3.18) and (6.6), by using the Favard inequality (6.2), we get \(E_{m}\left (k_{i}(x,\cdot ,\omega )\right )_{\sigma }\leq \frac {{\mathscr{C}}}{m^{r}} \left (\frac {d}{\omega }\right )^{r},\) i.e., (6.8).

Concerning the well-conditioning of the matrix \(\mathbb {A}_{m}\), it is sufficient to prove that:

$$ cond(\mathbb{A}_{m}) \leq cond(I+\mu {K^{m}_{m}})=\|I+\mu {K^{m}_{m}}\| \|(I+\mu {K^{m}_{m}})^{-1}\|. $$

To this end, we can use the same arguments as in [1, p. 113], only replacing the usual infinity norm with the weighted uniform norm of Cσ. Finally, estimate (4.7) follows by taking into account that:

$$ \|(f-{f^{m}_{m}})\sigma\|_{\infty} \leq \|(I+\mu {K^{m}_{m}})^{-1}\| \left\|\left[(K-{K^{m}_{m}})f\right]\sigma\right\|_{\infty} $$

and by applying Theorem 3.2 to the last term. □