1 Introduction

We consider the following second-kind Volterra integral equation (VIE) with weakly singular kernel:

$$\begin{aligned} u(t) = g(t) + \int _0^t \left( t - s \right) ^{- \alpha } K(t, s) u(s) \, d s, \quad t \in I := [0, T], \ 0< \alpha < 1, \end{aligned}$$
(1)

where g and K are continuous functions on their respective domains, and \(K(t, t) \ne 0\) for \(t \in I\). It is shown in [1] that on uniform meshes the piecewise polynomial collocation methods converge only with order \(1 - \alpha \). To improve the convergence order, graded meshes are employed to compensate for the low regularity of the solution at the initial time \(t = 0\). However, as pointed out in [2], “the commonly used graded meshes may cause serious round-off error problems due to its use of extremely nonuniform partitions and the sensitivity of such time-dependent equations to round-off errors”; to avoid this problem, a class of hybrid collocation methods is presented there, but the original singularity still has to be taken into account in the careful design of the mesh.

In this paper, a refined error estimate of order \(t_n^{ - \alpha } h^{2 - \alpha } + t_n^{1 - m - \alpha } h^m\) at the mesh point \(t_n\) is obtained for piecewise polynomial collocation methods on uniform meshes, where m is the number of collocation parameters, so that the collocation solution is a piecewise polynomial of degree \(m - 1\). In particular, at the endpoint the convergence order is \(\min \{ 2 - \alpha , m \}\); for \(m = 1\) and \(\alpha \le 0.5\), the convergence order at the collocation points is always 1, unaffected by the initial singularity. To improve the convergence order further, general iterated collocation methods are presented for \(m = 1\), and it is shown that for the k-th iterated collocation method the error at the mesh point \(t_n\) is of order \(t_n^{k - 1 - k \alpha }h^{2 - \alpha }\).

The outline of this paper is as follows. In Sect. 2, the classical piecewise polynomial collocation method on uniform meshes is recalled. In Sect. 3, refined error estimates at the mesh points are established for VIEs with \(m = 1\) and \(K(t, s) \equiv 1\); the corresponding estimates for \(m \ge 1\) and general kernels are given in Sect. 4. The iterated collocation methods and their convergence are analyzed in Sect. 5. A typical numerical example illustrating the theoretical results is given in Sect. 6.

2 Collocation Methods on Uniform Meshes

Let \(N \ge 2\) be an integer, and let \(I_h := \left\{ t_{n} := n h: \; n = 0,1, \ldots , N \;\, \left( t_{N} := T \right) \right\} \) be a given uniform mesh on \(I = [0, T]\), with \(\sigma _n := \left( t_n, t_{n + 1} \right] \) and mesh diameter \(h := T/N\).

We seek a collocation solution \(u_h\) for (1) in the piecewise polynomial collocation space

$$\begin{aligned} S_{m - 1}^{(- 1)}(I_{h}) := \left\{ v : \; v|_{\sigma _n} \in \pi _{m - 1} = \pi _{m - 1}( \sigma _n ) \; ( 0 \le n \le N - 1 ) \right\} , \end{aligned}$$

where \(\pi _{m - 1}\) denotes the space of all (real) polynomials of degree not exceeding \(m - 1\). For a prescribed set of collocation points

$$\begin{aligned} X_h := \left\{ t = t_n + c_i h: \ 0< c_1< \cdots < c_m \le 1 \ ( 0 \le n \le N - 1 ) \right\} , \end{aligned}$$
(2)

\(u_h\) is defined by the collocation equation

$$\begin{aligned} u_h(t) = g(t) + \int _0^t \left( t - s \right) ^{- \alpha } K(t, s) u_h(s) \, d s, \;\, t \in X_h. \end{aligned}$$
(3)

In [1], it is shown that the collocation solution \(u_h\) converges to the exact solution u, with order \(1 - \alpha \), i.e.,

$$\begin{aligned} \left\| u - u_h \right\| _{\infty } := \sup _{t \in I} \left| u(t) - u_h(t) \right| = O( h^{1 - \alpha } ). \end{aligned}$$
(4)

In this paper, we will show that at the mesh points, especially at the endpoint, a better convergence result can be expected.

The following lemma, coming from [1, Lemma 6.2.10], is useful.

Lemma 1

Let \(I_h\) be a uniform mesh on \(I = [0, T]\). If \(\{ c_i \}\) satisfy \(0 \le c_1< \cdots < c_m \le 1\), then, for \(0 \le l < n \le N - 1\) and \(\nu \in \mathbb {N}_0\),

$$\begin{aligned} \int _0^1 \left( n + c_i - l - s \right) ^{- \alpha } s^{\nu } \, d s \le \gamma (\alpha ) \left( n - l \right) ^{- \alpha }, \ i = 1, 2, \ldots , m, \end{aligned}$$

where \(\gamma (\alpha ) := \frac{2^{\alpha }}{1-\alpha }\).
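The bound in Lemma 1 is easy to check numerically. The following sketch (illustrative parameter values, not part of the proof) approximates the left-hand side by the composite trapezoidal rule, which is adequate here because the integrand is smooth for \(l < n\):

```python
import numpy as np

def lemma1_lhs(n, l, c, alpha, nu, pts=4001):
    # composite trapezoidal rule; the integrand is smooth here since
    # n + c - l - s >= c > 0 for 0 <= l < n and 0 <= s <= 1
    s = np.linspace(0.0, 1.0, pts)
    f = (n + c - l - s) ** (-alpha) * s ** nu
    return float(((f[:-1] + f[1:]) * (s[1] - s[0])).sum() / 2.0)

# arbitrary illustrative choices of alpha, c_i, n, l, nu
alpha, c = 0.4, 0.5
gamma_a = 2 ** alpha / (1 - alpha)
ok = all(
    lemma1_lhs(n, l, c, alpha, nu) <= gamma_a * (n - l) ** (-alpha)
    for n in (2, 5, 10)
    for l in range(n)          # 0 <= l < n
    for nu in (0, 1, 2)
)
print(ok)
```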

3 Refined Error Estimates for \(m = 1\) and Constant Kernels at Mesh Points

In order to obtain a first insight, in this section we assume that \(m = 1\) and \(K(t, s) \equiv 1\).

Let \(e_h := u - u_h\). On the first mesh interval \([ t_0, t_1 ] = [ 0, h ]\), by [1, Theorem 6.2.9], we know that there exists a constant \(C_1\), which is independent of h and N, such that

$$\begin{aligned} \left| e_h( t_0 + v h ) \right| \le C_1 h^{1 - \alpha }, \ 0 < v \le 1. \end{aligned}$$
(5)

For \(1 \le n \le N - 1\), the collocation error on \(\left( t_{n}, t_{n + 1} \right] \) has the local Lagrange representation

$$\begin{aligned} e_h( t_{n} + v h ) = \varepsilon _{n, 1} + h R_{n}(v), \end{aligned}$$
(6)

where \(\varepsilon _{n, 1} := e_h( t_{n, 1} ), t_{n, 1} := t_n + c_1 h\) and

$$\begin{aligned} R_{n}(v) = u'( \xi _n (v) ) \left( v - c_1 \right) , \ t_n< \xi _n (v) < t_{n + 1}. \end{aligned}$$

By [2] (see also [1, Theorem 6.1.6]), there exists a constant \(C_2\), such that

$$\begin{aligned} \left| R_{n}(v) \right| \le C_2 t_n^{-\alpha } = C_2 \left( n h \right) ^{- \alpha }. \end{aligned}$$
(7)

By (1), (3) and (6), we have

$$\begin{aligned} \varepsilon _{n, 1} =\,&e_h(t_{n, 1}) = \int _0^{t_{n, 1}} \left( t_{n, 1} - s \right) ^{- \alpha } e_h(s) \, d s\\ =\,&h^{1 - \alpha } \int _0^{c_1} \left( c_1 - s \right) ^{- \alpha } e_h( t_{n} + s h ) \, d s + h^{1 - \alpha } \sum _{l = 0}^{n - 1} \int _0^{1} \left( \frac{t_{n, 1} - t_l}{h} - s \right) ^{- \alpha } e_h( t_l + s h ) \, d s\\ =\,&h^{1 - \alpha } \int _0^{c_1} \left( c_1 - s \right) ^{- \alpha } \bigg [ \varepsilon _{n, 1} + h R_{n}(s) \bigg ] \, d s + h^{1 - \alpha } \int _0^{1} \left( n + c_1 - s \right) ^{- \alpha } e_h( t_0 + s h ) \, d s\\&+ h^{1 - \alpha } \sum _{l = 1}^{n - 1} \int _0^{1} \left( n + c_1 - l - s \right) ^{- \alpha } \bigg [ \varepsilon _{l, 1} + h R_{l}(s) \bigg ] \, d s\\ =\,&h^{1 - \alpha } \int _0^{c_1} \left( c_1 - s \right) ^{- \alpha } \, d s \varepsilon _{n, 1} + h^{1 - \alpha } \sum _{l = 1}^{n - 1} \int _0^{1} \left( n + c_1 - l - s \right) ^{- \alpha } \, d s \varepsilon _{l, 1} + r_{n}(\alpha ), \end{aligned}$$

i.e.,

$$\begin{aligned} \Big ( 1 - h^{1 - \alpha } a_0 \Big ) \varepsilon _{n, 1} - h^{1 - \alpha } \sum _{l = 1}^{n - 1} a_{n - l} \varepsilon _{l, 1} = r_{n}(\alpha ), \end{aligned}$$
(8)

where

$$\begin{aligned} \begin{aligned} r_{n}(\alpha ) :=\,&h^{1 - \alpha } \int _0^{1} \left( n + c_1 - s \right) ^{- \alpha } e_h( t_0 + s h ) \, d s + h^{2 - \alpha } \int _0^{c_1} \left( c_1 - s \right) ^{- \alpha } R_{n}(s) \, d s \\&+ h^{2 - \alpha } \sum \limits _{l = 1}^{n - 1} \int _0^{1} \left( n + c_1 - l - s \right) ^{- \alpha } R_{l}(s) \, d s, \end{aligned} \end{aligned}$$
(9)

and

$$\begin{aligned} a_0 = a_0(c_1; \alpha ) := \int _0^{c_1} \left( c_1 - s \right) ^{- \alpha } \, d s, \ \ a_k = a_k(c_1; \alpha ) := \int _0^{1} \left( k + c_1 - s \right) ^{- \alpha } \, d s \end{aligned}$$

are defined as in [5].

Therefore, for \(1 \le n \le N - 1\), (8) can be written in matrix form as

$$\begin{aligned} {\left( \begin{array}{ccccc} 1 - h^{1 - \alpha } a_0 &{}\quad &{}\quad &{}\quad &{} \quad \\ &{}\quad &{}\quad &{}\quad &{} \quad \\ - h^{1 - \alpha } a_1 &{} 1 - h^{1 - \alpha } a_0 &{} \quad &{}\quad &{} \quad \\ &{}\quad &{}\quad &{}\quad &{} \quad \\ - h^{1 - \alpha } a_2 &{}\quad - h^{1 - \alpha } a_1 &{}\quad 1 - h^{1 - \alpha } a_0 &{}\quad &{} \quad \\ \vdots &{}\quad \vdots &{} \quad \ddots &{}\quad \ddots &{} \quad \\ &{} \quad &{} \quad &{} \quad &{}\quad \\ - h^{1 - \alpha } a_{n - 1} &{}\quad - h^{1 - \alpha } a_{n - 2} &{}\quad \dots &{}\quad - h^{1 - \alpha } a_1 &{}\quad 1 - h^{1 - \alpha } a_0 \\ \end{array} \right) \left( \begin{array}{c} \varepsilon _{1, 1} \\ \varepsilon _{2, 1} \\ \varepsilon _{3, 1} \\ \vdots \\ \varepsilon _{n, 1} \\ \end{array} \right) =\left( \begin{array}{c} r_{1}(\alpha ) \\ r_{2}(\alpha ) \\ r_{3}(\alpha ) \\ \vdots \\ r_{n}(\alpha ) \\ \end{array} \right) .} \end{aligned}$$
(10)

Let \(\varvec{\varepsilon }_{n} := \left( \begin{array}{c} \varepsilon _{1, 1} \\ \varepsilon _{2, 1} \\ \varepsilon _{3, 1} \\ \vdots \\ \varepsilon _{n, 1} \\ \end{array}\right) \) and \(\mathbf {r}_{n}(\alpha ):=\left( \begin{array}{c} r_{1}(\alpha ) \\ r_{2}(\alpha ) \\ r_{3}(\alpha ) \\ \vdots \\ r_{n}(\alpha ) \\ \end{array}\right) \). Then

$$\begin{aligned} \Big (\mathbf {I}_{n} - h^{1 - \alpha } \mathbf {T}_{n} \Big ) \varvec{\varepsilon }_{n} = \mathbf {r}_{n}(\alpha ), \end{aligned}$$
(11)

where \(\mathbf {I}_{n}\) denotes the identity in \(L(\mathbb {R}^{n})\) and \(\mathbf {T}_{n}\) is the corresponding lower triangular Toeplitz matrix in (10) (see [5]).

It is easy to prove the following lemma.

Lemma 2

Let \(r \in \mathbb {N}\), and

$$\begin{aligned} \mathbf {A} := \left( \begin{array}{ccccc} a_{1, 1} &{} \quad &{} \quad &{} \quad &{} \quad \\ a_{2, 1} &{}\quad a_{2, 2} &{} \quad &{} \quad &{}\quad \\ a_{3, 1} &{}\quad a_{3, 2} &{} \quad a_{3, 3} &{} \quad &{} \quad \\ \vdots &{}\quad \vdots &{} \quad &{} \quad \ddots &{}\quad \\ a_{r, 1} &{}\quad a_{r, 2} &{}\quad \dots &{}\quad \dots &{}\quad a_{r, r} \\ \end{array}\right) \end{aligned}$$

be a lower triangular matrix with \(a_{i, i} \ne 0 \ (i = 1, 2, \ldots , r)\). Then \(\mathbf {A}\) is invertible, and the inverse matrix \(\mathbf {A}^{- 1}\) is also a lower triangular matrix, with the elements

$$\begin{aligned}&w_{i, i} = \frac{1}{a_{i, i}},\quad i = 1, 2, \ldots , r,\\&w_{i, j} = - \frac{1}{a_{i, i}} \sum _{l = 0}^{i - j - 1} a_{i, j + l} w_{j + l, j},\quad 1 \le j < i \le r. \end{aligned}$$
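The recursion of Lemma 2 is ordinary forward substitution, computed column by column. A minimal sketch (not part of the paper) verifying it against a direct inverse:

```python
import numpy as np

def lower_tri_inverse(A):
    """Inverse of a lower triangular matrix via the recursion of Lemma 2."""
    r = A.shape[0]
    W = np.zeros_like(A, dtype=float)
    for i in range(r):
        W[i, i] = 1.0 / A[i, i]
    for j in range(r):
        for i in range(j + 1, r):
            # w_{i,j} = -(1/a_{i,i}) * sum_{l=0}^{i-j-1} a_{i,j+l} w_{j+l,j}
            W[i, j] = -sum(A[i, j + l] * W[j + l, j] for l in range(i - j)) / A[i, i]
    return W

# random lower triangular test matrix with safely nonzero diagonal
rng = np.random.default_rng(0)
A = np.tril(rng.standard_normal((6, 6))) + 3.0 * np.eye(6)
err = np.max(np.abs(lower_tri_inverse(A) - np.linalg.inv(A)))
print(err < 1e-8)
```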

Denote the inverse of the matrix \(\mathbf {I}_{n} - h^{1 - \alpha } \mathbf {T}_{n}\) as \(\mathbf {B}_{n}\) with entries \(b_{i, j}\), and let \(\bar{a}_{0} = \bar{a}_{0}(c_1; \alpha ) := 1 - h^{1 - \alpha } a_{0}(c_1; \alpha )\). By Lemma 2, we easily obtain the following corollary.

Corollary 1

For \(1 \le n \le N - 1\), the matrix \(\mathbf {I}_{n} - h^{1 - \alpha } \mathbf {T}_{n}\) is invertible for sufficiently small h, and \(\mathbf {B}_{n} = \left( \mathbf {I}_{n} - h^{1 - \alpha } \mathbf {T}_{n} \right) ^{- 1}\) is also a lower triangular matrix, with the elements

$$\begin{aligned}&b_{i, i} = \frac{1}{\bar{a}_{0}},\quad i = 1, 2, \ldots , n,\\&b_{i, j} = h^{1 - \alpha } \frac{1}{\bar{a}_{0}} \sum _{l = 0}^{i - j - 1} a_{i - j - l} b_{j + l, j}, \quad 1 \le j < i \le n. \end{aligned}$$

Lemma 3

For \(1\le n\le N\),

$$\begin{aligned} \sum _{l = 1}^{n} l^{- \alpha } \le \frac{n^{1 - \alpha }}{1 - \alpha }, \end{aligned}$$

and

$$\begin{aligned} \sum _{l = 1}^{n - 1} \left( n - l \right) ^{- \alpha } l^{- \alpha } \le \frac{2^{2 \alpha }}{1 - \alpha } n^{1 - 2 \alpha }. \end{aligned}$$

Proof

The first part follows from [4, Lemma 5.6], and the second part follows from [3, Lemma 6.1]. \(\square \)
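Both sums in Lemma 3 can also be checked numerically; the sampled values of \(\alpha \) and n below are arbitrary illustrative choices (this is a sanity check, not a proof):

```python
import numpy as np

def bounds_hold(alpha, n):
    # first sum of Lemma 3
    l = np.arange(1, n + 1)
    first = (l ** (-alpha)).sum() <= n ** (1 - alpha) / (1 - alpha)
    # second (convolution-type) sum of Lemma 3
    l = np.arange(1, n)
    lhs = ((n - l) ** (-alpha) * l ** (-alpha)).sum()
    second = lhs <= 2 ** (2 * alpha) / (1 - alpha) * n ** (1 - 2 * alpha)
    return first and second

ok = all(bounds_hold(a, n) for a in (0.1, 0.5, 0.9) for n in (2, 3, 10, 200))
print(ok)
```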

Lemma 4

For \(1 \le n \le N - 1, 1 \le i \le n, 1 \le k \le n - i\), \(b_{i + k, k}\) has the same value as \(b_{i + 1, 1}\), which is independent of k, i.e.,

$$\begin{aligned} b_{i + k, k} = b_{i + 1, 1}. \end{aligned}$$

In addition, there exists a constant \(C_3\), which is independent of h and N, such that

$$\begin{aligned} \left| b_{i + k, k} \right| \le C_3 h^{1 - \alpha } i^{- \alpha }. \end{aligned}$$

Proof

We argue by mathematical induction. First, for \(i = 1, 1 \le k \le n - i\),

$$\begin{aligned} b_{1 + k, k} = h^{1 - \alpha } \frac{1}{\bar{a}_{0}} \sum _{l = 0}^{1 + k - k - 1} a_{1 + k - k - l} b_{k + l, k} = h^{1 - \alpha } \frac{1}{\bar{a}_{0}} a_1 b_{k, k} = h^{1 - \alpha } \frac{a_1}{\bar{a}_{0}^2}, \end{aligned}$$

so the values of \(b_{1 + k, k}\) are all equal, i.e., \(b_{1 + k, k} = b_{2, 1}\).

We assume that for some \(1 \le i \le n - 1\) and all \(1 \le k \le n - i\) the values of \(b_{i + k, k}\) are equal, which implies \(b_{i + k, k} = b_{i + 1, 1}\). Then by Corollary 1,

$$\begin{aligned} b_{i + 1 + k, k}&= h^{1 - \alpha } \frac{1}{\bar{a}_{0}} \sum _{l = 0}^{i + 1 + k - k - 1} a_{i + 1 + k - k - l} b_{k + l, k}\\&= h^{1 - \alpha } \frac{1}{\bar{a}_{0}} \sum _{l = 0}^{i} a_{i + 1 - l} b_{l + k, k} = h^{1 - \alpha } \frac{1}{\bar{a}_{0}} \sum _{l = 0}^{i} a_{i + 1 - l} b_{l + 1, 1}, \end{aligned}$$

which is independent of k with \(1 \le k \le n - (i + 1)\), i.e., the values of \(b_{i + 1 + k, k}\) are equal, and \(b_{i + 1 + k, k} = b_{i + 2, 1}\). The proof of the first part is complete.

In addition, for sufficiently small h, there exists a constant \(D_0\), which is independent of h and N, such that

$$\begin{aligned} \left| \frac{1}{\bar{a}_{0}} \right| = \left| \frac{1}{1 - h^{1 - \alpha } \int _0^{c_1} \left( c_1 - s \right) ^{- \alpha } \, d s} \right| \le D_0. \end{aligned}$$

So by Corollary 1 and Lemma 1, we have

$$\begin{aligned} \left| b_{i + 1, 1} \right| =\,&h^{1 - \alpha } \left| \frac{1}{\bar{a}_{0}} \sum _{l = 0}^{i - 1} a_{i - l} b_{l + 1, 1} \right| \\ \le \,&D_0 \gamma (\alpha ) h^{1 - \alpha } \sum _{l = 0}^{i - 1} \left( i - l \right) ^{- \alpha } \left| b_{l + 1, 1} \right| \\ \le \,&D_0 \gamma (\alpha ) h^{1 - \alpha } \sum _{l = 1}^{i - 1} \left( i - l \right) ^{- \alpha } \left| b_{l + 1, 1} \right| + D^2_0 \gamma (\alpha ) h^{1 - \alpha } i^{- \alpha }. \end{aligned}$$

By the discrete Gronwall inequality (see [1, Theorem 6.1.19]), we know that there exists a constant \(C_3\), which is independent of h and N, such that

$$\begin{aligned} \left| b_{i + 1, 1} \right| \le C_3 h^{1 - \alpha } i^{- \alpha }. \end{aligned}$$

\(\square \)
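The Toeplitz structure asserted in Lemma 4 can be observed directly. The following sketch (illustrative parameters \(\alpha = 0.4\), \(c_1 = 0.5\), \(N = 40\); not the authors' code) builds \(\mathbf {T}_n\) from the closed-form weights \(a_0 = c_1^{1 - \alpha }/(1 - \alpha )\) and \(a_k = \big ( (k + c_1)^{1 - \alpha } - (k + c_1 - 1)^{1 - \alpha } \big )/(1 - \alpha )\) and checks that the inverse is constant along diagonals:

```python
import numpy as np

alpha, c1, N = 0.4, 0.5, 40   # illustrative parameter choices
h = 1.0 / N

# closed-form weights a_0, a_k from Sect. 3
a = np.empty(N)
a[0] = c1 ** (1 - alpha) / (1 - alpha)
idx = np.arange(1, N)
a[1:] = ((idx + c1) ** (1 - alpha) - (idx + c1 - 1) ** (1 - alpha)) / (1 - alpha)

n = N - 1
T = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1):
        T[i, j] = a[i - j]
B = np.linalg.inv(np.eye(n) - h ** (1 - alpha) * T)

# Lemma 4: B is again lower triangular Toeplitz, i.e. constant along diagonals
toeplitz_defect = max(
    abs(B[i + k, k] - B[i, 0]) for i in range(n) for k in range(n - i)
)
print(toeplitz_defect < 1e-10)
```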

Theorem 1

Assume that \(g \in C^{1}(I)\) and \(K \in C^{1}(D)\). Let u be the exact solution of the second-kind Volterra integral equation (1) and \(u_h \in S_{0}^{(- 1)}(I_h)\) the collocation solution defined by the collocation equation (3). Then for sufficiently small h,

$$\begin{aligned} \left\| u - u_h \right\| _{n, \infty } := \sup _{t \in (t_n, \ t_{n + 1}]} \left| u(t) - u_h(t) \right| \le C t_n^{- \alpha } h, \end{aligned}$$

where C is a constant independent of h and N.

In particular, there exist constants \(\hat{C}\) and \(\bar{C}\), independent of h and N, such that at the collocation points,

$$\begin{aligned} \left| u(t_{n, 1}) - u_h(t_{n, 1}) \right| \le \hat{C} t_{n}^{1 - 2 \alpha } h, \end{aligned}$$

and at the endpoint,

$$\begin{aligned} \left| u(T) - u_h(T) \right| \le \bar{C} h. \end{aligned}$$

Proof

First, by (5), (7), (9), Lemmas 1 and 3, there exists a constant \(C_4\), such that

$$\begin{aligned}&\left| r_{n}(\alpha ) \right| \\&\quad \le C_1 \gamma (\alpha ) n^{- \alpha } h^{2 (1 - \alpha )} + C_2 \frac{c_1^{1 - \alpha }}{1 - \alpha } \left( n h \right) ^{- \alpha } h^{2 - \alpha } + C_2 \gamma (\alpha ) h^{2 - \alpha } \sum \limits _{l = 1}^{n - 1} \left( n - l \right) ^{- \alpha } \left( l h \right) ^{- \alpha }\\&\quad \le C_1 \gamma (\alpha ) n^{- \alpha } h^{2 (1 - \alpha )} + C_2 \frac{c_1^{1 - \alpha }}{1 - \alpha } n^{- \alpha } h^{2 (1 - \alpha )} + C_2 \frac{2^{2 \alpha }}{1 - \alpha } \gamma (\alpha ) n^{1 - 2 \alpha } h^{2 (1 - \alpha )} \\&\quad \le C_4 n^{1 - 2 \alpha } h^{2 (1 - \alpha )}. \end{aligned}$$

Next, by (10), Lemmas 3 and 4, we have

$$\begin{aligned} \left| \varepsilon _{n, 1} \right|&= \left| \sum _{l = 1}^{n} b_{n, l} r_l(\alpha ) \right| \le \sum _{l = 1}^{n - 1} \left| b_{n, l} \right| \left| r_l(\alpha ) \right| + \left| \frac{r_{n}(\alpha )}{\bar{a}_0} \right| \\&\le C_3 C_4 \sum _{l = 1}^{n - 1} h^{1 - \alpha } \left( n - l \right) ^{- \alpha } l^{1 - 2 \alpha } h^{2 (1 - \alpha )} + D_0 C_4 n^{1 - 2 \alpha } h^{2 (1 - \alpha )}\\&\le C_3 C_4 \frac{2^{2 \alpha }}{1 - \alpha } T^{1 - \alpha } n^{1 - 2 \alpha } h^{2 (1 - \alpha )} + D_0 C_4 n^{1 - 2 \alpha } h^{2 (1 - \alpha )}\\&\le \left( C_3 C_4 \frac{2^{2 \alpha }}{1 - \alpha } T^{1 - \alpha } + D_0 C_4 \right) n^{1 - 2 \alpha } h^{2 (1 - \alpha )}\\&=: \hat{C} n^{1 - 2 \alpha } h^{2 (1 - \alpha )}. \end{aligned}$$

Further, by (6) and (7), there exists a constant C, such that for \(1 \le n \le N - 1\),

$$\begin{aligned} \left| e_h(t_n + v h) \right|&= \left| u(t_n + v h) - u_h(t_n + v h) \right| \\&\le \left| \varepsilon _{n, 1} \right| + h \left| R_{n}(v) \right| \\&\le \hat{C} n^{1 - 2 \alpha } h^{2 (1 - \alpha )} + C_2 h \left( n h \right) ^{- \alpha }\\&\le C n^{- \alpha } h^{1 - \alpha }. \end{aligned}$$

Finally, at \(t = t_N = T\), for \(N \ge 2\),

$$\begin{aligned} \left| u(T) - u_h(T) \right| = \left| u(t_N) - u_h(t_N) \right| \le C N^{- \alpha } h^{1 - \alpha } \le C T ^{- \alpha } h. \end{aligned}$$

\(\square \)

Corollary 2

If \(\alpha \le 0.5\), the order of the error at the collocation points is always 1, i.e.,

$$\begin{aligned} \max _{n} \left| u(t_{n, 1}) - u_h(t_{n, 1}) \right| = O(h). \end{aligned}$$
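Corollary 2 can be observed numerically. The following self-contained sketch (not the authors' code) implements the \(m = 1\) collocation scheme (3) with exact product-integration weights for \(K \equiv 1\); the exact solution \(u(t) = t^{1 - \alpha }\) with its matching g, and the parameters \(\alpha = 0.4\), \(c_1 = 0.5\), \(T = 1\), are illustrative assumptions:

```python
import math

def collocate(N, alpha=0.4, c1=0.5, T=1.0):
    """Max error at collocation points of the m = 1 scheme with K == 1."""
    h = T / N
    # g(t) = u(t) - int_0^t (t-s)^{-a} u(s) ds for the chosen u(t) = t^{1-a},
    # using int_0^t (t-s)^{-a} s^{1-a} ds = B(2-a, 1-a) t^{2-2a}
    beta = math.gamma(2 - alpha) * math.gamma(1 - alpha) / math.gamma(3 - 2 * alpha)
    u = lambda t: t ** (1 - alpha)
    g = lambda t: u(t) - beta * t ** (2 - 2 * alpha)
    # exact moment weight int_a^b (t-s)^{-alpha} ds
    w = lambda a, b, t: ((t - a) ** (1 - alpha) - (t - b) ** (1 - alpha)) / (1 - alpha)
    U, err = [], 0.0
    for n in range(N):
        t = (n + c1) * h                       # collocation point t_{n,1}
        rhs = g(t) + sum(U[l] * w(l * h, (l + 1) * h, t) for l in range(n))
        Un = rhs / (1.0 - w(n * h, t, t))      # diagonal weight (c1 h)^{1-a}/(1-a)
        U.append(Un)
        err = max(err, abs(u(t) - Un))
    return err

e1, e2 = collocate(64), collocate(128)
p = math.log(e1 / e2, 2)                       # observed convergence order
print(round(p, 2))
```

Halving h should roughly halve the maximal error at the collocation points, i.e., the observed order p stays near 1, in agreement with the corollary.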

4 Refined Error Estimates for \(m \ge 1\) and General Kernels at Mesh Points

Let \(e_h := u - u_h\). On the first mesh interval \([t_0, t_1] = [0, h]\), by [1, Theorem 6.2.9], we know that there exists a constant \(M_1\), such that

$$\begin{aligned} \left| e_h( t_0 + v h ) \right| \le M_1 h^{1 - \alpha }, \ 0 < v \le 1. \end{aligned}$$
(12)

For \(1 \le n \le N - 1\), the collocation error on \(\left( t_{n}, t_{n + 1} \right] \) has the local Lagrange representation

$$\begin{aligned} e_h( t_{n} + v h ) = \sum _{j = 1}^m L_j(v) \varepsilon _{n, j} + h^m R_{m, n}(v), \end{aligned}$$
(13)

where \(\varepsilon _{n, j} := e_h(t_{n, j})\), \(t_{n, j} := t_n + c_j h\) and

$$\begin{aligned} R_{m, n}(v) = u^{(m)}( \eta _n(v) ) \prod _{j = 1}^m \left( v -c_j \right) , \ t_n< \eta _n(v) < t_{n + 1}. \end{aligned}$$

By [2] (see also [1, Theorem 6.1.6]), there exists a constant \(M_2\), such that

$$\begin{aligned} \left| R_{m, n}(v) \right| \le M_2 t_n^{ - (m - 1) - \alpha } = M_2 \left( n h \right) ^{1 - m - \alpha }. \end{aligned}$$
(14)

By (1), (3) and (13), we have

$$\begin{aligned} \varepsilon _{n, i} =\,&e_h(t_{n, i}) = \int _0^{t_{n, i}} \left( t_{n, i} - s \right) ^{- \alpha } K(t_{n, i}, s) e_h(s) \, d s\\ =\,&h^{1 - \alpha } \int _0^{c_i} \left( c_i - s \right) ^{- \alpha } K(t_{n, i}, t_n + s h) e_h(t_{n} + s h) \, d s\\&+ h^{1 - \alpha } \sum _{l = 0}^{n - 1} \int _0^{1} \left( \frac{t_{n, i} - t_l}{h} - s \right) ^{- \alpha } K(t_{n, i}, t_l + s h) e_h(t_l + s h) \, d s\\ =\,&h^{1 - \alpha } \int _0^{c_i} \left( c_i - s \right) ^{- \alpha } K(t_{n, i}, t_n + s h) \bigg [ \sum _{j = 1}^m L_j(s) \varepsilon _{n, j} + h^m R_{m, n}(s) \bigg ] \, d s\\&+ h^{1 - \alpha } \sum _{l = 1}^{n - 1} \int _0^{1} \left( n + c_i - l - s \right) ^{- \alpha } K(t_{n, i}, t_l + s h) \bigg [ \sum _{j = 1}^m L_j(s) \varepsilon _{l, j} + h^m R_{m, l}(s) \bigg ] \, d s\\&+ h^{1 - \alpha } \int _0^{1} \left( n + c_i - s \right) ^{- \alpha } K(t_{n, i}, s h) e_h(t_0 + s h) \, d s\\ =\,&h^{1 - \alpha } \sum _{j = 1}^m \int _0^{c_i} \left( c_i - s \right) ^{- \alpha } K(t_{n, i}, t_n + s h) L_j(s) \, d s \varepsilon _{n, j} \\&+ h^{1 - \alpha } \sum _{l = 1}^{n - 1} \sum _{j = 1}^m \int _0^{1} \left( n + c_i - l - s \right) ^{- \alpha } K(t_{n, i}, t_l + s h) L_j(s) \, d s \varepsilon _{l, j} + r_{m, n}(c_i; \alpha ), \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} r_{m, n}(c_i; \alpha ) :=\,&h^{1 - \alpha } \int _0^{1} \left( n + c_i - s \right) ^{- \alpha } K(t_{n, i}, s h) e_h(t_0 + s h) \, d s \\&+ h^{m + 1 - \alpha } \int _0^{c_i} \left( c_i - s \right) ^{- \alpha } K(t_{n, i}, t_n + s h) R_{m, n}(s) \, d s \\&+ h^{m + 1 - \alpha } \sum \limits _{l = 1}^{n - 1} \int _0^{1} \left( n + c_i - l - s \right) ^{- \alpha } K(t_{n, i}, t_l + s h) R_{m, l}(s) \, d s. \end{aligned} \end{aligned}$$
(15)

For \(1 \le n \le N - 1\) and \(1 \le l \le n - 1\), denote

$$\begin{aligned} \mathbf {A}_{n, n} = \mathbf {A}_{n, n}(c_1, \ldots , c_m; \alpha ) :=\left( \begin{array}{c} {\displaystyle \int _0^{c_i} \left( c_i - s \right) ^{- \alpha } K(t_{n, i}, t_n + s h) L_j(s) \, d s} \\ (i, j = 1, \ldots , m) \\ \end{array}\right) ,\\ \mathbf {A}_{n, l} = \mathbf {A}_{n, l}(c_1, \ldots , c_m; \alpha ) :=\left( \begin{array}{c} {\displaystyle \int _0^{1} \left( n + c_i - l - s \right) ^{- \alpha } K(t_{n, i}, t_l + s h) L_j(s) \, d s} \\ (i, j = 1, \ldots , m) \\ \end{array}\right) ,\\ {\varvec{\varepsilon }}_n := \left( \varepsilon _{n, 1}, \ldots , \varepsilon _{n, m} \right) ^T, \ \mathbf {r}_{m, n} = \mathbf {r}_{m, n}(c_1, \ldots , c_m; \alpha ) := \left( r_{m, n}(c_1; \alpha ), \ldots , r_{m, n}(c_m; \alpha ) \right) ^T. \end{aligned}$$

Then

$$\begin{aligned} \Big ( \mathbf {I}_m - h^{1 - \alpha } \mathbf {A}_{n, n} \Big ) \varvec{\varepsilon }_{n} - h^{1 - \alpha } \sum _{l = 1}^{n - 1} \mathbf {A}_{n, l} \varvec{\varepsilon }_{l} = \mathbf {r}_{m, n}, \end{aligned}$$
(16)

and

$$\begin{aligned} {\left( \begin{array}{ccccc} \mathbf {I}_m - h^{1 - \alpha } \mathbf {A}_{1, 1}&{} &{} &{} &{} \\ &{} &{} &{} &{} \\ - h^{1 - \alpha } \mathbf {A}_{2, 1} &{}\mathbf {I}_m - h^{1 - \alpha } \mathbf {A}_{2, 2}&{} &{} &{} \\ &{} &{} &{} &{} \\ - h^{1 - \alpha } \mathbf {A}_{3, 1} &{}- h^{1 - \alpha } \mathbf {A}_{3, 2} &{}\mathbf {I}_m - h^{1 - \alpha } \mathbf {A}_{3, 3} &{} &{}\\ \vdots &{} \vdots &{} \ddots &{} \ddots &{} \\ &{} &{} &{} &{} \\ - h^{1 - \alpha } \mathbf {A}_{n, 1} &{} - h^{1 - \alpha } \mathbf {A}_{n, 2} &{} \dots &{} - h^{1 - \alpha } \mathbf {A}_{n, n - 1} &{}\mathbf {I}_m - h^{1 - \alpha } \mathbf {A}_{n, n} \\ \end{array} \right) \left( \begin{array}{c} \varvec{\varepsilon }_{1} \\ \varvec{\varepsilon }_{2} \\ \varvec{\varepsilon }_{3} \\ \vdots \\ \varvec{\varepsilon }_{n} \\ \end{array} \right) =\left( \begin{array}{c} \mathbf {r}_{m, 1} \\ \mathbf {r}_{m, 2} \\ \mathbf {r}_{m, 3} \\ \vdots \\ \mathbf {r}_{m, n}\\ \end{array}\right) .} \end{aligned}$$
(17)

Denote

$$\begin{aligned} \bar{\mathbf {T}}_{m n} := \left( \begin{array}{ccccc} \mathbf {A}_{1, 1} &{} &{} &{} &{} \\ &{} &{} &{} &{} \\ \mathbf {A}_{2, 1} &{} \mathbf {A}_{2, 2} &{} &{} &{} \\ &{} &{} &{} &{} \\ \mathbf {A}_{3, 1} &{} \mathbf {A}_{3, 2} &{} \mathbf {A}_{3, 3} &{} &{}\\ \vdots &{} \vdots &{} \ddots &{} \ddots &{} \\ &{} &{} &{} &{} \\ \mathbf {A}_{n, 1} &{} \mathbf {A}_{n, 2} &{} \dots &{} \mathbf {A}_{n, n - 1} &{} \mathbf {A}_{n, n} \\ \end{array}\right) . \end{aligned}$$

Then the coefficient matrix can be written as \(\mathbf {I}_{m n} - h^{1 - \alpha } \bar{\mathbf {T}}_{m n}\).

It is easy to prove the following lemma.

Lemma 5

Let \(r \in \mathbb {N}\) and \(\mathbf {D}_{p, q} \ (1 \le q \le p \le r)\) be square matrices, and

$$\begin{aligned} \mathbf {D} := \left( \begin{array}{ccccc} \mathbf {D}_{1, 1} &{} &{} &{} &{} \\ \mathbf {D}_{2, 1} &{} \mathbf {D}_{2, 2} &{} &{} &{} \\ \mathbf {D}_{3, 1} &{} \mathbf {D}_{3, 2} &{} \mathbf {D}_{3, 3} &{} &{} \\ \vdots &{} \vdots &{} &{} \ddots &{} \\ \mathbf {D}_{r, 1} &{} \mathbf {D}_{r, 2} &{} \dots &{} \dots &{} \mathbf {D}_{r, r} \\ \end{array}\right) \end{aligned}$$

be a lower triangular block matrix with invertible diagonal blocks \(\mathbf {D}_{p, p} \ (p = 1, 2, \ldots , r)\). Then \(\mathbf {D}\) is invertible, and the inverse matrix \(\mathbf {D}^{- 1}\) is also a lower triangular block matrix, with the blocks

$$\begin{aligned} \mathbf {W}_{p, p}&=\mathbf {D}_{p, p}^{- 1}, p = 1, 2, \ldots , r,\\ \mathbf {W}_{p, q}&= - \mathbf {D}_{p, p}^{- 1} \sum _{l = 0}^{p - q - 1} \mathbf {D}_{p, q + l} \mathbf {W}_{q + l, q}, 1 \le q < p \le r. \end{aligned}$$

Denote the inverse of the matrix \(\mathbf {I}_{m n} - h^{1 - \alpha } \bar{\mathbf {T}}_{m n}\) as \(\bar{\mathbf {B}}_{m n}\) with blocks \(\mathbf {B}_{p, q}\), and let \(\bar{\mathbf {A}}_{i, i} = \bar{\mathbf {A}}_{i, i}(\alpha ) := \mathbf {I}_m - h^{1-\alpha } \mathbf {A}_{i, i}\). By Lemma 5, we easily obtain the following corollary.

Corollary 3

The matrix \(\mathbf {I}_{m n} - h^{1 - \alpha } \bar{\mathbf {T}}_{m n}\) is invertible for sufficiently small h, and the inverse matrix \(\bar{\mathbf {B}}_{m n} = \left( \mathbf {I}_{m n} - h^{1 - \alpha } \bar{\mathbf {T}}_{m n} \right) ^{- 1}\) is also a lower triangular block matrix, with the blocks

$$\begin{aligned}&\mathbf {B}_{p, p} = \bar{\mathbf {A}}_{p, p}^{- 1}, p = 1, 2, \ldots , n,\\&\mathbf {B}_{p, q} = h^{1 - \alpha } \bar{\mathbf {A}}_{p, p}^{- 1} \sum _{l = 0}^{p - q - 1} \mathbf {A}_{p, q + l} \mathbf {B}_{q + l, q}, \ 1 \le q < p \le n. \end{aligned}$$

For \(1 \le n \le N - 1, 1 \le p \le n, 1 \le k \le n - p\), it is easy to see that for a non-constant kernel K(t, s) the values of \(\mathbf {B}_{p + k, k}\) generally differ, in contrast to the constant kernel case (see Lemma 4). Nevertheless, the estimate for \(\mathbf {B}_{p + k, k}\) still holds, as stated in the following lemma.

Lemma 6

Assume that \(K \in C(D)\), where \(D := \left\{ (t, s): 0 \le s \le t \le T \right\} \). Then for \(1 \le n \le N - 1, 1 \le p \le n, 1 \le k \le n - p\), there exists a constant \(M_3\), which is independent of h and N, such that

$$\begin{aligned} \left\| \mathbf {B}_{p + k, k} \right\| _1 \le M_3 h^{1 - \alpha } p^{- \alpha }. \end{aligned}$$

Proof

Denote \(\bar{K} := \max \limits _{(t, s) \in D} \left| K(t, s) \right| \) and \(\bar{L} := \max \limits _{1 \le j \le m, s \in [0,1]} \left| L_j(s) \right| \). Then by Lemma 1, we know that

$$\begin{aligned} \left| \int _0^{1} \left( n + c_i - l - s \right) ^{- \alpha } K(t_{n, i}, t_l + s h) L_j(s) \, d s \right| \le \bar{K} \bar{L} \gamma (\alpha ) \left( n - l \right) ^{- \alpha }. \end{aligned}$$

For sufficiently small h, \(\bar{\mathbf {A}}_{p, p}^{- 1}\) is uniformly bounded, which implies that there exists a constant \(\bar{D}_0\), which is independent of h and N, such that

$$\begin{aligned} \left\| \bar{\mathbf {A}}_{p, p}^{- 1} \right\| _1 \le \bar{D}_0. \end{aligned}$$

So by Corollary 3 and Lemma 1, we have

$$\begin{aligned} \left\| \mathbf {B}_{p + k, k} \right\| _1 =\,&h^{1 - \alpha } \left\| \bar{\mathbf {A}}_{p + k, p + k}^{- 1} \sum _{l = 0}^{p - 1} \mathbf {A}_{p + k, k + l} \mathbf {B}_{l + k, k} \right\| _1\\ \le \,&\bar{D}_0 m \bar{K} \bar{L} \gamma (\alpha ) h^{1 - \alpha } \sum _{l = 0}^{p - 1} \left( p - l \right) ^{- \alpha } \left\| \mathbf {B}_{l + k, k} \right\| _1\\ \le \,&\bar{D}_0 m \bar{K} \bar{L} \gamma (\alpha ) h^{1 - \alpha } \sum _{l = 1}^{p - 1} \left( p - l \right) ^{- \alpha } \left\| \mathbf {B}_{l + k, k} \right\| _1 + \bar{D}^2_0 m \bar{K} \bar{L} \gamma (\alpha ) h^{1 - \alpha } p^{- \alpha }. \end{aligned}$$

By the discrete Gronwall inequality (see [1, Theorem 6.1.19]), we know that there exists a constant \(M_3\), which is independent of h and N, such that

$$\begin{aligned} \left\| \mathbf {B}_{p + k, k} \right\| _1 \le M_3 h^{1 - \alpha } p^{- \alpha }. \end{aligned}$$

\(\square \)

Lemma 7

For \(1\le n\le N\), \(m \ge 2\) and \(0< \alpha < 1\),

$$\begin{aligned} \sum _{l = 1}^{n - 1} \left( n - l \right) ^{- \alpha } l^{1 - m - \alpha } \le \bar{\gamma }(\alpha ) n^{- \alpha }, \end{aligned}$$

where \(\bar{\gamma }(\alpha ) := 2 ^{\alpha } \left( 1 +\frac{1}{m - 2 + \alpha }\right) + \frac{2^{m - 2 (1 - \alpha )}}{1 - \alpha }\).

Proof

By

$$\begin{aligned} \sum _{l = 1}^{n} l^{1 - m - \alpha } \le 1 + \int _1^n s^{1 - m - \alpha } \, d s = 1 + \frac{n^{2 - m - \alpha } - 1}{2 - m - \alpha } \le 1 + \frac{1}{m - 2 + \alpha }, \end{aligned}$$

and together with Lemma 3, we obtain

$$\begin{aligned} \sum _{l = 1}^{n - 1} \left( n - l \right) ^{- \alpha } l^{1 - m - \alpha }&= \sum _{l = 1}^{\lfloor \frac{n}{2}\rfloor } \left( n - l \right) ^{- \alpha } l^{1 - m - \alpha } + \sum _{l = \lfloor \frac{n}{2}\rfloor +1}^{n - 1} \left( n - l \right) ^{- \alpha } l^{1 - m - \alpha } \\&\le \left( \frac{n}{2} \right) ^{- \alpha } \sum _{l = 1}^{\lfloor \frac{n}{2}\rfloor } l^{1 - m - \alpha } + \left( \frac{n}{2} \right) ^{1 - m - \alpha } \sum _{l = \lfloor \frac{n}{2}\rfloor +1}^{n - 1} \left( n - l \right) ^{- \alpha }\\&\le \left( \frac{n}{2} \right) ^{- \alpha } \left( 1 + \frac{1}{m - 2 + \alpha }\right) + \left( \frac{n}{2} \right) ^{1 - m - \alpha } \frac{\left( \frac{n}{2} \right) ^{1 - \alpha }}{1 - \alpha }\\&= 2 ^{\alpha } \left( 1 + \frac{1}{m - 2 + \alpha }\right) n^{- \alpha } + \frac{2^{m - 2 (1 - \alpha )}}{1 - \alpha } n^{2 (1 - \alpha ) - m} \\&\le \left[ 2 ^{\alpha } \left( 1 + \frac{1}{m - 2 + \alpha }\right) + \frac{2^{m - 2 (1 - \alpha )}}{1 - \alpha } \right] n^{- \alpha }. \end{aligned}$$

\(\square \)
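As with Lemma 3, the bound of Lemma 7 is easy to sanity-check numerically; the sampled values of \(\alpha \), m, and n below are arbitrary illustrative choices:

```python
import numpy as np

def lemma7_holds(alpha, m, n):
    l = np.arange(1, n)
    lhs = ((n - l) ** (-alpha) * l ** (1.0 - m - alpha)).sum()
    bar_gamma = (2 ** alpha * (1 + 1 / (m - 2 + alpha))
                 + 2 ** (m - 2 * (1 - alpha)) / (1 - alpha))
    return lhs <= bar_gamma * n ** (-alpha)

ok = all(
    lemma7_holds(a, m, n)
    for a in (0.2, 0.5, 0.8)
    for m in (2, 3, 4)
    for n in (2, 5, 50, 500)
)
print(ok)
```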

Theorem 2

Assume that \(g \in C^{m}(I), K \in C^{m}(D)\), and let \(u_h \in S_{m - 1}^{(- 1)}(I_h)\) be the collocation solution of the second-kind Volterra integral equation (1) defined by the collocation equation (3). Then for sufficiently small h,

$$\begin{aligned} \left\| u - u_h \right\| _{n, \infty } := \sup _{t \in (t_n, \ t_{n + 1}]} \left| u(t) - u_h(t) \right| \le M \left( t_n^{- \alpha } h^{2 - \alpha } + t_n^{1 - m - \alpha } h^{m} \right) , \end{aligned}$$

where M is a constant independent of h and N.

In particular, there exist constants \(\hat{M}\) and \(\bar{M}\), independent of h and N, such that at the collocation points,

$$\begin{aligned} \left| u(t_{n, i}) - u_h(t_{n, i}) \right| \le \hat{M} \left\{ \begin{array}{ll} t_n^{1 - 2 \alpha } h , &{} \hbox {if} \ m = 1 \\ t_n^{- \alpha } h^{2 - \alpha } , &{} \hbox {if}\ m \ge 2, \end{array} \right. \end{aligned}$$

and at the endpoint,

$$\begin{aligned} \left| u(T) - u_h(T) \right| \le \bar{M} h^{ \min \{ 2 - \alpha , m \} }. \end{aligned}$$

Proof

We distinguish the following two cases.

Case I: \(m = 1\).

First, by (12), (14), (15), Lemmas 1 and 3, there exists a constant \(\hat{M}_4\), which is independent of h and N, such that

$$\begin{aligned}&\left| r_{1, n}(c_1; \alpha ) \right| \\&\quad \le M_1 \bar{K} \gamma (\alpha ) h^{2 (1 - \alpha )} n^{- \alpha } + M_2 \frac{c_1^{1 - \alpha }}{1 - \alpha } \bar{K} n^{- \alpha } h^{2 (1 - \alpha )} + M_2 \bar{K} \gamma (\alpha ) h^{2 (1 - \alpha )} \sum \limits _{l = 1}^{n - 1} \left( n - l \right) ^{- \alpha } l^{- \alpha } \\&\quad \le M_1 \bar{K} \gamma (\alpha ) n^{- \alpha } h^{2 (1 - \alpha )} + M_2 \frac{c_1^{1 - \alpha }}{1 - \alpha } \bar{K} n^{- \alpha } h^{2 (1 - \alpha )} + M_2 \bar{K} \gamma (\alpha ) \frac{2^{2 \alpha }}{1 - \alpha } n^{1 - 2 \alpha } h^{2 (1 - \alpha )}\\&\quad \le \hat{M}_4 n^{1 - 2 \alpha } h^{2 (1 - \alpha )}. \end{aligned}$$

As in the case of \(m = 1\) and constant kernels in Sect. 3, one easily obtains that there exist constants \(\hat{M}_5\) and \(\hat{M}_6\), such that

$$\begin{aligned} |\varvec{\varepsilon }_{n}| \le \hat{M}_5 n^{1 - 2 \alpha } h^{2 (1 - \alpha )}, \end{aligned}$$

and

$$\begin{aligned} \left| e_h(t_n + v h) \right| \le \hat{M}_6 t_n^{- \alpha } h. \end{aligned}$$

In particular, at \(t = t_N = T\), for \(N \ge 2\),

$$\begin{aligned} \left| u(T) - u_h(T) \right| = \left| u(t_N) - u_h(t_N) \right| \le \hat{M}_6 t_N^{- \alpha } h = \hat{M}_6 T^{- \alpha } h, \end{aligned}$$

which completes the proof.

Case II: \(m > 1\).

First, by (12), (14), (15), Lemmas 1 and 7, there exists a constant \(M_4\), which is independent of h and N, such that

$$\begin{aligned}&\left| r_{m, n}(c_i; \alpha ) \right| \\&\quad \le M_1 \bar{K} \gamma (\alpha ) h^{2 (1 - \alpha )} n^{- \alpha } + M_2 \frac{c_i^{1 - \alpha }}{1 - \alpha } \bar{K} \left( n h \right) ^{1 - m - \alpha } h^{m + 1 - \alpha }\\&\qquad + M_2 \bar{K} \gamma (\alpha ) h^{m + 1 - \alpha } \sum \limits _{l = 1}^{n - 1} \left( n - l \right) ^{- \alpha } \left( l h \right) ^{1 - m - \alpha }\\&\quad \le M_1 \bar{K} \gamma (\alpha ) n^{- \alpha } h^{2 (1 - \alpha )} + \frac{M_2}{1 - \alpha } \bar{K} n^{1 - m - \alpha } h^{2 (1 - \alpha )} + M_2 \bar{K} \gamma (\alpha ) \bar{\gamma }(\alpha ) n^{- \alpha } h^{2 (1 - \alpha )}\\&\quad \le M_4 n^{- \alpha } h^{2 (1 - \alpha )}. \end{aligned}$$

Next, by (17), Lemmas 3 and 6, we have

$$\begin{aligned} \begin{aligned} \Vert \varvec{\varepsilon }_{n}\Vert _1 =\,&\left\| \sum _{l = 1}^{n} \mathbf {B}_{n, l} \mathbf {r}_{m, l} \right\| _1 \le \sum _{l = 1}^{n - 1} \left\| \mathbf {B}_{n, l} \right\| _1 \left\| \mathbf {r}_{m, l} \right\| _1 + \left\| \bar{\mathbf {A}}_{n, n}^{- 1} \right\| _1 \left\| \mathbf {r}_{m, n} \right\| _1 \\ \le \,&m M_3 M_4 \sum _{l = 1}^{n - 1} h^{1 - \alpha } \left( n - l \right) ^{- \alpha } l^{- \alpha } h^{2 (1 - \alpha )} + m \bar{D}_0 M_4 n^{- \alpha } h^{2 (1 - \alpha )} \\ \le \,&\left( m M_3 M_4 T^{1 - \alpha } \frac{2^{2 \alpha }}{1 - \alpha } + m \bar{D}_0 M_4 \right) n^{- \alpha } h^{2 (1 - \alpha )} \\ =:&M_5 n^{- \alpha } h^{2 (1 - \alpha )}. \end{aligned} \end{aligned}$$
(18)

By (13) and (14), there exists a constant \(M_6\), such that

$$\begin{aligned} \left| e_h(t_n + v h) \right|&= \left| u(t_n + v h) - u_h(t_n + v h) \right| \\&\le \left| \sum _{j = 1}^m L_j(v) \varepsilon _{n, j} \right| + h^m \left| R_{m, n}(v) \right| \\&\le \bar{L} M_5 n^{- \alpha } h^{2 (1 - \alpha )} + M_2 h^m \left( n h \right) ^{1 - m - \alpha }\\&\le M_6 \left( t_n^{- \alpha } h^{2 - \alpha } + t_n^{1 - m - \alpha } h^{m}\right) . \end{aligned}$$

In particular,

$$\begin{aligned} \left| u(T) - u_h(T) \right| =\,&\left| u(t_N) - u_h(t_N) \right| \\ \le \,&M_6 \left( T^{- \alpha } h^{2 - \alpha } + T^{1 - m - \alpha } h^{m} \right) \\ \le \,&M_6 \left( T^{- \alpha } + T^{- 1} \right) h^{2 - \alpha }, \end{aligned}$$

which completes the proof. \(\square \)

Corollary 4

For the general kernel, if \(m = 1\) and \(\alpha \le 0.5\), the order of the error at the collocation points is always 1; i.e.

$$\begin{aligned} \max _{n} \left| u(t_{n, 1}) - u_h(t_{n, 1}) \right| = O(h). \end{aligned}$$

5 Iterated Collocation Methods for \(m = 1\)

In the following, we investigate the iterated collocation methods for \(m = 1\) to obtain some further superconvergence results.

5.1 The First Iterated Collocation Method

Let

$$\begin{aligned} u_h^{it, 1}(t) := g(t) + \int _0^t ( t - s )^{- \alpha } K(t, s) u_h(s) \ d s, \ t \in I \end{aligned}$$

be the first iterated collocation solution. It is obvious that

$$\begin{aligned} u_h^{it, 1}(t) = u_h(t), \ \hbox {for all} \ t \in X_h. \end{aligned}$$
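For a constant kernel, the weakly singular integral of the piecewise-constant collocation solution can be evaluated in closed form on each subinterval, so \(u_h^{it, 1}(t)\) is computable exactly from the collocation values. The following Python sketch illustrates this; the function name and calling convention are ours, not from the text:

```python
def iterated_collocation_step(t, g, U, grid, alpha, K_const):
    """Evaluate u_h^{it,1}(t) = g(t) + K * sum_l U[l] * w_l(t) for a
    piecewise-constant collocation solution with value U[l] on (grid[l], grid[l+1]],
    assuming a constant kernel K(t, s) = K_const.

    The weight w_l integrates (t - s)^(-alpha) exactly over the part of the
    l-th subinterval lying to the left of t."""
    total = g(t)
    for l in range(len(U)):
        a = grid[l]
        if a >= t:
            break
        b = min(grid[l + 1], t)
        # closed form: int_a^b (t - s)^(-alpha) ds
        w = ((t - a) ** (1.0 - alpha) - (t - b) ** (1.0 - alpha)) / (1.0 - alpha)
        total += K_const * U[l] * w
    return total
```

For example, with \(g \equiv 0\), \(K \equiv 1\), \(\alpha = 0.5\) and all collocation values equal to 1, the iterate at \(t = 1\) reproduces \(\int _0^1 (1 - s)^{-1/2} \, ds = 2\).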

Let

$$\begin{aligned} \delta _h(t) := -u_h(t) + g(t) + \int _0^t ( t - s )^{- \alpha } K(t, s) u_h(s) \ d s, \ t \in I \end{aligned}$$

and note that, by the collocation Eq. (3), \(\delta _h(t) = 0\) whenever \(t \in X_h\). Then

$$\begin{aligned} \delta _h(t) = e_h(t) - \int _0^t ( t - s )^{- \alpha } K(t, s) e_h(s) \ d s, \ t \in I. \end{aligned}$$

At \(t = t_n + v h\), by Lemmas 1 and 3 and Theorem 2, there exists a constant \(\tilde{E}_0\), such that

$$\begin{aligned} \left| \delta _h(t_n + v h) \right| \le \,&\left| e_h(t_n + v h) \right| + h^{1 - \alpha } \left| \int _0^v ( v - s )^{- \alpha } K(t_n + v h, t_n + s h) e_h(t_n + s h) \ d s \right| \\&+ h^{1 - \alpha } \sum _{l = 0}^{n - 1} \left| \int _0^1 (n - l + v - s )^{- \alpha } K(t_n + v h, t_l + s h) e_h(t_l + s h) \ d s \right| \\ \le \,&2M t_n^{ - \alpha } h + 2\bar{K} M t_n^{ - \alpha } h \frac{h^{1 - \alpha }}{1 - \alpha } + 2\bar{K} M \gamma (\alpha ) h^{1 - \alpha } \sum _{l = 0}^{n - 1} ( n - l )^{- \alpha } t_l^{ - \alpha } h \\ \le \,&\tilde{E}_0 t_n^{ - \alpha } h. \end{aligned}$$

By (13) and (14), for \(1 \le n \le N - 1\) and \(t \in \left( t_{n}, t_{n + 1} \right] \), there exists a constant \(E_1\), such that

$$\begin{aligned} \left| e'_h(t_n + v h) \right| =\,&\left| R'_{1, n}(v) \right| = \left| \frac{d}{dv}\left[ u^{'}( \eta _n(v) ) ( v - c_1 ) \right] \right| \\ =\,&\left| h u^{''}( \eta _n(v) ) \left( v - c_1 \right) + u^{'}( \eta _n(v) ) \right| \\ \le \,&h M_2 \left( n h \right) ^{- 1 - \alpha } + M_2 \left( n h \right) ^{ - \alpha }\\ \le \,&E_1 t_n^{ - \alpha }. \end{aligned}$$

Similarly, there exists a constant \(E_2\), such that

$$\begin{aligned} \left| e''_h(t_n + v h) \right| =\,&\left| h^{- 1} R''_{1, n}(v) \right| \le h^{-1} \left| \frac{d^2}{dv^2}\left[ u^{'}(\eta _n(v)) (v - c_1) \right] \right| \\ =\,&h^{ - 1} \left| h^2 u^{'''}(\eta _n(v)) (v - c_1) + 2 h u^{''}(\eta _n(v)) \right| \\ \le \,&h M_2 \left( n h \right) ^{- 2 - \alpha } + 2 M_2 \left( n h \right) ^{ - 1 - \alpha } \\ \le \,&E_2 t_n^{ - 1 - \alpha }. \end{aligned}$$

In addition,

$$\begin{aligned} \delta _h(t) =\,&e_h(t) + \frac{1}{ 1 - \alpha } \int _0^t K(t, s) e_h(s) \ d ( t - s )^{1 - \alpha }\\ =\,&e_h(t) + \frac{1}{ 1 - \alpha } \left[ - K(t, 0) e_h(0) t^{1 - \alpha } - \int _0^t ( t - s )^{1 - \alpha } \frac{\partial }{\partial s} \Big ( K(t, s) e_h(s) \Big ) \ d s \right] \\ =\,&e_h(t) - \frac{1}{ 1 - \alpha } \Big ( K(t, 0) t^{1 - \alpha } \Big ) e_h(0) - \frac{1}{ (1 - \alpha )(2 - \alpha )}\frac{\partial }{\partial s} \Big ( K(t, s) e_h(s) \Big ) \Big |_{s = 0} t^{2 - \alpha } \\&- \frac{1}{ (1 - \alpha )(2 - \alpha )} \int _0^t ( t - s )^{2 - \alpha } \frac{\partial ^2}{\partial s^2} \Big ( K(t, s) e_h(s) \Big ) \ d s, \end{aligned}$$

therefore,

$$\begin{aligned} \delta '_h(t) =\,&e'_h(t) - \frac{1}{ 1 - \alpha }\frac{d}{dt} \Big ( K(t, 0) t^{1 - \alpha } \Big ) e_h(0) - \int _0^t ( t - s )^{ - \alpha } \left( \frac{\partial K(t, s)}{\partial s} e_h(s) + K(t, s) e'_h(s) \right) \ ds\\&- \frac{1}{ 1 - \alpha } \int _0^t ( t - s )^{ 1 - \alpha } \left( \frac{\partial ^2 K(t, s)}{\partial t \partial s} e_h(s) + \frac{\partial K(t, s)}{\partial t} e'_h(s) \right) \ d s, \end{aligned}$$
$$\begin{aligned} \delta ''_h(t) =\,&e''_h(t) - \frac{1}{ 1 - \alpha }\frac{d^2}{dt^2} \Big ( K(t, 0) t^{1 - \alpha } \Big ) e_h(0) - \frac{1}{ (1 - \alpha )(2 - \alpha )}\frac{\partial ^2}{\partial t^2} \left[ \frac{\partial }{\partial s} \Big ( K(t, s) e_h(s) \Big ) \Big |_{s = 0} t^{2 - \alpha } \right] \\&- \int _0^t ( t - s )^{ - \alpha } \left( \frac{\partial ^2 K(t, s)}{\partial s^2} e_h(s) + 2 \frac{\partial K(t, s)}{\partial s} e'_h(s) + K(t, s) e''_h(s) \right) \ d s\\&- \frac{2}{ 1 - \alpha } \int _0^t ( t - s )^{ 1 - \alpha } \left( \frac{\partial ^3 K(t, s)}{\partial t \partial s^2} e_h(s) + 2 \frac{\partial ^2 K(t, s)}{\partial t \partial s} e'_h(s) + \frac{\partial K(t, s)}{\partial t} e''_h(s)\right) \ d s\\&- \frac{1}{ (1 - \alpha )(2 - \alpha )} \int _0^t ( t - s )^{ 2 - \alpha } \left( \frac{\partial ^4 K(t, s)}{\partial t^2 \partial s^2} e_h(s) + 2 \frac{\partial ^3 K(t, s)}{\partial t^2 \partial s} e'_h(s) + \frac{\partial ^2 K(t, s)}{\partial t^2} e''_h(s)\right) \ d s, \end{aligned}$$

and by Lemmas 1, 3 and 7, there exist constants \(\tilde{E}_1\) and \(\tilde{E}_2\), such that

$$\begin{aligned} \left| \delta '_h(t_n + vh) \right| \le \,&\left| e'_h(t_n + vh) \right| + \left[ \frac{\bar{K}_1}{ 1 - \alpha } \left( t_n + vh \right) ^{1 - \alpha } + \bar{K} \left( t_n + vh \right) ^{ - \alpha } \right] \left| e_h(0) \right| \\&+ \int _0^{t_n + vh} ( t_n + vh - s )^{ - \alpha } \Big [ \bar{K}_1 \left| e_h(s) \right| + \bar{K} \left| e'_h(s) \right| \Big ] \ d s\\&+ \frac{1}{ 1 - \alpha } \int _0^{t_n + vh} ( t_n + vh - s )^{ 1 - \alpha } \Big [ \bar{K}_2 \left| e_h(s) \right| + \bar{K}_1 \left| e'_h(s) \right| \Big ] \ d s \\ \le \,&\tilde{E}_1 t_n^{ - \alpha }, \end{aligned}$$

and

$$\begin{aligned}&\left| \delta ''_h(t_n + vh) \right| \\&\quad \le \left| e''_h(t_n + vh) \right| + \left[ \frac{\bar{K}_2}{1 - \alpha } (t_n + vh)^{1 - \alpha } + 2 \bar{K}_1 (t_n + vh)^{ - \alpha } + \alpha \bar{K} (t_n + vh)^{ - \alpha - 1} \right] \left| e_h(0) \right| \\&\qquad + \frac{\bar{K}_3 \left| e_h(0) \right| + \bar{K}_2 \left| e'_h(0) \right| }{ (1 - \alpha )(2 - \alpha )} (t_n + vh)^{ 2 - \alpha } + 2 \frac{\bar{K}_2 \left| e_h(0) \right| + \bar{K}_1 \left| e'_h(0) \right| }{ 1 - \alpha } (t_n + vh)^{ 1 - \alpha } \\&\qquad + \Big [ \bar{K}_1 \left| e_h(0) \right| + \bar{K} \left| e'_h(0) \right| \Big ] (t_n + vh)^{ - \alpha }\\&\qquad + \int _0^{t_n + vh} ( t_n + vh - s )^{ - \alpha } \Big [ \bar{K}_2 \left| e_h(s) \right| + 2 \bar{K}_1 \left| e'_h(s) \right| + \bar{K} \left| e''_h(s) \right| \Big ] \ d s\\&\qquad + \frac{2}{1 - \alpha } \int _0^{t_n + vh} ( t_n + vh - s )^{ 1 - \alpha } \Big [ \bar{K}_3 \left| e_h(s) \right| + 2 \bar{K}_2 \left| e'_h(s) \right| + \bar{K}_1 \left| e''_h(s) \right| \Big ] \ d s\\&\qquad + \frac{1}{ (1 - \alpha )(2 - \alpha )} \int _0^{t_n + vh} ( t_n + vh - s )^{ 2 - \alpha } \Big [ \bar{K}_4 \left| e_h(s) \right| + 2 \bar{K}_3 \left| e'_h(s) \right| + \bar{K}_2 \left| e''_h(s) \right| \Big ] \ d s\\&\quad \le \tilde{E}_2 \left( t_n^{ - 1 - \alpha } + t_n^{- \alpha } h^{- \alpha } \right) , \end{aligned}$$

where \(\bar{K}_j := \max \limits _{0 \le s \le t \le T} \sum \limits _{i = 0}^j \left| \frac{\partial ^j K(t, s)}{\partial t^i \partial s^{j - i}}\right| \ (j \in \mathbb {N})\).

Denote \(e^{it, 1}_h := u - u^{it, 1}_h\). Then by [1, Theorem 6.1.2],

$$\begin{aligned} e^{it, 1}_h(t) = \int _0^t R_{\alpha } (t, s) \delta _h(s) \ d s, \ t \in I, \end{aligned}$$

where \(R_{\alpha } (t, s) := \left( t - s \right) ^{- \alpha } Q(t, s; \alpha )\), \(Q(t, s; \alpha ) := \sum \limits _{n = 1} ^{\infty } \left( t - s \right) ^{(n - 1) (1 - \alpha )} \Phi _n (t, s; \alpha )\), and the functions \(\Phi _n\) are defined recursively by

$$\begin{aligned} \Phi _n (t, s; \alpha ) := \int _0^1 \left( 1 - z \right) ^{-\alpha } z^{(n - 1)(1 - \alpha ) - 1} K(t, s + ( t - s) z) \Phi _{n - 1} (s + (t - s) z, s; \alpha ) \ d z \end{aligned}$$

\(( n \ge 2)\), with \(\Phi _1 (t, s; \alpha ) := K(t, s)\) and \(\Phi _n (\cdot , \cdot ; \alpha ) \in C(D)\).
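For instance, for the constant kernel \(K \equiv 1\) one has \(\Phi _1 \equiv 1\), and the recursion gives (a quick sanity check, not taken from [1])

$$\begin{aligned} \Phi _2 (t, s; \alpha ) = \int _0^1 \left( 1 - z \right) ^{- \alpha } z^{- \alpha } \ d z = B(1 - \alpha , 1 - \alpha ) = \frac{\Gamma (1 - \alpha )^2}{\Gamma (2 - 2 \alpha )}, \end{aligned}$$

which is independent of t and s.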

Therefore, at the first interval \([0, t_1]\), there exists a constant \(E_3\), such that

$$\begin{aligned} \left| e^{it, 1}_h(v h) \right| =\,&\left| \int _0^{vh} \left( vh - s \right) ^{- \alpha } Q(vh, s; \alpha ) \delta _h(s) \ d s \right| \\ =\,&h^{1 - \alpha } \left| \int _0^{v} \left( v - s \right) ^{- \alpha } Q(vh, sh; \alpha ) \delta _h(sh) \ d s \right| \\ \le \,&\tilde{E}_0 \bar{Q} h^{1 - \alpha } \frac{h^{1 - \alpha }}{1 - \alpha } \le E_3 h^{2(1 - \alpha )}, \end{aligned}$$

where \(\bar{Q} := \max \limits _{0 \le s \le t \le T, 0< \alpha < 1} \left| Q(t, s; \alpha ) \right| \).

For \( 1 \le n \le N - 1\),

$$\begin{aligned} e^{it, 1}_h(t_n + v h) =\,&\int _0^{t_n + v h} R_{\alpha } (t_n + v h, s) \delta _h(s) \ d s \\ =\,&h^{1 - \alpha } \int _0^{v} (v - s)^{- \alpha } Q (t_n + v h, t_n + s h; \alpha ) \delta _h(t_n + s h) \ d s \\&+ h^{1 - \alpha } \sum _{l = 0}^{n - 1} \int _0^{1} (n + v - l - s)^{- \alpha } Q (t_n + v h, t_l + s h; \alpha ) \delta _h(t_l + s h) \ d s. \end{aligned}$$

Since

$$\begin{aligned}&\int _0^{1} \left( n + v - l - s \right) ^{- \alpha } Q (t_n + v h, t_l + s h; \alpha ) \delta _h(t_l + s h) \ d s \\ =\,&\int _0^{1} \Big [ \left( n + v - l - s \right) ^{- \alpha } Q (t_n + v h, t_l + s h; \alpha ) \delta _h(t_l + s h) \\&- \left( n + v - l - c_1 \right) ^{- \alpha } Q (t_n + v h, t_{l, 1}; \alpha ) \delta _h(t_{l, 1}) \Big ] \ d s \\ =\,&h \int _0^{1} \left[ \left( n + v - l - s \right) ^{- \alpha } Q (t_n + v h, t_l + s h; \alpha ) \delta _h(t_l + s h) \right] ^{'}|_{s = c_1} \left( s - c_1 \right) \ d s\\&+ h^2 \int _0^{1} \left[ \left( n + v - l - \xi _l \right) ^{- \alpha } Q (t_n + v h, t_l + \xi _l h; \alpha ) \delta _h(t_l + \xi _l h) \right] ^{''} \frac{(s - c_1)^2}{2!}\ d s, \end{aligned}$$

where \(\xi _l \in (0, 1)\), it follows that if the orthogonality condition \(\int _0^1 (s - c_1) \ d s =0\) holds, then by the proof of [1, Theorem 6.2.13] there exists a constant \(C_1^{it}\), such that

$$\begin{aligned} \left| e^{it, 1}_h(t_n + v h) \right| \le C_1^{it} t_n^{ - \alpha } h^{2 - \alpha }. \end{aligned}$$

Therefore, we have proved the following theorem.

Theorem 3

Assume that \(g \in C^{2}(I), K \in C^{4}(D)\), and \(u_h \in S_{0}^{(- 1)}(I_h)\) is the collocation solution for the second-kind Volterra integral Eq. (1) defined by the collocation Eq. (3), with the corresponding first iterated collocation solution \(u_h^{it, 1}\). The collocation parameter satisfies

$$\begin{aligned} J_0 := \int _0^1 (s - c_1) \ d s = 0 \ (i.e., c_1 = \frac{1}{2}). \end{aligned}$$

Then for sufficiently small h,

$$\begin{aligned} \left\| u - u^{it, 1}_h \right\| _{n, \infty } := \sup _{t \in (t_n, \ t_{n + 1}]} \left| u(t) - u^{it, 1}_h(t) \right| \le C_1^{it} t_n^{- \alpha } h^{2 - \alpha }, \end{aligned}$$

where \(C_1^{it}\) is a constant independent of h and N.

In particular, there exists a constant \(\bar{C}_1^{it}\), which is independent of h and N, such that

$$\begin{aligned} \left| u(T) - u^{it, 1}_h(T) \right| \le \bar{C}_1^{it} h^{2 - \alpha }. \end{aligned}$$

5.2 The Second Iterated Collocation Method

Let

$$\begin{aligned} u_h^{it, 2}(t) := g(t) + \int _0^t \left( t - s \right) ^{- \alpha } K(t, s) u^{it,1}_h(s) \ d s, \ t \in I \end{aligned}$$

be the second iterated collocation solution.

Denote \(e^{it, 2}_h := u - u^{it, 2}_h\). Then

$$\begin{aligned} e^{it, 2}_h(t) =\,&\int _0^t \left( t - s \right) ^{- \alpha } K(t, s) e^{it,1}_h(s) \ d s \\ =\,&\int _0^t ( t - s )^{- \alpha } K(t, s) \left[ \int _0^s ( s - v )^{- \alpha } Q (s, v; \alpha ) \delta _h(v) \ d v \right] \ d s \\ =\,&\int _0^t ( t - s )^{1 - 2 \alpha } \tilde{Q}(t, s; \alpha ) \delta _h(s) \ d s, \end{aligned}$$

where \(\tilde{Q}(t, s; \alpha ) := \int _0^1 \left( 1 - x \right) ^{- \alpha } x^{- \alpha } K(t, s + x (t - s)) Q(s + x (t - s), s; \alpha )\ d x\).

Therefore, at the first interval \([ 0, t_1 ]\), there exists a constant \(\hat{E}_3\), such that

$$\begin{aligned} \left| e^{it, 2}_h(v h) \right| =\,&\left| \int _0^{vh} \left( v h - s \right) ^{1 - 2 \alpha } \tilde{Q}(vh, s; \alpha ) \delta _h(s) \ d s \right| \\ =\,&h^{2 (1 - \alpha )} \left| \int _0^{v} \left( v - s \right) ^{1 - 2 \alpha } \tilde{Q}(vh, s h; \alpha ) \delta _h(s h) \ d s \right| \\ \le \,&\hat{E}_3 h^{3(1 - \alpha )}. \end{aligned}$$

For \( 1 \le n \le N - 1\),

$$\begin{aligned}&e^{it, 2}_h(t_n + v h) \\&\quad = \int _0^{t_n + v h} ( t_n + v h - s )^{1 - 2 \alpha } \tilde{Q}(t_n + v h, s; \alpha ) \delta _h(s) \ d s \\&\quad = h^{2(1 - \alpha )} \int _0^{v} (v - s)^{1 - 2\alpha } \tilde{Q}(t_n + v h, t_n + s h; \alpha ) \delta _h(t_n + s h)\ d s \\&\qquad + h^{2(1 - \alpha )} \sum _{l = 0}^{n - 1} \int _0^{1} (n + v - l - s)^{1 - 2\alpha } \tilde{Q}(t_n + v h, t_l + s h; \alpha ) \delta _h(t_l + s h) \ d s, \end{aligned}$$

since

$$\begin{aligned}&\int _0^{1} (n + v - l - s)^{1 - 2\alpha } \tilde{Q} (t_n + v h, t_l + s h; \alpha ) \delta _h(t_l + s h) \ d s\\&\quad = \int _0^{1} \Big [ (n + v- l - s)^{1 - 2\alpha } \tilde{Q} (t_n + v h, t_l + s h; \alpha ) \delta _h(t_l + s h) \\&\qquad - \left( n + v- l - c_1 \right) ^{1 - 2\alpha } \tilde{Q} (t_n + v h, t_l + c_1 h; \alpha ) \delta _h(t_{l,1}) \Big ] \ d s\\&\quad = h\int _0^{1} \left[ (n + v - l - s)^{1 - 2\alpha } \tilde{Q} (t_n + v h, t_l + s h; \alpha ) \delta _h(t_l + s h) \right] '|_{s = c_1} \left( s - c_1 \right) \ d s\\&\qquad + h^2 \int _0^{1} \left[ (n + v - l - \xi '_l)^{1 - 2\alpha } \tilde{Q} (t_n + v h, t_l + \xi '_l h; \alpha ) \delta _h(t_l + \xi '_l h) \right] ^{''} \frac{( s - c_1 )^2}{2!} \ d s, \end{aligned}$$

where \(\xi '_l \in (0, 1)\), it follows that if the orthogonality condition \(\int _0^1 (s - c_1) \ d s =0\) holds, then by the proof of [1, Theorem 6.2.13] there exists a constant \(C^{it}_2\), such that

$$\begin{aligned} \left| e^{it, 2}_h(t_n + v h) \right| \le C^{it}_2 t_n^{1 - 2 \alpha } h^{2 - \alpha }. \end{aligned}$$

Therefore, we have proved the following theorem.

Theorem 4

Assume that \(g \in C^{2}(I), K \in C^{4}(D)\), and \(u_h \in S_{0}^{(- 1)}(I_h)\) is the collocation solution for the second-kind Volterra integral Eq. (1) defined by the collocation Eq. (3), with the corresponding second iterated collocation solution \(u_h^{it, 2}\). The collocation parameter satisfies

$$\begin{aligned} J_0 := \int _0^1 \left( s - c_1 \right) \ d s = 0 \ (i.e., c_1 = \frac{1}{2}). \end{aligned}$$

Then for sufficiently small h,

$$\begin{aligned} \left\| u - u^{it, 2}_h \right\| _{n, \infty } := \sup _{t \in ( t_n, \ t_{n + 1} ]} \left| u(t) - u^{it, 2}_h(t) \right| \le C^{it}_2 t_n^{1 - 2 \alpha } h^{2 - \alpha }, \end{aligned}$$

where \(C^{it}_2\) is a constant independent of h and N.

In particular, there exists a constant \(\bar{C}^{it}_2\), which is independent of h and N, such that

$$\begin{aligned} \left| u(T) - u^{it, 2}_h(T) \right| \le \bar{C}^{it}_2 h^{2 - \alpha }. \end{aligned}$$

Corollary 5

If \(\alpha \le 0.5\), the order of the error for the second iterated collocation solution is always \(2 - \alpha \); i.e.

$$\begin{aligned} \left\| u - u^{it, 2}_h \right\| _{n, \infty } = O(h^{2 - \alpha }). \end{aligned}$$

5.3 The k-th Iterated Collocation Method

Let

$$\begin{aligned} u_h^{it, k}(t) := g(t) + \int _0^t \left( t - s \right) ^{- \alpha } K(t, s) u^{it, k - 1}_h(s) \ d s, \ t \in I \end{aligned}$$

be the k-th iterated collocation solution.
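Each iterate applies the same weakly singular integral operator to the previous iterate. As a sketch of how such an integral can be approximated numerically (this is an illustration only, not the collocation scheme analyzed above), the substitution \(s = t(1 - y^{1/(1 - \alpha )})\) removes the endpoint singularity before a composite midpoint rule is applied; the helper name is hypothetical:

```python
def weakly_singular_integral(f, t, alpha, n=2000):
    """Approximate int_0^t (t - s)^(-alpha) f(s) ds for 0 < alpha < 1.

    The substitution s = t * (1 - y**(1/(1 - alpha))) turns the integral into
    (t**(1 - alpha) / (1 - alpha)) * int_0^1 f(t * (1 - y**(1/(1 - alpha)))) dy,
    whose integrand is smooth when f is; a composite midpoint rule is then used."""
    p = 1.0 / (1.0 - alpha)
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * h   # midpoint of the i-th subinterval in y
        total += f(t * (1.0 - y ** p))
    return t ** (1.0 - alpha) * p * total * h
```

With \(f \equiv 1\), \(t = 1\) and \(\alpha = 0.5\) the rule reproduces \(\int _0^1 (1 - s)^{-1/2} \, ds = 2\) exactly, since the transformed integrand is constant.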

Similarly, we have the following theorem.

Theorem 5

Assume that \(g \in C^{2}(I), K \in C^{4}(D)\), and \(u_h \in S_{0}^{( - 1)}(I_h)\) is the collocation solution for the second-kind Volterra integral Eq. (1) defined by the collocation Eq. (3), with the corresponding k-th iterated collocation solution \(u_h^{it, k}\). The collocation parameter satisfies

$$\begin{aligned} J_0 := \int _0^1 \left( s - c_1 \right) \ d s = 0 \ (i.e., c_1 = \frac{1}{2}). \end{aligned}$$

Then for sufficiently small h,

$$\begin{aligned} \left\| u - u^{it, k}_h \right\| _{n, \infty } := \sup _{t \in ( t_n, \ t_{n + 1} ]} \left| u(t) - u^{it, k}_h(t) \right| \le C^{it}_k t_n^{k - 1 - k \alpha }h^{2 - \alpha }, \end{aligned}$$

where \(C^{it}_k\) is a constant independent of h and N.

In particular, there exists a constant \(\bar{C}^{it}_k\), which is independent of h and N, such that

$$\begin{aligned} \left| u(T) - u^{it, k}_h(T) \right| \le \bar{C}^{it}_k h^{2 - \alpha }. \end{aligned}$$

Corollary 6

If \(\alpha \le \frac{k - 1}{k}\), the order of the error for the k-th iterated collocation solution is always \(2 - \alpha \); i.e.

$$\begin{aligned} \left\| u - u^{it, k}_h \right\| _{n, \infty } = O(h^{2 - \alpha }). \end{aligned}$$

6 Numerical Results

Example 1

In (1) let \(T = 1\), \(K(t, s) = \frac{1}{10 \Gamma (1 - \alpha )}\) and \(g(t) = 1\), so that the exact solution is \(u(t) = E_{1 - \alpha , 1}(\frac{t^{1 - \alpha }}{10})\), where the Mittag-Leffler function \(E_{\mu , \theta }\) is defined by

$$\begin{aligned} E_{\mu , \theta }(z) := \sum _{k = 0}^{\infty } \frac{z^k}{\Gamma ( \mu k + \theta )} \ \hbox {for} \ \mu , \theta , z \in \mathbb {R}\ \hbox {with} \ \mu > 0. \end{aligned}$$
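The series can be evaluated directly by truncation; a minimal sketch (the truncation length is our choice, adequate for moderate \(|z|\)):

```python
import math

def mittag_leffler(z, mu, theta=1.0, n_terms=60):
    """Truncated Mittag-Leffler series: sum_{k < n_terms} z**k / Gamma(mu*k + theta)."""
    return sum(z ** k / math.gamma(mu * k + theta) for k in range(n_terms))
```

Since \(E_{1, 1}(z) = e^z\) and \(E_{2, 1}(z) = \cosh \sqrt{z}\), these special cases serve as checks; the exact solution of this example is then `mittag_leffler(t ** (1 - alpha) / 10, 1 - alpha)`.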

In Tables 1, 2, 3, 4, 5, 6 and 7, we take \(m = 1\) for \(\alpha = 0.3, 0.5, 0.7\), respectively. From these tables, we observe that the numerical results agree with our theoretical analysis.

At the mesh points, in Tables 1, 3 and 6, we observe that the order is \(\min \{ 2 ( 1 - \alpha ), 1\}\) for \(c_1 = 1\). The reason is that in this case the mesh point \(t_n = t_{n - 1} + c_1 h\) is also a collocation point. In Tables 8 and 10, similar phenomena appear for Radau IIA and \((\frac{1}{2}, 1)\) with \(m = 2\), and for Radau IIA and \((\frac{1}{3}, \frac{1}{2}, 1)\) with \(m = 3\). At the collocation points, in Table 5, we observe that the order for \(\alpha = 0.5\) and \(m = 1\) is 1.
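The orders reported in such tables can be recovered from the errors on two successive meshes in the standard way (a hypothetical helper, not part of the paper):

```python
import math

def observed_order(err_h, err_half_h):
    """Estimated convergence order from errors on meshes of width h and h/2:
    if err ~ C * h**p, then p ~ log2(err_h / err_half_h)."""
    return math.log(err_h / err_half_h) / math.log(2.0)
```

For instance, halving the error by a factor of 4 when the mesh is halved indicates second-order convergence.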

Table 1 The maximum error at the mesh points with \(m = 1\) and \(\alpha = 0.3\)
Table 2 The errors at the endpoint with \(m = 1\) and \(\alpha = 0.3\)
Table 3 The maximum error at the mesh points with \(m = 1\) and \(\alpha = 0.5\)
Table 4 The errors at the endpoint with \(m = 1\) and \(\alpha = 0.5\)
Table 5 The maximum error at the collocation points with \(m = 1\) and \(\alpha = 0.5\)
Table 6 The maximum error at the mesh points with \(m = 1\) and \(\alpha = 0.7\)
Table 7 The errors at the endpoint with \(m = 1\) and \(\alpha = 0.7\)

In Tables 8, 9, 10 and 11, we take \(\alpha = 0.5\) and \(m = 2, 3\), respectively. From these tables, we observe that the numerical results also agree with our theoretical analysis.

Table 8 The maximum error at the mesh points with \(m = 2\) and \(\alpha = 0.5\)
Table 9 The errors at the endpoint with \(m = 2\) and \(\alpha = 0.5\)
Table 10 The maximum error at the mesh points with \(m = 3\) and \(\alpha = 0.5\)
Table 11 The errors at the endpoint with \(m = 3\) and \(\alpha = 0.5\)

In Tables 12, 13, 14, 15, 16 and 17, we take \(m = 1, c_1 = 0.5\) and \(\alpha = 0.3, 0.5, 0.7\), respectively, for the first, second and third iterated collocation methods. From these tables, we see that the numerical results are again consistent with our theoretical analysis.

Table 12 The maximum error of the first iterated collocation at the mesh points with \(m = 1\) and \(c_1 = 0.5\)
Table 13 The errors of the first iterated collocation at the endpoint with \(m = 1\) and \(c_1 = 0.5\)
Table 14 The maximum error of the second iterated collocation at the mesh points with \(m = 1\) and \(c_1 = 0.5\)
Table 15 The errors of the second iterated collocation at the endpoint with \(m = 1\) and \(c_1 = 0.5\)
Table 16 The maximum error of the third iterated collocation at the mesh points with \(m = 1\) and \(c_1 = 0.5\)
Table 17 The errors of the third iterated collocation at the endpoint with \(m = 1\) and \(c_1 = 0.5\)