1 Introduction

Nonlinear differential equations (DEs) and integro-differential equations (IDEs) are of great importance in modeling many phenomena in physics and engineering [1–17]. Fractional differential equations involving the Caputo and other fractional derivatives, which generalize classical differential equations, have attracted widespread attention [18–25]. In the last decade or so, several studies have been carried out to develop numerical schemes for fractional integro-differential equations (FIDEs) of both linear and nonlinear type. Successive approximation methods such as Adomian decomposition [26], He’s variational iteration technique [8], HPM [5], He’s HPM [27], modified HPM [28], the finite difference method [29], a modified reproducing kernel discretization method [30], and the differential transformation method [31] have been used to treat FIDEs. Spectral methods with different bases have also been applied to FIDEs, for example Chebyshev and Taylor collocation, Haar wavelet, Tau, and Walsh series schemes [32–39]. The collocation method is one of the most powerful spectral methods and is widely used for solving fractional differential and integro-differential equations [40–44]. Further, the numerical solution of delay and advanced DEs of arbitrary order has been reported by many researchers [45–58]. Differential equations of advanced argument have received fewer contributions in the mathematical literature than delay differential equations, which have seen great development in the last decade [59, 60]. A monotone iterative technique with the Riemann–Liouville fractional derivative was introduced to deal with FIDEs with advanced arguments [61], while the collocation method with Bessel polynomials treated linear Fredholm integro-differential-difference equations [62]. 
In our previous work, the Tau method with Chebyshev polynomials was employed to deal with linear fractional differential equations with linear functional arguments [63]; subsequently, the Chebyshev collocation method was extended to fractional differential equations with delay [64]. Equations with a functional form of the argument represent mixed-type equations: delay, proportional delay, and advanced differential equations. All reported works considered a generalization of equations with functional argument either with integer order derivatives or, in the linear case, with fractional derivatives.

In this work, we introduce a general form of nonlinear fractional integro-differential equations (GNFIDEs) with linear functional arguments, which is a more general form of nonlinear fractional pantograph and Fredholm–Volterra integro-differential equations with linear functional arguments [65–69]. The spectral collocation method is used with Chebyshev polynomials of the first kind as a matrix discretization method to treat the proposed equations. An operational matrix for derivatives is presented; it covers fractional order derivatives and contains the operational matrix of ordinary derivatives as a special case. To the best of our knowledge, no other study has discussed this point.

The proposed GNFIDEs with linear functional arguments are presented as follows:

$$\begin{aligned} &\sum_{k=0}^{n_{1}}\sum _{i=0}^{n_{2}}Q_{k,i}(x)y^{k}(x) y^{(\nu _{i})}(p_{i}x+\xi _{i})+\sum _{h=1}^{n_{3}}\sum_{j=0}^{n_{4}}P_{h,j}(x)y^{(h)}(x) y^{(\alpha _{j})}(q_{j}x+\zeta _{j}) \\ &\quad=f(x)+ \int _{a}^{b}\sum_{d=0}^{n_{5}}K_{d}(x,t) y^{(\upsilon _{d})}(t) \,dt+ \int _{a}^{\phi (x)}\sum_{c=0}^{n_{6}}V_{c}(x,t) y^{(\beta _{c})}(t) \,dt, \end{aligned}$$
(1)

where \(x\in [a,b]\), \(Q_{k,i}(x), P_{h,j}(x)\), \(f(x)\), \(V_{c}(x,t), K_{d}(x,t)\) are well-defined functions, and \(a,b,p_{i},\xi _{i},q_{j}, \zeta _{j}\in \Re \) where \(p_{i},q_{j}\neq 0\), \(\nu _{i} \geq 0\), \(\alpha _{j} \geq 0\), \(\upsilon _{d}\geq 0\), \(\beta _{c}\geq 0\) and \(i - 1 < \nu _{i} \leq i \), \(j - 1 < \alpha _{j} \leq j \), \(d - 1 < \upsilon _{d}\leq d\), \(c - 1 < \beta _{c}\leq c\), \(n_{i} \in \mathbb{N}\), under the conditions

$$ y^{(i)}(\eta _{i})=\mu _{i}, \quad i=0,1,2, \ldots,m-1, $$
(2)

where \(\eta _{i}\in [a,b]\), and m is the greatest integer derivative order, or the smallest integer exceeding the highest fractional derivative order. The general form (1) contains at least three different arguments; hence the following corollary defines the interval to which the independent variable x belongs. Chebyshev polynomials of the first kind, which are defined on \([-1, 1]\), are used in this work to approximate the solution of the suggested equation (1).

Corollary 1.1

The independent variable x of (1) belongs to \([a,b]\), which is the intersection of the intervals of the different arguments and \([-1,1]\), i.e., \(x\in [a,b]=[\frac{-1-\xi _{i}}{p_{i}},\frac{1-\xi _{i}}{p_{i}}] \cap [\frac{-1-\zeta _{j}}{q_{j}},\frac{1-\zeta _{j}}{q_{j}}] \cap [-1,1]\).
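As an illustration, the admissible interval can be computed by intersecting the argument intervals. The following Python sketch is only an illustration of ours (the function names are not from the paper); it assumes each linear argument \(px+\xi \) must remain inside \([-1,1]\), where the Chebyshev polynomials are defined:

```python
def argument_interval(p, xi):
    """x-interval on which the linear argument p*x + xi stays in [-1, 1] (p != 0)."""
    lo, hi = (-1 - xi) / p, (1 - xi) / p
    return (min(lo, hi), max(lo, hi))

def admissible_interval(args):
    """Intersect [-1, 1] with the x-intervals of all arguments (p, xi)."""
    a, b = -1.0, 1.0
    for p, xi in args:
        lo, hi = argument_interval(p, xi)
        a, b = max(a, lo), min(b, hi)
    return (a, b)

# Arguments x (p=1, xi=0) and 2x+1 (p=2, xi=1): 2x+1 in [-1,1] forces x in [-1, 0]
print(admissible_interval([(1, 0), (2, 1)]))  # (-1.0, 0.0)
```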

2 General notations

In this section, some definitions and properties for the fractional derivative and Chebyshev polynomials are listed [63, 64, 70, 71].

2.1 The Caputo fractional derivative

The Caputo fractional derivative operator \(D^{\gamma }_{t}\) of order γ is defined as follows:

$$ D^{\gamma }_{t}\varPsi (x) = \frac{1}{\varGamma (n-\gamma )} \int _{0}^{x} \frac{\varPsi ^{(n)}(t)}{(x-t)^{\gamma -n+1}}\,dt,\quad \gamma > 0, $$
(3)

where \(x > 0\), \(n-1 < \gamma \leq n\), \(n \in \mathbb{N}\); below, \(\mathbb{N}_{0} = \mathbb{N}\cup \{0\}\) denotes the nonnegative integers.

  • \(D^{\gamma }_{t} \sum_{i=0}^{m}\lambda _{i} \varPsi _{i}(x)= \sum_{i=0}^{m} \lambda _{i} D^{\gamma }_{t}\varPsi _{i}(x)\), where \(\lambda _{i}\) and γ are constants.

  • The Caputo fractional differentiation of a constant is zero.

  • \(D^{\gamma }_{t} x^{k}= \textstyle\begin{cases} 0 & \text{for } k\in \mathbb{N}_{0}\text{ and } k<\lceil \gamma \rceil, \\ \frac{\varGamma (k+1) x^{k-\gamma }}{\varGamma (k+1-\gamma )} & \text{for } k\in \mathbb{N}_{0} \text{ and } k\geq \lceil \gamma \rceil, \end{cases}\)

where \(\lceil \gamma \rceil \) denotes the smallest integer greater than or equal to γ.
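The monomial rule above is easy to implement directly. The following Python sketch is an illustration of ours (using `math.gamma`); it evaluates \(D^{\gamma }_{t} x^{k}\) and reduces to the ordinary derivative when γ is an integer:

```python
from math import gamma, ceil

def caputo_monomial(k, g, x):
    """Caputo derivative D^g of x**k at x > 0, following the piecewise rule above."""
    if k < ceil(g):
        return 0.0  # constants and low powers vanish
    return gamma(k + 1) / gamma(k + 1 - g) * x ** (k - g)

print(round(caputo_monomial(1, 0.5, 1.0), 4))   # 1.1284  (= Gamma(2)/Gamma(1.5))
print(caputo_monomial(0, 0.5, 2.0))             # 0.0     (derivative of a constant)
```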

2.2 Chebyshev polynomials

The Chebyshev polynomials \(T_{n}(x)\) of the first kind are orthogonal polynomials in x of degree n, defined on \([-1, 1]\) by

$$ T_{n}(x)=\cos n\theta , $$

where \(x=\cos \theta \) and \(\theta \in [0, \pi ]\). The polynomials \(T_{n}(x)\) are generated by the following recurrence relation:

$$ T_{n+1}(x)=2xT_{n}(x)-T_{n-1}(x), $$

with the initial values

$$ T_{0}(x)=1, \qquad T_{1}(x)=x,\quad n=1,2,\ldots . $$
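The recurrence can be checked numerically against the trigonometric definition \(T_{n}(\cos \theta )=\cos n\theta \). The sketch below is a minimal illustration of ours (the function name is not from the paper):

```python
from math import cos, acos

def chebyshev_T(n, x):
    """Evaluate T_n(x) with the three-term recurrence above."""
    t_prev, t_curr = 1.0, x   # T_0(x), T_1(x)
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

# Cross-check against T_n(x) = cos(n * arccos(x)) at x = 0.3
print(all(abs(chebyshev_T(n, 0.3) - cos(n * acos(0.3))) < 1e-12 for n in range(8)))  # True
```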

Corollary 2.1

The Chebyshev polynomials \(T_{n} (x)\) are explicitly expressed in terms of powers of x in the following form:

$$ T_{n} (x)=\sum_{k=0}^{[n/2]}w_{k}^{(n)} x^{n-2k} , $$
(4)

where

$$ w_{k}^{(n)} =(-1)^{k} 2^{n-2k-1} \frac{n}{n-k} \begin{pmatrix} {n-k} \\ k \end{pmatrix},\quad 2k\le n. $$

3 Procedure solution using the collocation method

The solution \(y(x)\) of (1) may be expanded by Chebyshev polynomial series of the first kind as follows [64]:

$$ y(x)=\sum_{n=0}^{\infty }c_{n}T_{n}(x). $$
(5)

By truncating series (5) to \(N<\infty \), the approximate solution is expressed in the following form:

$$\begin{aligned} y(x)&\cong \sum_{n=0}^{N}c_{n}T_{n}(x) \\ &=T(x)C, \end{aligned}$$
(6)

where \(T(x) \) and C are matrices given by

$$ T(x)= \begin{bmatrix} {T_{0} (x)} & {T_{1} (x)} & {\cdots } & {T_{N} (x)} \end{bmatrix},\qquad C=\biggl[\frac{1}{2}c_{0},c_{1},c_{2}, \ldots,c_{N}\biggr]^{T}. $$

Now, using (4), relation (6) may be written in the following form:

$$ y(x)=X(x) W^{T} C, $$
(7)

where W is a square lower triangular matrix of size \((N+1)\times (N+1) \) given by

$$ W_{ij}= \textstyle\begin{cases} 1 & \text{if } i=j=0, \\ (-1)^{k} 2^{i-2k-1} \frac{i}{i-k} \binom{i-k}{k} & \text{if } i+j \text{ even and } j\leq i, \\ 0 & \text{if } j>i \text{ or } i+j \text{ odd,} \end{cases}\displaystyle $$
(8)

where

$$ k=\frac{i-j}{2}=\textstyle\begin{cases} \frac{i}{2},\ldots,1,0 & \text{for even $i$,} \\ \frac{i-1}{2},\ldots,1,0 & \text{for odd $i$,} \end{cases}\displaystyle \quad i,j=0, 1, 2, \ldots, N. $$

For example,

$$ W= \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ -1 & 0 & 2 & 0 & 0 \\ 0 & -3 & 0 & 4 & 0 \\ 1 & 0 & -8 & 0 & 8 \end{pmatrix}_{N=4},\qquad W= \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ -1 & 0 & 2 & 0 & 0 & 0 \\ 0 & -3 & 0 & 4 & 0 & 0 \\ 1 & 0 & -8 & 0 & 8 & 0 \\ 0 & 5 & 0 & -20 & 0 & 16 \end{pmatrix}_{N=5} . $$
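The matrix W can be generated directly from the coefficients \(w_{k}^{(n)}\) of (4). The following Python sketch is ours (it uses exact rational arithmetic to handle the \(2^{-1}\) factor that appears for even n) and reproduces the printed example for \(N=4\):

```python
from fractions import Fraction
from math import comb

def chebyshev_coeff_matrix(N):
    """Build W of (8): row i holds the power-basis coefficients of T_i,
    i.e. T_i(x) = sum_j W[i][j] * x**j, using the w_k^{(i)} of (4)."""
    W = [[0] * (N + 1) for _ in range(N + 1)]
    W[0][0] = 1
    for i in range(1, N + 1):
        for k in range(i // 2 + 1):
            w = Fraction((-1) ** k * i * comb(i - k, k), i - k) * Fraction(2) ** (i - 2 * k - 1)
            W[i][i - 2 * k] = int(w)   # always an exact integer
    return W

print(chebyshev_coeff_matrix(4))
# [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [-1, 0, 2, 0, 0], [0, -3, 0, 4, 0], [1, 0, -8, 0, 8]]
```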

Then, by substituting (6) into (1), we get

$$\begin{aligned} &\sum_{k=0}^{n_{1}}\sum _{i=0}^{n_{2}}Q_{k,i}(x){\bigl(T(x) C \bigr)}^{k} D^{\nu _{i}}T(p_{i}x+\xi _{i}) C \\ &\quad{}+\sum_{h=1}^{n_{3}}\sum _{j=0}^{n_{4}}P_{h,j}(x) \bigl(T^{(h)}(x) C\bigr) D^{ \alpha _{j}}T(q_{j}x+\zeta _{j}) C \\ &\quad{}- \int _{a}^{b}\sum_{d=0}^{n_{5}}K_{d}(x,t) T^{(\upsilon _{d})}(t) C\,dt- \int _{a}^{\phi (x)}\sum_{c=0}^{n_{6}}V_{c}(x,t) T^{(\beta _{c})}(t) C \,dt=f(x). \end{aligned}$$
(9)

We can write (9) as follows:

$$\begin{aligned} & \Biggl[ \sum_{k=0}^{n_{1}}\sum _{i=0}^{n_{2}}Q_{k,i}(x){\bigl(T(x) C \bigr)}^{k} D^{\nu _{i}}T(p_{i}x+\xi _{i}) \\ &\quad{}+ \sum_{h=1}^{n_{3}}\sum _{j=0}^{n_{4}}P_{h,j}(x) \bigl(T^{(h)}(x) C\bigr) D^{ \alpha _{j}}T(q_{j}x+\zeta _{j}) \\ &\quad{}- \int _{a}^{b}\sum_{d=0}^{n_{5}}K_{d}(x,t) D^{\upsilon _{d}} T(t) \,dt- \int _{a}^{\phi (x)}\sum_{c=0}^{n_{6}}V_{c}(x,t) D^{\beta _{c}}T(t) \,dt \Biggr]C=f(x). \end{aligned}$$
(10)

The collocation points are defined in the following form:

$$ x_{l}=lh+a, $$
(11)

where

$$ h=\frac{b-a}{N},\quad l=0,1,2,\ldots,N. $$

By substituting the collocation points (11) in (10), we get

$$\begin{aligned} & \Biggl[ \sum_{k=0}^{n_{1}}\sum _{i=0}^{n_{2}}Q_{k,i}(x_{l}){ \bigl(T(x_{l}) C\bigr)}^{k} D^{\nu _{i}}T(p_{i}x_{l}+ \xi _{i}) \\ &\quad{}+ \sum_{h=1}^{n_{3}}\sum _{j=0}^{n_{4}}P_{h,j}(x_{l}) \bigl(T^{(h)}(x_{l}) C\bigr) D^{\alpha _{j}}T(q_{j}x_{l}+ \zeta _{j}) \\ &\quad{}- \int _{a}^{b}\sum_{d=0}^{n_{5}}K_{d}(x_{l},t) D^{\upsilon _{d}} T(t) \,dt- \int _{a}^{\phi (x_{l})}\sum_{c=0}^{n_{6}}V_{c}(x_{l},t) D^{ \beta _{c}}T(t) \,dt \Biggr]C=f(x_{l}). \end{aligned}$$
(12)

In the following theorem we introduce a general form of the operational matrix for the row vector \(T(x)\) in representation (7). The construction includes fractional order derivatives and yields the ordinary operational matrix as a special case when \(\alpha _{i}\rightarrow \lceil \alpha _{i}\rceil \).

Theorem 1

Assume that the Chebyshev row vector \(T(x)\) is represented as in (7); then the fractional order derivative of the vector, \(D^{\alpha _{i}}T(x)\), is

$$ D^{\alpha _{i}}T(x)=X_{\alpha _{i}}(x)B_{\alpha _{i}}W^{T}, $$
(13)

where

$$ X_{\alpha _{i}}(x)=\bigl[x^{{-\alpha _{i}+i}}\ x^{{1-\alpha _{i}+i}}\ x^{{2- \alpha _{i}+i}}\ \cdots\ x^{{N-\alpha _{i}+i}}\bigr],\quad i-1 < \alpha _{i} \leqslant i, $$
(14)

where \(B_{\alpha _{i}}\) is an \((N+1)\times (N+1)\) square matrix whose nonzero entries lie on the ith superdiagonal; the elements \(b_{r,s}\) of \(B_{\alpha _{i}}\) can be written as follows:

$$ \textstyle\begin{cases} b_{r,r+i}=\frac{\varGamma (r+i+1)}{\varGamma (r+i+1-\alpha _{i})}, &r=0, 1, 2, \ldots, N-i, \\ b_{r,s}=0& \textit{otherwise,} \end{cases} $$
(15)

where \(i-1 < \alpha _{i}\leqslant i, N\geqslant \lceil \alpha _{i} \rceil \).

Proof

Since

$$\begin{aligned} D^{\alpha _{i}}T(x)&= D^{\alpha _{i}}\bigl[1\ x \ x^{2}\ \cdots\ x^{N}\bigr] W^{T} \\ &=X_{\alpha _{i}} B_{\alpha _{i}} W^{T}, \end{aligned}$$
(16)

if \(0 < \alpha _{1}\leqslant 1\), using Caputo’s fractional properties, we get

$$\begin{aligned} & X_{\alpha _{1}}=\bigl[x^{1-{\alpha _{1}}}\ x^{2-{\alpha _{1}}}\ x^{3-{ \alpha _{1}}}\ \cdots\ x^{N+1-{\alpha _{1}}}\bigr], \end{aligned}$$
(17)
$$\begin{aligned} & B_{\alpha _{1}}= \begin{pmatrix} 0 &\frac{\varGamma (2)}{\varGamma (2-\alpha _{1})} &0 \cdots &0 \\ 0 &0 &\frac{\varGamma (3)}{\varGamma (3-\alpha _{1})} \cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 & 0 & 0 \cdots &\frac{\varGamma (N+1)}{\varGamma (N+1-\alpha _{1})} \\ 0 & 0 & 0 \cdots &0 \end{pmatrix}. \end{aligned}$$
(18)

As \(\alpha _{1} \longrightarrow 1\), the system reduces to the ordinary case \((B_{\alpha _{1}}\longrightarrow B)\) (see [64]).

Similarly, if \(1 < \alpha _{2}\leqslant 2\), then

$$\begin{aligned} & X_{\alpha _{2}}=\bigl[x^{2-{\alpha _{2}}}\ x^{3-{\alpha _{2}}}\ x^{4-{ \alpha _{2}}}\ \cdots\ x^{N+2-{\alpha _{2}}}\bigr], \end{aligned}$$
(19)
$$\begin{aligned} &B_{\alpha _{2}}= \begin{pmatrix} 0 &0 &\frac{\varGamma (3)}{\varGamma (3-\alpha _{2})} \cdots &0&0 \\ 0 &0 &0& \frac{\varGamma (4)}{\varGamma (4-\alpha _{2})}\cdots &0 \\ \vdots &\vdots &\vdots &\vdots &\vdots \\ 0 & 0 & 0 \cdots &0&\frac{\varGamma (N+1)}{\varGamma (N+1-\alpha _{2})} \\ 0 & 0 & 0 \cdots &0&0 \\ 0 & 0 & 0 \cdots &0&0 \end{pmatrix}. \end{aligned}$$
(20)

As \(\alpha _{2}\longrightarrow 2\), the system reduces to the ordinary case \((B_{\alpha _{2}}\longrightarrow B^{2})\) (see [64]).

In the same way, if we take \(2 < \alpha _{3}\leqslant 3\), then

$$\begin{aligned} & X_{\alpha _{3}}=\bigl[x^{3-{\alpha _{3}}}\ x^{4-{\alpha _{3}}}\ x^{5-{ \alpha _{3}}}\ \cdots\ x^{N+3-{\alpha _{3}}}\bigr], \end{aligned}$$
(21)
$$\begin{aligned} & B_{\alpha _{3}}= \begin{pmatrix} 0 &0 &0&\frac{\varGamma (4)}{\varGamma (4-\alpha _{3})} \cdots &0&0 \\ 0 &0 &0& 0&\frac{\varGamma (5)}{\varGamma (5-\alpha _{3})}&0 \\ \vdots &\vdots &\vdots &\vdots &\vdots &\vdots \\ 0 & 0 & 0 \cdots &0&0&\frac{\varGamma (N+1)}{\varGamma (N+1-\alpha _{3})} \\ 0 & 0 & 0 \cdots &0&0&0 \\ 0 & 0 & 0 \cdots &0&0&0 \\ 0 & 0 & 0 \cdots &0&0&0 \end{pmatrix}. \end{aligned}$$
(22)

As \(\alpha _{3}\longrightarrow 3\), the system reduces to the ordinary case \((B_{\alpha _{3}}\longrightarrow B^{3})\) (see [64]).

By induction, for \(i-1 < \alpha _{i}\leqslant i\) we obtain

$$ X_{\alpha _{i}}(x)=\bigl[x^{{-\alpha _{i}+i}}\ x^{{1-\alpha _{i}+i}}\ x^{{2- \alpha _{i}+i}}\ \cdots\ x^{{N-\alpha _{i}+i}}\bigr],\quad i-1 < \alpha _{i} \leqslant i, $$
(23)

and \(B_{\alpha _{i}}\) is as in (15); the proposed operational matrix thus unifies the ordinary and fractional cases. □
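The operational matrix of Theorem 1 is straightforward to assemble: its only nonzero entries are \(\varGamma (r+i+1)/\varGamma (r+i+1-\alpha _{i})\) on the ith superdiagonal, with \(i=\lceil \alpha _{i}\rceil \). The Python sketch below is an illustration of ours; it reproduces the numerical entries of \(B_{\nu _{2}}\) with \(\nu _{2}=1.5\), \(N=4\) used later in Example 1:

```python
from math import gamma, ceil

def frac_diff_matrix(alpha, N):
    """Operational matrix B_alpha: nonzero entries
    b[r][r+i] = Gamma(r+i+1) / Gamma(r+i+1-alpha), with i = ceil(alpha)."""
    i = ceil(alpha)
    B = [[0.0] * (N + 1) for _ in range(N + 1)]
    for r in range(N + 1 - i):
        B[r][r + i] = gamma(r + i + 1) / gamma(r + i + 1 - alpha)
    return B

# alpha = 1.5 (so i = 2), N = 4: this reproduces B_{nu_2} of Example 1.
B = frac_diff_matrix(1.5, 4)
print(round(B[0][2], 5), round(B[1][3], 5), round(B[2][4], 5))  # 2.25676 4.51352 7.22163
```

As \(\alpha \rightarrow \lceil \alpha \rceil \) the entries become \(\varGamma (r+i+1)/\varGamma (r+1)\), i.e. the ordinary operational matrix power \(B^{i}\), which is the unification mentioned above.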

Now, we give the matrix representation for all terms in (12) as representation (13).

∗ The first term in (12) can be written as follows:

$$\begin{aligned} &\sum_{k=0}^{n_{1}} \sum_{i=0}^{n_{2}}Q_{k,i}(x_{l}){ \bigl(T(x_{l}) C\bigr)}^{k} D^{\nu _{i}}T(p_{i}x_{l}+ \xi _{i}) \\ &\quad=\sum_{k=0}^{n_{1}}\sum _{i=0}^{n_{2}}Q_{k,i}(x_{l}) { \bigl(\bar{X} \bar{W}^{T} \bar{C}\bigr)}^{k} X_{\nu _{i}}B_{\nu _{i}}H_{p_{i}}E_{ \xi _{i}} W^{T} C, \end{aligned}$$
(24)

where

$$\begin{aligned} &\bar{X}= \begin{pmatrix} X(x_{0})&0&0\cdots &0 \\ 0&X(x_{1})&0\cdots &0 \\ 0 & 0 &X(x_{2})\cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 &0&0\cdots &X(x_{N}) \end{pmatrix} ,\\ & \bar{W}^{T}= \begin{pmatrix} {W}^{T}&0&0\cdots &0 \\ 0&{W}^{T}&0\cdots &0 \\ 0 & 0 &{W}^{T}\cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 &0&0\cdots &{W}^{T} \end{pmatrix}, \qquad \bar{C}= \begin{pmatrix} C&0&0\cdots &0 \\ 0&C&0\cdots &0 \\ 0 & 0 &C\cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 &0&0\cdots &C \end{pmatrix}. \end{aligned}$$

In addition, \(H_{p_{i}}\) is a square diagonal matrix of the coefficients for the linear argument, and the elements of \(H_{p_{i}}\) can be written as follows:

$$ h_{rs}=\textstyle\begin{cases} 0 & \text{if $r\neq s$;} \\ p_{i}^{r} & \text{if $r=s$.} \end{cases} $$

Moreover, \(E_{\xi _{i}}\) is a square upper triangular matrix accounting for the shift of the linear argument, and the form of \(E_{\xi _{i}}\) is

$$ E_{\xi _{i}}= \begin{pmatrix} \binom{0}{0} \xi _{i}^{0} & \binom{1}{0} \xi _{i}^{1} & \binom{2}{0} \xi _{i}^{2} & \cdots & \binom{N}{0} \xi _{i}^{N} \\ 0 & \binom{1}{1} \xi _{i}^{0} & \binom{2}{1} \xi _{i}^{1} & \cdots & \binom{N}{1} \xi _{i}^{N-1} \\ 0 & 0 & \binom{2}{2} \xi _{i}^{0} & \cdots & \binom{N}{2} \xi _{i}^{N-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \binom{N}{N} \xi _{i}^{N-N} \end{pmatrix}. $$
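The defining property of the pair \(H_{p_{i}}\), \(E_{\xi _{i}}\) is \(X(x) H_{p_{i}} E_{\xi _{i}}=X(p_{i}x+\xi _{i})\) for the monomial row vector \(X(x)=[1\ x\ \cdots\ x^{N}]\). The following Python sketch of ours verifies this identity numerically, assuming \(h_{rr}=p_{i}^{r}\) and \(e_{rs}=\binom{s}{r}\xi _{i}^{s-r}\):

```python
from math import comb

def arg_shift_matrices(p, xi, N):
    """H_p (diagonal, h_rr = p**r) and E_xi (upper triangular,
    e_rs = C(s, r) * xi**(s - r) for s >= r)."""
    H = [[p ** r if r == s else 0 for s in range(N + 1)] for r in range(N + 1)]
    E = [[comb(s, r) * xi ** (s - r) if s >= r else 0 for s in range(N + 1)]
         for r in range(N + 1)]
    return H, E

# Verify X(x) H_p E_xi = X(p*x + xi) for p = 2, xi = 1, x = 0.3, N = 4
N, p, xi, x = 4, 2, 1, 0.3
H, E = arg_shift_matrices(p, xi, N)
XH = [x ** r * H[r][r] for r in range(N + 1)]
lhs = [sum(XH[r] * E[r][s] for r in range(N + 1)) for s in range(N + 1)]
rhs = [(p * x + xi) ** s for s in range(N + 1)]
print(all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs)))  # True
```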

∗ The second term in (12) can be written as follows:

$$\begin{aligned} &\sum_{h=1}^{n_{3}} \sum_{j=0}^{n_{4}}P_{h,j}(x_{l}) \bigl(T^{(h)}(x_{l}) C\bigr) D^{\alpha _{j}}T(q_{j}x_{l}+ \zeta _{j}) \\ &\quad=\sum_{h=1}^{n_{3}}\sum _{j=0}^{n_{4}}P_{h,j}(x_{l}) \bigl( \bar{X} \bar{B}_{h} \bar{W}^{T}\bar{C}\bigr) X_{\alpha _{j}} B_{\alpha _{j}} H_{q_{j}}E_{\zeta _{j}}{W}^{T}C, \end{aligned}$$
(25)

where

$$ \bar{B}_{h}= \begin{pmatrix} B_{h} & 0 &0 \cdots &0 \\ 0 &B_{h} &0 \cdots &0 \\ 0 & 0 & B_{h} \cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 & 0 & 0 \cdots &B_{h} \end{pmatrix}, $$

and \(B_{h}\) is the same as \(B_{\alpha _{i}} \) when \(h=\lceil \alpha _{i}\rceil \).

The matrix representation for the variable coefficients takes the form

$$ Q_{i,j}= \begin{pmatrix} Q_{i,j}(x_{0})&0&0& \ldots &0 \\ 0&Q_{i,j}(x_{1})&0& \ldots &0 \\ \vdots &\vdots &\vdots &\vdots &\vdots \\ 0&0&0& \ldots & Q_{i,j}(x_{N}) \end{pmatrix}. $$

∗ Matrix representation for the Fredholm integral term: we now derive the matrix form corresponding to this integral term. Assume that \(K_{d} (x,t)\) can be expanded in a univariate Chebyshev series with respect to t as follows:

$$ K_{d} (x,t)\cong \sum_{r=0}^{N}u_{d,r}(x)T_{r}(t). $$
(26)

Then the matrix representation of the kernel function \(K_{d} (x,t) \) is given by

$$ K_{d} (x,t)\cong U_{d}(x)T^{T}(t), $$
(27)

where

$$ U_{d}(x)=\bigl[u_{d,0}(x)\ u_{d,1}(x)\ \cdots\ u_{d,N}(x)\bigr]. $$

Substituting relations (13) and (27) in the present integral part, we obtain

$$\begin{aligned} & \int _{a}^{b}K_{d}(x,t) y^{(\upsilon _{d})}(t) \,dt \\ &\quad = \int _{a}^{b}U_{d}(x)T^{T}(t) T^{(\upsilon _{d})}(t) C \,dt \\ &\quad= \int _{a}^{b}U_{d}(x)W X^{T}(t) X_{\upsilon _{d}}(t) B_{ \upsilon _{d}} W^{T} C \,dt \\ &\quad= \int _{a}^{b}U_{d}(x) W \bigl[t^{0}\ t^{1}\ \cdots\ t^{N} \bigr]^{T} \bigl[t^{{0-\upsilon _{d}+d}}\ t^{{1-\upsilon _{d}+d}}\ t^{{2- \upsilon _{d}+d}}\ \cdots\ t^{{N-\upsilon _{d}+d}}\bigr] B_{\upsilon _{d}} W^{T} C \,dt \\ &\quad=U_{d}(x) W \biggl( \int _{a}^{b} t^{p} t^{q-\upsilon _{d}+d} \,dt \biggr) B_{\upsilon _{d}} W^{T} C \\ &\quad=U_{d}(x) W \biggl( \int _{a}^{b} t^{p+q-\upsilon _{d}+d} \,dt \biggr) B_{\upsilon _{d}} W^{T} C \\ &\quad=U_{d}(x) W Z_{d} B_{\upsilon _{d}} W^{T} C,\quad p,q=0,1,\ldots,N, \end{aligned}$$
(28)

where

$$ Z_{d}= \int _{a}^{b} t^{p+q-\upsilon _{d}+d} \,dt,\quad p,q=0,1, \ldots,N, $$

or

$$ Z_{d}=[z_{pq}]= \frac{b^{p+q-\upsilon _{d}+d+1}-a^{p+q-\upsilon _{d}+d+1}}{p+q-\upsilon _{d}+d+1},\quad p,q=0,1,\ldots,N. $$
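The entries \(z_{pq}\) are simple moment integrals, so the closed form above can be cross-checked against numerical quadrature. The sketch below is ours (a midpoint Riemann sum stands in for a proper quadrature rule) and illustrates this:

```python
def z_entry(p, q, upsilon, d, a, b):
    """Closed-form entry z_pq = (b**e - a**e) / e with e = p + q - upsilon + d + 1."""
    e = p + q - upsilon + d + 1
    return (b ** e - a ** e) / e

# Cross-check one entry against a midpoint sum of int_a^b t^(p+q-upsilon+d) dt
p, q, upsilon, d, a, b = 1, 2, 1.5, 2, 0.0, 1.0
n = 20000
s = sum((a + (j + 0.5) * (b - a) / n) ** (p + q - upsilon + d) * (b - a) / n
        for j in range(n))
print(abs(z_entry(p, q, upsilon, d, a, b) - s) < 1e-6)  # True
```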

So, the present integral term can be written as:

$$\begin{aligned} \int _{a}^{b}\sum_{d=0}^{n_{5}}K_{d}(x_{l},t) y^{( \upsilon _{d})}(t) \,dt&=\sum_{d=0}^{n_{5}} U_{d}(x_{l}) W Z_{d} B_{ \upsilon _{d}} W^{T} C \\ &=\sum_{d=0}^{n_{5}} U_{d} W Z_{d} B_{\upsilon _{d}} W^{T} C, \end{aligned}$$
(29)

where

$$ U_{d}= \begin{pmatrix} U_{d}(x_{0}) \\ U_{d}(x_{1}) \\ \vdots \\ U_{d}(x_{N}) \end{pmatrix}. $$

∗ Matrix representation for the Volterra integral term: in the same way, \(V_{c} (x,t)\) can be expanded as in (26):

$$ V_{c} (x,t)\cong \sum_{r=0}^{N}g_{c,r}(x)T_{r}(t). $$
(30)

Then the matrix representation of the kernel function \(V_{c} (x,t) \) is given by

$$ V_{c}(x,t)\cong G_{c}(x)T^{T}(t), $$
(31)

where

$$ G_{c}(x)=\bigl[g_{c,0}(x)\ g_{c,1}(x)\ \cdots\ g_{c,N}(x)\bigr]. $$

Substituting relations (13) and (31) in the present integral part, we obtain

$$\begin{aligned} & \int _{a}^{\phi (x)}V_{c}(x,t) y^{(\beta _{c})}(t) \,dt \\ &\quad = \int _{a}^{\phi (x)}G_{c}(x)T^{T}(t) D^{\beta _{c}}T(t) C \,dt \\ &\quad= \int _{a}^{\phi (x)}G_{c}(x)W X^{T}(t) X_{\beta _{c}}(t) B_{ \beta _{c}} W^{T} C \,dt \\ &\quad= \int _{a}^{\phi (x)}G_{c}(x) W \bigl[t^{0}\ t^{1}\ \cdots\ t^{N} \bigr]^{T} \bigl[t^{{0-\beta _{c}+c}}\ t^{{1-\beta _{c}+c}}\ t^{{2- \beta _{c}+c}}\ \cdots\ t^{{N-\beta _{c}+c}}\bigr] B_{\beta _{c}} W^{T} C \,dt \\ &\quad=G_{c}(x) W \biggl( \int _{a}^{\phi (x)} t^{p} t^{q-\beta _{c}+c} \,dt \biggr) B_{\beta _{c}} W^{T} C \\ &\quad=G_{c}(x) W \biggl( \int _{a}^{\phi (x)} t^{p+q-\beta _{c}+c} \,dt \biggr) B_{\beta _{c}} W^{T} C \\ &\quad=G_{c}(x) W Z_{\beta _{c}}(x) B_{\beta _{c}} W^{T} C,\quad p,q=0,1,\ldots,N, \end{aligned}$$
(32)

where

$$ Z_{\beta _{c}}(x)= \int _{a}^{\phi (x)} t^{p+q-\beta _{c}+c} \,dt,\quad p,q=0,1, \ldots,N, $$

or

$$ Z_{\beta _{c}}(x)=\bigl[z_{pq}(x)\bigr]= \frac{\phi (x)^{(p+q-\beta _{c}+c+1)}-a^{p+q-\beta _{c}+c+1}}{p+q-\beta _{c}+c+1},\quad p,q=0,1,\ldots,N. $$

So, the present integral term can be written as follows:

$$\begin{aligned} \int _{a}^{\phi (x)}\sum_{c=0}^{n_{6}}V_{c}(x_{l},t) y^{( \beta _{c})}(t) \,dt&=\sum_{c=0}^{n_{6}} G_{c}(x_{l}) W Z_{{\beta _{c}}}(x_{l}) B_{\beta _{c}} W^{T} C \\ & =\sum_{c=0}^{n_{6}} \bar{G_{c}} \bar{W} \bar{Z_{\beta _{c}}} \bar{B_{\beta _{c}}} \bar{W^{T}} \bar{C}, \end{aligned}$$
(33)

where

$$\begin{aligned} &\bar{G_{c}}= \begin{pmatrix} G_{c}(x_{0})&0&0\cdots &0 \\ 0&G_{c}(x_{1})&0\cdots &0 \\ 0 & 0 &G_{c}(x_{2})\cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 &0&0\cdots &G_{c}(x_{N}) \end{pmatrix} ,\\ &\bar{Z_{\beta _{c}}}= \begin{pmatrix} {Z_{c}}(x_{0})&0&0\cdots &0 \\ 0&{Z_{c}}(x_{1})&0\cdots &0 \\ 0 & 0 &{Z_{c}}(x_{2})\cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 &0&0\cdots &{Z_{c}}(x_{N}) \end{pmatrix}. \end{aligned}$$

Now, by substituting equations (24), (25), (29), and (33) into (12), we have the fundamental matrix equation

$$\begin{aligned} & \Biggl[ \sum_{k=0}^{n_{1}} \sum_{i=0}^{n_{2}}Q_{k,i}(x) \bigl( \bar{X} \bar{W}^{T}\bar{C}\bigr)^{k}X_{\nu _{i}}B_{\nu _{i}}H_{p_{i}} E_{\xi _{i}}{W}^{T}C \\ &\quad{}+\sum_{h=1}^{n_{3}}\sum _{j=0}^{n_{4}}P_{h,j}(x) \bigl(\bar{X} \bar{B}_{h} \bar{W}^{T}\bar{C}\bigr) X_{\alpha _{j}}B_{\alpha _{j}}H_{q_{j}}E_{ \zeta _{j}}{W}^{T}C \\ &\quad{}-\sum_{d=0}^{n_{5}} U_{d} W Z_{d} B_{\upsilon _{d}} W^{T} C-\sum _{c=0}^{n_{6}} \bar{G_{c}} \bar{W} \bar{Z_{\beta _{c}}} \bar{B_{\beta _{c}}} \bar{W^{T}} \bar{C} \Biggr]=F. \end{aligned}$$
(34)

We can write (34) in the form

$$ OC=F \quad\text{or}\quad [O;F], $$
(35)

where

$$\begin{aligned} \begin{aligned} O= {}&\sum_{k=0}^{n_{1}} \sum_{i=0}^{n_{2}}Q_{k,i}(x) \bigl( \bar{X} \bar{W}^{T}\bar{C}\bigr)^{k}X_{\nu _{i}}B_{\nu _{i}}H_{p_{i}} E_{\xi _{i}}{W}^{T} \\ &{}+\sum_{h=1}^{n_{3}}\sum _{j=0}^{n_{4}}P_{h,j}(x) \bigl(\bar{X} \bar{B}_{h} \bar{W}^{T}\bar{C}\bigr) X_{\alpha _{j}}B_{\alpha _{j}}H_{q_{j}}E_{ \zeta _{j}}{W}^{T} \\ &{} -\sum_{d=0}^{n_{5}} U_{d} W Z_{d} B_{\upsilon _{d}} W^{T} C-\sum _{c=0}^{n_{6}} \bar{G_{c}} \bar{W} \bar{Z_{\beta _{c}}} \bar{B_{\beta _{c}}} \bar{W^{T}} \bar{C} , \\ F={}& \begin{pmatrix} f(x_{0}) \\ f(x_{1}) \\ \vdots \\ f(x_{N}) \end{pmatrix}. \end{aligned} \end{aligned}$$
(36)

Corollary 3.1

Suppose \(k\geqslant 2\). Then, using (6), the matrix representation of the terms free of derivatives in (1) is obtained as

$$ y^{k}(x)=y^{k-1}(x) y (x) = \bigl(X(x) W^{T} C\bigr)^{k-1} X(x)W^{T} C. $$
(37)

We can achieve the matrix form of (37) by using the collocation points as follows:

$$ y^{k}(x)=\bigl(\bar{X} \bar{W}^{T} \bar{C} \bigr)^{k-1} X W^{T} C. $$
(38)

∗ We can achieve the matrix form of conditions (2) by using (6) in the form

$$ X(\eta _{i})B_{i}W^{T}C=\mu _{i}, \quad i=0,1,2,\ldots,m-1, $$
(39)

or

$$ M_{i}C=[\mu _{i}], $$
(40)

where

$$ M_{i}=X(\eta _{i})B_{i}W^{T}. $$

Consequently, replacing m rows of the augmented matrix \([O;F]\) by rows of the matrix \([M_{i};\mu _{i}]\), we have \([\bar{O};\bar{F}]\) or

$$ \bar{O}C=\bar{F}. $$

System (34), together with the conditions, gives \((N+1)\) nonlinear algebraic equations, which can be solved for the unknowns \(c_{n}\), \(n = 0, 1, 2, \ldots,N\). Consequently, \(y(x)\), given by equation (6), can be calculated.

4 Numerical examples

In this section, several numerical examples are given to illustrate the accuracy and the effectiveness of the method.

4.1 Error estimation

If the exact solution of the proposed problem is known, then the absolute error can be estimated from the following:

$$ e_{N}(x)= \bigl\vert y_{\mathrm{exact}}(x)- y_{\mathrm{approximate}}(x) \bigr\vert , $$
(41)

where \(y_{\mathrm{exact}}(x)\) is the exact solution and \(y_{\mathrm{approximate}}(x)\) is the achieved solution at some N. The \(L_{2}\) error norm can also be obtained as follows:

$$ l_{2} =\sqrt{h\sum _{l=0}^{N} \bigl( \bigl\vert y_{\mathrm{exact}}(x_{l}) -y_{\mathrm{approximate}}(x_{l}) \bigr\vert \bigr)^{2} }, $$
(42)
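Both error measures are direct to compute once the exact and approximate solutions are available as functions; a minimal Python sketch of ours (the function names are hypothetical) follows:

```python
def abs_error(y_exact, y_approx, x):
    """Pointwise absolute error e_N(x)."""
    return abs(y_exact(x) - y_approx(x))

def l2_error(y_exact, y_approx, a, b, N):
    """Discrete L2 error on the points x_l = a + l*h, h = (b - a)/N."""
    h = (b - a) / N
    return (h * sum(abs_error(y_exact, y_approx, a + l * h) ** 2
                    for l in range(N + 1))) ** 0.5

# Toy check with a known error profile: exact = x, approximate = 0
print(round(l2_error(lambda x: x, lambda x: 0.0, 0.0, 1.0, 4), 6))  # 0.684653
```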

where h is the step size along the given interval. We can also easily check the accuracy of the suggested method by the residual error: when the solution \(y_{\mathrm{approximate}}(x)\) and its derivatives are substituted in (1), the resulting equation must be satisfied approximately, that is, for \(x_{l}\in [a,b]\), \(l=0,1,2,\ldots \)

$$\begin{aligned} e_{N}= {}&\Biggl\vert \sum _{k=0}^{n_{1}}\sum_{i=0}^{n_{2}}Q_{k,i}(x_{l})y^{k}(x_{l}) y^{(\nu _{i})}(p_{i}x_{l}+\xi _{i})+\sum _{h=1}^{n_{3}}\sum_{j=0}^{n_{4}}P_{h,j}(x_{l})y^{(h)}(x_{l}) y^{(\alpha _{j})}(q_{j}x_{l}+\zeta _{j}) \\ & {} -f(x_{l})- \int _{a}^{b}\sum_{d=0}^{n_{5}}K_{d}(x_{l},t) y^{( \upsilon _{d})}(t) \,dt- \int _{a}^{\phi (x_{l})}\sum_{c=0}^{n_{6}}V_{c}(x_{l},t) y^{(\beta _{c})}(t) \,dt \Biggr\vert , \end{aligned}$$
(43)

where \(e_{N}\leq 10^{-\pounds}\) (£ a positive integer) and \(y(x)\) is taken as \(y_{\mathrm{approximate}}(x)\).

Example 1

Consider the following NFIDE with linear functional argument:

$$\begin{aligned} &y^{2}(x)D^{\nu _{2}}y(x)+y^{4}(x)y'(2x+1)+y^{4}(x)+y'(x)D^{ \alpha _{3}}y(x) \\ &\quad=f(x)+ \int ^{x}_{0}(3t-2x)y^{(1.5)}(t) \,dt+ \int ^{1}_{0}t e^{x}y^{(1.8)}(t) \,dt,\quad x\in [0,1]. \end{aligned}$$
(44)

The ICs are \(y(1)=2\), \(y^{\prime }(1)=2\), and \(y^{\prime \prime }(1)=2\) and the exact solution is \(y(x)=x^{2}+x\) at \(\nu _{2}=1.5\), \(\alpha _{3}=2.5\), \(\upsilon _{2}=1.5\), \(\beta _{2}=1.8 \), where \(f(x)=- 0.990113 e^{x} + 0.300901 x^{2.5} + 2.25676 x^{0.5} (x + x^{2})^{2} + (x + x^{2})^{4} + (x + x^{2})^{4} (2 + 4 (1 + 2 x))\). We apply the suggested method with \(N = 4\), and by the fundamental matrix equation of the problem defined by (34), we have

$$\begin{aligned} & \bigl[Q_{2,2} \bigl(\bar{X} \bar{W}^{T}\bar{C}\bigr)^{2} X_{ \nu _{2}}B_{\nu _{2}} W^{T}C+Q_{2,0} \bigl(\bar{X} \bar{W}^{T}\bar{C} \bigr)^{4} X B_{1} H_{2} E_{1} (W)^{T} C \\ &\quad{} +Q _{3,0}\bigl(\bar{X} \bar{W}^{T} \bar{C}\bigr)^{3} X {W}^{T}C+P_{1,3} \bar{X} \bar{B}_{1} \bar{W}^{T}\bar{C} X_{\alpha _{3}}B_{\alpha _{3}} (W)^{T}C \\ &\quad{} -\bar{G_{2}} \bar{W}\bar{Z_{\beta _{2}}} \bar{B_{\beta _{2}}} \bar{W^{T}} \bar{C}-U_{2} W Z_{\upsilon _{2}} B_{\upsilon _{2}} W^{T} C \bigr]=F, \end{aligned}$$
(45)

where

$$\begin{aligned} &X= \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 1 & \frac{1}{4} & \frac{1}{16} & \frac{1}{64} & \frac{1}{256} \\ 1 & \frac{1}{2} & \frac{1}{4} & \frac{1}{8} & \frac{1}{16} \\ 1 & \frac{3}{4} & \frac{9}{16} & \frac{27}{64} & \frac{81}{256} \\ 1 & 1 & 1 & 1 & 1 \end{pmatrix},\qquad B_{1}= \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix},\\ & E_{1}= \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 & 6 \\ 0 & 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \\ &F= \begin{pmatrix} -2.70811 \\ -1.75983 \\ 3.17448 \\ 41.4935 \\ 249.328 \end{pmatrix},\qquad X_{\alpha _{3}}= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0.125 & 0.03125 & 0.0078125 & 0.00195313 \\ 0 & 0.353553 & 0.176777 & 0.0883883 & 0.0441942 \\ 0 & 0.649519 & 0.487139 & 0.365354 & 0.274016 \\ 0 & 1 & 1 & 1 & 1 \end{pmatrix}, \\ &H_{2}= \begin{pmatrix} 2 & 0 & 0 & 0 &0 \\ 0 & 4 & 0 & 0 &0 \\ 0 & 0 & 8 & 0&0 \\ 0 & 0 & 0 & 16&0 \\ 0 & 0 & 0 & 0&32 \end{pmatrix},\\ & B_{\alpha _{3}}= \begin{pmatrix} 0 & 0 & 0 & 6.77028 & 0 \\ 0 & 0 & 0 & 0 & 18.0541 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \\ &W= \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ -1 & 0 & 2 & 0 & 0 \\ 0 & -3 & 0 & 4 & 0 \\ 1 & 0 & -8 & 0 & 8 \end{pmatrix} ,\\ & B_{\nu _{2}}=B_{\upsilon _{2}}= \begin{pmatrix} 0 & 0 & 2.25676 & 0 & 0 \\ 0 & 0 & 0 & 4.51352 & 0 \\ 0 & 0 & 0 & 0 & 7.22163 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \\ &X_{\nu _{2}}= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0.5 & 0.125 & 0.03125 & 0.0078125 & 0.00195313 \\ 0.707107 & 0.353553 & 0.176777 & 0.0883883 & 0.0441942 \\ 0.866025 & 0.649519 & 0.487139 & 0.365354 & 0.274016 \\ 1 & 1 & 1 & 1 & 1 \end{pmatrix}, \\ &Z_{\upsilon _{2}}= \begin{pmatrix} 0 & 0 & 0.666667 & 0.4 & 0.285714 \\ 0 & 0 & 0.4 & 0.285714 & 0.222222 \\ 0 & 0 & 0.285714 & 0.222222 & 0.181818 \\ 0 & 0 & 0.222222 & 0.181818 & 0.153846 \\ 0 & 0 
& 0.181818 & 0.153846 & 0.133333 \end{pmatrix},\qquad G_{2}= \begin{pmatrix} 0 & 3 & 0 & 0 & 0 \\ -\frac{1}{2} & 3 & 0 & 0 & 0 \\ -1 & 3 & 0 & 0 & 0 \\ -\frac{3}{2} & 3 & 0 & 0 & 0 \\ -2 & 3 & 0 & 0 & 0 \end{pmatrix}, \\ &B_{\beta _{2}}= \begin{pmatrix} 0 & 0 & 2.17825 & 0 & 0 \\ 0 & 0 & 0 & 5.44562 & 0 \\ 0 & 0 & 0 & 0 & 9.90113 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \\ & \bar{X}=\left (\textstyle\begin{array}{@{}c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{}} 1 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & \frac{1}{4} & -\frac{7}{8} & -\frac{11}{16} & \frac{17}{32} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \frac{1}{2} & - \frac{1}{2} & -1 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \frac{3}{4} & \frac{1}{8} & -\frac{9}{16} & -\frac{31}{32} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \end{array}\displaystyle \right ). \end{aligned}$$

Equation (45) and conditions present a nonlinear system of \((N + 1)\) algebraic equations in the coefficients \(c_{i}\). The solution of this system at \(N=4\) gives the Chebyshev coefficients as follows:

$$ c_{0}=\frac{1}{2} ,\qquad c_{1}=1 , \qquad c_{2}= \frac{1}{2},\qquad c_{3}= 0,\qquad c_{4}=0 . $$

Therefore, the approximate solution of this example using (6) is given by

$$ y_{4}(x) =\frac{1}{2}T_{0}(x)+T_{1}(x)+ \frac{1}{2}T_{2}(x)=x^{2}+x, $$
(46)

which is the exact solution of problem (44).
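The recovered Chebyshev coefficients can be checked independently by evaluating the truncated series (6). The Python sketch below is ours; it confirms that \(\frac{1}{2}T_{0}+T_{1}+\frac{1}{2}T_{2}\) coincides with \(x^{2}+x\):

```python
def eval_chebyshev_series(coeffs, x):
    """Evaluate y(x) = sum_n c_n T_n(x) via the three-term recurrence."""
    t_prev, t_curr = 1.0, x   # T_0(x), T_1(x)
    total = coeffs[0] * t_prev
    for c in coeffs[1:]:
        total += c * t_curr
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return total

# Example 1 coefficients: y_4 = (1/2)T_0 + T_1 + (1/2)T_2 = x^2 + x
coeffs = [0.5, 1.0, 0.5, 0.0, 0.0]
print(all(abs(eval_chebyshev_series(coeffs, x) - (x * x + x)) < 1e-12
          for x in [0.0, 0.25, 0.5, 0.75, 1.0]))  # True
```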

Example 2

Consider the following nonlinear fractional integro-differential equation:

$$\begin{aligned} &y^{\prime \prime }(x) D^{\alpha _{2}}y(x)+y^{3}(x) D^{\nu _{2}}y(x-1)+y^{\prime }(x) \\ &\quad=f(x)+ \int _{-1}^{0}t e^{x} y'(t) \,dt+ \int _{-1}^{x}(3t+2x) y(t) \,dt,\quad x\in [-1,0]. \end{aligned}$$
(47)

The ICs are \(y(0)=1\), \(y^{\prime }(0)=1\), and the exact solution is \(y(x)=x^{3}+1\) at \(\alpha _{2}=1.8\), \(\nu _{2}=2\), where \(f(x)=\frac{9}{10} + \frac{3 e^{x}}{4}- \frac{3 x}{4} - \frac{x^{2}}{2}+ 32.6737 x^{2.2} -\frac{11 x^{5}}{10} + 6 (-1 + x) (1 + x^{3})^{3}\).

The matrix representation of equation (47) is

$$\begin{aligned} & \bigl[P_{2,2} \bar{X} \bar{B_{2}} \bar{W}^{T}\bar{C} X_{ \alpha _{2}}B_{\alpha _{2}} W^{T}C+Q_{3,2} \bigl(\bar{X} \bar{W}^{T} \bar{C} \bigr)^{3} X_{\nu _{2}} B_{\nu _{2}}H_{1} E_{-1} W^{T} C \\ &\quad{} +Q _{0,1} X B_{1} {W}^{T}C-U_{1} W Z_{\upsilon _{1}} B_{\upsilon _{1}} W^{T}C-\bar{G_{0}} \bar{W}\bar{Z_{\beta _{0}}} \bar{B_{\beta _{0}}} \bar{W^{T}} \bar{C} \bigr]=F. \end{aligned}$$
(48)

Equation (48) and the conditions present a nonlinear system of \((N + 1)\) algebraic equations in the coefficients \(c_{i}\). The solution of this system at \(N=4\) gives the Chebyshev coefficients

$$ c_{0}=1 ,\qquad c_{1}=\frac{3}{4} ,\qquad c_{2}=0 ,\qquad c_{3}= \frac{1}{4} ,\qquad c_{4}=0 . $$

Thus, the solution of this problem becomes

$$ y_{4}(x) =T_{0}(x)+\frac{3}{4}T_{1}(x)+ \frac{1}{4}T_{3}(x)=x^{3}+1, $$
(49)

which is the exact solution of problem (47).

Example 3

Consider the following nonlinear fractional integro-differential equation with advanced argument:

$$\begin{aligned} &y^{3}(x) D^{\alpha _{2}}y(x)+y^{2}(x) y^{\prime \prime }(x+1)+x y^{\prime }(x+1) \\ &\quad=f(x)+ \int _{0}^{1}(5t-4x) y(t) \,dt+ \int _{0}^{1}\bigl(3t+2e^{x}\bigr) y^{(0.7)}(t) \,dt+ \int _{0}^{3x+1}(3t+2x) y^{(0.5)}(t) \,dt, \\ &\qquad x \in [0,1]. \end{aligned}$$
(50)

The subjected conditions are \(y(1)=3\), \(y^{\prime }(1)=3\), and the exact solution is \(y(x)=x^{2}+x+1\) at \(\alpha _{2}=1.6\), where \(f(x)=-8.42841 - 3.20484 e^{x} + (22 x)/3 - 1.35406 (1 + 3 x)^{2.5} - 1.28958 (1 + 3 x)^{3.5} + 2 (1 + x + x^{2})^{2} + 2.25412 x^{0.4} (1 + x + x^{2})^{3} + x (1 + 2 (1 + x)) - x ( 1.50451 (1 + 3 x)^{1.5} + 1.2036 (1+ 3 x)^{2.5})\). The fundamental matrix equation of the problem becomes as follows:

$$\begin{aligned} & \bigl[Q_{3,2} \bigl(\bar{X} \bar{W}^{T}\bar{C}\bigr)^{3} X_{ \alpha _{2}}B_{\alpha _{2}} W^{T}C+Q_{2,2} \bigl(\bar{X} \bar{W}^{T} \bar{C} \bigr)^{2} X_{\nu _{2}} B_{2}H_{1} E_{1} W^{T}C \\ &\quad{} +Q _{0,1} X_{1} B_{1} H_{1} E_{1} {W}^{T}C- U_{0} W Z_{\upsilon _{0}} W^{T}C \\ & \quad{}- U_{1} W Z_{\upsilon _{1}} B_{\upsilon _{1}} W^{T}C- \bar{G_{1}} \bar{W}\bar{Z_{\beta _{1}}} \bar{B_{\beta _{1}}} \bar{W^{T}} \bar{C} \bigr]=F. \end{aligned}$$
(51)

Equation (51), together with the conditions, yields a nonlinear system of \((N + 1)\) algebraic equations in the coefficients \(c_{i}\). Solving this system at \(N=4\) gives the Chebyshev coefficients in the following form:

$$ c_{0}=\frac{3}{2} ,\qquad c_{1}=1 ,\qquad c_{2}= \frac{1}{2} , \qquad c_{3}=0, \qquad c_{4}=0 . $$

Thus, the solution of the proposed problem becomes

$$ y_{4}(x) =\frac{3}{2}T_{0}(x)+T_{1}(x)+ \frac{1}{2}T_{2}(x)=x^{2}+x+1, $$
(52)

which is the exact solution of problem (50).
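The recovered coefficients can be checked directly against the exact solutions of Examples 2 and 3 using the standard three-term recurrence for the Chebyshev polynomials; a minimal sketch (the helper name `cheb_eval` is ours):

```python
def cheb_eval(coeffs, x):
    """Evaluate sum_k c_k T_k(x) via the recurrence T_{k+1} = 2x T_k - T_{k-1}."""
    t_prev, t_curr = 1.0, x            # T_0(x), T_1(x)
    total = coeffs[0] * t_prev
    if len(coeffs) > 1:
        total += coeffs[1] * t_curr
    for c in coeffs[2:]:
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
        total += c * t_curr
    return total

# Coefficients found in Examples 2 and 3 reproduce the exact solutions:
for x in [0.0, 0.3, 0.7, 1.0]:
    assert abs(cheb_eval([1, 0.75, 0, 0.25], x) - (x**3 + 1)) < 1e-12
    assert abs(cheb_eval([1.5, 1, 0.5], x) - (x**2 + x + 1)) < 1e-12
```

Since both identities are polynomial identities in the standard (unshifted) Chebyshev basis, they hold for every x, not only on \([0,1]\).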

Example 4

Consider the following linear fractional integro-differential equation with functional argument [65]:

$$ x^{2} D^{\nu _{2}}y(x)+x y^{\prime }(x)+y(x-1)+y(x)=f(x)+ \int _{0}^{1}\biggl(\frac{12x^{2}}{7}-2\biggr) y(t) \,dt ,\quad x\in [0,1]. $$
(53)

The ICs are \(y(0)=4\), \(y^{\prime }(0)=-4\), and the exact solution is \(y(x)=x^{2}-4x+4\) at \(\nu _{2}=2\), where \(f(x)=\frac{53}{3} - 14 x + 2 x^{2}\). Applying the suggested method with \(N = 4\), the fundamental matrix equation of the problem becomes as follows:

$$\begin{aligned} & \bigl[Q_{0,2} X_{\nu _{2}} B_{\nu _{2}} W^{T}C+Q _{0,1} X_{1} B_{1} {W}^{T}C \\ &\quad{} +Q_{0,0} X_{\nu _{0}} B_{\nu _{0}}H_{1} E_{-1} W^{T}C - U_{0} W Z_{\upsilon _{0}} B_{\upsilon _{0}} W^{T}C \bigr]=F. \end{aligned}$$
(54)

Equation (54), together with the conditions, yields a linear system of \((N + 1)\) algebraic equations in the coefficients \(c_{i}\). Solving this system at \(N=4\) gives the Chebyshev coefficients as follows:

$$ c_{0}=\frac{9}{2} ,\qquad c_{1}=-4 ,\qquad c_{2}= \frac{1}{2}, \qquad c_{3}=1.73868\times 10^{-16},\qquad c_{4}=-6.61509\times 10^{-18} . $$

Thus, the solution of this problem becomes

$$ y_{4}(x) =\frac{9}{2}T_{0}(x)-4T_{1}(x)+ \frac{1}{2}T_{2}(x)+1.73868 \times 10^{-16}T_{3}(x)-6.61509 \times 10^{-18}T_{4}(x). $$
(55)

Table 1 compares the absolute errors of the present scheme at \(N=4\), where \(\nu _{2}=2\), with those of the method of reference [65] at \(N=8, 10\). Table 2 shows the numerical values of the approximate solution for various N together with reference [65] and the exact solution. The residual errors \(E_{8}\) and \(E_{10}\) according to (43) are given in Tables 3 and 4 for various values of \(\nu _{2}\). Figure 1 provides the comparison of \(y(x)\) for \(N = 4\) with \(\nu _{2}= 2\), 1.8, 1.7, and 1.6. The same comparison is made for \(N=10\) in Fig. 2, and the comparison of the error function for the present method at \(N=4\) and [65] at \(N = 8\) and 10 is given in Fig. 3 for Example 4.
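Because \(\nu _{2}=2\) is an integer here and the Fredholm kernel separates, the exact solution can be checked directly against (53); a minimal sketch (the Simpson helper and function names are ours):

```python
def y(t):   return t**2 - 4*t + 4          # exact solution of (53)
def dy(t):  return 2*t - 4
def d2y(t): return 2.0

def f(x):   return 53/3 - 14*x + 2*x**2

def simpson(g, a, b, n=200):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def residual(x):
    """LHS minus RHS of (53) at nu_2 = 2; should vanish for the exact solution."""
    integral = simpson(lambda t: (12 * x**2 / 7 - 2) * y(t), 0, 1)
    lhs = x**2 * d2y(x) + x * dy(x) + y(x - 1) + y(x)
    return lhs - f(x) - integral
```

Since the integrand is quadratic in t, Simpson's rule is exact here, so the residual is zero up to rounding.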

Figure 1

Comparison of \(y(x)\) for \(N = 4\) with \(\nu _{2}= 2\), 1.8, 1.7, and 1.6 for Example 4

Figure 2

Comparison of \(y(x)\) for \(N = 10\) with \(\nu _{2}= 2\), 1.9, 1.8, and 1.7 for Example 4

Figure 3

Comparison of error function for the present method at \(N=4\) and [65] at \(N = 8\) and 10 for Example 4

Figure 4

\(y(x)\) for different N and \(\nu _{4}\) values for Example 5

Table 1 Comparison of the absolute errors for Example 4 for different N values at \(\nu _{2}=2\)
Table 2 Numerical solution of Example 4 for different N values
Table 3 Residual error \(E_{8}\) at \(\nu _{2}=1.9, 1.8, 1.7\) for Example 4
Table 4 Residual error \(E_{10}\) at \(\nu _{2}=1.9, 1.8, 1.7\) for Example 4

Example 5

Consider the fractional integro-differential equation [68, 69]

$$ D^{\nu _{4}}y(x)-y (x)=x \bigl(1+e^{x}\bigr)+3e^{x}- \int _{0}^{x}y(t) \,dt ,\quad x\in [0,1]. $$
(56)

The problem is subject to the conditions \(y(1)=1+e\), \(y^{\prime }(1)=2e\), \(y^{\prime \prime }(1)=3e\), \(y^{\prime \prime \prime }(1)=4e\), and the exact solution of this FIDE is \(y(x)=1+xe^{x}\) when \(\nu _{4}=4\). To solve this problem, we apply the present scheme for various values of N.
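At \(\nu _{4}=4\) the exact solution can be verified against (56) directly, since the fourth derivative of \(1+xe^{x}\) is \((4+x)e^{x}\); a minimal sketch (helper names are ours):

```python
import math

def y(x):   return 1 + x * math.exp(x)        # exact solution of (56)
def d4y(x): return (4 + x) * math.exp(x)      # fourth derivative of 1 + x e^x

def simpson(g, a, b, n=200):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def residual(x):
    """LHS minus RHS of (56) at nu_4 = 4; vanishes up to quadrature error."""
    rhs = x * (1 + math.exp(x)) + 3 * math.exp(x) - simpson(y, 0, x)
    return d4y(x) - y(x) - rhs
```

Analytically, both sides reduce to \(4e^{x}-1\), so the residual is limited only by the Simpson quadrature error.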

In Table 5 we present the numerical results \(y(x_{l})\), for \(N = 9\), of our proposed scheme together with the numerical results \(y(x_{l})\), for \(N = 10\), of the Legendre collocation method (LCM) [69] and of [68]. It is observed that the proposed scheme reaches the same results as [69] with a lower degree of approximation. Moreover, the proposed scheme gives superior results compared with the ADM [66], as shown in [69]. In addition, the numerical results of our presented method, the LCM, and the generalized differential transform method (GDTM) [67] for \(N = 10\) and \(\nu _{4} = 3.75\) are given in Table 6. As shown in Table 4 of [69], the ADM gives very weak approximations compared with the GDTM and LCM; therefore, we do not consider the ADM in Table 6. From this table we find that our results agree with those of the LCM, while the GDTM results deviate from both. This evidence confirms the capability of our scheme. To demonstrate the authenticity of the proposed scheme, we depict in Fig. 4 the numerical solution \(y(x)\) for various values of \(\nu _{4}\), namely 3.50, 3.75, and 4. Also, Fig. 5 compares the error function for the present method at \(N=8\), \(N = 9\) and 12 with \(\nu _{4}=4\), and the comparison of the absolute errors for different values of N at \(\nu _{4}=4\) is given in Table 7. The residual error \(E_{10}\) is given in Table 8 for \(\nu _{4}=3.75, 3.5\).

Figure 5

Comparison of error function for the present method at \(N=8\), \(N = 9\) and 12 for Example 5, where \(\nu _{4}=4\)

Table 5 Numerical results of Example 5 for different N values at \(\nu _{4}=4\)
Table 6 Numerical results of Example 5 for \(\nu _{4} = 3.75\)
Table 7 Comparison of the absolute errors for Example 5 for different N values at \(\nu _{4}=4\)
Table 8 Residual error \(E_{10}\) at \(\nu _{4}=3.75, 3.5\) for Example 5

Finally, since problem (56) is defined on \([0,1]\), the proposed method was also applied with the Chebyshev nodes (zeros of the Chebyshev polynomials) as collocation points. Table 9 compares the absolute errors for different values of N at \(\nu _{4}=4\) using the Chebyshev nodes \(\frac{1}{2} ( 1+\cos \frac{i \pi }{N} )\), \(i=0,1,\ldots,N\), as collocation points. Also, the comparison of the \(L_{2}\) error norm according to (42) using both the equally spaced points (11) and the Chebyshev nodes is given in Table 10. Comparing Table 7 with Table 9, together with the \(L_{2}\) results in Table 10, one finds that the Chebyshev nodes lie in \([-1,1]\) and, when the problem is defined on that same interval, choosing them as collocation points gives better results than any other choice of collocation points; modifying the nodes to fit the interval of the problem does not give results as good as expected from the original zeros of the Chebyshev polynomials.

Table 9 Comparison of the absolute errors for Example 5 for different N values at \(\nu _{4}=4\) using Chebyshev nodes collocation points
Table 10 Comparison of the \(L_{2}\) error norm for Example 5 using both equally spaced and Chebyshev nodes collocation points
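The influence of the collocation points can be reproduced with a small interpolation experiment; a sketch under the assumption that a Runge-type test function on \([0,1]\) is representative (node formula as quoted above, helper names ours):

```python
import math

def cheb_nodes(N):
    """Chebyshev nodes mapped to [0,1]: (1 + cos(i*pi/N))/2, i = 0..N."""
    return [(1 + math.cos(i * math.pi / N)) / 2 for i in range(N + 1)]

def lagrange(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        L = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                L *= (x - xm) / (xj - xm)
        total += yj * L
    return total

f = lambda x: 1 / (1 + 25 * (2 * x - 1) ** 2)   # Runge function mapped to [0,1]
N = 10
grid = [k / 400 for k in range(401)]

def max_err(xs):
    ys = [f(x) for x in xs]
    return max(abs(lagrange(xs, ys, x) - f(x)) for x in grid)

err_equi = max_err([i / N for i in range(N + 1)])   # equally spaced points
err_cheb = max_err(cheb_nodes(N))                   # Chebyshev nodes
# The Chebyshev nodes suppress the end-of-interval oscillations that the
# equally spaced points produce, giving a much smaller maximum error.
```

This small experiment mirrors the trend reported in Tables 9 and 10: clustering the collocation points toward the interval endpoints improves the approximation.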

5 Conclusion

A numerical study of a generalized form of nonlinear arbitrary order integro-differential equations (GNFIDEs) with linear functional arguments is introduced using Chebyshev series. The suggested equation with its linear functional argument represents a general form of delay, proportional delay, and advanced nonlinear fractional order Fredholm–Volterra integro-differential equations. Additionally, we have presented a general form of the operational matrix of derivatives, in which the fractional and ordinary order derivatives are obtained in one general operational matrix. Therefore, the proposed operational matrix represents a kind of unification of the ordinary and fractional cases. To the best of the authors' knowledge, no other work discusses this point. We have presented many numerical examples that illustrate the accuracy of the presented scheme for the proposed equation and show that it is very competent and acceptable.