1 Introduction

Agarwal et al. (see [6] and references therein) introduced the so-called Lidstone boundary value problem, consisting of the general 2r-th order differential equation

$$\begin{aligned} (-1)^{r} y^{(2r)} (x)= f\left( x,\mathbf{y}(x)\right) \, , \qquad x\in \left[ 0,1\right] , \;\; r>1 \end{aligned}$$
(1)

where \(\mathbf{y}= \left( y,y',\ldots , y^{(q)}\right) \), \(0\le q\le 2r-1\) is fixed, and the boundary conditions

$$\begin{aligned} y^{\left( 2j\right) }\left( 0\right) =\alpha _j ,\qquad y^{\left( 2j\right) }\left( 1\right) =\beta _j,\qquad j=0,\ldots , r-1. \end{aligned}$$
(2)

The function f in (1) is assumed continuous at least in the interior of the domain of interest. If \(f\equiv 0\), then problem (1)–(2) has the unique solution

$$\begin{aligned} y(x) = P_r (x)=\sum _{i=0}^{r-1} \left[ \alpha _i \varLambda _i(x)+\beta _i \varLambda _i(1-x)\right] , \end{aligned}$$
(3)

where \(\left\{ \varLambda _i\right\} _i\) is the polynomial sequence, called Lidstone sequence [27], defined by

$$\begin{aligned} \left\{ \begin{aligned}&\varLambda _0(x)=x,\quad \varLambda ''_n(x)=\varLambda _{n-1}(x)\\&\varLambda _n(0)=\varLambda _n(1)=0,\quad n\ge 1. \end{aligned} \right. \end{aligned}$$

The polynomial in (3), which satisfies the conditions (2), is called the Lidstone interpolating polynomial of the function y(x).
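The recurrence above determines each \(\varLambda _n\) uniquely, and can be reproduced symbolically (a minimal sympy sketch; the function name is ours):

```python
import sympy as sp

x = sp.symbols('x')

def lidstone(n):
    """Lam_0 = x; Lam_n'' = Lam_{n-1} with Lam_n(0) = Lam_n(1) = 0."""
    Lam = x
    for _ in range(n):
        a, b = sp.symbols('a b')
        # integrate twice, then fix the two integration constants
        p = sp.integrate(sp.integrate(Lam, x), x) + a*x + b
        sol = sp.solve([p.subs(x, 0), p.subs(x, 1)], [a, b])
        Lam = sp.expand(p.subs(sol))
    return Lam

print(lidstone(1))  # x**3/6 - x/6
```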

Consequently, (1)–(2) is called the Lidstone boundary value problem. It has attracted considerable attention (see [4,5,6, 16,17,18, 42] and references therein), particularly because of its special cases, which frequently occur in engineering and other branches of the physical sciences.

In recent years the Lidstone polynomial sequence \(\left\{ \varLambda _i\right\} _i\) has been much studied (see [4, 5, 11, 12, 14, 16,17,18]) and new polynomial sequences of the same family have been considered.

One of these is the sequence \(\left\{ \varepsilon _k\right\} _k\), with \(\varepsilon _k\) satisfying the conditions

$$\begin{aligned} \left\{ \begin{array}{ll} \varepsilon _k''(x)=(2k+1)\, 2k \, \varepsilon _{k-1}(x)\\ \varepsilon _k(0)=0, \;k\ge 0,\quad \varepsilon _k'(1)=0,\;k\ge 1,\\ \varepsilon _0(x)=x. \end{array} \right. \end{aligned}$$
(4)

Polynomials \(\varepsilon _k\) are connected to the classic Euler polynomials \(E_k\) by the following relation [14]

$$\begin{aligned} \varepsilon _k(x)=2^{2k+1} E_{2k+1}\left( \frac{x+1}{2} \right) . \end{aligned}$$
(5)
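The recurrence (4) and relation (5) can be cross-checked symbolically (a sympy sketch with our own helper name; sympy's `euler(n, x)` is the classical Euler polynomial \(E_n(x)\)):

```python
import sympy as sp

x = sp.symbols('x')

def eps(k):
    """Odd Lidstone-Euler polynomials from (4):
    eps_0 = x, eps_k'' = (2k+1)(2k) eps_{k-1}, eps_k(0) = eps_k'(1) = 0."""
    e = x
    for j in range(1, k + 1):
        a, b = sp.symbols('a b')
        p = (2*j + 1)*(2*j)*sp.integrate(sp.integrate(e, x), x) + a*x + b
        sol = sp.solve([p.subs(x, 0), sp.diff(p, x).subs(x, 1)], [a, b])
        e = sp.expand(p.subs(sol))
    return e

# cross-check against relation (5) with the classical Euler polynomials
for k in range(3):
    rhs = sp.expand(2**(2*k + 1)*sp.euler(2*k + 1, x).subs(x, (x + 1)/2))
    assert sp.simplify(eps(k) - rhs) == 0
```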

Also, they are fundamental polynomials for the interpolation problem

$$\begin{aligned} Q_{r}^{\left( 2j\right) }\left( 0\right) =\alpha _j,\qquad Q_{r}^{\left( 2j+1\right) }\left( 1\right) =\beta _j,\qquad j=0,\ldots , r-1. \end{aligned}$$
(6)

In fact, the polynomial \(Q_r\), given by [16]

$$\begin{aligned} Q_r(x)=\sum _{i=0}^{r-1} \frac{1}{(2i+1)!}\left[ \beta _i \varepsilon _i(x)+\alpha _i \varepsilon _i'(1-x)\right] , \end{aligned}$$

is the unique polynomial that solves the interpolation problem (6).

Therefore we call \(\left\{ \varepsilon _k\right\} _k\) the odd Lidstone–Euler polynomial sequence and (6) the Lidstone–Euler interpolation conditions, or boundary conditions.

The sequence \(\left\{ \varepsilon _k\right\} _k\) does not appear as such in the literature, but the polynomials

$$\begin{aligned} p_k(x)=\frac{\varepsilon '_k(x)}{2k+1}, \quad k\ge 0 \end{aligned}$$

have already been used [14, 17, 36, 38, 44]. In fact these polynomials are fundamental polynomials in the modified Abel expansion [36, 38, 44].

We can consider [16] the Lidstone–Euler boundary value problem (LEbvp in the following):

$$\begin{aligned} (-1)^{r} y^{(2r)} (x)= f\left( x,\mathbf{y}(x)\right) , \qquad x\in \left[ 0,1\right] , \end{aligned}$$
(7a)

$$\begin{aligned} y^{\left( 2j\right) }\left( 0\right) =\alpha _j ,\qquad y^{\left( 2j+1\right) }\left( 1\right) =\beta _j,\qquad j=0,\ldots , r-1, \end{aligned}$$
(7b)

with \(0\le q\le 2r-1\) fixed and \(\alpha _i, \beta _i\), \(i=0,\ldots , r-1,\) finite real constants.

Problem (7a)–(7b), in the general case, does not appear in the literature. In [9] the following problem is considered:

$$\begin{aligned} \left\{ \begin{array}{ll} (-1)^r y^{(2r)}(x)=f\left( x,y,y'',\ldots , y^{(2(r-1))}\right) \\ y^{(2i)}(0)=y^{(2i+1)}(1)=0,\qquad i=0,\ldots , r-1, \end{array} \right. \end{aligned}$$
(8)

and in [29] the more general problem

$$\begin{aligned} \left\{ \begin{array}{ll} y^{(2r)}(x)=f\left( x,y,y',\ldots , y^{(2r-2)}\right) +r(x)\\ \alpha _i y^{(2i)}(0)-\beta _i y^{(2i+1)}(0)=0,\\ \gamma _i y^{(2i)}(1)+\tau _i y^{(2i+1)}(1)=0,\quad i=0,\ldots , r-1 \end{array} \right. \end{aligned}$$
(9)

is studied. Sufficient conditions for the existence of at least one solution for problems (8) and (9) are established under appropriate assumptions on f.

Some special cases of the LEbvp are important for their applications in numerous branches of applied sciences [31, 32, 45, 46].

This paper is intended to be a contribution to the study of Lidstone–Euler interpolation and of LEbvp.

The paper is organized as follows: in Sect. 2 we consider the Lidstone–Euler interpolation problem and obtain some known and new relations that will be used in the sequel. In Sect. 3 we give some theoretical results on Lidstone–Euler boundary value problems, providing sufficient conditions for the existence and uniqueness of the solution. Section 4 is devoted to a method for the numerical solution. In Sect. 5 we propose an algorithm for the effective calculation of the numerical solution of the considered boundary value problem. In Sect. 6 we consider the special case \(r=2\) and provide illustrative numerical examples. In order to compare the proposed method with existing ones, we apply the new method and the Modified Decomposition method presented in [43] to the numerical solution of a LEbvp of order 10. The last Section contains some conclusions and notes on future work.

2 Lidstone–Euler polynomials and related interpolation

The so-called Lidstone–Euler polynomials were introduced in [14, 16, 17] in a broader theoretical and applicative framework. However, at least in the even case, they are known as Abel polynomials, since they are fundamental in the modified Abel series [36, 38, 44]. In this Section we recall some known results on the odd Lidstone–Euler polynomial sequence \(\left\{ \varepsilon _k\right\} _k\) and introduce new properties needed to solve the LEbvp.

The k-th odd Lidstone–Euler polynomial has degree \(2k+1\) and satisfies conditions (4).

Proposition 1

The Lidstone–Euler polynomials \(\varepsilon _r\) can be expressed at any \(x\in [0,1]\) as

$$\begin{aligned} \varepsilon _r(x)=(2r+1)!\int _0^1 K_{r} (x,t)\, t\, dt,\qquad r\ge 1, \end{aligned}$$
(10)

where

$$\begin{aligned} K_1(x,t)= & {} \left\{ \begin{array}{lll} -t &{} &{} t\le x\\ -x &{} &{} t> x, \end{array} \right. \end{aligned}$$
(11)
$$\begin{aligned} K_r(x,t)= & {} \int _0^1 K_{1} (x,s) K_{r-1} (s,t)\, ds,\qquad r\ge 2. \end{aligned}$$
(12)

Proof

The proof follows by induction. From the theory of differential equations the solution \(\varepsilon _1\) of the boundary value problem

$$\begin{aligned} \left\{ \begin{array}{ll} \varepsilon _1''(x)=6x\\ \varepsilon _1(0)=\varepsilon _1'(1)=0 \end{array} \right. \end{aligned}$$

can be written as

$$\begin{aligned} \varepsilon _1(x)=6 \int _0^1 K_1(x,t) \, t\, dt. \end{aligned}$$

Next, suppose that (10) is true for \(r\ge 1\). Then the solution \(\varepsilon _{r+1}\) of the boundary value problem

$$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle \varepsilon _{r+1}''(x)=(2r+3)!\int _0^1 K_r(x,t) \, t\, dt\\ \varepsilon _{r+1}(0)=\varepsilon _{r+1}'(1)=0 \end{array} \right. \end{aligned}$$

can be written as

$$\begin{aligned} \begin{array}{ll} \displaystyle \varepsilon _{r+1}(x) &{}= \displaystyle \int _0^1 K_1(x,s) \left[ (2r+3)! \int _0^1 K_r (s,t) \, t\, dt\right] ds\\ &{}\displaystyle = (2r+3)! \int _0^1 \left[ \int _0^1 K_1(x,s) K_r (s,t) \, ds\right] \, t\, dt\\ &{}\displaystyle = (2r+3)! \int _0^1 K_{r+1} (x,t) \, t\, dt. \end{array} \end{aligned}$$

\(\square \)
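For \(r=1\) the representation (10) can be checked directly, splitting the integral at \(t=x\) (a sympy sketch):

```python
import sympy as sp

x, t = sp.symbols('x t')

# Kernel (11): K_1(x,t) = -t for t <= x and -x for t > x, so for x in [0,1]
# the integral in (10) splits at t = x.  For r = 1, (10) should give eps_1.
e1 = sp.factorial(3)*(sp.integrate(-t*t, (t, 0, x))
                      + sp.integrate(-x*t, (t, x, 1)))
print(sp.expand(e1))  # the cubic eps_1(x) = x**3 - 3*x
```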

Remark 1

From (11), \(K_1(x,t)\le 0\), \(0\le x,t\le 1\). Thus, in view of (12),

$$\begin{aligned} 0\le (-1)^r K_r(x,t)=\Bigl | K_r(x,t) \Bigr |, \qquad 0\le x,t\le 1. \end{aligned}$$

Hence, from (10) we get

$$\begin{aligned} (-1)^r\varepsilon _r(x)\ge 0, \qquad 0\le x\le 1. \end{aligned}$$
(13)

Proposition 2

For the function \(K_r\) the following relations hold

(i) \(\displaystyle K_{r}(0,t)=0 ;\)

(ii) \(\displaystyle \left. \frac{\partial }{\partial x} K_{r}(x,t)\right| _{x=1} =0;\)

(iii) \(\displaystyle \frac{\partial ^{2s}}{\partial x^{2s}}K_{r}(x,t) = K_{r-s}(x,t),\qquad s=0,\ldots , r-1; \)

(iv) \(\displaystyle \frac{\partial ^{2s+1}}{\partial x^{2s+1}}K_{r}(x,t) =\frac{\partial }{\partial x}K_{r-s}(x,t),\qquad s=0,\ldots , r-1.\)

Proof

Parts (i) and (ii) follow from Proposition 1 and the boundary conditions \(\varepsilon _k(0)=\varepsilon '_k(1)=0\).

Part (iii) follows by induction. From (11)–(12)

$$\begin{aligned} K_{r}(x,t)=-\int _0^x s K_{r-1}(s,t) ds -\int _x^1 x K_{r-1}(s,t) ds.\end{aligned}$$

By differentiating, we get

$$\begin{aligned} \frac{\partial }{\partial x} K_{r}(x,t)=-\int _x^1 K_{r-1}(s,t) ds. \end{aligned}$$
(14)

By differentiating again we obtain

$$\begin{aligned} \frac{\partial ^2}{\partial x^2} K_{r}(x,t)=K_{r-1}(x,t), \end{aligned}$$

and this proves (iii) for \(s=1\). Now, suppose that (iii) is true for s. Then

$$\begin{aligned} \frac{\partial ^{2(s+1)}}{\partial x^{2(s+1)}}K_{r}(x,t) = \frac{\partial ^{2}}{\partial x^{2}}\left( \frac{\partial ^{2s}}{\partial x^{2s}}K_{r}(x,t) \right) = \frac{\partial ^{2}}{\partial x^{2}}K_{r-s}(x,t)= K_{r-(s+1)}(x,t). \end{aligned}$$

Relation (iv) easily follows from (iii) by differentiating once more. \(\square \)

Proposition 3

For the function \(K_r\) the following inequalities hold

$$\begin{aligned}&\int _0^1 \Bigl |K_r(x,t)\Bigr | dt\le \frac{1}{2^r},\qquad r\ge 1; \end{aligned}$$
(15)
$$\begin{aligned}&\int _0^1 \left| \frac{\partial }{\partial x} K_{r}(x,t)\right| dt\le \frac{1}{2^{r-1}},\qquad r\ge 1. \end{aligned}$$
(16)

Proof

For \(r=1\) a direct computation gives

$$\begin{aligned} \int _0^1 |K_1(x,t)| dt = \int _0^x t\, dt+ x \int _x^1 dt =x-\frac{x^2}{2}\le \frac{1}{2}. \end{aligned}$$

Next, assuming that (15) holds for a given \(r\ge 1\), relation (12) yields

$$\begin{aligned} \int _0^1 |K_{r+1}(x,t)| dt\le \int _0^1 \Bigl | K_{1}(x,s)\Bigr | \left( \int _0^1 \Bigl | K_r(s,t)\Bigr |\, dt\right) ds \le \frac{1}{2^r} \int _0^1 \Bigl | K_{1}(x,s)\Bigr |\, ds\le \frac{1}{2^{r+1}}. \end{aligned}$$

In order to prove (16), from (14) we get

$$\begin{aligned} \int _0^1 \left| \frac{\partial }{\partial x} K_{r}(x,t)\right| dt\le \int _0^1 \left( \int _x^1 \left| K_{r-1}(s,t)\right| ds\right) dt= \int _x^1 \left( \int _0^1 \left| K_{r-1}(s,t)\right| dt\right) ds. \end{aligned}$$

From (15) we obtain

$$\begin{aligned} \int _0^1 \left| \frac{\partial }{\partial x} K_{r}(x,t)\right| dt\le \frac{1}{2^{r-1}}\int _x^1 ds\le \frac{1}{2^{r-1}}. \end{aligned}$$

\(\square \)
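For \(r=2\) the bound (15) can also be verified directly from the explicit kernel \(K_2\) reported at the end of Sect. 6 (a sympy sketch; the integral is split at \(t=x\) by hand):

```python
import sympy as sp

x, t = sp.symbols('x t')

# Explicit kernel K_2 from Sect. 6 (r = 2); split the t-integral at t = x.
g = (sp.integrate((-t**3 + 6*t*x - 3*t*x**2)/6, (t, 0, x))
     + sp.integrate((-3*t**2*x + 6*t*x - x**3)/6, (t, x, 1)))
g = sp.expand(g)            # int_0^1 |K_2(x,t)| dt as a polynomial in x
print(g, g.subs(x, 1))      # increasing on [0,1], maximum 5/24 < 1/4
```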

The polynomial sequence \(\left\{ \varepsilon _k\right\} _k\) is useful in interpolation [16]. In fact, it is fundamental in the solution of the Birkhoff interpolation problem given by

$$\begin{aligned} P_{r}^{\left( 2j\right) }\left( 0\right) =\alpha _j,\qquad P_{r}^{\left( 2j+1\right) }\left( 1\right) =\beta _j,\qquad j=0,\ldots , r-1. \end{aligned}$$
(17)

The following two theorems have been proved in [16]:

Theorem 1

The polynomial

$$\begin{aligned} P_r(x)=\sum _{i=0}^{r-1} \frac{1}{(2i+1)!}\left[ \beta _i \varepsilon _i(x)+\alpha _i \varepsilon _i'(1-x)\right] \end{aligned}$$
(18)

is the unique solution of the interpolation problem (17).

The conditions (17) are called Lidstone–Euler conditions.
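Theorem 1 can be checked symbolically for small r (a sympy sketch; we generate the \(\varepsilon _i\) through relation (5), and the \(+\alpha _i \varepsilon _i'(1-x)\) sign is the one forced by \(P_r(0)=\alpha _0\), cf. Remark 2):

```python
import sympy as sp

x = sp.symbols('x')

def eps(k):
    """Odd Lidstone-Euler polynomials via relation (5)."""
    return sp.expand(2**(2*k + 1)*sp.euler(2*k + 1, x).subs(x, (x + 1)/2))

def P(r, alpha, beta):
    """Candidate solution of (17); note the + sign on the alpha_i term."""
    return sp.expand(sum(
        (beta[i]*eps(i) + alpha[i]*sp.diff(eps(i), x).subs(x, 1 - x))
        / sp.factorial(2*i + 1) for i in range(r)))

a = sp.symbols('alpha0 alpha1')
b = sp.symbols('beta0 beta1')
p = P(2, a, b)
# the interpolation conditions (17) for r = 2
checks = [p.subs(x, 0) - a[0],
          sp.diff(p, x, 2).subs(x, 0) - a[1],
          sp.diff(p, x).subs(x, 1) - b[0],
          sp.diff(p, x, 3).subs(x, 1) - b[1]]
assert all(sp.simplify(c) == 0 for c in checks)
```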

Theorem 2

Let \(f\in C^{2r} [0,1]\). The polynomial

$$\begin{aligned} P_r[f](x)=\sum _{i=0}^{r-1} \frac{1}{(2i+1)!}\left[ f^{(2i+1)}(1) \varepsilon _i(x)+f^{(2i)}(0) \varepsilon _i'(1-x)\right] \end{aligned}$$

is the unique polynomial of degree \(2r-1\) such that

$$\begin{aligned} P_{r}^{\left( 2j\right) }[f]\left( 0\right) =f^{(2j)}(0),\qquad P_{r}^{\left( 2j+1\right) }[f]\left( 1\right) =f^{(2j+1)}(1),\qquad j=0,\ldots , r-1. \end{aligned}$$
(19)

The polynomial \(P_r[f]\) is called the Lidstone–Euler interpolating polynomial of the function f.

Proposition 4

For the derivatives of the Lidstone–Euler interpolating polynomial there exist constants \(C_{2i}\) and \(C_{2i+1}\) such that

$$\begin{aligned} \left| P_{r}^{\left( 2i\right) }[f]\left( x\right) \right| \le C_{2i},\qquad \left| P_{r}^{\left( 2i+1\right) }[f]\left( x\right) \right| \le C_{2i+1},\quad 0\le x\le 1,\quad i=0,\ldots , r-1, \end{aligned}$$

with

$$\begin{aligned}&C_{2i} =\frac{2}{3} \sum _{k=0}^{r-i-1} \frac{2^{2k}}{(2k)!\pi ^{2k-1}} \left[ \frac{2\left| f^{(2(k+i)+1)}(1)\right| }{(2k+1) \pi } + \left| f^{(2(k+i))}(0)\right| \right] \end{aligned}$$
(20)

and

$$\begin{aligned} C_{2i+1}= & {} \left| f^{(2i+1)}(1)\right| +\frac{2}{3}\sum _{k=1}^{r-i-1} \frac{2^{2k-1}}{(2k-1)!\pi ^{2(k-1)}} \nonumber \\&\quad \Biggl [ \left| f^{(2(k+i)+1)}(1)\right| \frac{1}{k\pi }\ +\left| f^{(2(k+i))}(0)\right| \Biggr ], \end{aligned}$$
(21)

with \(\displaystyle \sum _{k=1}^{0}\cdot =0\).

Proof

The proof follows, after easy calculations, from Theorem 2, relations (4), and the inequality for the n-th Euler polynomial \(\displaystyle \Bigl | E_{n}(x)\Bigr | \le \frac{2}{3 \pi ^{n-1}}\), \(0\le x\le 1\) ([24, p. 303]). \(\square \)

Now, for every \(x\in [0,1]\), let \(R_r[f](x)=f(x)-P_r[f](x)\) be the interpolation error.

Theorem 3

(Peano representation) If \(f\in C^{2r} [0,1]\), then

$$\begin{aligned} R_r[f](x)=\int _0^1 K_r(x,t) f^{(2r)}(t)\, dt. \end{aligned}$$

Proof

From Theorem 2, \(R_r[f]\) satisfies

$$\begin{aligned} R_r^{(2i)}[f](0)=0,\ \ \ \ R_r^{(2i+1)}[f](1)=0,\ \ \ i=0,\ldots ,r-1. \end{aligned}$$

From the theory of differential equations it is sufficient to prove that \( R_r^{(2r)}[f](x)=f^{(2r)}(x)\), \(x\in [0,1]\). This follows immediately, by differentiating twice, from

$$\begin{aligned} R_r^{(2r-2)}[f](x)=\int _0^1 K_1(x,t) f^{(2r)}(t)\, dt. \end{aligned}$$

\(\square \)

Corollary 1

For the kernel \(K_r\) the following explicit representation holds

$$\begin{aligned} K_r(x,t)=\left\{ \begin{array}{llll} \displaystyle -\sum _{i=0}^{r-1}\frac{t^{2(r-i)-1}}{(2(r-i)-1)!(2i+1)!}\varepsilon '_i(1-x), &{}&{} t\le x\\ \displaystyle -\sum _{i=0}^{r-1}\frac{(1-t)^{2(r-i-1)}}{(2(r-i-1))!(2i+1)!} \varepsilon _i(x), &{}&{} t> x. \end{array} \right. \end{aligned}$$
(22)

Proof

This can be proved by comparing Theorem 3 with Theorem 5.2 in [16], after observing the uniqueness of the Peano kernel. \(\square \)

Theorem 4

(Cauchy representation) If \(f\in C^{2r} [0,1]\), then

$$\begin{aligned} R_r[f](x)= f^{(2r)}(\xi _x) \int _0^1 K_r(x,t) \, dt,\qquad \xi _x\in (0,1). \end{aligned}$$

Proof

In view of Remark 1, \((-1)^r K_r(x,t)\ge 0, \ 0\le x,t\le 1\), so \(K_r\) has constant sign. The result then follows from the mean value theorem for integrals, since \(f\in C^{2r} [0,1]\). \(\square \)

3 The Lidstone–Euler boundary value problem

As we said, the Lidstone–Euler boundary value problem consists of a general non-linear differential equation of order 2r as in (7a) with boundary conditions as in (7b).

If \(f\equiv 0\), the problem (7a)–(7b) has the unique solution \(y= P_r\), where \( P_r\) is defined in (18).

So far we are not aware of any proof of existence and uniqueness of the solution of problem (7a)–(7b). For similar problems, existence theorems are available in the literature. For example, in [9] the existence of positive solutions is studied for problem (8), and also for the same differential equation as in (8) with different boundary conditions. In [29] the existence of solutions of the two-point boundary value problem (9) is analyzed (for other examples see [8, 48] and references therein).

The following theorem provides sufficient conditions for the existence and uniqueness of the solution of general LEbvp (7a)–(7b).

Theorem 5

Suppose that

(i) \(k_i>0\), \(0\le i\le q\), are given real numbers, and Q is the maximum of \(\Bigl | f\left( x,y_0,\ldots , y_{q}\right) \Bigr |\) on the compact set \([0,1]\times D\), where

$$\begin{aligned} D=\bigl \{ \left( y_0,\ldots , y_{q}\right) \,:\, |y_i|\le 2 k_i,\; i=0,\ldots , q\bigr \}; \end{aligned}$$

(ii) \(C_{2i}<k_{2i},\;i=0,\ldots , \left[ \frac{q}{2}\right] , \; C_{2i+1}<k_{2i+1}, \ i=0,\ldots , \left[ \frac{q-1}{2}\right] \), where \(C_{2i}\) and \(C_{2i+1}\) are defined in (20) and (21) respectively;

(iii) \(\displaystyle \frac{Q}{2^{r-i}}\le k_{2i},\ \ i=0,\ldots , \left[ \frac{q}{2}\right] ; \quad \frac{Q}{2^{r-i-1}}\le k_{2i+1},\ \ i=0,\ldots , \left[ \frac{q-1}{2}\right] ; \)

(iv) \(\displaystyle \Bigl | f\left( x,y_0,\ldots , y_{q}\right) -f\left( x,z_0,\ldots , z_{q}\right) \Bigr |<\sum _{i=0}^{q} L_i \left| y_i-z_i \right| ,\quad L_i>0;\)

(v) \(\displaystyle \theta = \sum _{i=0}^{q} \frac{L_i}{2^r}<1.\)

Then the boundary value problem (7a)–(7b) has a unique solution on D.

Proof

From the results in [20] it follows that the boundary value problem (7a)–(7b) is equivalent to the Fredholm type integral equation

$$\begin{aligned} y(x)=P_r[y](x)+\int _0^1 \Bigl | K_r(x,t)\Bigr | f\left( t,\mathbf{y}(t)\right) dt:=T[y](x), \end{aligned}$$
(23)

with \(\mathbf{y}= \left( y,y',\ldots , y^{(q)}\right) \). If we introduce in \(C^{q}[0,1]\) the finite norm

$$\begin{aligned} \left\| y\right\| =\max _{0\le s\le q} \left\{ \sup _{0\le t\le 1} \left| y^{(s)} (t)\right| \right\} , \end{aligned}$$

it becomes a Banach space. Let us define B[0, 1] as the set

$$\begin{aligned} B[0,1]=\left\{ y\in C^{q}[0,1]\, : \, \sup _{0\le t\le 1} \left| y^{(i)} (t)\right| \le 2k_i,\; i=0,\ldots , q \right\} . \end{aligned}$$

We will show that the operator \(T:C^{q}[0,1]\rightarrow C^{2r}[0,1]\) defined in (23) maps B[0, 1] into itself. Obviously, any fixed point of T is a solution of the boundary value problem (7a)–(7b).

Let \(y\in B[0,1]\). Then

$$\begin{aligned} \bigl ( T[y]\bigr )^{(2i)}(x)=P_r^{(2i)}[y](x)+\int _0^1 \frac{\partial ^{2i}}{\partial x^{2i}}\Bigl | K_r(x,t)\Bigr | f\left( t,\mathbf{y}(t)\right) dt \end{aligned}$$

and hence, from hypothesis (i), Propositions 2 and 3, we obtain

$$\begin{aligned} \begin{array}{ll} \Bigl |\bigl ( T[y]\bigr )^{(2i)}(x)\Bigr | &{} \displaystyle \le C_{2i} +Q\int _0^1 \Bigl | K_{r-i}(x,t)\Bigr | dt\\ &{} \displaystyle \le k_{2i}+\frac{Q}{2^{r-i}}\le 2k_{2i}, \qquad i=0,\ldots , \left[ \frac{q}{2}\right] . \end{array} \end{aligned}$$
(24)

Similarly,

$$\begin{aligned} \begin{array}{ll} \Bigl |\bigl ( T[y]\bigr )^{(2i+1)}(x)\Bigr | &{} \displaystyle \le C_{2i+1} +Q\int _0^1 \left| \frac{\partial }{\partial x} K_{r-i}(x,t)\right| dt\\ &{} \displaystyle \le k_{2i+1}+\frac{Q}{2^{r-i-1}}\le 2k_{2i+1}, \qquad i=0,\ldots , \left[ \frac{q-1}{2}\right] . \end{array} \end{aligned}$$
(25)

Relations (24) and (25) prove that \( T B[0,1] \subseteq B[0,1]\). Moreover, the inequalities (24)–(25) imply that the sets

$$\begin{aligned} \left\{ \bigl ( T[y]\bigr )^{(i)}\! :\; y\in B[0,1] \right\} , \quad 0\le i\le q, \end{aligned}$$

are uniformly bounded and equicontinuous on [0, 1]. From the Ascoli–Arzelà theorem it follows that \(\overline{TB[0,1]}\) is compact. Thus, from the Schauder fixed point theorem, there exists a fixed point of T in B[0, 1].

Now we prove uniqueness. Suppose that there exist two distinct solutions \(y, z\in D\), that is, two fixed points of T in B[0, 1]. Then from (iv), (v) and Proposition 3 we obtain

$$\begin{aligned} \begin{aligned} \Bigl | T[y](x)-T[z](x) \Bigr |&=\left| \int _0^1 \Bigl | K_{r}(x,t)\Bigr |\, \Bigl [ f\left( t,\mathbf{y}(t)\right) -f\left( t,\mathbf{z}(t)\right) \Bigr ] dt\right| \\&\le \sum _{i=0}^{q} L_i \sup _{0\le t\le 1}\left| y^{(i)}(t)-z^{(i)}(t) \right| \int _0^1 \Bigl | K_r (x,t)\Bigr | dt\le \theta \left\| y-z\right\| , \end{aligned} \end{aligned}$$

with \(\theta <1\). Hence the uniqueness of the solution follows.\(\square \)

Remark 2

Let \(f\left( x,y(x)\right) =L^2 y(x)\), with \(L<\infty \), \(L\ne 0\), defined and continuous on \([0,1]\times D\), where \(\displaystyle D=\left\{ y\in \mathbb {R}: |y|\le 2 k_0,\ k_0>0\right\} \). Consider the LEbvp

$$\begin{aligned} \left\{ \begin{aligned}&-y''=L^2 y,\qquad x\in [0,1]\\&y(0)=\alpha _0,\quad y'(1)=\beta _0, \qquad \alpha _0, \beta _0\in \mathbb {R}. \end{aligned} \right. \end{aligned}$$
(26)

The interpolating polynomial is \(P_1[y](x)=\beta _0 x+\alpha _0\). Moreover,

$$\begin{aligned} \left| P_1[y](x)\right| \le |\alpha _0|+|\beta _0| <C_0 =\frac{2}{3} \left( \pi |\alpha _0|+2|\beta _0|\right) . \end{aligned}$$

If \(\displaystyle Q=\max _{(x,y) \in [0,1]\times D} |f(x,y)|\), we have \(\displaystyle Q\le 2 L^2 k_0\). If we set \(\displaystyle k_0=C_0+1\), then \(C_0<k_0\). Hence, if \(\displaystyle L^2<1\), then \(\displaystyle \frac{Q}{2}<k_0\). Thus the hypotheses of Theorem 5 are satisfied and the LEbvp (26) has the unique solution in D

$$\begin{aligned} y(x)=\alpha _0 \cos Lx+\frac{\beta _0+\alpha _0L \sin L}{L \cos L} \sin Lx. \end{aligned}$$

For \(\displaystyle L^2=\frac{\pi ^2}{4}>1\), \(\alpha _0=0\), \(\beta _0\ne 0\), the problem has no solution. For \(\displaystyle L^2=\frac{\pi ^2}{4}\), \(\alpha _0=\beta _0= 0\), the problem has infinitely many solutions.
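The closed-form solution of (26) can be verified symbolically (a sympy sketch, assuming \(L\cos L\ne 0\)):

```python
import sympy as sp

x, L, a0, b0 = sp.symbols('x L alpha_0 beta_0')

# candidate solution of (26); L*cos(L) != 0 is assumed
y = a0*sp.cos(L*x) + (b0 + a0*L*sp.sin(L))/(L*sp.cos(L))*sp.sin(L*x)

assert sp.simplify(-sp.diff(y, x, 2) - L**2*y) == 0      # -y'' = L^2 y
assert sp.simplify(y.subs(x, 0) - a0) == 0               # y(0) = alpha_0
assert sp.simplify(sp.diff(y, x).subs(x, 1) - b0) == 0   # y'(1) = beta_0
```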

4 The numerical solution

High order boundary value problems arise in the mathematical modelling of viscoelastic and inelastic flows, deformation of beams, plate deflection theory and many other applications in engineering, physics and science.

Only a limited number of problems of this type can be solved analytically. For this reason many researchers have proposed numerical methods (see, for instance, [1, 13, 19,20,21, 33, 34, 43]). Different approaches have been considered, including collocation (also with spline functions and Lagrange interpolation), Galerkin weighted residual, decomposition, and variational techniques.

Here we present a general method for higher order LEbvp, inspired by the idea in [20]. Next we consider in more detail the important case \(r=2\), which arises in many applications. We also consider a tenth-order LEbvp and compare the proposed method with a Modified Decomposition method [43].

The proposed method is based on extrapolated Bernstein polynomials [10].

4.1 The extrapolated Bernstein method

The extrapolated Bernstein method is based on the asymptotic expansion for Bernstein polynomials [10] and the classical Richardson extrapolation process [39].

For a given n, consider the equispaced points \(\displaystyle x_k=\frac{k}{n}\), \(k=0,\ldots , n\), in [0, 1], and the Bernstein polynomial of degree n that approximates \(y^{(2r)}\) on [0, 1]:

$$\begin{aligned} B_{n}\left[ y^{(2r)}\right] (x)=\sum _{k=0}^{n} b_{n,k}(x) y^{(2r)}\left( x_k\right) , \end{aligned}$$
(27)

where

$$\begin{aligned} b_{n,k}(x)=\left( {\begin{array}{c}n\\ k\end{array}}\right) x^k (1-x)^{n-k},\qquad k=0,\ldots , n, \end{aligned}$$

are the Bernstein basis polynomials.

Theorem 6

[10] Let \(f\in C^{2q}[0,1]\), \(q\ge 1\), and \(\displaystyle h=\frac{1}{n}\). Then the Bernstein polynomial of the function f has the following asymptotic expansion

$$\begin{aligned} B_n[f](x)=f(x)+\sum _{i=1}^q h^i S_i [f]\left( x\right) + h^{q+1} E_h[f](x), \end{aligned}$$
(28)

where the functions \(S_i[f]\), \(i=1,\ldots , q\), do not depend on h and \(E_h[f]\) vanishes as \(h\rightarrow 0\).
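The leading \(h^1\) term in (28) can be observed numerically: halving h roughly halves the error (a plain Python sketch; the function names are ours):

```python
import math

def bernstein(f, n, x):
    """B_n[f](x) from (27): values f(k/n) against the Bernstein basis."""
    return sum(math.comb(n, k)*x**k*(1 - x)**(n - k)*f(k/n)
               for k in range(n + 1))

# The leading term of (28) is O(h): halving h should roughly halve the error.
f, x = math.exp, 0.3
e1 = bernstein(f, 16, x) - f(x)
e2 = bernstein(f, 32, x) - f(x)
print(e2/e1)   # close to 1/2
```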

Theorem 7

Let n, m be two positive integers and \(\displaystyle h=\frac{1}{n}\). Moreover, let \(y\in C^{2(r+m)}[0,1]\) be the solution of the LEbvp. The following asymptotic expansion holds

$$\begin{aligned} y(x)=\phi _n\left[ y\right] \left( x\right) +\sum _{i=1}^m h^i {\overline{S}}_i\left[ y\right] \left( x\right) + h^{m+1}{\overline{E}}_h\left[ y\right] \left( x\right) , \end{aligned}$$
(29)

where

$$\begin{aligned} \phi _n\left[ y\right] \left( x\right)= & {} P_r\left[ y\right] \left( x\right) +\sum _{k=0}^np_{k,n}(x) y^{(2r)}\left( \frac{k}{n}\right) , \end{aligned}$$
(30)
$$\begin{aligned} p_{k,n}(x)= & {} \int _0^1\Bigl |K_r\left( x,t\right) \Bigr |b_{n,k}(t)\,dt, \end{aligned}$$
(31)

\(\displaystyle {\overline{S}}_i\left[ y\right] \) does not depend on \(\displaystyle h\), and \({\overline{E}}_h\left[ y\right] \) vanishes as \(h\rightarrow 0\).

Proof

As usual, the solution y of the LEbvp can be written as

$$\begin{aligned} y(x)=P_r\left[ y\right] \left( x\right) +\int _0^1\Bigl |K_r\left( x,t\right) \Bigr |y^{(2r)}\left( t\right) \,dt, \end{aligned}$$
(32)

where \(P_r\left[ y\right] \) is the polynomial interpolating the boundary conditions and \(K_r\) is the related Peano kernel. For the function \(y^{(2r)}\) Theorem 6 holds. Hence from (28) we get

$$\begin{aligned} y(x)= & {} P_r\left[ y\right] \left( x\right) +\int _0^1\Bigl |K_r\left( x,t\right) \Bigr | \left[ B_n\left[ y^{(2r)}\right] \left( t\right) \right. \\&\left. -\sum _{i=1}^m h^i S_i\left[ y^{(2r)}\right] \left( t\right) - h^{m+1} E_h\left[ y^{(2r)}\right] \left( t\right) \right] dt \end{aligned}$$

where \(B_n\left[ y^{(2r)}\right] \) is as in (27), the \(S_i\left[ y^{(2r)}\right] \), \(i=1,\ldots ,m\), are independent of h, and \(E_h\left[ y^{(2r)}\right] \rightarrow 0\) as \(h\rightarrow 0\). After easy calculations we get relations (29), (30), (31). \(\square \)

Corollary 2

For the solution y of the LEbvp we get

$$\begin{aligned} \lim _{n\rightarrow \infty }\phi _n[y]\left( x\right) =y(x) \end{aligned}$$
(33)

uniformly in \(x\in [0,1]\).

The expansion of Theorem 7 suggests the following extrapolation procedure.

Theorem 8

Let \(y\in C^{2(r+m)}[0,1]\), with m a given integer. Let \(\left\{ n_k\right\} _k\) be an increasing sequence of positive integers and \(\displaystyle h_k=n_k^{-1}\). We define a sequence of polynomials of degree \(n_{i+k}\) as follows

$$\begin{aligned} \left\{ \begin{array}{llllll} T_0^{(i)} := T_0^{(i)}[y](x) &{}\displaystyle =\phi _{n_i}[y](x), &{}&{}i=0,\ldots , m,\\ T_k^{(i)} := T_k^{(i)}[y](x) &{}\displaystyle =\frac{h_{i+k} T_{k-1}^{(i)}-h_{i} T_{k-1}^{(i+1)}}{h_{i+k}-h_{i}}, &{}&{} \begin{array}{l} k=1,\ldots , m-1,\\ i=0,\ldots , m-k. \end{array} \end{array} \right. \end{aligned}$$
(34)

For a fixed i

$$\begin{aligned} \lim _{h_i\rightarrow 0} T_k^{(i)}=y(x), \qquad k=1,2,\ldots , m-1, \ i=0,\ldots , m-k. \end{aligned}$$

Moreover, the following representations of the error and of \(T_k^{(i)}\) hold

$$\begin{aligned} T_k^{(i)}-y(x)= & {} (-1)^k h_i\, h_{i+1}\cdots h_{i+k} \left( {\overline{S}}_{k+1}[y](x)+O\left( h_i\right) \right) ,\\ T_k^{(i)}[y](x)= & {} \sum _{j=0}^k l_j\left( 0\right) \phi _{n_j}[y](x),\qquad l_j\left( h\right) =\prod _{i=0,i\ne j}^k\frac{h_i-h}{h_i-h_j}. \end{aligned}$$

Proof

See [10, 40]. \(\square \)

The proposed numerical method consists in approximating the solution y of the LEbvp by the extrapolated polynomial \(T_{m-1}^{(0)}[y]\), where \(n_m\) is the last element of the considered sequence \(\left\{ n_i\right\} _i\). We observe that our method is a continuous method, since it provides a convergent sequence of approximating polynomials. Of course, for any \(z\in [0,1]\), y(z) is approximated by \(T_{m-1}^{(0)}[y](z)\).
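The extrapolation table (34) is a Neville-type scheme and can be sketched in a few lines (plain Python; `phi` and the toy sequence below are illustrative, not the LEbvp operator):

```python
def extrapolate(phi, ns):
    """Richardson/Neville table (34) for a first-order expansion in h = 1/n:
    phi(n) is the base approximation, ns an increasing integer sequence."""
    h = [1.0/n for n in ns]
    m = len(ns)
    T = [[phi(n)] for n in ns]                  # column k = 0: T_0^{(i)}
    for k in range(1, m):
        for i in range(m - k):
            T[i].append((h[i + k]*T[i][k - 1] - h[i]*T[i + 1][k - 1])
                        / (h[i + k] - h[i]))
    return T[0][-1]                             # T_{m-1}^{(0)}

# toy check: phi(n) = 5 + 3/n + 1/n**2 has the exact limit 5
approx = extrapolate(lambda n: 5 + 3/n + 1/n**2, [2, 4, 8, 16])
print(abs(approx - 5))   # essentially machine precision
```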

Several choices are possible for the increasing sequence \(n_i\), for example [15, 28]:

  • \(n_i= \rho ^i,\; \rho \ge 2\) positive integer (Romberg sequence)

  • \(n_i= b_i\;\) or \(\;n_i=n\, b_i\), where \(b_i\) are the Bulirsch numbers \(b_i=2,3,4,6,8,12,16,\ldots \)

  • \(n_i= 2i\), double harmonic sequence (Deuflhard sequence)

  • \(n_i= n+i,\) \(n \in \mathbb {N}\)

  • \(n_i= \frac{n}{m}(m+i)\), with m a fixed constant such that n/m is an integer.

In [15] the five sequences above have been compared.

In the next Section we outline an algorithm for computing the approximating extrapolated polynomial.

5 An algorithm for computing the extrapolated polynomial

In order to calculate the approximate solution we observe that from (30) \(\phi _n\left[ y\right] \) is an implicitly defined polynomial. Moreover

$$\begin{aligned} y^{(2r)}\left( x_k\right) =f\left( x_k,y\left( x_k\right) ,y'\left( x_k\right) ,\ldots , y^{(q)}\left( x_k\right) \right) ,\quad k=0,\ldots , n, \ \ 0\le q\le 2r-1. \end{aligned}$$

Therefore, in order to compute \(\phi _n\left[ y\right] \left( x\right) \) at \(x\in [0,1]\), we need the values

$$\begin{aligned} \begin{array}{cc} y^{(s)}_k=\phi _n ^{(s)}\left[ y\right] (x_k),\qquad k=0,\ldots , n,\quad s=0,\ldots ,q, \\ \text {with} \; s\ne 2j,\; j=0,\ldots , \left[ \frac{q}{2}\right] ,\; \text {for} \; k=0,\; \\ \text {and}\; s\ne 2j-1,\; j=1,\ldots ,\left[ \frac{q+1}{2}\right] ,\; \text {for} \; k=n. \end{array} \end{aligned}$$
(35)

These values can be obtained by solving the following algebraic non-linear system

$$\begin{aligned} y^{(s)}_{k} =P^{(s)}_{r}[y](x_k)+ \sum _{i=0}^{n}p^{(s)}_{i,n}\left( x_k\right) f\left( x_{i},\mathbf{y}_i\right) \end{aligned}$$
(36)

with k, s as in (35), and \(\mathbf{y}_i= \left( y_{i},y_{i}',\ldots , y_{i}^{(q)}\right) \).

The system (36) can be written in the form

$$\begin{aligned} Y-A\,F_Y=C , \end{aligned}$$
(37)

where \(\displaystyle Y=({{\overline{Y}}}_0, \ldots , {{\overline{Y}}}_{q} )^T\), with \(\displaystyle {{\overline{Y}}}_{2j}= \left( y^{(2j)}_1,\ldots , y_{n}^{(2j)}\right) \), \(j=0,\ldots , \left[ \frac{q}{2}\right] \), \(\displaystyle {{\overline{Y}}}_{2j-1}= \left( y^{(2j-1)}_0,\ldots , y_{n-1}^{(2j-1)}\right) \), \(j=1,\ldots , \left[ \frac{q+1}{2}\right] \). A is the block diagonal matrix

$$\begin{aligned} A=\left( \begin{array}{cccc} A_0 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \\ 0 &{}\quad \ddots &{} \quad &{}\quad \vdots \\ \vdots &{}\quad &{} \quad \ddots &{}\quad 0 \\ 0 &{}\quad \cdots &{}\quad 0 &{}\quad A_{q} \end{array} \right) \end{aligned}$$

with \(\displaystyle A_i\in \mathbb {R}^{n\times (n+1)}\),

$$\begin{aligned} A_{2j}= & {} \left( \begin{array}{ccc} p^{(2j)}_{0,n}(x_1) &{}\quad \cdots &{}\quad p^{(2j)}_{n,n}(x_1) \\ \vdots &{}\quad &{}\quad \vdots \\ p^{(2j)}_{0,n}(x_n) &{}\quad \cdots &{}\quad p^{(2j)}_{n,n}(x_n) \\ \end{array} \right) ,\qquad j=0,\dots , \left[ \frac{q}{2}\right] ,\\ A_{2j-1}= & {} \left( \begin{array}{ccc} p^{(2j-1)}_{0,n}(x_0) &{}\quad \cdots &{}\quad p^{(2j-1)}_{n,n}(x_0) \\ \vdots &{} \quad &{}\quad \vdots \\ p^{(2j-1)}_{0,n}(x_{n-1}) &{}\quad \cdots &{}\quad p^{(2j-1)}_{n,n}(x_{n-1}) \\ \end{array} \right) ,\qquad j=1,\dots , \left[ \frac{q+1}{2}\right] . \end{aligned}$$

Moreover \(\displaystyle F_Y=( \underbrace{F_{n}, \ldots , F_{n}}_{\small {q+1}} )^T\) with \(F_{n}=\left( f_0, \ldots , f_{n}\right) ^T\), where \(f_i=f\left( x_i,\mathbf{y}_i\right) \), and \(C=\left( C_0,\ldots , C_{q}\right) ^T\) with \(C_{2j}=\left( P^{(2j)}_{r}[y](x_1),\ldots , P^{(2j)}_{r}[y](x_{n})\right) \), \(j=0,\ldots , \left[ \frac{q}{2}\right] \), \(C_{2j-1}=\left( P^{(2j-1)}_{r}[y](x_0),\ldots , P^{(2j-1)}_{r}[y](x_{n-1})\right) \), \(j=1,\ldots ,\left[ \frac{q+1}{2}\right] \).

Let \(\displaystyle L=\sum _{i=0}^{q} L_i\), where \(L_i\) is the Lipschitz constant of f as in Theorem 5. Then the following theorem on the existence and uniqueness of the solution of (37) can be proved with standard techniques [20].

Theorem 9

If \( \varTheta = L \Vert A\Vert _{\infty }<1\), system (37) has a unique solution, which can be calculated by the iterative method

$$\begin{aligned} \left( Y_n\right) _{j+1}=G\left( \left( Y_n\right) _{j}\right) ,\quad \qquad j\ge 0 \end{aligned}$$

with a fixed \(\left( Y_n\right) _{0}=\left( \alpha _0,\ldots , \alpha _0\right) \in {\mathbb {R}}^{n(q+1)}\) and \( G\left( Y_n\right) =A\, F_{Y_n}+C.\)

Moreover, if Y is the exact solution of the system (37),

$$\begin{aligned} \left\| \left( Y_n\right) _{j+1}-Y\right\| _{\infty } \le \frac{\varTheta ^{j}}{1-\varTheta } \left\| \left( Y_n\right) _{1}-\left( Y_n\right) _{0}\right\| _{\infty }. \end{aligned}$$

Proof

If \(L\Vert A\Vert _{\infty }<1\), G is contractive. Hence the result follows by applying the contraction mapping theorem. \(\square \)
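The whole pipeline (kernel weights \(p_{k,n}\), system (36), fixed-point iteration of Theorem 9) can be illustrated on a toy second-order instance (\(r=1\), in the spirit of Remark 2; our own example, not one treated in the paper):

```python
import math

# Toy instance of scheme (36)/(37):  -y'' = y, y(0) = 0, y'(1) = 1,
# exact solution y(x) = sin(x)/cos(1).
# Here f(t, y) = y, P_1[y](x) = x, and |K_1(x,t)| = min(x, t).
n = 20
xs = [k/n for k in range(n + 1)]

def b(k, t):                    # Bernstein basis b_{n,k}(t)
    return math.comb(n, k)*t**k*(1 - t)**(n - k)

def p(k, x, m=400):             # p_{k,n}(x) = int_0^1 min(x,t) b_{n,k}(t) dt
    return sum(min(x, (j + 0.5)/m)*b(k, (j + 0.5)/m) for j in range(m))/m

A = [[p(k, xi) for k in range(n + 1)] for xi in xs]
y = [0.0]*(n + 1)
for _ in range(60):             # fixed-point iteration; contraction ratio <= 1/2
    y = [xi + sum(A[i][k]*y[k] for k in range(n + 1))
         for i, xi in enumerate(xs)]

err = abs(y[10] - math.sin(0.5)/math.cos(1))   # node x = 0.5
print(err)                       # O(1/n) accuracy from the Bernstein step
```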

6 Numerical examples

First, we consider the case \(r=2\). In this case the LEbvp becomes

$$\begin{aligned} y^{(4)}(x)=f\left( x,y,y',y'',y'''\right) , \end{aligned}$$
(38)

with the boundary conditions

$$\begin{aligned} \left\{ \begin{array}{ll} y(0)=\alpha _0, \quad y'(1)=\beta _0,\\ y''(0)=\alpha _1, \quad y'''(1)=\beta _1. \end{array} \right. \end{aligned}$$
(39)

We observe that (38) is a general nonlinear fourth-order differential equation. It often arises in the mathematical modeling of physical phenomena in numerous branches of the applied sciences. For example, it models the equilibrium states of traveling waves in a suspension bridge [26]; different boundary conditions correspond to different types of bridge being modeled. Fourth-order differential equations are often called beam equations [32, 47], for their relevance in beam theory. In this case the boundary value problem (38)–(39) describes the equilibrium state of the deformation of an elastic beam column with one end simply supported and the other end clamped by sliding clamps. The deformation is caused by a load f; \(y''\) represents the bending moment, \(y'''\) the shear force, and \(y^{(4)}\) the load density. Equation (38) can also be equipped with different boundary conditions, for example [31]:

  • \(y(0)=y(1)=y''(0)=y''(1)=0\),

  • \(y(0)=y(1)=y'(0)=y'(1)=0\).

The method proposed in this paper provides the explicit expression of the approximating polynomials \(\phi _n\left[ y\right] \) and of the extrapolated polynomials \(T_k^{(i)}[y]\). Particularly, in the case of problem (38)–(39), we have

$$\begin{aligned} T_k^{(i)}[y](x)=\sum _{j=0}^kl_j\left( 0\right) \phi _{n_j}[y](x), \end{aligned}$$

with

$$\begin{aligned} l_j\left( h\right)= & {} \prod _{i=0, i\ne j}^k\frac{h_i-h}{h_i-h_j}, \qquad h_i=\frac{1}{n_i},\\ \phi _{n_j}\left[ y\right] \left( x\right)= & {} P_2\left[ y\right] \left( x\right) +\sum _{k=0}^{n_j} p_{k,{n_j}}(x) f\left( \frac{k}{n_j}\right) , \end{aligned}$$

where

$$\begin{aligned} P_2[y](x)=y(0)+\left( y'(1)-y''(0)-\frac{y'''(1)}{2}\right) x+\frac{y''(0)}{2} x^2+\frac{y'''(1)}{6} x^3 \end{aligned}$$

and

$$\begin{aligned} p_{k,n_j}(x)=\frac{n_j!}{k!}\sum _{i=0}^{n_j-k}\frac{(-1)^{n_j-k-i}}{(n_j-k-i)!} \int _0^1\Bigl |K_2\left( x,t\right) \Bigr |t^{n_j-i}\,dt, \end{aligned}$$
(40)

with

$$\begin{aligned} K_2(x,t)=\frac{1}{6}\left\{ \begin{aligned}&-t^3+6tx-3tx^2,&x\ge t\\&-3t^2x+6tx-x^3,&x< t. \end{aligned} \right. \end{aligned}$$
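As a quick numerical sanity check on \(K_2\) (a sketch, under the assumption that \(K_2\) acts as the Green's function of \(y^{(4)}=f\) with the homogeneous version of conditions (39)): for \(f\equiv 1\) the function \(y(x)=\int _0^1 K_2(x,t)\,dt\) should coincide with \(x^4/24-x^3/6+x/3\), the unique quartic satisfying \(y^{(4)}=1\) and \(y(0)=y''(0)=y'(1)=y'''(1)=0\).

```python
def K2(x, t):
    # piecewise cubic kernel from the text
    if x >= t:
        return (-t**3 + 6*t*x - 3*t*x**2) / 6.0
    return (-3*t**2*x + 6*t*x - x**3) / 6.0

def simpson(f, a, b, m=200):
    # composite Simpson rule with m (even) subintervals; exact for cubics
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i*h)
    return s * h / 3.0

for x in [0.1, 0.3, 0.5, 0.7, 0.9]:
    # split the integral at t = x so each piece is a smooth polynomial in t
    y = simpson(lambda t: K2(x, t), 0.0, x) + simpson(lambda t: K2(x, t), x, 1.0)
    exact = x**4/24 - x**3/6 + x/3
    assert abs(y - exact) < 1e-10, (x, y, exact)
```

Splitting the integration at the kink \(t=x\) is essential: each branch of \(K_2\) is polynomial in t, so the quadrature is essentially exact.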

The elements of each block matrix \(A_k\), \(k=0,\ldots , 3\), are

$$\begin{aligned} a_{i,l}=\left\{ \begin{aligned}&p_{l,n_j}^{(k)}\left( x_{i+1}\right) ,&k \text { even} \\&p_{l,n_j}^{(k)}\left( x_{i}\right) ,&k \text { odd} \end{aligned} \right. \qquad i=0,\ldots ,n_j-1, \ l=0,\ldots , n_j. \end{aligned}$$

Observe that the integrals appearing in (40) can be computed exactly, since the integrands in (40) are piecewise polynomials. Therefore the elements of the matrix can also be computed easily.

So far we are not aware of any specific methods for the numerical solution of problem (1)–(2), even in the case \(r=2\) (see [7, 22, 23, 25, 30,31,32, 35, 37, 41, 45,46,47] and references therein). The reason for this could be the non-symmetry of the boundary conditions. Nor have we found specific numerical examples in the literature.

Pending further clarification of these aspects, in the following we report two examples to validate the theoretical results given previously. Since the analytical solutions of the considered examples are known, for all \(x\in [0,1]\) we compute the true absolute errors

$$\begin{aligned} e_n(x)=\left| y(x)-\phi _n[y](x)\right| \qquad \text {and}\qquad E_m(x)=\left| y(x)-T_m^{(0)}[y](x)\right| . \end{aligned}$$
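The extrapolated polynomial \(T_m^{(0)}[y]\) combines the \(\phi _{n_j}[y]\) with the Lagrange weights \(l_j(0)\), which depend only on the chosen sequence \(n_i\). A minimal sketch of their computation (assuming \(h_i=1/n_i\) as above):

```python
def weights(ns):
    # Lagrange weights l_j(0) = prod_{i != j} h_i / (h_i - h_j), with h_i = 1/n_i
    hs = [1.0 / n for n in ns]
    w = []
    for j, hj in enumerate(hs):
        p = 1.0
        for i, hi in enumerate(hs):
            if i != j:
                p *= hi / (hi - hj)
        w.append(p)
    return w

w = weights([2, 3])
assert abs(w[0] + 2.0) < 1e-12 and abs(w[1] - 3.0) < 1e-12

w = weights([4, 8, 16, 32])        # the sequence n_i = 2^i, i = 2,...,5
assert abs(sum(w) - 1.0) < 1e-12   # interpolatory weights sum to 1
```

For \(n_0=2\), \(n_1=3\) the weights are \(l_0(0)=-2\), \(l_1(0)=3\); in general they sum to 1, since extrapolation reproduces constants.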

Example 1

Consider the following problem

$$\begin{aligned} \left\{ \begin{array}{l} y^{(4)}(x)-2 y''(x)+y(x)=-8 e^x, \qquad \quad x\in [0,1] \\ y\left( 0\right) =0,\qquad y'(1)=-e,\\ y''(0)=0, \qquad y'''(1)=-9e. \end{array} \right. \end{aligned}$$
(41)

The theoretical solution is \(\displaystyle { y\left( x\right) = x (1-x) e^x}\) .
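The claimed solution can be checked directly; the sketch below uses the closed-form derivatives of \(y(x)=x(1-x)e^x\), namely \(y'=(1-x-x^2)e^x\), \(y''=-(x^2+3x)e^x\), \(y'''=-(x^2+5x+3)e^x\), \(y^{(4)}=-(x^2+7x+8)e^x\).

```python
import math

y  = lambda x: x*(1 - x)*math.exp(x)
d1 = lambda x: (1 - x - x**2)*math.exp(x)
d2 = lambda x: -(x**2 + 3*x)*math.exp(x)
d3 = lambda x: -(x**2 + 5*x + 3)*math.exp(x)
d4 = lambda x: -(x**2 + 7*x + 8)*math.exp(x)

# the differential equation y'''' - 2 y'' + y = -8 e^x holds on [0,1]
for k in range(11):
    x = k / 10
    assert abs(d4(x) - 2*d2(x) + y(x) + 8*math.exp(x)) < 1e-10

# boundary conditions of (41)
assert y(0) == 0 and d2(0) == 0
assert abs(d1(1) + math.e) < 1e-12 and abs(d3(1) + 9*math.e) < 1e-12
```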

The first approximating polynomials, obtained by using equidistant nodes, are

$$\begin{aligned} \phi _2[y](x)&=0.5617 x - 0.2909 x^3 - 0.3333 x^4 - 0.1690 x^5 - 0.03815 x^6\\ \phi _3[y](x)&=0.7118 x - 0.3649 x^3 - 0.3333 x^4 - 0.1496 x^5 - 0.03860 x^6 - 0.00317 x^7\\ \phi _4[y](x)&=0.7854 x - 0.4004 x^3 - 0.3333 x^4 - 0.1411 x^5 - 0.03754 x^6 \\&\quad - 0.00467 x^7 - 0.000167 x^8\\ \phi _5[y](x)&=0.8291 x - 0.4211 x^3 - 0.3333 x^4 - 0.1379 x^5 - 0.03672 x^6 \\&\quad - 0.00541 x^7 - 0.000342 x^8 - 5.11617\cdot 10^{-6} x^9\\ \phi _6[y](x)&=0.8581 x - 0.4347 x^3 - 0.3333 x^4 - 0.1355 x^5 - 0.03614 x^6 \\&\quad - 0.00581 x^7 - 0.000482 x^8 - 0.0000148 x^9- 6.04527\cdot 10^{-8} x^{10}. \end{aligned}$$
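Evaluating these polynomials at \(x=\tfrac12\) (coefficients as printed, hence truncated to the digits shown) illustrates the slow but monotone decrease of the error typical of Bernstein-type approximation:

```python
import math

# coefficient maps {degree: coefficient} for phi_2, ..., phi_6, copied from above
polys = [
    {1: 0.5617, 3: -0.2909, 4: -0.3333, 5: -0.1690, 6: -0.03815},
    {1: 0.7118, 3: -0.3649, 4: -0.3333, 5: -0.1496, 6: -0.03860, 7: -0.00317},
    {1: 0.7854, 3: -0.4004, 4: -0.3333, 5: -0.1411, 6: -0.03754, 7: -0.00467,
     8: -0.000167},
    {1: 0.8291, 3: -0.4211, 4: -0.3333, 5: -0.1379, 6: -0.03672, 7: -0.00541,
     8: -0.000342, 9: -5.11617e-6},
    {1: 0.8581, 3: -0.4347, 4: -0.3333, 5: -0.1355, 6: -0.03614, 7: -0.00581,
     8: -0.000482, 9: -0.0000148, 10: -6.04527e-8},
]

x = 0.5
exact = x*(1 - x)*math.exp(x)          # y(1/2) for Problem (41)
errs = [abs(exact - sum(c*x**d for d, c in p.items())) for p in polys]

# e_2(1/2) > e_3(1/2) > ... > e_6(1/2)
assert all(a > b for a, b in zip(errs, errs[1:]))
```

This is precisely the behavior that motivates the extrapolation step: the raw Bernstein-type errors shrink only slowly with n.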

Figure 1 shows the graphs of the error functions \(e_{n_i}\) with \(n_i=2^i\), \(i=2,\ldots ,5\) (Figure 1a) and the graph of \(E_3\) (Figure 1b).

Fig. 1 Error functions for \(n_i=2^i\), \(i=2,\ldots ,5\)—Problem (41)

Figure 2a contains the plots of the error functions obtained by using the first terms of the Bulirsch sequence for the degrees of the approximating polynomials \(\phi _{n_i}[y]\). Figure 2b shows the graph of \(E_4\).

Fig. 2 Error functions for \(n_i=2, 3, 4, 6, 8\)—Problem (41)

The errors in \(x=\frac{1}{2}\) by using extrapolation for different sequences \(n_i\) are displayed in Table 1.

Table 1 Extrapolation error in \(x=\frac{1}{2}\)—Problem (41)

Example 2

Consider now the following nonlinear problem

$$\begin{aligned} \left\{ \begin{array}{l} y^{(4)}(x)= \displaystyle \sin x+ \sin ^2 x-\left( y''(x)\right) ^2, \qquad \quad x\in [0,1] \\ y\left( 0\right) =0,\qquad y'(1)=\cos (1),\\ y''(0)=0, \qquad y'''(1)=-\cos (1). \end{array} \right. \end{aligned}$$
(42)

The theoretical solution for this problem is \(\displaystyle { y\left( x\right) = \sin (x)}\).
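The solution can again be verified directly: since \(y''=-\sin x\), the nonlinear term cancels, \(\sin x+\sin ^2 x-(y'')^2=\sin x=y^{(4)}\). In code:

```python
import math

y4  = math.sin                                  # y''''(x) = sin(x) for y = sin(x)
rhs = lambda x: math.sin(x) + math.sin(x)**2 - (-math.sin(x))**2   # y''(x) = -sin(x)

for k in range(11):
    x = k / 10
    assert abs(y4(x) - rhs(x)) < 1e-15          # the cancellation is exact

# boundary conditions of (42): y(0) = 0, y''(0) = -sin(0) = 0
assert math.sin(0) == 0
```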

Figure 3a contains the plots of the error functions \(e_{n_i}\) with \(n_i=7+i\), \(i=1,\ldots ,5\); Fig. 3b shows the graph of \(E_4\).

Fig. 3 Error functions for \(n_i=8, 9, 10, 11, 12\)—Problem (42)

Table 2 contains the errors in \(x=\frac{1}{2}\) by using extrapolation for different sequences \(n_i\).

Table 2 Extrapolation error in \(x=\frac{1}{2}\)—Problem (42)

6.1 Comparisons

Now we apply the Bernstein extrapolation method proposed above to the numerical solution of a higher-order LEbvp. To show the reliability of our method, in the absence of existing specific methods for problem (7a)–(7b), we compare it with the modified decomposition method for boundary value problems presented in [43]. The method in [43] is a modified Adomian decomposition method [2, 3], which provides the solution of boundary value problems in the form of a rapidly convergent series whose components are computed recursively.

Example 3

Consider the following tenth-order nonlinear problem

$$\begin{aligned} \left\{ \begin{array}{l} y^{(10)}(x)= \displaystyle e^{-x} y^2(x) \qquad \quad x\in [0,1] \\ y^{(2i)}\left( 0\right) =1,\quad y^{(2i+1)}(1)=e,\qquad i=0,\ldots , 4. \end{array} \right. \end{aligned}$$
(43)

The theoretical solution for this problem is \(\displaystyle { y\left( x\right) = e^x}\).
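For \(y(x)=e^x\) the equation in (43) holds identically, since \(y^{(10)}(x)=e^x\) and \(e^{-x}y^2(x)=e^{-x}e^{2x}=e^x\); a one-line numerical confirmation:

```python
import math

for k in range(11):
    x = k / 10
    lhs = math.exp(x)                        # y^(10)(x) for y = e^x
    rhs = math.exp(-x) * math.exp(x)**2      # e^(-x) * y(x)^2
    assert abs(lhs - rhs) < 1e-12

# boundary conditions of (43): every even derivative at 0 is 1,
# every odd derivative at 1 is e
assert math.exp(0) == 1.0
```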

In [43] the same differential equation is considered, but with different boundary conditions.

The modified decomposition method applied to the equation in (43) with boundary conditions \(y^{(2i)}\left( 0\right) =1\), \(i=0,\ldots , 4\), yields the series

$$\begin{aligned} \begin{aligned} y(x)&= 1 + A x + \frac{1}{2!} x^2+\frac{1}{3!}Bx^3+\frac{1}{4!}x^4+\frac{1}{5!}Cx^5+\frac{1}{6!}x^6\\&\quad +\frac{1}{7!}D x^7+\frac{1}{8!}x^8+\frac{1}{9!}Ex^9\\&\quad +\frac{1}{10!}x^{10}+\left( \frac{1}{19958400} A -\frac{1}{39916800}\right) x^{11}\\&\quad + \left( \frac{1}{1197510400} A + \frac{1}{159667200}\right) x^{12}+\cdots . \end{aligned} \end{aligned}$$

Truncating the series at \(x^{12}\) and imposing the conditions \(y^{(2i+1)}(1)=e,\) \(i=0,\ldots , 4,\) we obtain the approximating polynomial

$$\begin{aligned} \begin{aligned} y_W(x)&= 1 + 1.01167 x + \frac{x^2}{2} + 0.161868 x^3 + \frac{x^4}{24} + 0.0089245 x^5 \\&\quad + \frac{x^6}{720} + 0.000164121 x^7\\&\quad +\frac{x^8}{40320} + 3.8058\cdot 10^{-6} x^9 + \frac{x^{10}}{3628800} + 2.5637\cdot 10^{-8} x^{11}\\&\quad -2.1851\cdot 10^{-9} x^{12}.\\ \end{aligned} \end{aligned}$$

If we apply the proposed method, in order to get a polynomial of degree 12, we can consider a single step of the extrapolation scheme with sequence \(n_i=i+1\), \(i=0,1\). In this case we obtain the following extrapolated polynomial:

$$\begin{aligned} \begin{aligned} T_1^{(0)}(x)&= 1 + 0.999915 x + \frac{x^2}{2} + 0.166702 x^3 + \frac{x^4}{24} + 0.00832904 x^5 + \frac{x^6}{720} \\&+ 0.000198645 x^7 +\frac{x^8}{40320} + 2.75565\cdot 10^{-6} x^9 + \frac{x^{10}}{3628800}\\&\quad + 2.18959\cdot 10^{-8} x^{11} +3.52584\cdot 10^{-9} x^{12}.\\ \end{aligned} \end{aligned}$$

In Table 3 we show the absolute errors at equidistant points in [0, 1], obtained by using the two 12th-degree polynomials \(T_1^{(0)}(x)\) and \(y_W(x)\).

Table 3 Comparison between the Bernstein extrapolation method and the method in [43] for Problem (43)
Table 4 Extrapolation errors for Problem (43)

Table 3 shows that the numerical performance of the extrapolated Bernstein method is better. Both methods provide a sequence of convergent approximations. The computational costs are not comparable, since the modified decomposition method requires preliminary analytical work and therefore, unlike our method, does not lend itself to automatic codes.
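For instance, evaluating the two 12th-degree polynomials at the sample point \(x=\tfrac12\) (coefficients as printed above, hence truncated) already exhibits the gap in accuracy:

```python
import math

def horner(coeffs, x):
    # evaluate a polynomial given coefficients in increasing-degree order
    r = 0.0
    for c in reversed(coeffs):
        r = r*x + c
    return r

# degree-0..12 coefficients of y_W and T_1^(0), copied from above
yW = [1, 1.01167, 1/2, 0.161868, 1/24, 0.0089245, 1/720, 0.000164121,
      1/40320, 3.8058e-6, 1/3628800, 2.5637e-8, -2.1851e-9]
T1 = [1, 0.999915, 1/2, 0.166702, 1/24, 0.00832904, 1/720, 0.000198645,
      1/40320, 2.75565e-6, 1/3628800, 2.18959e-8, 3.52584e-9]

x = 0.5
err_W = abs(math.exp(x) - horner(yW, x))
err_T = abs(math.exp(x) - horner(T1, x))
assert err_T < err_W   # the extrapolated Bernstein polynomial is more accurate here
```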

We can obtain better results by considering approximating polynomials of higher degree. In Table 4 we show the absolute errors at equidistant points in [0, 1], obtained by using the extrapolation technique for several sequences \(n_i\).

7 Conclusions

In this paper we considered the Lidstone–Euler interpolation problem and obtained new properties, for example the Cauchy representation of the error, a new representation of the Peano kernel, and bounds for the kernel and its derivatives. Moreover, we considered the associated Lidstone–Euler boundary value problem in both its theoretical and computational aspects. We introduced a method for the numerical solution of the LEbvp based on extrapolated Bernstein polynomials, thus obtaining a convergent polynomial sequence approximating the solution of the LEbvp. An algorithm for its effective calculation is given as well. Numerical examples support the theoretical results and show that high accuracy is achieved by using extrapolation. A comparison with a modified decomposition method is given for a tenth-order problem.

For future developments we note that the existence and uniqueness theorem can be improved, especially in the inequalities. Other numerical approaches are possible, and a global comparison of performance can be made. Finally, it remains to study the complementary case, that is, odd-degree equations with suitable boundary conditions.