1 Introduction

1.1 Quadratures and adaptive methods of numerical integration

The numerical integration of a function \(f:[a,b]\rightarrow \mathbb {R}\) is often performed by applying the quadrature rule:

$$\begin{aligned} \mathcal {I}[f;a,b]:=\int \limits _a^b f(x)\,\mathrm {d}x\approx Q[f;a,b]:=\sum _{k=1}^m w_k f\bigl (\lambda _k a+ (1-\lambda _k)b\bigr ), \end{aligned}$$

where the coefficients \(\lambda _1,\dots ,\lambda _m\in [0,1]\) and the weights \(w_1,\dots ,w_m\in \mathbb {R}\) (usually positive) are fixed. For example, the familiar simple Simpson’s rule reads:

$$\begin{aligned} \mathcal {S}[f;a,b]=\frac{b-a}{6}\biggl [f(a)+4f\biggl (\frac{a+b}{2}\biggr )+f(b)\biggr ]. \end{aligned}$$

In this paper we also deal with the simple Chebyshev’s rule

$$\begin{aligned} \mathcal {C}[f;a,b]= & {} \frac{b-a}{3} \biggl [ f\biggl (\frac{2+\sqrt{2}}{4}a+\frac{2-\sqrt{2}}{4}b\biggr ) +f\biggl (\frac{a+b}{2}\biggr )\\&+\,f\biggl (\frac{2-\sqrt{2}}{4}a+\frac{2+\sqrt{2}}{4}b\biggr ) \biggr ]. \end{aligned}$$

For the record, let us also list the simple two-point Gauss rule

$$\begin{aligned} \mathcal {G}[f;a,b]=\frac{b-a}{2} \biggl [ f\biggl (\frac{3+\sqrt{3}}{6}a+\frac{3-\sqrt{3}}{6}b\biggr ) +f\biggl (\frac{3-\sqrt{3}}{6}a+\frac{3+\sqrt{3}}{6}b\biggr ) \biggr ]. \end{aligned}$$
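For concreteness, the three simple rules above can be sketched in Python (the helper names `simpson`, `chebyshev3` and `gauss2` are ours, not taken from the paper's program); all three are exact for polynomials of degree up to 3:

```python
import math

def simpson(f, a, b):
    # S[f;a,b] = (b-a)/6 * [f(a) + 4 f((a+b)/2) + f(b)]
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def chebyshev3(f, a, b):
    # C[f;a,b]: three equally weighted nodes at the (2±√2)/4 blends of a and b
    r = math.sqrt(2)
    x1 = (2 + r) / 4 * a + (2 - r) / 4 * b
    x3 = (2 - r) / 4 * a + (2 + r) / 4 * b
    return (b - a) / 3 * (f(x1) + f((a + b) / 2) + f(x3))

def gauss2(f, a, b):
    # G[f;a,b]: two-point Gauss rule with nodes at the (3±√3)/6 blends of a and b
    r = math.sqrt(3)
    x1 = (3 + r) / 6 * a + (3 - r) / 6 * b
    x2 = (3 - r) / 6 * a + (3 + r) / 6 * b
    return (b - a) / 2 * (f(x1) + f(x2))
```

For instance, each rule applied to \(f(x)=x^3\) on [0, 1] returns the exact value \(\tfrac{1}{4}\).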

Every quadrature rule induces the so-called composite rule, which is created by subdividing the interval \([a,b]\) into n subintervals with equally spaced end-points:

$$\begin{aligned} a=x_0<x_1<\dots <x_n=b\quad \text {with}\quad x_k=a+k\cdot \frac{b-a}{n},\quad k=0,1,\dots ,n \end{aligned}$$

and if \(\mathcal {I}[f;x_k,x_{k+1}]\approx Q[f;x_k,x_{k+1}]\), then we sum over \(k\in \{0,\dots ,n-1\}\) to arrive at the rule:

$$\begin{aligned} \mathcal {I}[f;a,b]\approx Q_n[f;a,b]:=\sum _{k=0}^{n-1} Q[f;x_k,x_{k+1}]. \end{aligned}$$
(1)
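The construction (1) can be sketched generically; the names `simpson` and `composite` are ours:

```python
import math

def simpson(f, a, b):
    # the simple Simpson's rule S[f;a,b]
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def composite(rule, f, a, b, n):
    # composite rule (1): apply the simple rule on n equal subintervals and sum
    h = (b - a) / n
    return sum(rule(f, a + k * h, a + (k + 1) * h) for k in range(n))
```

For example, `composite(simpson, lambda x: 1/x, 1, 2, 10)` approximates \(\ln 2\) to about six decimal places.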

In this way we get the composite Simpson’s and Chebyshev’s rules, denoted by \(\mathcal {S}_n\) and \(\mathcal {C}_n\), respectively. Let \(\varepsilon >0\) be a fixed tolerance. In most cases it is not enough to approximate the integral by a simple quadrature rule, since \(\bigl |Q[f;a,b]-\mathcal {I}[f;a,b]\bigr |>\varepsilon \). In order to achieve the desired precision we apply the composite rule \(Q_n\) with n large enough to have

$$\begin{aligned} \bigl |Q_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |<\varepsilon . \end{aligned}$$

Such a method of approximation is called adaptive. How can such an n be found? If the function f is regular enough, the error terms of the simple quadratures are known and the error bounds can easily be found. For instance, for the composite Simpson’s rule with \(f\in \mathcal {C}^4[a,b]\) the following estimate is widely known (cf., e.g., [4, 7])

$$\begin{aligned} \bigl |\mathcal {S}_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |\leqslant \frac{(b-a)^5}{2880n^4}\sup \bigl \{\bigl |f^{(4)}(x)\bigr |:x\in [a,b]\bigr \}. \end{aligned}$$
(2)

If the term on the right hand side is less than \(\varepsilon \), then a suitable n can be easily determined. Then, of course,

$$\begin{aligned} \bigl |\mathcal {S}_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |<\varepsilon . \end{aligned}$$

Nevertheless, this approach can be complicated, because estimating the maximum of \(|f^{(4)}|\) may be difficult, or even impossible if f is not regular enough.
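When a bound for \(\sup |f^{(4)}|\) is available, the smallest n satisfying the error bound (2) with tolerance \(\varepsilon \) can be computed directly; the function name below is ours:

```python
import math

def simpson_n_for_tolerance(a, b, sup_f4, eps):
    # smallest n with (b-a)^5 / (2880 n^4) * sup|f''''| < eps, cf. (2)
    n = math.ceil(((b - a) ** 5 * sup_f4 / (2880 * eps)) ** 0.25)
    # ceil may land exactly on the boundary; bump n while the bound is not strict
    while (b - a) ** 5 * sup_f4 / (2880 * n ** 4) >= eps:
        n += 1
    return n
```

For \(f(x)=1/x\) on [1, 2] one has \(\sup |f^{(4)}|=24\), and the call `simpson_n_for_tolerance(1, 2, 24, 1e-8)` yields \(n=31\).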

Another approach to adaptive integration is based on successive incrementation of n. The natural question then arises how large n should be to achieve the desired precision, i.e., when the incrementation can be stopped. Various stopping criteria are known in the literature. Let us briefly mention two of them for Simpson’s method. The first one was given by Lyness [9]; it uses the so-called local errors

$$\begin{aligned} \varepsilon _k=\frac{1}{15}\biggl | \mathcal {S}\Bigl [f;x_k,\frac{x_k+x_{k+1}}{2}\Bigr ] +\mathcal {S}\Bigl [f;\frac{x_k+x_{k+1}}{2},x_{k+1}\Bigr ]-\mathcal {S}[f;x_k,x_{k+1}]\biggr |. \end{aligned}$$

Another stopping criterion was proposed by Clenshaw and Curtis [3] and was investigated by Rowland and Varol [12], who have proved that the stopping inequality

$$\begin{aligned} \bigl |\mathcal {S}_{2n}[f;a,b]-\mathcal {I}[f;a,b]\bigr |\leqslant \bigl |\mathcal {S}_{2n}[f;a,b]-\mathcal {S}_n[f;a,b]\bigr | \end{aligned}$$
(3)

is valid for all \(n\in \mathbb {N}\) and for all functions \(f\in \mathcal {C}^4[a,b]\) with \(f^{(4)}\) of a constant sign. This means that if \(\bigl |\mathcal {S}_{2n}[f;a,b]-\mathcal {S}_n[f;a,b]\bigr |<\varepsilon \), then also \(\bigl |\mathcal {S}_{2n}[f;a,b]-\mathcal {I}[f;a,b]\bigr |<\varepsilon ,\) so we arrive at the approximation of the integral by \(\mathcal {S}_{2n}[f;a,b].\) This is the starting point for our considerations. It is worth mentioning that none of these stopping criteria are valid for all integrable functions. Indeed, there are functions satisfying these criteria with an error of approximation of the integral greater than the tolerance \(\varepsilon .\) Notice that an extensive survey (not only) of stopping criteria was given by Gonnet [5] (this interesting study is also available on the arXiv repository, arXiv:1003.4629).
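A minimal sketch of an adaptive method driven by the stopping inequality (3) might look as follows (the function names are ours; Rowland and Varol's actual scheme may organize the incrementation differently):

```python
import math

def simpson(f, a, b):
    # the simple Simpson's rule S[f;a,b]
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def composite(rule, f, a, b, n):
    # composite rule over n equal subintervals
    h = (b - a) / n
    return sum(rule(f, a + k * h, a + (k + 1) * h) for k in range(n))

def rowland_varol(f, a, b, eps, n_max=10**6):
    # increase n until |S_2n - S_n| < eps, then return S_2n; by (3) the
    # result is within eps of the integral when f'''' has constant sign
    for n in range(1, n_max + 1):
        s_n = composite(simpson, f, a, b, n)
        s_2n = composite(simpson, f, a, b, 2 * n)
        if abs(s_2n - s_n) < eps:
            return s_2n
    raise RuntimeError("tolerance not reached")
```

For \(f(x)=1/x\) on [1, 2] (here \(f^{(4)}>0\)) the returned value is within \(\varepsilon \) of \(\ln 2\).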

1.2 Convex functions of higher order

Hopf [6] proposed to consider functions with non-negative divided differences, and subsequently Popoviciu [10, 11] investigated properties of functions in this class. He called a function f n-convex if

$$\begin{aligned} {[}x_0,x_1,\dots ,x_{n+1};f]\geqslant 0, \end{aligned}$$

where the divided differences are defined recursively by \([x_0;f]=f(x_0)\) and

$$\begin{aligned}{}[x_0,x_1,\dots ,x_n,x_{n+1};f] =\frac{[x_1,\dots ,x_n,x_{n+1};f]-[x_0,x_1,\dots ,x_n;f]}{x_{n+1}-x_0}. \end{aligned}$$
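The recursion can be implemented directly (the name `divided_difference` is ours). For \(f(x)=x^3\) the third-order divided difference equals the leading coefficient 1, while the fourth-order one vanishes, consistent with \(x^3\) being both 3-convex and 3-concave:

```python
def divided_difference(xs, f):
    # [x_0, ..., x_n; f] via the recursive definition above
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(xs[1:], f)
            - divided_difference(xs[:-1], f)) / (xs[-1] - xs[0])
```

This naive recursion recomputes subproblems; a production version would tabulate the triangular scheme instead.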

He stated many basic properties of such functions. Among others, he proved that if f is n-convex on \([a,b]\), then it is \(n-1\) times differentiable on \((a,b)\); however, the function \(x\mapsto x^n_+\) (where \(x_+=\max \{x,0\}\)) is n-convex on \(\mathbb {R}\) but not n-times differentiable at 0, so this degree of smoothness cannot be improved. He also showed [10, p. 18] that the composition of an n-convex function with an affine function remains n-convex on the suitable interval, whenever n is an odd number. Another result of Popoviciu tells us that if f is \((n+1)\)-times differentiable on \([a,b]\) with \(f^{(n+1)}\geqslant 0\), then f is n-convex on \([a,b]\). Of course, 1-convex functions are convex in the usual sense. A function f is n-concave if \(-f\) is n-convex. Thus functions with \(f^{(4)}\) of a constant sign (see the previous subsection) are necessarily either 3-convex or 3-concave. One could ask whether the stopping inequality (3) holds for all 3-convex and 3-concave functions. The positive answer can be derived from the paper [1] by Bessenyei and Páles, who developed a smoothing technique for n-convex functions. In Sect. 2 we apply the spline approximation of n-convex functions as a tool. Below we recall a result due to Bojanić and Roulier [2].

Theorem 1

Every n-convex function mapping \([a,b]\) into \(\mathbb {R}\) can be uniformly approximated on \([a,b]\) by (n-convex) spline functions of the form

$$\begin{aligned} x\mapsto p(x)+\sum _{i=1}^ma_i(x-c_i)^n_+, \end{aligned}$$

where p is a polynomial of degree at most n, \(a_i>0\), \(c_i\in [a,b]\) (\(i=1,\dots ,m\)).

1.3 Inequalities between simple quadratures

Bessenyei and Páles proved that the inequality (of Hermite–Hadamard type)

$$\begin{aligned} \mathcal {G}[f;a,b]\leqslant \mathcal {I}[f;a,b]\leqslant \mathcal {S}[f;a,b] \end{aligned}$$
(4)

holds for any 3-convex function \(f:[a,b]\rightarrow \mathbb {R}\) (see [1, Corollary 3]). The author reproved it in [14, Proposition 12] using the support technique. He has also established an inequality connected with the simple Chebyshev’s rule: if \(g:[-1,1]\rightarrow \mathbb {R}\) is 3-convex, then

$$\begin{aligned} \mathcal {C}[g;-1,1]\leqslant \mathcal {I}[g;-1,1] \end{aligned}$$
(5)

[14, Proposition 14]. Applying the affine substitution

$$\begin{aligned} t=\frac{2}{b-a}(x-a)-1,\quad x\in [a,b] \end{aligned}$$
(6)

we transform the interval \([a,b]\) onto \([-1,1]\). If f is 3-convex on \([a,b]\), then \(g:[-1,1]\rightarrow \mathbb {R}\) given by

$$\begin{aligned} g(t)=f\Bigl (\frac{a+b}{2}+\frac{b-a}{2}t\Bigr ) \end{aligned}$$
(7)

is 3-convex on \([-1,1]\) (see the remark in the previous subsection). Then (5) together with (4) lead immediately to the inequality

$$\begin{aligned} \mathcal {G}[f;a,b]\leqslant \mathcal {C}[f;a,b]\leqslant \mathcal {I}[f;a,b]\leqslant \mathcal {S}[f;a,b], \end{aligned}$$
(8)

which holds true for any 3-convex function \(f:[a,b]\rightarrow \mathbb {R}\).
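The chain (8) can be checked numerically on a concrete 3-convex function, e.g. \(f(x)=\mathrm {e}^x\) on [0, 1], where \(f^{(4)}=\mathrm {e}^x>0\) (helper names are ours):

```python
import math

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def chebyshev3(f, a, b):
    r = math.sqrt(2)
    return (b - a) / 3 * (f((2 + r) / 4 * a + (2 - r) / 4 * b)
                          + f((a + b) / 2)
                          + f((2 - r) / 4 * a + (2 + r) / 4 * b))

def gauss2(f, a, b):
    r = math.sqrt(3)
    return (b - a) / 2 * (f((3 + r) / 6 * a + (3 - r) / 6 * b)
                          + f((3 - r) / 6 * a + (3 + r) / 6 * b))

exact = math.e - 1  # the exact integral of e^x over [0, 1]
# the Hermite-Hadamard-type chain (8): G <= C <= I <= S
assert gauss2(math.exp, 0, 1) <= chebyshev3(math.exp, 0, 1) <= exact <= simpson(math.exp, 0, 1)
```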

1.4 The aim of the paper

In Sect. 2 we improve the part of the inequality (8), which is

$$\begin{aligned} \mathcal {C}[f;a,b]\leqslant \mathcal {I}[f;a,b]\leqslant \mathcal {S}[f;a,b]. \end{aligned}$$
(9)

This allows us to introduce in Sect. 3 a certain adaptive method and give its stopping criterion. In Sect. 4 we compare it to Rowland and Varol’s adaptive method with stopping criterion (3). It is to be noted that our approach requires considerably fewer subdivisions than that of Rowland and Varol.

2 The inequality of Hermite–Hadamard type

The inequality we establish below improves inequality (9). Let us start with the interval \([-1,1]\).

Theorem 2

If \(g:[-1,1]\rightarrow \mathbb {R}\) is 3-convex, then

$$\begin{aligned} \mathcal {C}[g;-1,1]\leqslant \mathcal {I}[g;-1,1]\leqslant \frac{\mathcal {C}[g;-1,1]+\mathcal {S}[g;-1,1]}{2}. \end{aligned}$$
(10)

Proof

The inequality on the left is nothing but (5). We prove the inequality on the right. In order to simplify the notation, let us skip the endpoints of the interval \([-1,1]\): \(\mathcal {I}[g]=\mathcal {I}[g;-1,1]\) and so on. Let

$$\begin{aligned} E[g]=\frac{\mathcal {C}[g]+\mathcal {S}[g]}{2}-\mathcal {I}[g]. \end{aligned}$$

We shall prove that \(E[g]\geqslant 0\) for any 3-convex function \(g:[-1,1]\rightarrow \mathbb {R}.\) The operators \(\mathcal {C},\mathcal {I},\mathcal {S}\) are linear and \(\mathcal {C}[p]=\mathcal {I}[p]=\mathcal {S}[p]\) for polynomials of degree not greater than 3. Let \((g_n)\) be a sequence of 3-convex spline functions, provided by Theorem 1, converging uniformly to g. Then \(\mathcal {I}[g_n]\rightarrow \mathcal {I}[g]\) as \(n\rightarrow \infty \) and, since \(\mathcal {C}\) and \(\mathcal {S}\) evaluate their argument at finitely many points only, we infer that \(E[g_n]\rightarrow E[g]\) as \(n\rightarrow \infty \). Hence by Theorem 1 it is enough to check that for all \(c\in [-1,1]\)

$$\begin{aligned} \varphi (c):=E\bigl [f_c\bigr ]\geqslant 0, \end{aligned}$$
(11)

where \(f_c(x)=(x-c)^3_+.\) First we claim that \(\varphi \) is an even function. Define also \(g_c(x)=(c-x)^3_+.\) Observe that \(f_c(x)-g_c(x)=(x-c)^3,\) which implies that \(E[f_c-g_c]=0.\) Therefore

$$\begin{aligned} \varphi (c)=E[f_c]=E[f_c-g_c]+E[g_c]=E[g_c]. \end{aligned}$$

Because the nodes and weights of the operators \(\mathcal {C},\,\mathcal {I},\,\mathcal {S}\) are symmetric about 0, the operator E is invariant under reflection: \(E[g]=E[\check{g}]\), where \(\check{g}(x)=g(-x)\). Hence

$$\begin{aligned} \varphi (c)=E[g_c]=E[\check{g}_c]. \end{aligned}$$

Since \(\check{g}_c(x)=g_c(-x)=(x+c)^3_+=\bigl (x-(-c)\bigr )^3_+=f_{-c}(x),\) we arrive at

$$\begin{aligned} \varphi (c)=E[\check{g}_c]=E[f_{-c}]=\varphi (-c) \end{aligned}$$

and \(\varphi \) is an even function on \([-1,1]\). Consequently, it is enough to prove (11) for all \(c\in [0,1].\) For this, let us write \(\varphi (c)\) explicitly:

$$\begin{aligned} \varphi (c)= & {} E[f_c]=\frac{\mathcal {C}[f_c]+\mathcal {S}[f_c]}{2}-\mathcal {I}[f_c]\\= & {} \frac{1}{3} \biggl [ f_c\biggl (-\frac{\sqrt{2}}{2}\biggr )+f_c(0) +f_c\biggl (\frac{\sqrt{2}}{2}\biggr ) \biggr ]\\&+\,\frac{1}{6}\bigl [f_c(-1)+4f_c(0)+f_c(1)\bigr ] -\int \limits _{-1}^1 f_c(x)\,\mathrm {d}x. \end{aligned}$$

If \(c\in [0,1]\), then \(f_c(-1)=f_c\bigl (-\frac{\sqrt{2}}{2}\bigr )=f_c(0)=0\), while \(f_c(1)=(1-c)^3\) and

$$\begin{aligned} \int \limits _{-1}^1 f_c(x)\,\mathrm {d}x=\int \limits _{-1}^1 (x-c)^3_+\,\mathrm {d}x=\int \limits _c^1 (x-c)^3\,\mathrm {d}x=\frac{1}{4}(1-c)^4. \end{aligned}$$

Finally, if \(c\in [0,1]\), then

$$\begin{aligned} \varphi (c)= & {} \frac{1}{3}f_c\biggl (\frac{\sqrt{2}}{2}\biggr )+\frac{1}{6}f_c(1) -\int \limits _{-1}^1 f_c(x)\,\mathrm {d}x=\frac{1}{3}\biggl (\frac{\sqrt{2}}{2}-c\biggr )^3_+\\&+\,\frac{1}{6}(1-c)^3-\frac{1}{4}(1-c)^4. \end{aligned}$$

Multiplying both sides by 12 we get

$$\begin{aligned} \psi (c):=12\varphi (c)=4\biggl (\frac{\sqrt{2}}{2}-c\biggr )^3_++(3c-1)(1-c)^3. \end{aligned}$$

Obviously \(\psi (c)\geqslant 0\) as long as \(\frac{\sqrt{2}}{2}\leqslant c\leqslant 1.\) Let \(0\leqslant c<\frac{\sqrt{2}}{2}.\) Then

$$\begin{aligned} \psi (c)=4\biggl (\frac{\sqrt{2}}{2}-c\biggr )^3+(3c-1)(1-c)^3=-3c^4+6c^3 +(6\sqrt{2}-12)c^2+\sqrt{2}-1 \end{aligned}$$

and

$$\begin{aligned} \psi '(c)=-12c^3+18c^2+(12\sqrt{2}-24)c=-6c\bigl (2c^2-3c-(2\sqrt{2}-4)\bigr )<0 \end{aligned}$$

for \(0<c<\frac{\sqrt{2}}{2}\), so \(\psi \) decreases on \(\Bigl [0,\frac{\sqrt{2}}{2}\Bigr ).\) Since \(\psi \) is continuous and \(\psi (c)\geqslant 0\) on \(\Bigl [\frac{\sqrt{2}}{2},1\Bigr ],\) we have \(\varphi (c)=\frac{1}{12}\psi (c)\geqslant 0\) on [0, 1], which concludes the proof.

\(\square \)
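As a numerical sanity check of the closing argument, one can sample \(\psi \) on [0, 1]; by the evenness of \(\varphi \) this covers the whole interval \([-1,1]\):

```python
import math

def psi(c):
    # psi(c) = 12 * phi(c) = 4 * ((sqrt(2)/2 - c)_+)^3 + (3c - 1)(1 - c)^3
    plus = max(math.sqrt(2) / 2 - c, 0.0)
    return 4 * plus ** 3 + (3 * c - 1) * (1 - c) ** 3

# psi(0) = sqrt(2) - 1 (the constant term) and psi >= 0 on a fine grid
assert abs(psi(0.0) - (math.sqrt(2) - 1)) < 1e-12
assert min(psi(k / 1000) for k in range(1001)) >= 0.0
```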

Remark 3

Let us note that in the above proof the function \(\frac{1}{6}\varphi \) is in fact the Peano kernel of E. So Bojanić and Roulier’s result (Theorem 1) used in the proof could be replaced by the Peano Kernel Theorem for 3-convex functions \(g\in \mathcal {C}^4[-1,1]\). Then, by Bessenyei and Páles’ smoothing technique (mentioned in the Introduction, cf. [1]), the result extends to arbitrary 3-convex functions \(g:[-1,1]\rightarrow \mathbb {R}.\) This alternative approach is used in the recent paper [8] by Komisarski and the present author.

Remark 4

Because of inequality (10), the best guaranteed approximation of \(\mathcal {I}[g;-1,1]\) (for a 3-convex function \(g:[-1,1]\rightarrow \mathbb {R}\)) is the midpoint of the enclosing interval:

$$\begin{aligned} Q[g;-1,1]:= & {} \frac{1}{2}\mathcal {C}[g;-1,1]+\frac{1}{2}\frac{\mathcal {C}[g;-1,1]+\mathcal {S}[g;-1,1]}{2}\\= & {} \frac{3}{4}\mathcal {C}[g;-1,1]+\frac{1}{4}\mathcal {S}[g;-1,1]. \end{aligned}$$

However, it is impossible to further improve inequality (9) by the procedure of halving. Indeed, consider two 3-convex functions: \(f(x)=x^3_+\) and \(g(x)=\bigl (x-\frac{1}{2}\bigr )^3_+.\) Then \(Q[f;-1,1]-\mathcal {I}[f;-1,1]=\frac{3\sqrt{2}-4}{24}>0\), while \(Q[g;-1,1]-\mathcal {I}[g;-1,1]=\frac{5(12\sqrt{2}-17)}{192}<0.\) So, neither of the inequalities \(\mathcal {C}[g;-1,1]\leqslant \mathcal {I}[g;-1,1]\leqslant Q[g;-1,1]\) or \(Q[g;-1,1]\leqslant \mathcal {I}[g;-1,1]\leqslant \frac{\mathcal {C}[g;-1,1]+\mathcal {S}[g;-1,1]}{2}\) holds for all 3-convex functions \(g:[-1,1]\rightarrow \mathbb {R}.\)
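The two sign computations in Remark 4 can be verified numerically (the helper names are ours; the exact integrals \(\frac{1}{4}\) and \(\frac{1}{64}\) follow by direct integration):

```python
import math

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def chebyshev3(f, a, b):
    r = math.sqrt(2)
    return (b - a) / 3 * (f((2 + r) / 4 * a + (2 - r) / 4 * b)
                          + f((a + b) / 2)
                          + f((2 - r) / 4 * a + (2 + r) / 4 * b))

def q(f, a, b):
    # the midpoint rule Q = (3 C + S) / 4 from Remark 4
    return 0.75 * chebyshev3(f, a, b) + 0.25 * simpson(f, a, b)

f = lambda x: max(x, 0.0) ** 3         # x_+^3: integral over [-1,1] is 1/4
g = lambda x: max(x - 0.5, 0.0) ** 3   # (x - 1/2)_+^3: integral is 1/64
assert abs(q(f, -1, 1) - 0.25 - (3 * math.sqrt(2) - 4) / 24) < 1e-12   # > 0
assert abs(q(g, -1, 1) - 1 / 64 - 5 * (12 * math.sqrt(2) - 17) / 192) < 1e-12  # < 0
```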

Now we pass to the arbitrary interval [ab]. Below we establish an inequality of Hermite–Hadamard type.

Corollary 5

If \(f:[a,b]\rightarrow \mathbb {R}\) is 3-convex, then

$$\begin{aligned} \mathcal {C}[f;a,b]\leqslant \mathcal {I}[f;a,b]\leqslant \frac{\mathcal {C}[f;a,b]+\mathcal {S}[f;a,b]}{2}. \end{aligned}$$

Proof

Let \(f:[a,b]\rightarrow \mathbb {R}\) be a 3-convex function. We proceed as in Sect. 1.3. The function \(g:[-1,1]\rightarrow \mathbb {R}\) given by (7) is 3-convex on \([-1,1]\). We apply Theorem 2 to g. The substitution (6) gives the desired conclusion. \(\square \)

Let

$$\begin{aligned} Q[f;a,b]=\frac{3}{4}\mathcal {C}[f;a,b]+\frac{1}{4}\mathcal {S}[f;a,b]. \end{aligned}$$
(12)

Below we give the inequality which plays the key role in the next section. It is an immediate consequence of Corollary 5 and the above definition of \(Q[f;a,b]\).

Corollary 6

If \(f:[a,b]\rightarrow \mathbb {R}\) is 3-convex, then

$$\begin{aligned} \bigl |Q[f;a,b]-\mathcal {I}[f;a,b]\bigr |\leqslant \frac{1}{4}\bigl (\mathcal {S}[f;a,b]-\mathcal {C}[f;a,b]\bigr ). \end{aligned}$$

3 An adaptive method and its stopping criterion

Let the quadrature rule Q be defined by (12). As described in the Introduction (cf. (1)), we define the composite rules \(\mathcal {C}_n[f;a,b], Q_n[f;a,b]\) and \(\mathcal {S}_n[f;a,b]\).

Theorem 7

If \(f:[a,b]\rightarrow \mathbb {R}\) is either a 3-convex or a 3-concave function, then

$$\begin{aligned} \bigl |Q_n[f;a,b]-\mathcal {I}[f;a,b]\bigr | \leqslant \frac{1}{4}\bigl |\mathcal {S}_n[f;a,b]-\mathcal {C}_n[f;a,b]\bigr |. \end{aligned}$$
(13)

Proof

First, suppose that \(f:[a,b]\rightarrow \mathbb {R}\) is 3-convex. Considering the construction of \(\mathcal {C}_n[f;a,b]\), \(Q_n[f;a,b]\) and \(\mathcal {S}_n[f;a,b]\), and applying Corollary 6, we arrive at

$$\begin{aligned} \bigl |Q_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |&=\biggl |\sum _{k=0}^{n-1} Q[f;x_k,x_{k+1}]-\mathcal {I}[f;x_k,x_{k+1}]\biggr |\\&\leqslant \sum _{k=0}^{n-1} \bigl |Q[f;x_k,x_{k+1}]-\mathcal {I}[f;x_k,x_{k+1}]\bigr |\\&\leqslant \frac{1}{4}\sum _{k=0}^{n-1}\bigl (\mathcal {S}[f;x_k,x_{k+1}]-\mathcal {C}[f;x_k,x_{k+1}]\bigr )\\&=\frac{1}{4}\bigl (\mathcal {S}_n[f;a,b]-\mathcal {C}_n[f;a,b]\bigr ). \end{aligned}$$

If f is 3-concave, we then apply the above inequality to the 3-convex function \(-f\) and use the linearity of the operators \(Q_n\), \(\mathcal {I}\), \(\mathcal {C}_n\), \(\mathcal {S}_n\):

$$\begin{aligned} \bigl |Q_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |\leqslant \frac{1}{4}\bigl (\mathcal {C}_n[f;a,b]-\mathcal {S}_n[f;a,b]\bigr ). \end{aligned}$$

In both cases we obtain inequality (13) and the proof is complete. \(\square \)

Our next result is derived trivially from Theorem 7.

Corollary 8

Let \(\varepsilon >0.\) If \(f:[a,b]\rightarrow \mathbb {R}\) is either a 3-convex or a 3-concave function and

$$\begin{aligned} \bigl |\mathcal {S}_n[f;a,b]-\mathcal {C}_n[f;a,b]\bigr |<4\varepsilon , \end{aligned}$$

then \(\bigl |Q_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |<\varepsilon \).

The adaptive method we propose proceeds by the following steps.

  1. Take a function \(f:[a,b]\rightarrow \mathbb {R}\) which is either 3-convex or 3-concave.

  2. Fix a tolerance \(\varepsilon >0\) and take \(n=1\).

  3. Increase n until \(\bigl |\mathcal {S}_n[f;a,b]-\mathcal {C}_n[f;a,b]\bigr |<4\varepsilon \).

  4. By Corollary 8, we get \(\bigl |Q_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |<\varepsilon ,\) so we take \(\mathcal {I}[f;a,b]\approx Q_n[f;a,b].\)
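The steps above can be sketched in Python (the function names are ours):

```python
import math

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def chebyshev3(f, a, b):
    r = math.sqrt(2)
    return (b - a) / 3 * (f((2 + r) / 4 * a + (2 - r) / 4 * b)
                          + f((a + b) / 2)
                          + f((2 - r) / 4 * a + (2 + r) / 4 * b))

def composite(rule, f, a, b, n):
    h = (b - a) / n
    return sum(rule(f, a + k * h, a + (k + 1) * h) for k in range(n))

def adaptive_q(f, a, b, eps, n_max=10**6):
    # increase n until |S_n - C_n| < 4*eps, then return (Q_n, n); by
    # Corollary 8 the result is within eps of the integral for f
    # either 3-convex or 3-concave
    for n in range(1, n_max + 1):
        s_n = composite(simpson, f, a, b, n)
        c_n = composite(chebyshev3, f, a, b, n)
        if abs(s_n - c_n) < 4 * eps:
            return 0.75 * c_n + 0.25 * s_n, n   # Q_n = (3 C_n + S_n) / 4
    raise RuntimeError("tolerance not reached")
```

For \(f(x)=1/x\) on [1, 2] with \(\varepsilon =10^{-4}\) this stops already at \(n=2\) and returns a value within \(\varepsilon \) of \(\ln 2\).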

Then the inequality

$$\begin{aligned} \bigl |\mathcal {S}_n[f;a,b]-\mathcal {C}_n[f;a,b]\bigr |<4\varepsilon \end{aligned}$$
(14)

constitutes the stopping criterion for our method. If \(\varepsilon >0\) is arbitrarily chosen, then this inequality is fulfilled for n large enough and the algorithm is stopped. Indeed, by a standard argument we have \(\lim \limits _{n\rightarrow \infty }\bigl |\mathcal {S}_n[f;a,b]-\mathcal {C}_n[f;a,b]\bigr |=0.\)

Remark 9

The stopping criterion (14) ensures the desired approximation of the integral for all 3-convex and 3-concave functions. Nevertheless, it may lead to incorrect results for those beyond that class. In order to see this, let us consider \(f(x)=\bigl ||x|-\frac{1}{2}\bigr |\) for \(x\in [-1,1]\). Its graph is a polyline passing through the points \(\bigl (-1,\frac{1}{2}\bigr )\), \(\bigl (-\frac{1}{2},0\bigr )\), \(\bigl (0,\frac{1}{2}\bigr )\), \(\bigl (\frac{1}{2},0\bigr )\) and \(\bigl (1,\frac{1}{2}\bigr )\). Take \(n=1\). Because \(\mathcal {C}_1[f;-1,1]=\mathcal {C}[f;-1,1]\), and similarly with the quadratures \(\mathcal {S}_1\) and \(Q_1\), we have

$$\begin{aligned} \mathcal {C}[f;-1,1]=\frac{2\sqrt{2}-1}{3},\;\mathcal {S}[f;-1,1]=1, \;Q[f;-1,1]=\frac{\sqrt{2}}{2},\;\mathcal {I}[f;-1,1]=\frac{1}{2} \end{aligned}$$

and, as a consequence,

$$\begin{aligned} \bigl |Q[f;-1,1]-\mathcal {I}[f;-1,1]\bigr |=\frac{\sqrt{2}-1}{2} >\frac{2-\sqrt{2}}{6}=\frac{1}{4}\bigl |\mathcal {S}[f;-1,1]-\mathcal {C}[f;-1,1]\bigr | \end{aligned}$$

so inequality (13) from Theorem 7 fails to hold.
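The counterexample of Remark 9 can be reproduced numerically (helper names are ours):

```python
import math

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def chebyshev3(f, a, b):
    r = math.sqrt(2)
    return (b - a) / 3 * (f((2 + r) / 4 * a + (2 - r) / 4 * b)
                          + f((a + b) / 2)
                          + f((2 - r) / 4 * a + (2 + r) / 4 * b))

f = lambda x: abs(abs(x) - 0.5)   # neither 3-convex nor 3-concave
S = simpson(f, -1, 1)             # = 1
C = chebyshev3(f, -1, 1)          # = (2*sqrt(2) - 1) / 3
Q = 0.75 * C + 0.25 * S           # = sqrt(2) / 2
I = 0.5                           # exact integral (four congruent triangles)
assert abs(Q - I) > (S - C) / 4   # inequality (13) fails for n = 1
```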

4 Numerical experiments

To demonstrate our adaptive method for 3-convex functions in action, the author created a computer program in the Python programming language. It compares this method with three other ones:

  1. The method of Rowland and Varol based on the approximation \(\mathcal {I}[f;a,b]\approx \mathcal {S}_{2n}[f;a,b]\) with the stopping criterion (3).

  2. The classical Simpson’s adaptive method \(\mathcal {I}[f;a,b]\approx \mathcal {S}_n[f;a,b]\) for \(f\in \mathcal {C}^4[a,b]\) with n given by the error bound (2), i.e. the smallest \(n\in \mathbb {N}\) satisfying the inequality

     $$\begin{aligned} \frac{(b-a)^5}{2880n^4} \sup \bigl \{\bigl |f^{(4)}(x)\bigr |:x\in [a,b]\bigr \}<\varepsilon . \end{aligned}$$
     (15)

  3. Chebyshev’s adaptive method \(\mathcal {I}[f;a,b]\approx \mathcal {C}_n[f;a,b]\) for \(f\in \mathcal {C}^4[a,b]\) with the smallest \(n\in \mathbb {N}\) satisfying the inequality

     $$\begin{aligned} \bigl |\mathcal {C}_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |\leqslant \frac{(b-a)^5}{11520n^4}\sup \bigl \{\bigl |f^{(4)}(x)\bigr |:x\in [a,b]\bigr \} <\varepsilon . \end{aligned}$$
     (16)

     The above error bound can easily be obtained by the substitution (6), if one knows the error bound of the simple Chebyshev quadrature (cf. e.g. [13]):

     $$\begin{aligned} \bigl |\mathcal {C}[g;-1,1]-\mathcal {I}[g;-1,1]\bigr |\leqslant \frac{1}{360}\sup \bigl \{\bigl |g^{(4)}(x)\bigr |:x\in [-1,1]\bigr \} \end{aligned}$$

     for \(g\in \mathcal {C}^4[-1,1]\).

Below we present the results of two experiments performed with our program.

Experiment 1. We start with the integral \( \int \limits _1^2 \frac{\mathrm {d}x}{x}=\ln 2. \) The function \(f(x)=\frac{1}{x}\) is 3-convex, as \(f^{(4)}(x)=\frac{24}{x^5}>0\) for \(x\in [1,2]\). In the table below we give the numbers of subdivisions of the interval [1, 2] needed to approximate the integral with the desired tolerance \(\varepsilon \in \{10^{-1}, 10^{-2},\dots ,10^{-16}\}\). In each case our method \(Q_n[f;1,2]\) requires the lowest number of subdivisions, almost three times fewer than Rowland and Varol’s method \(\mathcal {S}_{2n}[f;1,2]\) (the highest number in the experiment). The classical methods \(\mathcal {S}_n[f;1,2]\) and \(\mathcal {C}_n[f;1,2]\) lie in between.

\(f(x)=\frac{1}{x}\). The columns \(Q_n\) and \(\mathcal {S}_{2n}\) use the stopping criteria (14) and (3); the columns \(\mathcal {S}_n\) and \(\mathcal {C}_n\) use the error bounds (15) and (16).

| Tolerance | \(Q_n[f;1,2]\) | \(\mathcal {S}_{2n}[f;1,2]\) | \(\mathcal {S}_n[f;1,2]\) | \(\mathcal {C}_{n}[f;1,2]\) |
|---|---|---|---|---|
| \(10^{-1}\) | 1 | 2 | 1 | 1 |
| \(10^{-2}\) | 1 | 2 | 1 | 1 |
| \(10^{-3}\) | 1 | 4 | 2 | 2 |
| \(10^{-4}\) | 2 | 4 | 4 | 3 |
| \(10^{-5}\) | 3 | 8 | 6 | 4 |
| \(10^{-6}\) | 5 | 14 | 10 | 7 |
| \(10^{-7}\) | 9 | 24 | 17 | 13 |
| \(10^{-8}\) | 16 | 42 | 31 | 22 |
| \(10^{-9}\) | 28 | 74 | 54 | 38 |
| \(10^{-10}\) | 50 | 132 | 96 | 68 |
| \(10^{-11}\) | 89 | 234 | 170 | 121 |
| \(10^{-12}\) | 158 | 414 | 303 | 214 |
| \(10^{-13}\) | 280 | 736 | 538 | 380 |
| \(10^{-14}\) | 498 | 1310 | 956 | 676 |
| \(10^{-15}\) | 884 | 2328 | 1700 | 1202 |
| \(10^{-16}\) | 1572 | 4138 | 3022 | 2137 |

Experiment 2. Next we experiment with the 3-convex function \(f(x)=\mathrm {e}^x\) over the intervals [0, b] with \(b\in \{1,2,\dots ,10\}.\) We choose the tolerance \(\varepsilon =10^{-8}\). The conclusion about the number of subdivisions is exactly the same as in Experiment 1.

\(f(x)=\mathrm {e}^x\) on [0, b]. The columns \(Q_n\) and \(\mathcal {S}_{2n}\) use the stopping criteria (14) and (3); the columns \(\mathcal {S}_n\) and \(\mathcal {C}_n\) use the error bounds (15) and (16).

| The right endpoint b | \(Q_n[f;0,b]\) | \(\mathcal {S}_{2n}[f;0,b]\) | \(\mathcal {S}_n[f;0,b]\) | \(\mathcal {C}_{n}[f;0,b]\) |
|---|---|---|---|---|
| 1 | 12 | 32 | 18 | 13 |
| 2 | 33 | 86 | 54 | 38 |
| 3 | 64 | 170 | 115 | 81 |
| 4 | 111 | 292 | 210 | 149 |
| 5 | 178 | 470 | 357 | 252 |
| 6 | 275 | 722 | 575 | 407 |
| 7 | 412 | 1082 | 895 | 633 |
| 8 | 604 | 1588 | 1358 | 960 |
| 9 | 872 | 2294 | 2019 | 1428 |
| 10 | 1244 | 3274 | 2958 | 2092 |