Abstract
We introduce a new quadrature rule based on Chebyshev’s and Simpson’s rules. The corresponding composite rule induces an adaptive method of approximate integration. We propose a stopping criterion for this method and prove that if it is satisfied for a function which is either 3-convex or 3-concave, then the integral is approximated within the prescribed tolerance. We also give an example of a function outside this class which satisfies our criterion even though the approximation error exceeds the assumed tolerance. Numerical experiments (performed with a computer program created by the author) show that integration of 3-convex functions with our method requires considerably fewer steps than the adaptive Simpson’s method with a classical stopping criterion. As a tool in our investigations we present a certain inequality of Hermite–Hadamard type.
1 Introduction
1.1 Quadratures and adaptive methods of numerical integration
The numerical integration of a function \(f:[a,b]\rightarrow \mathbb {R}\) is often performed by applying the quadrature rule:
$$\begin{aligned} \mathcal {I}[f;a,b]:=\int _a^b f(x)\,\mathrm {d}x\approx Q[f;a,b]:=(b-a)\sum _{i=1}^m w_i\,f\bigl ((1-\lambda _i)a+\lambda _i b\bigr ), \end{aligned}$$
where the coefficients \(\lambda _1,\dots ,\lambda _m\in [0,1]\) and the weights \(w_1,\dots ,w_m\in \mathbb {R}\) (usually positive) are fixed. For example, the following assignment gives the familiar simple Simpson’s rule:
$$\begin{aligned} \mathcal {S}[f;a,b]:=\frac{b-a}{6}\Bigl (f(a)+4f\Bigl (\frac{a+b}{2}\Bigr )+f(b)\Bigr ). \end{aligned}$$
In this paper we also deal with the simple Chebyshev’s rule
$$\begin{aligned} \mathcal {C}[f;a,b]:=\frac{b-a}{3}\biggl (f\Bigl (\frac{a+b}{2}-\frac{\sqrt{2}}{4}(b-a)\Bigr )+f\Bigl (\frac{a+b}{2}\Bigr )+f\Bigl (\frac{a+b}{2}+\frac{\sqrt{2}}{4}(b-a)\Bigr )\biggr ). \end{aligned}$$
For the sake of the record, let us also list the simple two-point Gauss rule
$$\begin{aligned} \mathcal {G}[f;a,b]:=\frac{b-a}{2}\biggl (f\Bigl (\frac{a+b}{2}-\frac{\sqrt{3}}{6}(b-a)\Bigr )+f\Bigl (\frac{a+b}{2}+\frac{\sqrt{3}}{6}(b-a)\Bigr )\biggr ). \end{aligned}$$
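These three simple rules can be sketched in Python (the language the author used for the experiments in Sect. 4); the node placement follows the standard Simpson, three-point Chebyshev and two-point Gauss quadratures, transplanted from \([-1,1]\) to [a, b]:

```python
import math

def simpson(f, a, b):
    # Simple Simpson's rule: endpoints and midpoint, weights 1/6, 4/6, 1/6.
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def chebyshev(f, a, b):
    # Three-point Chebyshev rule: equal weights, nodes 0 and +/- sqrt(2)/2
    # on [-1, 1], transplanted to [a, b].
    m, h = (a + b) / 2, (b - a) / 2
    t = math.sqrt(2) / 2
    return (b - a) / 3 * (f(m - t * h) + f(m) + f(m + t * h))

def gauss2(f, a, b):
    # Two-point Gauss rule: nodes +/- 1/sqrt(3) on [-1, 1].
    m, h = (a + b) / 2, (b - a) / 2
    t = 1 / math.sqrt(3)
    return h * (f(m - t * h) + f(m + t * h))

# All three simple rules are exact for cubic polynomials:
for rule in (simpson, chebyshev, gauss2):
    assert abs(rule(lambda x: x**3, 0.0, 1.0) - 0.25) < 1e-12
```

Exactness on polynomials of degree at most 3 is the property exploited throughout Sect. 2.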
Every quadrature rule induces the so-called composite rule, which is created by subdividing the interval [a, b] into n subintervals with equally spaced end-points:
$$\begin{aligned} x_k:=a+k\,\frac{b-a}{n},\quad k\in \{0,1,\dots ,n\}, \end{aligned}$$
and if \(\mathcal {I}[f;x_k,x_{k+1}]\approx Q[f;x_k,x_{k+1}]\), then we sum over \(k\in \{0,\dots ,n-1\}\) to arrive at the rule:
$$\begin{aligned} Q_n[f;a,b]:=\sum _{k=0}^{n-1}Q[f;x_k,x_{k+1}]\approx \mathcal {I}[f;a,b]. \end{aligned}$$(1)
In this way we get Simpson’s and Chebyshev’s composite rules, denoted respectively \(\mathcal {S}_n\) and \(\mathcal {C}_n.\) Let \(\varepsilon >0\) be a fixed tolerance. In most cases it is not enough to approximate the integral by a simple quadrature rule, since \(\bigl |Q[f;a,b]-\mathcal {I}[f;a,b]\bigr |>\varepsilon \). In order to achieve the desired precision we apply the composite rule \(Q_n\) with n large enough to have
$$\begin{aligned} \bigl |Q_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |<\varepsilon . \end{aligned}$$
Such a method of approximation is called adaptive. How do we find such an n? If the function f is regular enough, the error terms of the simple quadratures are known and the error bounds can easily be found. For instance, for the composite Simpson’s rule with \(f\in \mathcal {C}^4[a,b]\) the following estimate is widely known (cf. e.g. [4, 7]):
$$\begin{aligned} \bigl |\mathcal {S}_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |\leqslant \frac{(b-a)^5}{2880n^4}\sup \bigl \{\bigl |f^{(4)}(x)\bigr |:x\in [a,b]\bigr \}. \end{aligned}$$(2)
If the term on the right-hand side is less than \(\varepsilon \), then a suitable n can easily be determined. Then, of course,
$$\begin{aligned} \bigl |\mathcal {S}_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |<\varepsilon . \end{aligned}$$
Nevertheless, this approach can be complicated by the difficulty of estimating the maximum of \(|f^{(4)}|\), or even impossible if f is not regular enough.
Another approach to adaptive integration is based on successive incrementation of n. Then the natural question arises how large n should be to achieve the desired precision, i.e., when the incrementation can be stopped. Various stopping criteria are known in the literature. Let us briefly mention two of them for Simpson’s method. The first one was given by Lyness [9]; it is based on the so-called local errors committed on the individual subintervals.
Another stopping criterion was proposed by Clenshaw and Curtis [3] and was investigated by Rowland and Varol [12], who proved that the stopping inequality
$$\begin{aligned} \bigl |\mathcal {S}_{2n}[f;a,b]-\mathcal {I}[f;a,b]\bigr |\leqslant \bigl |\mathcal {S}_{2n}[f;a,b]-\mathcal {S}_n[f;a,b]\bigr | \end{aligned}$$(3)
is valid for all \(n\in \mathbb {N}\) and for all functions \(f\in \mathcal {C}^4[a,b]\) with \(f^{(4)}\) of a constant sign. This means that if \(\bigl |\mathcal {S}_{2n}[f;a,b]-\mathcal {S}_n[f;a,b]\bigr |<\varepsilon \), then also \(\bigl |\mathcal {S}_{2n}[f;a,b]-\mathcal {I}[f;a,b]\bigr |<\varepsilon ,\) so we arrive at the approximation of the integral by \(\mathcal {S}_{2n}[f;a,b].\) This is the starting point for our considerations. It is worth mentioning that none of these stopping criteria is valid for all integrable functions. Indeed, there are functions satisfying these criteria for which the error of the approximation of the integral is greater than the tolerance \(\varepsilon .\) Notice that an extensive survey (not only) of stopping criteria was given by Gonnet [5] (this interesting study is also available on the arXiv repository, arXiv:1003.4629).
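As an illustration (a hypothetical sketch, not the author’s program), one natural realization of the Rowland–Varol method doubles n until the stopping inequality is met:

```python
import math

def composite_simpson(f, a, b, n):
    # Composite Simpson's rule over n equal subintervals.
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        xk, xk1 = a + k * h, a + (k + 1) * h
        total += (xk1 - xk) / 6 * (f(xk) + 4 * f((xk + xk1) / 2) + f(xk1))
    return total

def rowland_varol(f, a, b, eps):
    # Double n until |S_2n - S_n| < eps; for f with f'''' of constant
    # sign this guarantees |S_2n - I| < eps by the stopping inequality.
    n = 1
    s_n = composite_simpson(f, a, b, n)
    while True:
        s_2n = composite_simpson(f, a, b, 2 * n)
        if abs(s_2n - s_n) < eps:
            return s_2n
        n, s_n = 2 * n, s_2n

approx = rowland_varol(lambda x: 1 / x, 1.0, 2.0, 1e-8)
assert abs(approx - math.log(2)) < 1e-8
```

For general integrands without a sign condition on \(f^{(4)}\) this stopping test is only a heuristic.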
1.2 Convex functions of higher order
Hopf [6] proposed to consider functions with non-negative divided differences, and subsequently Popoviciu [10, 11] investigated the properties of functions in this class. He termed a function f n-convex if
$$\begin{aligned}{}[x_0,x_1,\dots ,x_{n+1};f]\geqslant 0\quad \text {for all }a\leqslant x_0<x_1<\dots <x_{n+1}\leqslant b, \end{aligned}$$
where the divided differences are defined recursively by \([x_0;f]=f(x_0)\) and
$$\begin{aligned}{}[x_0,\dots ,x_k;f]=\frac{[x_1,\dots ,x_k;f]-[x_0,\dots ,x_{k-1};f]}{x_k-x_0}. \end{aligned}$$
He stated many basic properties of such functions. Among others, he proved that if f is n-convex on [a, b], then it is \(n-1\) times differentiable on (a, b). However, the function \(x\mapsto x^n_+\) (where \(x_+=\max \{x,0\}\)) is n-convex on \(\mathbb {R}\) but not n times differentiable at 0. He also showed [10, p. 18] that the composition of an n-convex function with an affine function remains n-convex on the suitable interval whenever n is an odd number. Another result of Popoviciu tells us that if f is \((n+1)\)-times differentiable on [a, b] with \(f^{(n+1)}\geqslant 0\), then f is n-convex on [a, b]. Of course, 1-convex functions are convex in the usual sense. A function f is n-concave if \(-f\) is n-convex. Thus functions with \(f^{(4)}\) of a constant sign (see the previous subsection) are necessarily either 3-convex or 3-concave. One could ask whether the stopping inequality (3) holds for all 3-convex and 3-concave functions. The positive answer can be derived from the paper [1] by Bessenyei and Páles, who developed the smoothing technique for n-convex functions. In Sect. 2 we apply, as a tool, the spline approximation of n-convex functions. Below we recall a result due to Bojanić and Roulier [2].
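The recursive definition of divided differences translates directly into code. The following hypothetical helper (our naming) tests n-convexity numerically for n = 3 by sampling divided differences over five nodes; it is a sampled necessary condition, not a proof:

```python
import itertools

def divided_difference(xs, f):
    # [x0, ..., xk; f] via the recursive definition.
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(xs[1:], f)
            - divided_difference(xs[:-1], f)) / (xs[-1] - xs[0])

def looks_3_convex(f, a, b, samples=12):
    # Sampled condition for 3-convexity: all divided differences over
    # 5 nodes (order n + 1 = 4) are non-negative, up to rounding.
    pts = [a + i * (b - a) / (samples - 1) for i in range(samples)]
    return all(divided_difference(list(c), f) >= -1e-9
               for c in itertools.combinations(pts, 5))

assert looks_3_convex(lambda x: 1 / x, 1.0, 2.0)   # f''''(x) = 24/x^5 > 0
assert not looks_3_convex(lambda x: -x**4, 0.0, 1.0)
```

For \(f(x)=x^4\) every five-point divided difference equals the leading coefficient 1, so \(-x^4\) fails the test, as expected.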
Theorem 1
Every n-convex function mapping [a, b] into \(\mathbb {R}\) can be uniformly approximated on [a, b] by (n-convex) spline functions of the form
$$\begin{aligned} x\mapsto p(x)+\sum _{i=1}^m a_i(x-c_i)^n_+, \end{aligned}$$
where p is a polynomial of degree at most n, \(a_i>0\), \(c_i\in [a,b]\) (\(i=1,\dots ,m\)).
1.3 Inequalities between simple quadratures
Bessenyei and Páles proved that the inequality (of Hermite–Hadamard type)
$$\begin{aligned} \mathcal {I}[f;a,b]\leqslant \mathcal {S}[f;a,b] \end{aligned}$$(4)
holds for any 3-convex function \(f:[a,b]\rightarrow \mathbb {R}\) (see [1, Corollary 3]). The author reproved it in [14, Proposition 12] using the support technique. He has also established an inequality connected with the simple Chebyshev’s rule: if \(g:[-1,1]\rightarrow \mathbb {R}\) is 3-convex, then
$$\begin{aligned} \mathcal {C}[g;-1,1]\leqslant \mathcal {I}[g;-1,1] \end{aligned}$$(5)
[14, Proposition 14]. Applying the affine substitution
$$\begin{aligned} t=\frac{2x-a-b}{b-a},\quad x\in [a,b], \end{aligned}$$(6)
we transform the interval [a, b] onto \([-1,1]\). If f is 3-convex on [a, b], then \(g:[-1,1]\rightarrow \mathbb {R}\) given by
$$\begin{aligned} g(t)=f\Bigl (\frac{a+b}{2}+\frac{b-a}{2}\,t\Bigr ) \end{aligned}$$(7)
is 3-convex on \([-1,1]\) (see the remark in the previous subsection). Then (5) together with (4) lead immediately to the inequality
$$\begin{aligned} \mathcal {C}[f;a,b]\leqslant \mathcal {I}[f;a,b]\leqslant \mathcal {S}[f;a,b], \end{aligned}$$(8)
which holds true for any 3-convex function \(f:[a,b]\rightarrow \mathbb {R}\).
1.4 The aim of the paper
In Sect. 2 we improve the part of inequality (8) which reads
$$\begin{aligned} \mathcal {I}[f;a,b]\leqslant \mathcal {S}[f;a,b]. \end{aligned}$$(9)
This allows us to introduce in Sect. 3 a certain adaptive method and give its stopping criterion. In Sect. 4 we compare it to Rowland and Varol’s adaptive method with stopping criterion (3). It is to be noted that our approach requires considerably fewer subdivisions than that of Rowland and Varol.
2 The inequality of Hermite–Hadamard type
The inequality we establish below improves inequality (9). We start with the interval \([-1,1]\).
Theorem 2
If \(g:[-1,1]\rightarrow \mathbb {R}\) is 3-convex, then
$$\begin{aligned} \mathcal {C}[g;-1,1]\leqslant \mathcal {I}[g;-1,1]\leqslant \frac{\mathcal {C}[g;-1,1]+\mathcal {S}[g;-1,1]}{2}. \end{aligned}$$(10)
Proof
The inequality on the left is nothing but (5). We prove the inequality on the right. In order to simplify the notation, let us skip the endpoints of the interval \([-1,1]\): \(\mathcal {I}[g]=\mathcal {I}[g;-1,1]\) and so on. Let
$$\begin{aligned} E[g]:=\frac{\mathcal {C}[g]+\mathcal {S}[g]}{2}-\mathcal {I}[g]. \end{aligned}$$
We shall prove that \(E[g]\geqslant 0\) for any 3-convex function \(g:[-1,1]\rightarrow \mathbb {R}.\) The operators \(\mathcal {C},\mathcal {I},\mathcal {S}\) are linear and \(\mathcal {C}[p]=\mathcal {I}[p]=\mathcal {S}[p]\) for polynomials of degree not greater than 3. Let \((g_n)\) be a sequence of 3-convex spline functions (given by Theorem 1) converging uniformly to g. Then \(\mathcal {I}[g_n]\rightarrow \mathcal {I}[g]\) as \(n\rightarrow \infty \) and, by the pointwise convergence, \(\mathcal {C}[g_n]\rightarrow \mathcal {C}[g]\) and \(\mathcal {S}[g_n]\rightarrow \mathcal {S}[g]\), whence \(E[g_n]\rightarrow E[g]\) as \(n\rightarrow \infty \). Hence by Theorem 1 it is enough to check that for all \(c\in [-1,1]\)
$$\begin{aligned} \varphi (c):=E[f_c]\geqslant 0, \end{aligned}$$(11)
where \(f_c(x)=(x-c)^3_+.\) First we claim that \(\varphi \) is an even function. Define also \(g_c(x)=(c-x)^3_+.\) Observe that \(f_c(x)-g_c(x)=(x-c)^3\) is a polynomial of degree 3, which implies that \(E[f_c-g_c]=0.\) Therefore
$$\begin{aligned} \varphi (c)=E[f_c]=E[g_c]. \end{aligned}$$
Because of the form of the operators \(\mathcal {C},\,\mathcal {I},\,\mathcal {S}\) (their nodes and weights are symmetric with respect to the origin), the operator E is symmetric, that is, \(E[g]=E[\check{g}]\), where \(\check{g}(x):=g(-x)\). Hence
$$\begin{aligned} E[g_c]=E\bigl [x\mapsto g_c(-x)\bigr ]. \end{aligned}$$
Since \(g_c(-x)=(x+c)^3_+=\bigl (x-(-c)\bigr )^3_+=f_{-c}(x),\) we arrive at
$$\begin{aligned} \varphi (c)=E[g_c]=E[f_{-c}]=\varphi (-c) \end{aligned}$$
and \(\varphi \) is an even function on \([-1,1]\). Consequently, it is enough to prove (11) for all \(c\in [0,1].\) For this, let us write \(\varphi (c)\) explicitly:
$$\begin{aligned} \varphi (c)=\frac{1}{3}\Bigl (f_c\bigl (-\tfrac{\sqrt{2}}{2}\bigr )+f_c(0)+f_c\bigl (\tfrac{\sqrt{2}}{2}\bigr )\Bigr )+\frac{1}{6}\bigl (f_c(-1)+4f_c(0)+f_c(1)\bigr )-\int _{-1}^1 f_c(x)\,\mathrm {d}x. \end{aligned}$$
If \(c\in [0,1]\), then \(f_c(-1)=f_c\bigl (-\frac{\sqrt{2}}{2}\bigr )=f_c(0)=0\), while \(f_c(1)=(1-c)^3\), \(f_c\bigl (\frac{\sqrt{2}}{2}\bigr )=\bigl (\frac{\sqrt{2}}{2}-c\bigr )^3_+\) and
$$\begin{aligned} \int _{-1}^1 f_c(x)\,\mathrm {d}x=\int _c^1(x-c)^3\,\mathrm {d}x=\frac{(1-c)^4}{4}. \end{aligned}$$
Finally, if \(c\in [0,1]\), then
$$\begin{aligned} \varphi (c)=\frac{1}{3}\Bigl (\frac{\sqrt{2}}{2}-c\Bigr )^3_+ +\frac{1}{6}(1-c)^3-\frac{1}{4}(1-c)^4. \end{aligned}$$
Multiplying both sides by 12 we get
$$\begin{aligned} \psi (c):=12\varphi (c)=4\Bigl (\frac{\sqrt{2}}{2}-c\Bigr )^3_+ +2(1-c)^3-3(1-c)^4. \end{aligned}$$
Obviously \(\psi (c)\geqslant 0\) as long as \(\frac{\sqrt{2}}{2}\leqslant c\leqslant 1,\) since then \(\psi (c)=2(1-c)^3-3(1-c)^4=(1-c)^3(3c-1)\geqslant 0.\) Let \(0\leqslant c<\frac{\sqrt{2}}{2}.\) Then
$$\begin{aligned} \psi (c)=4\Bigl (\frac{\sqrt{2}}{2}-c\Bigr )^3+2(1-c)^3-3(1-c)^4 \end{aligned}$$
and
$$\begin{aligned} \psi '(c)=-12\Bigl (\frac{\sqrt{2}}{2}-c\Bigr )^2-6(1-c)^2+12(1-c)^3=-12\Bigl (\frac{\sqrt{2}}{2}-c\Bigr )^2+6(1-c)^2(1-2c)<0 \end{aligned}$$
for \(0<c<\frac{\sqrt{2}}{2}\) and \(\psi \) decreases on \(\Bigl [0,\frac{\sqrt{2}}{2}\Bigr ).\) Since \(\psi \) is continuous and \(\psi (c)\geqslant 0\) on \(\Bigl [\frac{\sqrt{2}}{2},1\Bigr ),\) we have \(\varphi (c)=\frac{1}{12}\psi (c)\geqslant 0\) on [0, 1], which concludes the proof.
\(\square \)
Remark 3
Let us note that in the above proof the function \(\frac{1}{6}\varphi \) is in fact the Peano kernel of E. So, Bojanić and Roulier’s result in the proof of Theorem 1 could be replaced by the Peano Kernel Theorem for 3-convex functions \(g\in \mathcal {C}^4[-1,1]\). Then, by Bessenyei and Páles’ smoothing technique (mentioned in the Introduction, cf. [1]), it is enough to extend the result to arbitrary 3-convex functions \(g:[-1,1]\rightarrow \mathbb {R}.\) This alternative approach is used in the recent paper [8] by Komisarski and the present author.
Remark 4
Because of inequality (10), the best approximation of \(\mathcal {I}[g;-1,1]\) (for a 3-convex function \(g:[-1,1]\rightarrow \mathbb {R}\)) is the midpoint of its two sides:
$$\begin{aligned} Q[g;-1,1]:=\frac{3\,\mathcal {C}[g;-1,1]+\mathcal {S}[g;-1,1]}{4}. \end{aligned}$$
However, it is impossible to further improve inequality (9) by the procedure of halving. Indeed, consider two 3-convex functions: \(f(x)=x^3_+\) and \(g(x)=\bigl (x-\frac{1}{2}\bigr )^3_+.\) Then \(Q[f;-1,1]-\mathcal {I}[f;-1,1]=\frac{3\sqrt{2}-4}{24}>0\), while \(Q[g;-1,1]-\mathcal {I}[g;-1,1]=\frac{5(12\sqrt{2}-17)}{192}<0.\) So, neither of the inequalities \(\mathcal {C}[g;-1,1]\leqslant \mathcal {I}[g;-1,1]\leqslant Q[g;-1,1]\) or \(Q[g;-1,1]\leqslant \mathcal {I}[g;-1,1]\leqslant \frac{\mathcal {C}[g;-1,1]+\mathcal {S}[g;-1,1]}{2}\) holds for all 3-convex functions \(g:[-1,1]\rightarrow \mathbb {R}.\)
Now we pass to an arbitrary interval [a, b]. Below we establish an inequality of Hermite–Hadamard type.
Corollary 5
If \(f:[a,b]\rightarrow \mathbb {R}\) is 3-convex, then
$$\begin{aligned} \mathcal {C}[f;a,b]\leqslant \mathcal {I}[f;a,b]\leqslant \frac{\mathcal {C}[f;a,b]+\mathcal {S}[f;a,b]}{2}. \end{aligned}$$
Proof
Let \(f:[a,b]\rightarrow \mathbb {R}\) be a 3-convex function. We proceed as in Sect. 1.3. The function \(g:[-1,1]\rightarrow \mathbb {R}\) given by (7) is 3-convex on \([-1,1]\). We apply Theorem 2 to g. The substitution (6) gives the desired conclusion.
Let
$$\begin{aligned} Q[f;a,b]:=\frac{3\,\mathcal {C}[f;a,b]+\mathcal {S}[f;a,b]}{4}. \end{aligned}$$(12)
Below we give the inequality which plays the key role in the next section. It is an immediate consequence of Corollary 5 and the above definition of Q[f; a, b].
Corollary 6
If \(f:[a,b]\rightarrow \mathbb {R}\) is 3-convex, then
$$\begin{aligned} \bigl |Q[f;a,b]-\mathcal {I}[f;a,b]\bigr |\leqslant \frac{1}{4}\bigl (\mathcal {S}[f;a,b]-\mathcal {C}[f;a,b]\bigr ). \end{aligned}$$
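For illustration, here is a Python sketch of the combined rule; Q is taken to be the midpoint \((3\mathcal {C}+\mathcal {S})/4\) of the bounds of Corollary 5 (a normalization consistent with the factor \(4\varepsilon \) in Corollary 8 below), so its error is at most a quarter of the gap \(\mathcal {S}-\mathcal {C}\):

```python
import math

def simpson(f, a, b):
    # Simple Simpson's rule.
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def chebyshev(f, a, b):
    # Three-point Chebyshev rule transplanted to [a, b].
    m, h, t = (a + b) / 2, (b - a) / 2, math.sqrt(2) / 2
    return (b - a) / 3 * (f(m - t * h) + f(m) + f(m + t * h))

def q_rule(f, a, b):
    # Midpoint of the bracket C <= I <= (C + S)/2 valid for 3-convex f,
    # hence |Q - I| <= (S - C)/4.
    return (3 * chebyshev(f, a, b) + simpson(f, a, b)) / 4

# For the 3-convex f(x) = 1/x on [1, 2] the error obeys the bound:
f = lambda x: 1 / x
err = abs(q_rule(f, 1.0, 2.0) - math.log(2))
bound = (simpson(f, 1.0, 2.0) - chebyshev(f, 1.0, 2.0)) / 4
assert err <= bound
```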
3 An adaptive method and its stopping criterion
Let the quadrature rule Q be defined by (12). In the way we described in the Introduction (cf. (1)) we define the composite rules \(\mathcal {C}_n[f;a,b], Q_n[f;a,b]\) and \(\mathcal {S}_n[f;a,b]\).
Theorem 7
If \(f:[a,b]\rightarrow \mathbb {R}\) is either a 3-convex or a 3-concave function, then
$$\begin{aligned} \bigl |Q_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |\leqslant \frac{1}{4}\bigl |\mathcal {S}_n[f;a,b]-\mathcal {C}_n[f;a,b]\bigr |. \end{aligned}$$(13)
Proof
First, suppose that \(f:[a,b]\rightarrow \mathbb {R}\) is 3-convex. Considering the construction of \(\mathcal {C}_n[f;a,b]\), \(Q_n[f;a,b]\) and \(\mathcal {S}_n[f;a,b]\), and applying Corollary 6 on each subinterval \([x_k,x_{k+1}]\), we arrive at
$$\begin{aligned} \bigl |Q_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |\leqslant \sum _{k=0}^{n-1}\bigl |Q[f;x_k,x_{k+1}]-\mathcal {I}[f;x_k,x_{k+1}]\bigr |\leqslant \frac{1}{4}\bigl (\mathcal {S}_n[f;a,b]-\mathcal {C}_n[f;a,b]\bigr ). \end{aligned}$$
If f is 3-concave, we then apply the above inequality to the 3-convex function \(-f\) and use the linearity of the operators \(Q_n,\)\(\mathcal {I}\), \(\mathcal {S}_n\):
$$\begin{aligned} \bigl |Q_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |=\bigl |Q_n[-f;a,b]-\mathcal {I}[-f;a,b]\bigr |\leqslant \frac{1}{4}\bigl |\mathcal {S}_n[-f;a,b]-\mathcal {C}_n[-f;a,b]\bigr |=\frac{1}{4}\bigl |\mathcal {S}_n[f;a,b]-\mathcal {C}_n[f;a,b]\bigr |. \end{aligned}$$
In both cases we obtain inequality (13) and the proof is complete. \(\square \)
Our next result is derived trivially from Theorem 7.
Corollary 8
Let \(\varepsilon >0.\) If \(f:[a,b]\rightarrow \mathbb {R}\) is either a 3-convex or a 3-concave function and
$$\begin{aligned} \bigl |\mathcal {S}_n[f;a,b]-\mathcal {C}_n[f;a,b]\bigr |<4\varepsilon , \end{aligned}$$
then \(\bigl |Q_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |<\varepsilon \).
The adaptive method we propose goes by the following steps.
1. Take a function \(f:[a,b]\rightarrow \mathbb {R}\) which is either 3-convex or 3-concave.
2. Fix a tolerance \(\varepsilon >0\) and take \(n=1\).
3. Increase n until \(\bigl |\mathcal {S}_n[f;a,b]-\mathcal {C}_n[f;a,b]\bigr |<4\varepsilon \).
4. By Corollary 8, we get \(\bigl |Q_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |<\varepsilon ,\) so we take \(\mathcal {I}[f;a,b]\approx Q_n[f;a,b].\)
Then the inequality
$$\begin{aligned} \bigl |\mathcal {S}_n[f;a,b]-\mathcal {C}_n[f;a,b]\bigr |<4\varepsilon \end{aligned}$$(14)
constitutes the stopping criterion for our method. For any fixed \(\varepsilon >0\) this inequality is fulfilled for n large enough and the algorithm stops. Indeed, by a standard argument we have \(\lim \limits _{n\rightarrow \infty }\bigl |\mathcal {S}_n[f;a,b]-\mathcal {C}_n[f;a,b]\bigr |=0.\)
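The four steps above can be sketched as follows (again with the combined rule taken as \(Q=(3\mathcal {C}+\mathcal {S})/4\); the helper names are ours, and n is simply incremented by 1, though doubling would also work):

```python
import math

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def chebyshev(f, a, b):
    m, h, t = (a + b) / 2, (b - a) / 2, math.sqrt(2) / 2
    return (b - a) / 3 * (f(m - t * h) + f(m) + f(m + t * h))

def composite(rule, f, a, b, n):
    # Composite version of a simple rule over n equal subintervals.
    h = (b - a) / n
    return sum(rule(f, a + k * h, a + (k + 1) * h) for k in range(n))

def adaptive_q(f, a, b, eps):
    # Increase n until |S_n - C_n| < 4*eps; then |Q_n - I| < eps holds
    # for every 3-convex or 3-concave f (Corollary 8).
    n = 1
    while True:
        s_n = composite(simpson, f, a, b, n)
        c_n = composite(chebyshev, f, a, b, n)
        if abs(s_n - c_n) < 4 * eps:
            return (3 * c_n + s_n) / 4
        n += 1

assert abs(adaptive_q(lambda x: 1 / x, 1.0, 2.0, 1e-8) - math.log(2)) < 1e-8
```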
Remark 9
The stopping criterion (14) ensures the desired approximation of the integral for all 3-convex and 3-concave functions. Nevertheless, it may lead to incorrect results for functions beyond that class. In order to see this, let us consider \(f(x)=\bigl ||x|-\frac{1}{2}\bigr |\) for \(x\in [-1,1]\). Its graph is a polyline passing through the points \(\bigl (-1,\frac{1}{2}\bigr )\), \(\bigl (-\frac{1}{2},0\bigr )\), \(\bigl (0,\frac{1}{2}\bigr )\), \(\bigl (\frac{1}{2},0\bigr )\) and \(\bigl (1,\frac{1}{2}\bigr )\). Take \(n=1\). Because \(\mathcal {C}_1[f;-1,1]=\mathcal {C}[f;-1,1]\), and similarly for the quadratures \(\mathcal {S}_1\) and \(Q_1\), we have
$$\begin{aligned} \mathcal {C}_1[f;-1,1]=\frac{2\sqrt{2}-1}{3},\qquad \mathcal {S}_1[f;-1,1]=1,\qquad Q_1[f;-1,1]=\frac{\sqrt{2}}{2},\qquad \mathcal {I}[f;-1,1]=\frac{1}{2} \end{aligned}$$
and, as a consequence,
$$\begin{aligned} \bigl |\mathcal {S}_1[f;-1,1]-\mathcal {C}_1[f;-1,1]\bigr |=\frac{4-2\sqrt{2}}{3}\approx 0.3905,\qquad \bigl |Q_1[f;-1,1]-\mathcal {I}[f;-1,1]\bigr |=\frac{\sqrt{2}-1}{2}\approx 0.2071. \end{aligned}$$
Hence, e.g., for \(\varepsilon =\frac{1}{10}\) the stopping criterion (14) is fulfilled already for \(n=1\), while the error of the approximation \(\mathcal {I}[f;-1,1]\approx Q_1[f;-1,1]\) exceeds \(\varepsilon \).
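A quick numerical check of this remark (once more with Q taken as \((3\mathcal {C}+\mathcal {S})/4\)): for \(\varepsilon =1/10\) the criterion \(|\mathcal {S}_1-\mathcal {C}_1|<4\varepsilon \) holds, yet the error of \(Q_1\) exceeds \(\varepsilon \):

```python
import math

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def chebyshev(f, a, b):
    m, h, t = (a + b) / 2, (b - a) / 2, math.sqrt(2) / 2
    return (b - a) / 3 * (f(m - t * h) + f(m) + f(m + t * h))

f = lambda x: abs(abs(x) - 0.5)   # polyline; neither 3-convex nor 3-concave
c1, s1 = chebyshev(f, -1.0, 1.0), simpson(f, -1.0, 1.0)
q1 = (3 * c1 + s1) / 4
integral = 0.5                    # exact value of the integral over [-1, 1]

eps = 0.1
assert abs(s1 - c1) < 4 * eps     # the stopping criterion is satisfied...
assert abs(q1 - integral) > eps   # ...yet the error exceeds the tolerance
```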
4 Numerical experiments
To demonstrate our adaptive method for 3-convex functions in action, the author created a computer program in the Python programming language (see the Notes at the end of the paper). It compares this method with three others:
1. The method of Rowland and Varol based on the approximation \(\mathcal {I}[f;a,b]\approx \mathcal {S}_{2n}[f;a,b]\) with the stopping criterion (3).
2. The classical adaptive Simpson’s method \(\mathcal {I}[f;a,b]\approx \mathcal {S}_n[f;a,b]\) for \(f\in \mathcal {C}^4[a,b]\) with n given by the error bound (2), i.e. the smallest \(n\in \mathbb {N}\) satisfying the inequality
$$\begin{aligned} \frac{(b-a)^5}{2880n^4} \sup \bigl \{\bigl |f^{(4)}(x)\bigr |:x\in [a,b]\bigr \}<\varepsilon . \end{aligned}$$(15)
3. Chebyshev’s adaptive method \(\mathcal {I}[f;a,b]\approx \mathcal {C}_n[f;a,b]\) for \(f\in \mathcal {C}^4[a,b]\) with the smallest \(n\in \mathbb {N}\) satisfying the inequality
$$\begin{aligned} \bigl |\mathcal {C}_n[f;a,b]-\mathcal {I}[f;a,b]\bigr |\leqslant \frac{(b-a)^5}{11520n^4}\sup \bigl \{\bigl |f^{(4)}(x)\bigr |:x\in [a,b]\bigr \} <\varepsilon . \end{aligned}$$(16)
The above error bound can easily be obtained by the substitution (6), given the error bound of the simple Chebyshev quadrature (cf. e.g. [13]):
$$\begin{aligned} \bigl |\mathcal {C}[g;-1,1]-\mathcal {I}[g;-1,1]\bigr |\leqslant \frac{1}{360}\sup \bigl \{\bigl |g^{(4)}(x)\bigr |:x\in [-1,1]\bigr \} \end{aligned}$$
for \(g\in \mathcal {C}^4[-1,1]\).
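The minimal n in items 2 and 3 can be computed in closed form from the error bounds (a sketch; M denotes a user-supplied bound for \(\sup |f^{(4)}|\)):

```python
import math

def min_n_simpson(a, b, M, eps):
    # Smallest n in N with (b-a)^5 / (2880 n^4) * M < eps, cf. (15).
    n = ((b - a) ** 5 * M / (2880 * eps)) ** 0.25
    return max(1, math.floor(n) + 1)

def min_n_chebyshev(a, b, M, eps):
    # Smallest n in N with (b-a)^5 / (11520 n^4) * M < eps, cf. (16).
    n = ((b - a) ** 5 * M / (11520 * eps)) ** 0.25
    return max(1, math.floor(n) + 1)

# f(x) = 1/x on [1, 2]: sup |f''''| = 24; tolerance 1e-8.
# The results 31 and 22 match the Simpson and Chebyshev columns of
# Experiment 1 below at tolerance 10^{-8}.
n_s = min_n_simpson(1.0, 2.0, 24.0, 1e-8)    # 31
n_c = min_n_chebyshev(1.0, 2.0, 24.0, 1e-8)  # 22
```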
Below we present the results of two experiments performed with our program.
Experiment 1. We start with the integral \( \int \limits _1^2 \frac{\mathrm {d}x}{x}=\ln 2. \) The function \(f(x)=\frac{1}{x}\) is 3-convex, as \(f^{(4)}(x)=\frac{24}{x^5}>0\) for \(x\in [1,2]\). In the table below we list the numbers of subdivisions of the interval [1, 2] needed to approximate the integral with the desired tolerance \(\varepsilon \in \{10^{-1}, 10^{-2},\dots ,10^{-16}\}\). In each case our method \(Q_n[f;1,2]\) requires the lowest number of subdivisions, almost one third of the number required by Rowland and Varol’s method \(\mathcal {S}_{2n}[f;1,2]\) (the highest in the experiment). The classical methods \(\mathcal {S}_n[f;1,2]\) and \(\mathcal {C}_n[f;1,2]\) lie in between.
\(f(x)=\frac{1}{x}\)

| Tolerance | \(Q_n[f;1,2]\) | \(\mathcal {S}_{2n}[f;1,2]\) | \(\mathcal {S}_n[f;1,2]\) | \(\mathcal {C}_{n}[f;1,2]\) |
|---|---|---|---|---|
| \(10^{-1}\) | 1 | 2 | 1 | 1 |
| \(10^{-2}\) | 1 | 2 | 1 | 1 |
| \(10^{-3}\) | 1 | 4 | 2 | 2 |
| \(10^{-4}\) | 2 | 4 | 4 | 3 |
| \(10^{-5}\) | 3 | 8 | 6 | 4 |
| \(10^{-6}\) | 5 | 14 | 10 | 7 |
| \(10^{-7}\) | 9 | 24 | 17 | 13 |
| \(10^{-8}\) | 16 | 42 | 31 | 22 |
| \(10^{-9}\) | 28 | 74 | 54 | 38 |
| \(10^{-10}\) | 50 | 132 | 96 | 68 |
| \(10^{-11}\) | 89 | 234 | 170 | 121 |
| \(10^{-12}\) | 158 | 414 | 303 | 214 |
| \(10^{-13}\) | 280 | 736 | 538 | 380 |
| \(10^{-14}\) | 498 | 1310 | 956 | 676 |
| \(10^{-15}\) | 884 | 2328 | 1700 | 1202 |
| \(10^{-16}\) | 1572 | 4138 | 3022 | 2137 |
Experiment 2. Next we experiment with the 3-convex function \(f(x)=\mathrm {e}^x\) over the intervals [0, b] with \(b\in \{1,2,\dots ,10\}.\) We choose the tolerance \(\varepsilon =10^{-8}\). The conclusion about the number of subdivisions is exactly the same as in Experiment 1.
\(f(x)=\mathrm {e}^x\)

| The right endpoint \(b\) | \(Q_n[f;0,b]\) | \(\mathcal {S}_{2n}[f;0,b]\) | \(\mathcal {S}_n[f;0,b]\) | \(\mathcal {C}_{n}[f;0,b]\) |
|---|---|---|---|---|
| 1 | 12 | 32 | 18 | 13 |
| 2 | 33 | 86 | 54 | 38 |
| 3 | 64 | 170 | 115 | 81 |
| 4 | 111 | 292 | 210 | 149 |
| 5 | 178 | 470 | 357 | 252 |
| 6 | 275 | 722 | 575 | 407 |
| 7 | 412 | 1082 | 895 | 633 |
| 8 | 604 | 1588 | 1358 | 960 |
| 9 | 872 | 2294 | 2019 | 1428 |
| 10 | 1244 | 3274 | 2958 | 2092 |
Notes
The files of the program are available on the author’s ResearchGate profile (https://www.researchgate.net/profile/Szymon_Wsowicz) in the project named Adaptive methods of approximate integration and their stopping criteria. The reader may also obtain them by e-mail from the author.
References
Bessenyei, M., Páles, Z.: Higher-order generalizations of Hadamard’s inequality. Publ. Math. Debr. 61(3–4), 623–643 (2002)
Bojanić, R., Roulier, J.: Approximation of convex functions by convex splines and convexity preserving continuous linear operators. Rev. Anal. Numér. Théor. Approx. 3(2), 143–150 (1974)
Clenshaw, C.W., Curtis, A.R.: A method for numerical integration on an automatic computer. Numer. Math. 2, 197–205 (1960)
Gautschi, W.: Numerical Analysis, 2nd edn. Birkhäuser, Boston, MA (2012)
Gonnet, P.: A review of error estimation in adaptive quadrature. ACM Comput. Surv. (CSUR) 44, 1–36 (2012)
Hopf, E.: Über die Zusammenhänge zwischen gewissen höheren Differenzenquotienten reeller Funktionen einer reellen Variablen und deren Differenzierbarkeitseigenschaften. Ph.D. dissertation, Friedrich–Wilhelms–Universität, Berlin (1926)
Kincaid, D., Cheney, W.: Numerical Analysis. Mathematics of Scientific Computing, 2nd edn. Brooks/Cole Publishing Co., Pacific Grove, CA (1996)
Komisarski, A., Wąsowicz, S.: Inequalities between remainders of quadratures. Aequ. Math. 91(6), 1103–1114 (2017)
Lyness, J.N.: Notes on the adaptive Simpson quadrature routine. J. Assoc. Comput. Mach. 16, 483–495 (1969)
Popoviciu, T.: Sur quelques propriétés des fonctions d’une ou de deux variables réelles. Mathematica Cluj 8, 1–85 (1934)
Popoviciu, T.: Les fonctions convexes. Actualités Sci. Ind., no. 992. Hermann et Cie, Paris (1944)
Rowland, J.H., Varol, Y.L.: Exit criteria for Simpson’s compound rule. Math. Comput. 26, 699–703 (1972)
Weisstein, E.W.: Chebyshev quadrature. From MathWorld—A Wolfram Web Resource. http://mathworld.wolfram.com/ChebyshevQuadrature.html
Wąsowicz, S.: Support-type properties of convex functions of higher order and Hadamard-type inequalities. J. Math. Anal. Appl. 332(2), 1229–1241 (2007)
Acknowledgements
The author thanks the anonymous referees for the remarks improving the paper and Dominik Kozaczko, a computer scientist from Bielsko-Biała (Poland), for his valuable help with testing and running the computer program used in Experiments 1 and 2.
Wąsowicz, S. On a certain adaptive method of approximate integration and its stopping criterion. Aequat. Math. 94, 887–898 (2020). https://doi.org/10.1007/s00010-020-00719-0
Keywords
- Approximate integration
- Quadratures
- Adaptive methods
- Chebyshev’s rule
- Simpson’s rule
- Higher-order convexity
- Hermite–Hadamard type inequality