1 Introduction

This paper discusses the problem of numerical integration (or quadrature), a fundamental task in numerical analysis, statistics and computer science, including machine learning. Let P be a (known) Borel probability measure on the Euclidean space \({\mathbb {R}}^d\) with support contained in an open set \(\varOmega \subset {\mathbb {R}}^d\), and let f be an integrand on \(\varOmega \). Suppose that the integral \(\int f(x)\mathrm{d}P(x)\) has no closed-form solution. We consider quadrature rules that provide an approximation of the integral in the form of a weighted sum of function values

$$\begin{aligned} \sum _{i=1}^n w_i f(X_i) \approx \int f(x)\mathrm{d}P(x), \end{aligned}$$
(1)

where \(X_1,\dots ,X_n \in \varOmega \) are design points and \(w_1,\dots ,w_n \in {\mathbb {R}}\) are quadrature weights. Throughout this paper, the integral of f and its quadrature estimate are denoted by Pf and \(P_nf\), respectively; namely,

$$\begin{aligned} Pf := \int f(x)\mathrm{d}P(x), \quad P_n f := \sum _{i=1}^n w_i f(X_i). \end{aligned}$$
(2)

Examples of such quadrature rules include Monte Carlo methods, which use a random sample from a suitable proposal distribution as \(X_1,\dots ,X_n\) and importance weights as \(w_1,\dots ,w_n\). A limitation of standard Monte Carlo methods is that a huge number of design points (i.e., large n) may be needed to approximate the integral accurately; this stems from the fact that the rate of convergence of Monte Carlo methods is typically of the order \({\mathbb {E}}[|Pf- P_nf|] = O(n^{-1/2})\) as \(n \rightarrow \infty \), where \({\mathbb {E}}[\cdot ]\) denotes the expectation with respect to the random sample. The need for large n is problematic when each evaluation of the function value f(x) is expensive. Such situations appear in modern scientific and engineering problems where the mapping \(x \mapsto f(x)\) involves complicated computer simulation. In applications to time-series forecasting, for instance, x may be a parameter of an underlying system, f(x) a quantity of interest in the future and P a prior distribution on x. The target integral \(\int f(x)\mathrm{d}P(x)\) is then the predictive value of the future quantity. The evaluation of f(x) for each x may require numerically solving an initial value problem for a differential equation, which results in time-consuming computation [7]. Similar examples can be seen in applications to statistics and machine learning, as mentioned below. In these situations, one can only use a limited number of design points, and thus, it is desirable to have quadrature rules with a faster convergence rate, in order to obtain a reliable solution [46].
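As a concrete illustration of this \(O(n^{-1/2})\) behavior, the following minimal Python sketch estimates \(\int x^2 \mathrm{d}P(x) = 1\) for \(P = N(0,1)\) by plain Monte Carlo; the integrand and measure here are arbitrary choices for illustration only, not objects used later in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 2          # illustrative integrand; Pf = 1 for P = N(0, 1)

for n in [10**2, 10**4, 10**6]:
    X = rng.standard_normal(n)        # design points X_1, ..., X_n drawn from P
    P_n_f = np.mean(f(X))             # equal quadrature weights w_i = 1/n
    print(n, abs(P_n_f - 1.0))        # error shrinks roughly like n^{-1/2}
```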

1.1 Kernel-Based Quadrature Rules

How can we obtain a quadrature rule whose convergence rate is faster than \(O(n^{-1/2})\)? In practice, one often has prior knowledge or belief about the integrand f, such as smoothness, periodicity and sparsity. Exploiting such knowledge or assumptions in constructing a quadrature rule \(\{ (w_i,X_i) \}_{i=1}^n\) may achieve faster rates of convergence, and such methods have been studied extensively in the literature for decades; see, e.g., [17] and [9] for reviews.

This paper deals with quadrature rules using reproducing kernel Hilbert spaces (RKHS) explicitly or implicitly to achieve fast convergence rates; we will refer to such methods as kernel-based quadrature rules or simply kernel quadrature. As discussed in Sect. 2.4, notable examples include quasi-Monte Carlo methods [17, 18, 26, 42], Bayesian quadrature [9, 48] and kernel herding [5, 10, 11]. These methods have been studied extensively in recent years [4, 8, 30, 45, 46, 55, 62] and have recently found applications in, for instance, machine learning and statistics [3, 9, 21, 31, 32, 43, 50].

In kernel quadrature, we make use of available knowledge on properties of the integrand f by assuming that f belongs to a certain RKHS \({{\mathcal {H}}}_k\) that possesses those properties (where k is the reproducing kernel) and then constructing weighted points \(\{ (w_i, X_i) \}_{i=1}^n\) such that the worst-case error in the RKHS

$$\begin{aligned} e_n(P;{{\mathcal {H}}}_k) := \sup _{f \in {{\mathcal {H}}}_k: \Vert f \Vert _{{{\mathcal {H}}}_k} \le 1} \left| Pf - P_n f\right| \end{aligned}$$
(3)

is made small, where \(\Vert \cdot \Vert _{{{\mathcal {H}}}_k}\) is the norm of \({{\mathcal {H}}}_k\). The use of an RKHS is beneficial compared to other function spaces, as it leads to a closed-form expression of worst-case error (3) in terms of the kernel; one may thus explicitly use this expression when designing \(\{ (w_i, X_i) \}_{i=1}^n\) (see Sect. 2.3).

Note that, in the well-specified case, that is, when the integrand satisfies \(f\in {{\mathcal {H}}}_k\), the quadrature error is bounded as

$$\begin{aligned} \left| P_n f - P f \right| \le \Vert f \Vert _{{{\mathcal {H}}}_k} e_n(P;{{\mathcal {H}}}_k). \end{aligned}$$

This guarantees that if a quadrature rule satisfies \(e_n(P; {{\mathcal {H}}}_k) = O(n^{-b})\) as \(n \rightarrow \infty \) for some \(b > 0\), then the quadrature error also satisfies \(\left| P_n f - P f \right| = O(n^{-b})\). Take a Sobolev space \(H^r(\varOmega )\) of order \(r>d/2\) on \(\varOmega \) as the RKHS \({{\mathcal {H}}}_k\), for example. It is known that optimal quadrature rules achieve \(e_n(P;{{\mathcal {H}}}_k) = O(n^{-r/d})\) [40], and thus, \(\left| P_n f - P f \right| = O(n^{-r/d})\) holds for any \(f\in {{\mathcal {H}}}_k\). Since \(r/d > 1/2\), this rate is faster than that of Monte Carlo integration, which is precisely the desideratum discussed above.

1.2 Misspecified Settings

This paper focuses on situations where the assumption \(f \in {{\mathcal {H}}}_k\) is violated, that is, misspecified settings. As explained above, convergence guarantees for kernel quadrature rules often assume that \(f\in {{\mathcal {H}}}_k\). However, in practice one may lack full knowledge of the properties of the integrand, and therefore, misspecification of the RKHS (via the choice of its reproducing kernel k) may occur; that is, \(f \notin {{\mathcal {H}}}_k\).

Such misspecification is likely to happen when the integrand is a black box function. An illustrative example can be found in applications to computer graphics such as the problem of illumination integration (see, e.g., [9]), where the task is to compute the total amount of light arriving at a camera in a virtual environment. This problem is solved by quadrature, with the integrand f(x) being the intensity of light arriving at the camera from a direction x (angle). However, the value of f(x) is only given by simulation of the environment for each x, so the integrand f is a black box function. Similar situations can be found in applications to statistics and machine learning. A representative example is the computation of the marginal likelihood of a probabilistic model, an important but challenging task required for model selection (see, e.g., [47]). In modern scientific applications dealing with complex phenomena (e.g., climate science), we often encounter situations where the evaluation of a likelihood function, which forms the integrand in marginal likelihood computation, involves an expensive simulation model, making the integrand complex and even a black box.

If the integrand is a black box function, there is a trade-off between the risk of misspecification and the gain in the rate of convergence for kernel-based quadrature rules; for a faster convergence rate, one may want to use a quadrature rule for a narrower \({{\mathcal {H}}}_k\), such as one of higher-order differentiability, while such a choice may cause misspecification of the function class. Therefore, it is of great importance to elucidate the convergence properties of these rules in misspecified situations, in order to use them in a safe manner.

1.3 Contributions

This paper provides convergence rates of kernel-based quadrature rules in misspecified settings, focusing on deterministic rules (i.e., without randomization). The focus of misspecification is placed on the order of Sobolev spaces: The unknown order s of the integrand f is overestimated as r, that is, \(s \le r\).

Let \(\varOmega \subset {\mathbb {R}}^d\) be a bounded domain with a Lipschitz boundary (see Sect. 3 for the definition). For \(r>d/2\), consider a positive definite kernel \(k_r\) on \(\varOmega \) that satisfies the following assumption:

Assumption 1

The kernel \(k_r\) on \(\varOmega \) satisfies \(k_r(x,y) := \varPhi (x-y)\), where \(\varPhi : {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) is a positive definite function such that

$$\begin{aligned} C_1 (1 + \Vert \xi \Vert ^2)^{-r} \le {\hat{\varPhi }} (\xi ) \le C_2 (1 + \Vert \xi \Vert ^2)^{-r} \end{aligned}$$

for some constants \(C_1, C_2 > 0\), where \({\hat{\varPhi }}\) is the Fourier transform of \(\varPhi \). The RKHS \({{\mathcal {H}}}_{k_r}(\varOmega )\) is the restriction of \({{\mathcal {H}}}_{k_r}({\mathbb {R}}^d)\) to \(\varOmega \) (see Sect. 2).

The resulting RKHS \({{\mathcal {H}}}_{k_r}(\varOmega )\) is norm-equivalent to the standard Sobolev space \(H^r(\varOmega )\). The Matérn and Wendland kernels satisfy Assumption 1 (see Sect. 2).

Consider a quadrature rule \(\{ (w_i,X_i) \}_{i=1}^n\) with the kernel \(k_r\) such that

$$\begin{aligned} e_n(P;{{\mathcal {H}}}_{k_r}(\varOmega )) = O(n^{-b}) \quad (n \rightarrow \infty ). \end{aligned}$$
(4)

We do not specify how the weighted points are generated but only assume (4), aiming for wide applicability. Suppose that an integrand \(f:\varOmega \rightarrow {\mathbb {R}}\) has partial derivatives up to order s that are bounded and uniformly continuous. If \(s \le r\), the integrand may not belong to the assumed RKHS \({{\mathcal {H}}}_{k_r}\), in which case misspecification occurs.

Under this misspecified setting, two types of assumptions on the quadrature rule \(\{ (w_i,X_i) \}_{i=1}^n\) will be considered: one on the quadrature weights \(w_1,\dots ,w_n\) (Sect. 4.1) and the other on the design points \(X_1,\dots ,X_n\) (Sect. 4.2). In both cases, a rate of convergence of the form

$$\begin{aligned} | P_n f - P f | = O(n^{-bs/r}) \quad (n\rightarrow \infty ) \end{aligned}$$
(5)

will be derived under some additional conditions. The results guarantee convergence in the misspecified setting, and the rate is determined by the ratio s/r between the true smoothness s and the assumed smoothness r. As discussed in Sect. 2, the optimal rate of deterministic quadrature rules for the Sobolev space \(H^r(\varOmega )\) is \(O(n^{-r/d})\) [40]. If a quadrature rule attains this optimal rate (i.e., \(b=r/d\)), then rate (5) becomes \(O(n^{-s/d})\) for an integrand \(f\in H^s(\varOmega )\) (\(s<r\)), which matches the optimal rate for \(H^s(\varOmega )\).

The specific results are summarized as follows:

  • In Sect. 4.1, it is assumed that \(\sum _{i=1}^n | w_i | = O(n^{c})\) as \(n \rightarrow \infty \) for some constant \(c \ge 0\). Note that \(c = 0\) holds if the weights satisfy \(\max _{i=1,\dots ,n} |w_i| = O(n^{-1})\), an example of which is the equal weights \(w_1 = \cdots =w_n = 1/n\). Under this assumption and other suitable conditions, Corollary 2 shows

    $$\begin{aligned} | P_n f - P f | = O( n^{ - bs/r + c (r-s)/r } ) \quad (n \rightarrow \infty ). \end{aligned}$$

    The rate \(O(n^{-bs/r})\) in (5) holds if \(c = 0\). Therefore, this result provides convergence guarantees in particular for equal-weight quadrature rules, such as quasi-Monte Carlo methods and kernel herding, in the misspecified setting.

  • Section 4.2 uses an assumption on design points \(X^n := \{X_1,\dots ,X_n\}\) in terms of the separation radius \(q_{X^n}\), which is defined by

    $$\begin{aligned} q_{X^n} := \frac{1}{2} \min _{i \ne j} \Vert X_i - X_j \Vert . \end{aligned}$$
    (6)

    Corollary 3 shows that, if \(q_{X^n} = \varTheta (n^{-a})\) as \(n \rightarrow \infty \) for some \(a > 0\), under other regularity conditions,

    $$\begin{aligned} | P_n f - P f | = O(n^{- \min ( b - a(r-s), as)} ) \quad (n \rightarrow \infty ). \end{aligned}$$
    (7)

    The best possible rate is \(O(n^{-bs/r})\), attained when \(a = b/r\). This result provides a convergence guarantee for quadrature rules that, with the design points \(X_1,\dots ,X_n\) fixed beforehand, obtain weights \(w_1,\dots ,w_n\) achieving \(O(n^{-b})\) for the worst-case error. We demonstrate this result by applying it to Bayesian quadrature, as explained below. Our result may also provide the following guideline for practitioners: in order to make a kernel quadrature rule robust to misspecification, one should specify the design points so that their spacing is not too small.

  • Section 5 discusses a convergence rate for Bayesian quadrature under the misspecified setting, demonstrating the results of Sect. 4.2. Given design points \(X^n=\{X_1,\dots ,X_n\}\), Bayesian quadrature defines the weights \(w_1,\ldots ,w_n\) as the minimizer of worst-case error (3), which can be obtained by solving a linear equation (see Sect. 2.4 for more detail). For points \(X^n=\{X_1,\dots ,X_n\}\) in \(\varOmega \), the fill distance \(h_{X^n,\varOmega }\) is defined by

    $$\begin{aligned} h_{X^n, \varOmega } := \sup _{x \in \varOmega } \min _{i=1,\dots ,n} \Vert x - X_i \Vert . \end{aligned}$$
    (8)

    Assume that there exists a constant \(c_q > 0\) independent of \(X^n\) such that

    $$\begin{aligned} h_{X^n,\varOmega } \le c_q q_{X^n}, \end{aligned}$$
    (9)

    and that \(h_{X^n,\varOmega } = O(n^{- 1/d})\) as \(n \rightarrow \infty \). Then, Corollary 4 shows that with Bayesian quadrature weights based on the kernel \(k_r\) we have

    $$\begin{aligned} \left| P_n f - Pf \right| = O(n^{ - s/d }) \quad (n \rightarrow \infty ). \end{aligned}$$

    Note that the rate \(O(n^{ - s/d })\) matches the minimax optimal rate for deterministic quadrature rules in the Sobolev space of order s [40], which implies that Bayesian quadrature can be adaptive to the unknown smoothness s of the integrand f. Adaptivity means that it can achieve the rate \(O(n^{-s/d})\) without knowledge of s; it only requires knowledge of an upper bound r on the true smoothness, that is, \(s \le r\).

  • Section 3 establishes a rate of convergence for Bayesian quadrature in the well-specified case, which serves as a basis for the results in the misspecified case (Sect. 5). Corollary 1 asserts that if the design points satisfy \(h_{X^n, \varOmega } = O(n^{-1/d})\) as \(n \rightarrow \infty \), then

    $$\begin{aligned} e_n(P; {{\mathcal {H}}}_{k_r}(\varOmega )) = O(n^{-r/d}) \quad (n\rightarrow \infty ). \end{aligned}$$

    This rate \(O(n^{-r/d})\) is minimax optimal for deterministic quadrature rules in Sobolev spaces. To the best of our knowledge, this optimality of Bayesian quadrature has not been established before, despite the recent extensive theoretical analysis of Bayesian quadrature [4, 8, 9, 44].

This paper is organized as follows. Section 2 provides various definitions, notation and preliminaries, including reviews of kernel-based quadrature rules. Section 3 then establishes a rate of convergence for the worst-case error of Bayesian quadrature in a Sobolev space. Section 4 presents the main contributions on the convergence analysis in misspecified settings, and Sect. 5 demonstrates these results by applying them to Bayesian quadrature. We illustrate the obtained theoretical results with simulation experiments in Sect. 6. Finally, Sect. 7 concludes the paper with possible future directions.

Preliminary results This paper expands on preliminary results reported in a conference paper by the authors [29]. Specifically, this paper is a complete version of the results presented in Section 5 of [29]. The current paper contains significant new material, mainly in the following points: (i) We establish the rate of convergence for Bayesian quadrature with deterministic design points and show that it can achieve minimax optimal rates in Sobolev spaces (Sect. 3); (ii) we apply our general convergence guarantees in misspecified settings to the specific case of Bayesian quadrature and reveal the conditions required for Bayesian quadrature to be robust to misspecification (Sect. 5); to make contribution (ii) possible, we derive finite sample bounds on the quadrature error in misspecified settings (Sect. 4). These results are not included in the conference paper.

We also mention that this paper does not contain the results presented in Section 4 of the conference paper [29], which deal with randomized design points. For randomized design points, theoretical analysis can be carried out based on an approximation theory developed in the statistical learning theory literature [12]. On the other hand, the analysis in the deterministic case makes use of the approximation theory developed by [37], which is based on Calderón’s decomposition formula in harmonic analysis [19]. This paper focuses on the deterministic case, and we will report a complete version of the randomized case in a forthcoming paper.

Related work The setting of this paper is complementary to that of [45], in which the integrand is smoother than assumed. That paper proposes to apply the control functional method by [46] to quasi-Monte Carlo integration, in order to make it adaptable to the (unknown) greater smoothness of the integrand.

Another related line of research concerns quadrature rules that are adaptive to less smooth integrands [14, 15, 16, 20, 23]. For instance, [20] proposed a kernel-based quadrature rule on a finite-dimensional sphere. Their method is essentially Bayesian quadrature with a specific kernel designed for spheres. They derive convergence rates for this method in both well-specified and misspecified settings and obtain results similar to ours. The current work differs from [20] in two main aspects: (i) Quadrature problems are considered in standard Euclidean spaces, as opposed to spheres; (ii) a generic framework is presented, as opposed to the analysis of a specific quadrature rule. See also a recent work by [62], in which Bayesian quadrature for vector-valued numerical integration is proposed and its adaptability to less smooth integrands is discussed.

Quasi-Monte Carlo rules based on a certain digit interlacing algorithm [14, 15, 16, 23] are also shown to be adaptive to the (unknown) lower smoothness of an integrand. These papers assume that an integrand lies in an anisotropic function class in which every function possesses (square-integrable) mixed partial derivatives of order \(\alpha \in {\mathbb {N}}\) in each variable. Examples of such spaces include Korobov spaces, Walsh spaces and Sobolev spaces of dominating mixed smoothness (see, e.g., [17, 42]). In their notation, an integer d, a parameter called the interlacing factor, can be regarded as the assumed smoothness. Then, if an integrand belongs to an anisotropic function class with smoothness \(\alpha \in {\mathbb {N}}\) such that \(\alpha \le d\), a rate of the form \(O(n^{-\alpha + \varepsilon })\) (or \(O(n^{-\alpha -1/2 + \varepsilon })\) in a randomized setting) is guaranteed for the quadrature error for arbitrary \(\varepsilon > 0\). The present work differs from these works in that (i) isotropic Sobolev spaces are discussed, where the order of differentiability is identical in all directions, and (ii) theoretical guarantees are provided for generic quadrature rules, as opposed to analyses of specific quadrature methods.

2 Preliminaries

2.1 Basic Definitions and Notation

We will use the following notation throughout the paper. The set of positive integers is denoted by \({\mathbb {N}}\), and \({\mathbb {N}}_0 := {\mathbb {N}}\cup \{ 0 \}\). For \(\alpha := (\alpha _1,\dots ,\alpha _d)^T \in {\mathbb {N}}_0^d\), we write \(| \alpha | := \sum _{i=1}^d \alpha _i\). The d-dimensional Euclidean space is denoted by \({\mathbb {R}}^d\), and the closed ball of radius \(R>0\) centered at \(z\in {\mathbb {R}}^d\) by \(B(z,R)\). For \(a \in {\mathbb {R}}\), \(\lfloor a \rfloor \) is the greatest integer that is less than a. For a set \(\varOmega \subset {\mathbb {R}}^d\), \(\mathrm{diam} (\varOmega ) := \sup _{x, y \in \varOmega } \Vert x - y\Vert \) is the diameter of \(\varOmega \).

Let \(p>0\) and \(\mu \) be a Borel measure on a Borel set \(\varOmega \) in \({\mathbb {R}}^d\). The Banach space \(L_p(\mu )\) of p-integrable functions is defined in the standard way with norm \(\Vert f \Vert _{L_p(\mu )} = (\int |f(x)|^p d\mu (x))^{1/p}\), and \(L_\infty (\varOmega )\) is the class of essentially bounded measurable functions on \(\varOmega \) with norm \(\Vert f \Vert _{L_\infty (\varOmega )} := \mathrm{ess}\sup _{x \in \varOmega } |f(x)|\). If \(\mu \) is the Lebesgue measure on \(\varOmega \subset {\mathbb {R}}^d\), we write \(L_p(\varOmega ) := L_p(\mu )\) and further \(L_p := L_p({\mathbb {R}}^d)\) for \(p \in {\mathbb {N}}\cup \{ \infty \}\). For \(f \in L_1({\mathbb {R}}^d)\), its Fourier transform \({\hat{f}}\) is defined by

$$\begin{aligned} {\hat{f}}(\xi ) := \int _{{\mathbb {R}}^d} f(x) e^{- i \xi ^T x} \mathrm{d}x, \quad \xi \in {\mathbb {R}}^d, \end{aligned}$$

where \(i := \sqrt{-1}\).

For \(s \in {\mathbb {N}}\) and an open set \(\varOmega \) in \({\mathbb {R}}^d\), \(C^s(\varOmega )\) denotes the vector space of all functions on \(\varOmega \) that are continuously differentiable up to order s, and \(C_B^s(\varOmega ) \subset C^s(\varOmega )\) the Banach space of all functions whose partial derivatives up to order s are bounded and uniformly continuous. The norm of \(C_B^s(\varOmega )\) is given by \(\Vert f \Vert _{C_B^s(\varOmega )} := \sum _{\alpha \in {\mathbb {N}}_0^d: | \alpha | \le s} \sup _{x \in \varOmega } |\partial ^\alpha f (x)| \), where \(\partial ^\alpha \) is the partial derivative with multi-index \(\alpha \in {\mathbb {N}}_0^d\). The Banach space of the continuous functions that vanish at infinity is denoted by \(C_0 := C_0({\mathbb {R}}^d)\) with sup norm. Let \(C_0^s := C_0^s({\mathbb {R}}^d) := C_0({\mathbb {R}}^d) \cap C_B^s({\mathbb {R}}^d)\) be a Banach space with the norm \(\Vert f \Vert _{C_0^s({\mathbb {R}}^d)} := \Vert f \Vert _{C_B^s({\mathbb {R}}^d)}\).

For a function f and a measure \(\mu \) on \({\mathbb {R}}^d\), the supports of f and \(\mu \) are denoted by \(\mathrm{supp}(f)\) and \(\mathrm{supp} (\mu )\), respectively. The restriction of f to a subset \(\varOmega \subset {\mathbb {R}}^d\) is denoted by \(f|_\varOmega \).

Let F and \(F^*\) be normed vector spaces with norms \(\Vert \cdot \Vert _F\) and \(\Vert \cdot \Vert _{F^*}\), respectively. Then, F and \(F^*\) are said to be norm-equivalent, if \(F= F^*\) as a set, and there exist constants \(C_1, C_2 > 0\) such that \(C_1 \Vert f \Vert _{F^*} \le \Vert f \Vert _{F} \le C_2 \Vert f \Vert _{F^*}\) for all \(f \in F\). For a Hilbert space \({{\mathcal {H}}}\) with inner product \(\langle \cdot ,\cdot \rangle _{{\mathcal {H}}}\), the norm of \(f\in {{\mathcal {H}}}\) is denoted by \(\Vert f\Vert _{{\mathcal {H}}}\).

2.2 Sobolev Spaces and Reproducing Kernel Hilbert Spaces

Here we briefly review key facts regarding Sobolev spaces necessary for stating and proving our contributions; for details, we refer to [1, 6, 59]. We first introduce reproducing kernel Hilbert spaces. For details, see, e.g., [58, Section 4] and [61, Section 10].

Let \(\varOmega \) be a set. A Hilbert space \({{\mathcal {H}}}\) of real-valued functions on \(\varOmega \) is a reproducing kernel Hilbert space (RKHS) if the evaluation functional \(f\mapsto f(x)\) is continuous for every \(x\in \varOmega \). Let \(\langle \cdot ,\cdot \rangle _{{\mathcal {H}}}\) be the inner product of \({{\mathcal {H}}}\). Then, for each \(x\in \varOmega \), there is a unique function \(k_x\in {{\mathcal {H}}}\) such that \(f(x)=\langle f,k_x\rangle _{{\mathcal {H}}}\) for all \(f\in {{\mathcal {H}}}\). The kernel defined by \(k(x,y):=k_x(y)\) is positive definite and is called the reproducing kernel of \({{\mathcal {H}}}\). It is known (Moore–Aronszajn theorem [2]) that for every positive definite kernel \(k: \varOmega \times \varOmega \rightarrow {\mathbb {R}}\) there exists a unique RKHS \({{\mathcal {H}}}\) with k as its reproducing kernel. Therefore, the notation \({{\mathcal {H}}}_k\) is used for the RKHS associated with k.

In the following, we will introduce two definitions of Sobolev spaces, i.e., (10) and (11), as both will be used throughout our analysis.

For a measurable set \(\varOmega \subset {\mathbb {R}}^d\) and \(r \in {\mathbb {N}}\), a Sobolev space \(W_2^r(\varOmega )\) of order r on \(\varOmega \) is defined by

$$\begin{aligned} W_2^r(\varOmega ) := \{ f \in L_2(\varOmega ): D^\alpha f \in L_2(\varOmega )\ \text {exists for all}\ \alpha \in {\mathbb {N}}_0^d\ \text {with}\ | \alpha | \le r \}, \end{aligned}$$
(10)

where \(D^\alpha f\) denotes the \(\alpha \)-th weak derivative of f. This is a Hilbert space with inner product

$$\begin{aligned} \langle f, g \rangle _{W_2^r(\varOmega )} = \sum _{ |\alpha | \le r} \langle D^\alpha f, D^\alpha g \rangle _{L_2(\varOmega )}, \quad f, g \in W_2^r(\varOmega ). \end{aligned}$$

For a positive real \(r > 0\), another definition of Sobolev space of order r on \({\mathbb {R}}^d\) is given by

$$\begin{aligned} H^r({\mathbb {R}}^d) := \left\{ f \in L_2({\mathbb {R}}^d): \int |{\hat{f}}(\xi )|^2 {\hat{\varPhi }}(\xi )^{-1} d\xi < \infty \right\} , \end{aligned}$$
(11)

where the function \({\hat{\varPhi }}: {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) is defined by

$$\begin{aligned} {\hat{\varPhi }}(\xi ) := (1 + \Vert \xi \Vert ^2 )^{-r}, \quad \xi \in {\mathbb {R}}^d. \end{aligned}$$

The inner product of \(H^r({\mathbb {R}}^d)\) is defined by

$$\begin{aligned} \langle f, g \rangle _{H^r({\mathbb {R}}^d)} := \int {\hat{f}}(\xi ) \overline{{\hat{g}}(\xi )}{\hat{\varPhi }}(\xi )^{-1} d\xi , \quad f,g \in H^r({\mathbb {R}}^d), \end{aligned}$$

where \(\overline{{\hat{g}}(\xi )}\) denotes the complex conjugate of \({\hat{g}} (\xi ) \).

For a measurable set \(\varOmega \) in \({\mathbb {R}}^d\), the (fractional order) Sobolev space \(H^r(\varOmega )\) is defined by the restriction of \(H^r({\mathbb {R}}^d)\); namely (see, e.g., [59, Eq. (1.8) and Definition 4.10])

$$\begin{aligned} H^r(\varOmega ) := \left\{ f: \varOmega \rightarrow {\mathbb {R}}: f = g|_\varOmega ,\ \exists \, g \in H^r({\mathbb {R}}^d) \right\} \end{aligned}$$

with its norm defined by

$$\begin{aligned} \Vert f \Vert _{H^r(\varOmega )} := \inf \left\{ \Vert g \Vert _{H^r({\mathbb {R}}^d)}: g \in H^r({\mathbb {R}}^d)\ \text {s.t.}\ f = g|_\varOmega \right\} . \end{aligned}$$

If \(r \in {\mathbb {N}}\) and \(\varOmega \) is an open set with Lipschitz boundary (see Definition 3), then \(H^r(\varOmega )\) is norm-equivalent to \(W_2^r(\varOmega )\) (see, e.g., [59, Eqs. (1.8), (4.20)]).

If \(r > d/2\), the Sobolev space \(H^r({\mathbb {R}}^d)\) is an RKHS [61, Section 10]. In fact, the condition \(r > d/2\) guarantees that the function \({\hat{\varPhi }}(\xi ) = (1 + \Vert \xi \Vert ^2)^{-r}\) is integrable, so that \({\hat{\varPhi }}\) has an (inverse) Fourier transform

$$\begin{aligned} \varPhi (x) = \frac{2^{1-r}}{\varGamma (r)} \Vert x \Vert ^{r-d/2} K_{r-d/2} (\Vert x \Vert ), \end{aligned}$$

where \(\varGamma \) denotes the gamma function and \(K_{r-d/2}\) is the modified Bessel function of the third kind of order \(r-d/2\). The function \(\varPhi \) is positive definite, and the kernel \(\varPhi (x-y)\) gives \(H^r({\mathbb {R}}^d)\) as an RKHS. This kernel \(\varPhi (x-y)\) is essentially a Matérn kernel [33, 34] with specific parameters. A Wendland kernel [60] also defines an RKHS that is norm-equivalent to \(H^r({\mathbb {R}}^d)\).
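For concreteness, the following Python sketch evaluates this \(\varPhi \) using SciPy's modified Bessel function \(K_\nu \) (scipy.special.kv); the value at \(x = y\) uses the limit \(t^\nu K_\nu (t) \rightarrow 2^{\nu -1}\varGamma (\nu )\) as \(t \rightarrow 0\), valid since \(\nu = r - d/2 > 0\). This is an illustrative sketch, not code from the paper:

```python
import numpy as np
from scipy.special import gamma, kv   # kv(nu, t): modified Bessel function K_nu

def sobolev_matern(x, y, r, d):
    """Phi(x - y) = 2^{1-r}/Gamma(r) * ||x - y||^{r-d/2} * K_{r-d/2}(||x - y||);
    its RKHS is norm-equivalent to H^r(R^d), assuming r > d/2."""
    nu = r - d / 2.0
    t = float(np.linalg.norm(np.asarray(x, float) - np.asarray(y, float)))
    coef = 2.0 ** (1.0 - r) / gamma(r)
    if t == 0.0:
        return coef * 2.0 ** (nu - 1.0) * gamma(nu)   # limit of t^nu K_nu(t) at t = 0
    return coef * t ** nu * kv(nu, t)
```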

2.3 Kernel-Based Quadrature Rules

We briefly review basic facts regarding kernel-based quadrature rules necessary to describe our results. For details, we refer to [9, 17].

Let \(\varOmega \subset {\mathbb {R}}^d\) be an open set, k be a measurable kernel on \(\varOmega \), and \({{\mathcal {H}}}_k(\varOmega )\) be the RKHS of k with inner product \(\langle \cdot , \cdot \rangle _{{{\mathcal {H}}}_k(\varOmega )}\). Suppose P is a Borel probability measure on \({\mathbb {R}}^d\) with its support contained in \(\varOmega \), and \(\{ (w_i, X_i) \}_{i=1}^n \in ({\mathbb {R}}\times \varOmega )^n\) are weighted points that serve as a quadrature rule. For an integrand f, define \(Pf := \int f(x)\mathrm{d}P(x)\) and \(P_nf := \sum _{i=1}^n w_i f(X_i)\), respectively, as the integral and its quadrature estimate as in (2). As mentioned in Sect. 1, a kernel quadrature rule aims at minimizing the worst-case error

$$\begin{aligned} e_n(P;{{\mathcal {H}}}_k(\varOmega )) := \sup _{f \in {{\mathcal {H}}}_k(\varOmega ): \Vert f \Vert _{{{\mathcal {H}}}_k(\varOmega )} \le 1} \left| Pf - P_n f \right| . \end{aligned}$$
(12)

Assume \(\int \sqrt{k(x,x)}\,\mathrm{d}P(x)<\infty \), and define \(m_P, m_{P_n} \in {{\mathcal {H}}}_k(\varOmega )\) by

$$\begin{aligned} m_P(y) := \int k(y,x)\mathrm{d}P(x), \quad m_{P_n}(y) := \sum _{i=1}^n w_i k(y,X_i), \quad y\in \varOmega , \end{aligned}$$
(13)

where the integral for \(m_P\) is understood as the Bochner integral. It is easy to see that, for all \(f\in {{\mathcal {H}}}_k(\varOmega )\),

$$\begin{aligned} P f = \langle f,m_P\rangle _{{{\mathcal {H}}}_k(\varOmega )}, \quad P_n f = \langle f,m_{P_n}\rangle _{{{\mathcal {H}}}_k(\varOmega )}. \end{aligned}$$

Worst-case error (12) can then be written as

$$\begin{aligned} e_n(P;{{\mathcal {H}}}_k(\varOmega )) = \Vert m_P - m_{P_n} \Vert _{{{\mathcal {H}}}_k(\varOmega )}, \end{aligned}$$
(14)

and for any \(f\in {{\mathcal {H}}}_k(\varOmega )\)

$$\begin{aligned} | P_n f - P f | \le \Vert f \Vert _{{{\mathcal {H}}}_k(\varOmega )} e_n(P;{{\mathcal {H}}}_k(\varOmega )). \end{aligned}$$
(15)

It follows from (14) that

$$\begin{aligned} e^2_n(P;{{\mathcal {H}}}_k(\varOmega )) = \int \int k(x,{\tilde{x}})\mathrm{d}P(x)\mathrm{d}P({\tilde{x}}) - 2 \sum _{i=1}^n w_i \int k(x,X_i)\mathrm{d}P(x) + \sum _{i=1}^n \sum _{j=1}^n w_i w_j k(X_i,X_j). \end{aligned}$$
(16)

The integrals in (16) are known in closed form for many pairs of k and P (see, e.g., Table 1 of [9]); for instance, this is the case if k is a Wendland kernel and P is the uniform distribution on a ball in \({\mathbb {R}}^d\). One can then explicitly use formula (16) to obtain weighted points \(\{ (w_i,X_i) \}\) that minimize worst-case error (12).
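To illustrate, the sketch below evaluates (16) for the Gaussian kernel \(k(x,y) = \exp (-(x-y)^2/2)\) and \(P = N(0,1)\) on \({\mathbb {R}}\), a pair chosen purely because its kernel integrals are available in closed form: \(\int k(y,x)\mathrm{d}P(x) = e^{-y^2/4}/\sqrt{2}\) and \(\int \int k(x,{\tilde{x}})\mathrm{d}P(x)\mathrm{d}P({\tilde{x}}) = 1/\sqrt{3}\). This kernel is not Sobolev-equivalent; the example only demonstrates the computation:

```python
import numpy as np

def gauss_kernel(x, y):
    return np.exp(-0.5 * (x - y) ** 2)

def gauss_kernel_mean(y):
    return np.exp(-0.25 * y ** 2) / np.sqrt(2.0)   # m_P(y) in closed form

PPK = 1.0 / np.sqrt(3.0)                           # double integral of k under P x P

def worst_case_error(w, X):
    """Worst-case error e_n via (16): sqrt(PPk - 2 w^T z + w^T K w)."""
    K = gauss_kernel(X[:, None], X[None, :])
    z = gauss_kernel_mean(X)
    return np.sqrt(max(PPK - 2.0 * w @ z + w @ K @ w, 0.0))
```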

2.4 Examples of Kernel-Based Quadrature Rules

Bayesian quadrature This is a class of kernel-based quadrature rules that has been studied extensively in the literature on statistics and machine learning [4, 7, 8, 9, 13, 22, 25, 27, 35, 46, 48, 49, 51]. In Bayesian quadrature, design points \(X_1,\dots ,X_n\) may be obtained jointly in a deterministic manner [9, 13, 35, 48, 51], sequentially (adaptively) [8, 25, 27, 49] or randomly [4, 7, 9, 22, 46]. For instance, [9] proposed to generate design points randomly as a Markov chain Monte Carlo sample, or deterministically by a quasi-Monte Carlo rule, specifically as a higher-order digital net [15].

With the design points fixed, the quadrature weights \(w_1,\dots ,w_n\) are obtained by minimizing worst-case error (16), which can be done analytically by solving a linear system of size n. To describe this, let \(X_1,\dots ,X_n\) be design points such that the kernel matrix \(K := (k(X_i,X_j))_{i,j=1}^n \in {\mathbb {R}}^{n \times n}\) is invertible. The weights are then given by

$$\begin{aligned} {\varvec{w}} := (w_1,\dots ,w_n)^T = K^{-1} {\varvec{z}} \in {\mathbb {R}}^n, \end{aligned}$$
(17)

where \({\varvec{z}} := ( m_P(X_i) )_{i=1}^n \in {\mathbb {R}}^n\), with \(m_P\) defined in (13).

This way of constructing the estimate \(P_n f\) is called Bayesian quadrature, since \(P_n f\) can be seen as a posterior estimate in a Bayesian inference problem in which f is modeled as a sample path of a Gaussian process (see, e.g., [27] and [9]).
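A minimal sketch of (17), reusing the illustrative Gaussian pair from Sect. 2.3; in practice one solves the linear system \(K{\varvec{w}} = {\varvec{z}}\) rather than forming \(K^{-1}\), and the tiny diagonal jitter below is a purely numerical safeguard added for this illustration, not part of (17):

```python
import numpy as np

def bq_weights(X, kernel, kernel_mean):
    K = kernel(X[:, None], X[None, :])    # kernel matrix (k(X_i, X_j))_{i,j}
    z = kernel_mean(X)                    # z_i = m_P(X_i)
    # jitter on the diagonal stabilizes the solve for nearly singular K
    return np.linalg.solve(K + 1e-10 * np.eye(len(X)), z)

# usage: X = np.linspace(-3.0, 3.0, 20); w = bq_weights(X, gauss_kernel, gauss_kernel_mean)
```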

Quasi-Monte Carlo Quasi-Monte Carlo (QMC) methods are equal-weight quadrature rules designed for the uniform distribution on a hypercube \([0,1]^d\) [17]. Modern QMC methods make use of RKHSs and the associated kernels to define and calculate the worst-case error in order to obtain good design points (e.g., [14, 18, 26, 54]). Therefore, such QMC methods are instances of kernel-based quadrature rules; see [42] and [17] for a review.

Kernel herding In the machine learning literature, an equal-weight quadrature rule called kernel herding [11] has been studied extensively [5, 27, 28, 32]. It is an algorithm that greedily searches for design points so as to minimize the worst-case error in an RKHS. In contrast to QMC methods, kernel herding may be used with an arbitrary distribution P on a generic measurable space, provided that the integral \(\int k(\cdot ,x)\mathrm{d}P(x)\) admits a closed-form expression for the reproducing kernel k. It has been shown that a fast rate \(O(n^{-1})\) is achievable for the worst-case error when the RKHS is finite-dimensional [11]. While empirical studies indicate that the fast rate would also hold in the case of an infinite-dimensional RKHS, a theoretical proof remains an open problem [5].
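One common form of the greedy herding step picks \(X_{t+1}\) maximizing \(m_P(x) - \frac{1}{t+1}\sum _{i=1}^t k(x, X_i)\). Below is a sketch with the maximization restricted to a finite candidate grid (the grid search is an illustrative simplification, again reusing the Gaussian pair from Sect. 2.3):

```python
import numpy as np

def kernel_herding(n, candidates, kernel, kernel_mean):
    X = []
    for t in range(n):
        score = kernel_mean(candidates)
        if X:
            Xa = np.asarray(X)
            # penalize closeness to points already chosen
            score = score - kernel(candidates[:, None], Xa[None, :]).sum(axis=1) / (t + 1)
        X.append(candidates[int(np.argmax(score))])   # greedy choice of next design point
    return np.asarray(X)
```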

3 Convergence Rates of Bayesian Quadrature

This section discusses the convergence rates of Bayesian quadrature in well-specified settings. It is shown that Bayesian quadrature can achieve the minimax optimal rates for deterministic quadrature rules in Sobolev spaces. The result also serves as a preliminary to Sect. 5, where misspecified cases are considered.

Let \(\varOmega \) be an open set in \({\mathbb {R}}^d\) and \(X^n := \{ X_1,\dots , X_n\}\subset \varOmega \). The main notion used to express the convergence rate is the fill distance \(h_{X^n,\varOmega }\) in (8), which plays a central role in the literature on scattered data approximation [61] and has been used in the theoretical analysis of Bayesian quadrature in [9, 44].
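Since the supremum over \(\varOmega \) in (8) is rarely available exactly, a sketch that approximates \(h_{X^n,\varOmega }\) by maximizing over a dense finite grid covering \(\varOmega \) (the grid discretization is an assumption of this illustration):

```python
import numpy as np

def fill_distance(X, omega_grid):
    """Approximate h_{X^n, Omega}; X has shape (n, d), omega_grid shape (m, d)."""
    d2 = ((omega_grid[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return float(np.sqrt(d2.min(axis=1).max()))   # max over grid of nearest-point distance
```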

It is necessary to introduce some conditions on \(\varOmega \). The first one is the interior cone condition [61, Definition 3.6], which is a regularity condition on the boundary of \(\varOmega \). A cone \(C(x, \xi (x), \theta , R)\) with vertex \(x \in {\mathbb {R}}^d\), direction \(\xi (x) \in {\mathbb {R}}^d\) (\(\Vert \xi (x) \Vert = 1\)), angle \(\theta \in (0,2\pi )\) and radius \(R > 0\) is defined by

$$\begin{aligned} C(x, \xi (x), \theta , R) := \{ x + \lambda y:\ y \in {\mathbb {R}}^d,\ \Vert y\Vert = 1,\ \langle y, \xi (x) \rangle \ge \cos \theta ,\ \lambda \in [0,R] \}. \end{aligned}$$

Definition 1

(Interior cone condition) A set \(\varOmega \subset {\mathbb {R}}^d\) is said to satisfy an interior cone condition if there exist an angle \(\theta \in (0,2\pi )\) and a radius \(R > 0\) such that every \(x \in \varOmega \) is associated with a unit vector \(\xi (x)\) so that the cone \(C(x, \xi (x), \theta , R)\) is contained in \(\varOmega \).

The interior cone condition requires that there is no ‘pinch point’ (i.e., a \(\prec \)-shape region) on the boundary of \(\varOmega \); see also [44].

Next, the notions of special Lipschitz domain [57, p.181] and Lipschitz boundary are defined as follows (see [57, p.189]; [6, Definition 1.4.4]).

Definition 2

(Special Lipschitz domain) For \(d \ge 2\), an open set \(\varOmega \subset {\mathbb {R}}^d\) is called a special Lipschitz domain, if there exists a rotation of \(\varOmega \), denoted by \({\tilde{\varOmega }}\), and a function \(\varphi : {\mathbb {R}}^{d-1} \rightarrow {\mathbb {R}}\) that satisfy the following:

  1. \({\tilde{\varOmega }} = \{ (x,y) \in {\mathbb {R}}^d: y > \varphi (x) \}\);

  2. \(\varphi \) is a Lipschitz function such that \(|\varphi (x) - \varphi (x') | \le M \Vert x - x' \Vert \) for all \(x,x' \in {\mathbb {R}}^{d-1}\), where \(M > 0\).

The smallest constant M for \(\varphi \) is called the Lipschitz bound of \(\varOmega \).

Definition 3

(Lipschitz boundary) Let \(\varOmega \subset {\mathbb {R}}^d\) be an open set and \(\partial \varOmega \) be its boundary. Then, \(\partial \varOmega \) is called a Lipschitz boundary, if there exist constants \(\varepsilon > 0\), \(N\in {\mathbb {N}}\), \(M > 0\), and open sets \(U_1,U_2,\dots , U_L \subset {\mathbb {R}}^d\), where \(L \in {\mathbb {N}} \cup \{ \infty \}\), such that the following conditions are satisfied:

  1. For any \(x \in \partial \varOmega \), there exists an index i such that \(B(x,\varepsilon ) \subset U_i\), where \(B(x,\varepsilon )\) is the ball centered at x with radius \(\varepsilon \);

  2. \(U_{i_1} \cap \cdots \cap U_{i_{N+1}} = \emptyset \) for any distinct indices \(\{ i_1,\dots ,i_{N+1} \}\);

  3. For each index i, there exists a special Lipschitz domain \(\varOmega _i \subset {\mathbb {R}}^d\) with Lipschitz bound b such that \(U_i \cap \varOmega = U_i \cap \varOmega _i\) and \(b\le M\).

Examples of a set \(\varOmega \) having a Lipschitz boundary include: (i) \(\varOmega \) is an open bounded set whose boundary \(\partial \varOmega \) is \(C^1\) embedded in \({\mathbb {R}}^d\); (ii) \(\varOmega \) is an open bounded convex set [57, p.189].

Proposition 1

Let \(\varOmega \subset {\mathbb {R}}^d\) be a bounded open set that satisfies an interior cone condition and whose boundary \(\partial \varOmega \) is Lipschitz, and let P be a probability distribution on \({\mathbb {R}}^d\) with a bounded density function p such that \(\mathrm{supp}(P) \subset \varOmega \). For \(r\in {\mathbb {R}}\) with \(\lfloor r \rfloor > d/2\), let \(k_r\) be a kernel on \({\mathbb {R}}^d\) that satisfies Assumption 1, and let \({{\mathcal {H}}}_{k_r}(\varOmega )\) be the RKHS of \(k_r\) restricted to \(\varOmega \). Suppose that \(X^n := \{X_1,\dots ,X_n\} \subset \varOmega \) are finite points such that the kernel matrix \(K := (k_r(X_i,X_j))_{i,j=1}^n \in {\mathbb {R}}^{n \times n}\) is invertible, and that \(w_1,\dots ,w_n\) are the quadrature weights given by (17). Then, there exist constants \(C > 0\) and \(h_0 > 0\), independent of \(X^n\), such that

$$\begin{aligned} e_n(P; {{\mathcal {H}}}_{k_r}(\varOmega )) \le C h_{X^n,\varOmega }^r, \end{aligned}$$

provided that \(h_{X^n,\varOmega } \le h_0\), where \(e_n(P; {{\mathcal {H}}}_{k_r}(\varOmega ))\) is the worst-case error for the quadrature rule \(\{ (w_i,X_i) \}_{i=1}^n\).

Proof

The proof idea is borrowed from [9, Theorem 1]. Let \(f \in {{\mathcal {H}}}_{k_r}(\varOmega )\) be arbitrary and fixed. Define a function \(f_n \in {{\mathcal {H}}}_{k_r}(\varOmega )\) by

$$\begin{aligned} f_n:= \sum _{i=1}^n \alpha _i k_r(\cdot ,X_i) \end{aligned}$$

where \({\varvec{\alpha }}:= (\alpha _1,\dots ,\alpha _n)^T = K^{-1} {\varvec{f}}\in {\mathbb {R}}^n\) and \({\varvec{f}}:= (f(X_1),\dots ,f(X_n))^T \in {\mathbb {R}}^n\). This function is an interpolant of f on \(X^n\) in the sense that \(f_n(X_i) = f(X_i)\) for all \(X_i \in X^n\).

It follows from the norm equivalence that \(f \in H^r(\varOmega )\) and

$$\begin{aligned} \Vert f \Vert _{H^r(\varOmega )} \le C_1 \Vert f \Vert _{{{\mathcal {H}}}_{k_r}(\varOmega )}, \end{aligned}$$
(18)

where \(C_1 > 0\) is a constant.

We see that \(\sum _{i=1}^n w_i f(X_i) = \int f_n(x) \mathrm{d}P(x)\). In fact, recalling that the weights \({\varvec{w}} := (w_1,\dots ,w_n)^T\) are defined as \({\varvec{w}} = K^{-1} {\varvec{z}}\), where \({\varvec{z}} := (z_1,\dots ,z_n)^T\) with \(z_i := \int k_r(x,X_i)\mathrm{d}P(x)\), it follows that

$$\begin{aligned} \sum _{i=1}^n w_i f(X_i) = {\varvec{w}}^T {\varvec{f}}= {\varvec{z}}^T K^{-1} {\varvec{f}}= {\varvec{z}}^T {\varvec{\alpha }}= \sum _{i=1}^n \alpha _i \int k_r(x,X_i)\mathrm{d}P(x) = \int f_n(x) \mathrm{d}P(x). \end{aligned}$$

Using this identity, we have

$$\begin{aligned} \left| \int f(x)\mathrm{d}P(x) - \sum _{i=1}^n w_i f(X_i) \right| &= \left| \int f(x)\mathrm{d}P(x) - \int f_n(x)\mathrm{d}P(x) \right| \nonumber \\ &\le \Vert f - f_n \Vert _{L_1(\varOmega )} \Vert p \Vert _{L_\infty (\varOmega )} \nonumber \\ &\le C_0 \Vert f \Vert _{H^r(\varOmega )} h_{X^n,\varOmega }^r \Vert p \Vert _{L_\infty (\varOmega )} \end{aligned}$$
(19)
$$\begin{aligned} &\le C_0 C_1 \Vert f \Vert _{{{\mathcal {H}}}_{k_r}(\varOmega )} h_{X^n,\varOmega }^r \Vert p \Vert _{L_\infty (\varOmega )}, \end{aligned}$$
(20)

where (19) follows from Theorem 11.32 and Corollary 11.33 in [61] (where we set \(m := 0\), \(p := 2\), \(q := 1\), \(k := \lfloor r \rfloor \) and \(s := r - \lfloor r \rfloor \)), and (20) from (18). Note that the constant \(C_0\) depends only on r, d and the constants in the interior cone condition (which follows from the fact that Theorem 11.32 in [61] is derived from Proposition 11.30 in [61]). Setting \(C := C_0 C_1 \Vert p \Vert _{L_\infty (\varOmega )}\) completes the proof. \(\square \)

Remark 1

  • Typically, the fill distance \(h_{X^n,\varOmega }\) decreases to 0 as the number n of design points increases. Therefore, the upper bound \(C h_{X^n,\varOmega }^r\) yields a faster rate of convergence for \(e_n(P; {{\mathcal {H}}}_{k_r}(\varOmega ))\) for a larger degree r of smoothness.

  • The condition \(h_{X^n,\varOmega } \le h_0\) requires that the design points \(X^n = \{ X_1,\dots ,X_n \}\) cover the set \(\varOmega \) to a certain extent in order for the error bound to hold. This requirement arises because we have used a result from the scattered data approximation literature [61, Corollary 11.33] to derive inequality (19) in our proof. In that literature, such a condition is necessary, and we refer the interested reader to Section 11 of [61] and references therein.

  • The constant \(h_0 > 0\) depends only on the constants \(\theta \) and R in the interior cone condition (Definition 1). The explicit form is \(h_0 := Q(\lfloor r \rfloor , \theta ) R\), where \(Q(\lfloor r \rfloor ,\theta ) := \frac{ \sin \theta \sin \psi }{8 \lfloor r \rfloor ^2 (1 + \sin \theta ) (1 + \sin \psi ) }\) with \(\psi := 2 \arcsin \frac{\sin \theta }{4(1+\sin \theta )}\) [61, p.199].

The following is an immediate corollary to Proposition 1.

Corollary 1

Assume that \(\varOmega \), P and r satisfy the conditions in Proposition 1. Suppose that \(X^n := \{ X_1, \dots , X_n \} \subset \varOmega \) are finite points such that \(K := (k_r(X_i,X_j))_{i,j=1}^n \in {\mathbb {R}}^{n \times n}\) is invertible and \(h_{X^n, \varOmega } = O(n^{- \alpha })\) for some \(0 < \alpha \le 1/d\) as \(n \rightarrow \infty \), and that \(w_1,\dots ,w_n\) are the quadrature weights given by (17) based on \(X^n\). Then, we have

$$\begin{aligned} e_n(P;{{\mathcal {H}}}_{k_r}(\varOmega )) = O(n^{- \alpha r}) \quad (n \rightarrow \infty ), \end{aligned}$$
(21)

where \(e_n(P; {{\mathcal {H}}}_{k_r}(\varOmega ))\) is the worst-case error of the quadrature rule \(\{ (w_i,X_i) \}_{i=1}^n\).

Remark 2

  • Result (21) implies that the same rate is attainable for the Sobolev space \(H^r(\varOmega )\) (instead of \({{\mathcal {H}}}_{k_r}(\varOmega )\)):

    $$\begin{aligned} e_n(P;H^r(\varOmega )) = O(n^{- \alpha r}) \quad (n \rightarrow \infty ) \end{aligned}$$
    (22)

    with (the sequence of) the same weighted points \(\{ (w_i,X_i) \}_{i=1}^\infty \). This follows from the norm equivalence between \({{\mathcal {H}}}_{k_r}(\varOmega )\) and \(H^r(\varOmega )\).

  • If the fill distance satisfies \(h_{X^n,\varOmega } = O(n^{-1/d})\) as \(n \rightarrow \infty \), then \(e_n(P; H^r(\varOmega )) = O(n^{- r/d})\). This rate is minimax optimal for deterministic quadrature rules in the Sobolev space \(H^r(\varOmega )\) on a hypercube [40, Proposition 1 in Section 1.3.12]. Corollary 1 thus shows that Bayesian quadrature achieves the minimax optimal rate in this setting.

  • The decay rate for the fill distance \(h_{X^n,\varOmega } = O(n^{-1/d})\) holds when, for example, the design points \(X^n = \{ X_1,\dots ,X_n \}\) are equally spaced grid points in \(\varOmega \). Note that this rate cannot be improved: If the fill distance decreased at a rate faster than \(O(n^{-1/d})\), then \(e_n(P; H^r(\varOmega ))\) would decrease more quickly than the minimax optimal rate, which is a contradiction.
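These observations can be checked numerically. The sketch below combines the illustrative pieces from Sects. 2.3 and 2.4 (the Gaussian kernel/measure pair, bq_weights and worst_case_error): equally spaced points give \(h_{X^n,\varOmega } = O(n^{-1/d})\) for \(d = 1\), and the worst-case error of the resulting Bayesian quadrature rule decays rapidly. Note that the Gaussian kernel is smoother than any Sobolev kernel, so the example illustrates the machinery rather than the exact \(O(n^{-r/d})\) rate:

```python
import numpy as np

for n in [5, 10, 20]:
    X = np.linspace(-4.0, 4.0, n)                        # equally spaced design points
    w = bq_weights(X, gauss_kernel, gauss_kernel_mean)   # weights from (17)
    print(n, worst_case_error(w, X))                     # worst-case error shrinks with n
```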

4 Main Results

This section presents the main results on misspecified settings. Two results based on different assumptions are discussed: one on the quadrature weights in Sect. 4.1 and the other on the design points in Sect. 4.2. The approximation theory for Sobolev spaces developed by [37] is employed in the results.

4.1 Convergence Rates Under an Assumption on Quadrature Weights

Theorem 1

Let \(\varOmega \subset {\mathbb {R}}^d\) be an open set whose boundary is Lipschitz, P be a probability distribution on \({\mathbb {R}}^d\) with \(\mathrm{supp}(P) \subset \varOmega \), r be a real number with \(r>d/2\), and s be a natural number with \(s \le r\). Let \(k_r\) denote a kernel on \({\mathbb {R}}^d\) satisfying Assumption 1, and \({{\mathcal {H}}}_{k_r}(\varOmega )\) the RKHS of \(k_r\) restricted to \(\varOmega \). Then, for any \(\{ (w_i,X_i) \}_{i=1}^n \in ({\mathbb {R}}\times \varOmega )^n\), \(f \in C_B^s (\varOmega ) \cap H^s (\varOmega ) \cap L_1(\varOmega )\), and \(\sigma > 0\), we have

$$\begin{aligned} | P_n f - P f | \le c_1 \left( \sum _{i=1}^n |w_i| + 1 \right) \sigma ^{-s} \Vert f \Vert _{C_B^s(\varOmega )} + c_2 (1+\sigma ^2)^{\frac{r-s}{2}} e_n(P;{{\mathcal {H}}}_{k_r}(\varOmega )) \Vert f \Vert _{H^s(\varOmega )}, \end{aligned}$$
(23)

where \(c_1, c_2 > 0\) are constants independent of \(\{ (w_i,X_i) \}_{i=1}^n\), f and \(\sigma \).

Proof

We first derive some inequalities used for proving the assertion. It follows from norm equivalence that \(f \in W_2^s(\varOmega )\), where \(W_2^s(\varOmega )\) is the Sobolev space defined via weak derivatives. Since \(\varOmega \) has a Lipschitz boundary, Stein’s extension theorem [57, p.181] guarantees that there exists a bounded linear extension operator \({\mathfrak {E}}: W_2^s(\varOmega ) \rightarrow W_2^s({\mathbb {R}}^d)\) such that

$$\begin{aligned} {\mathfrak {E}}(f) (x) = f(x), \quad \forall x \in \varOmega , \end{aligned}$$
(24)
$$\begin{aligned} \Vert {\mathfrak {E}}(f) \Vert _{W_2^s({\mathbb {R}}^d)} \le C_1 \Vert f \Vert _{W_2^s(\varOmega )}, \end{aligned}$$
(25)

where \(C_1\) is a constant independent of the choice of f. From the norm equivalence and (25), there is a constant \(C_2\) such that

$$\begin{aligned} \Vert {\mathfrak {E}}f \Vert _{H^s({\mathbb {R}}^d)} \le C_2 \Vert f \Vert _{H^s(\varOmega )}. \end{aligned}$$
(26)

Since \(f \in L_1(\varOmega )\), the extension also satisfies \({\mathfrak {E}}(f) \in L_1({\mathbb {R}}^d)\) [57, p.181]. In addition, by the construction of \({\mathfrak {E}}\) [57, Eqs. (24), (31) on p.191], one can show [38, Section 3.2.2] that \({\mathfrak {E}}\) is also a bounded linear operator from \(C_B^s(\varOmega )\) to \(C_0^s({\mathbb {R}}^d)\), that is,

$$\begin{aligned} \Vert {\mathfrak {E}}f \Vert _{C_0^s({\mathbb {R}}^d)} \le C_3 \Vert f \Vert _{C_B^s(\varOmega )}, \end{aligned}$$
(27)

for some constant \(C_3>0\). Below we write \({{\tilde{f}}}:= {\mathfrak {E}}(f)\) for notational simplicity.

Let \(g_{\sigma } \in H^r ({\mathbb {R}}^d)\) be the approximation of \({{\tilde{f}}}\) defined in (B.10) via Calderón’s formula (“Appendix B.2”; we set \(f := {{\tilde{f}}}\)). The property \({{\tilde{f}}}\in C_0^s({\mathbb {R}}^d) \cap H^s({\mathbb {R}}^d) \cap L_1({\mathbb {R}}^d)\) enables the use of Proposition 3.7 of [37] (where we set \(k := s\) and \(\alpha := 0\); see Proposition A.1 in “Appendix A” for a review), which, in combination with (27), gives

$$\begin{aligned} \Vert {{\tilde{f}}}- g_{\sigma } \Vert _{L_\infty ({\mathbb {R}}^d)} \le C \sigma ^{-s} \Vert {{\tilde{f}}}\Vert _{C_0^s ({\mathbb {R}}^d)} \le C_4 \sigma ^{-s} \Vert f \Vert _{C_B^s(\varOmega )}, \end{aligned}$$
(28)

for some constant \(C_4>0\) which is independent of f.

Since \({{\tilde{f}}}\in C_0^s({\mathbb {R}}^d) \cap H^s({\mathbb {R}}^d) \cap L_1({\mathbb {R}}^d)\), Lemma B.6 in “Appendix B.2” can be applied, which together with (26) yields

$$\begin{aligned} \Vert g_{\sigma } \Vert _{H^r ({\mathbb {R}}^d)} \le C_5' (1+\sigma ^2)^{\frac{r-s}{2}} \Vert {{\tilde{f}}}\Vert _{H^s ({\mathbb {R}}^d)}\le C_5 (1+\sigma ^2)^{\frac{r-s}{2}} \Vert f \Vert _{H^s (\varOmega )} \end{aligned}$$
(29)

for some constants \(C_5\) and \(C_5'\), which are independent of \(\sigma \) and \({{\tilde{f}}}\).

With the decomposition

$$\begin{aligned} | P_n f - Pf | \le \underbrace{| P_n f - P_n g_{\sigma } |}_{(A)} + \underbrace{| P_n g_{\sigma } - P g_{\sigma }|}_{(B)} + \underbrace{| P g_{\sigma } - P f |}_{(C)}, \end{aligned}$$

each of the terms (A), (B) and (C) will be bounded in the following.

First, the term (A) is bounded as

$$\begin{aligned} (A) &\le \sum _{i=1}^n | w_i | \left| f(X_i) - g_{\sigma }(X_i) \right| = \sum _{i=1}^n |w_i| \left| {{\tilde{f}}}(X_i) - g_{\sigma }(X_i) \right| \quad (\because \{ X_i \}_{i=1}^n \subset \varOmega \ \text {and}\ (24) )\\ &\le \left( \sum _{i=1}^n |w_i| \right) \Vert {{\tilde{f}}}- g_{\sigma } \Vert _{L_\infty ({\mathbb {R}}^d)} {\mathop {\le }\limits ^{(28)}} C_4 \left( \sum _{i=1}^n |w_i| \right) \sigma ^{-s} \Vert f \Vert _{C_B^s(\varOmega )}. \end{aligned}$$

For the term (B), it follows from the norm equivalence and restriction that for some constant D

$$\begin{aligned} \Vert g_{\sigma }|_\varOmega \Vert _{{{\mathcal {H}}}_{k_r}(\varOmega )} \le D \Vert g_{\sigma } \Vert _{H^r({\mathbb {R}}^d)}. \end{aligned}$$
(30)

This inequality and (29) give

$$\begin{aligned} (B) &\le \left\| g_{\sigma } |_{\varOmega } \right\| _{ {{\mathcal {H}}}_{k_r} (\varOmega ) } \left\| m_{P_n} - m_P \right\| _{ {{\mathcal {H}}}_{k_r} (\varOmega ) } \le D \Vert g_{\sigma } \Vert _{H^r({\mathbb {R}}^d)} e_n(P;{{\mathcal {H}}}_{k_r} (\varOmega ) ) \\ &\le D C_5 (1+\sigma ^2)^{\frac{r-s}{2}} e_n(P;{{\mathcal {H}}}_{k_r} (\varOmega ) ) \Vert f \Vert _{H^s (\varOmega )}. \end{aligned}$$

Finally, the term (C) is bounded as

$$\begin{aligned} (C) \le \int \left| g_{\sigma }(x) - {{\tilde{f}}}(x) \right| \mathrm{d}P(x) \le \Vert g_{\sigma } - {{\tilde{f}}}\Vert _{L_\infty ({\mathbb {R}}^d)} {\mathop {\le }\limits ^{(28)}} C_4 \sigma ^{-s} \Vert f \Vert _{C_B^s(\varOmega )}. \end{aligned}$$

Combining these three bounds, the assertion is obtained. \(\square \)

Remark 3

  • The integrand f is assumed to satisfy \(f \in H^s(\varOmega ) \cap C_B^s(\varOmega ) \cap L_1(\varOmega )\), which is slightly stronger than just assuming \(f \in H^s(\varOmega )\).

  • In upper bound (23), the parameter \(\sigma > 0\) controls the trade-off between the two terms: \(c_2 (1+\sigma ^2)^{\frac{r-s}{2}} e_n(P;{{\mathcal {H}}}_{k_r}(\varOmega )) \Vert f \Vert _{H^s(\varOmega )}\) and \(c_1 \left( \sum _{i=1}^n |w_i| + 1 \right) \sigma ^{-s} \Vert f \Vert _{C_B^s(\varOmega )}\). In the proof, the integrand f is approximated by a band-limited function \(g_\sigma \in H^r({\mathbb {R}}^d)\), where \(\sigma \) is the highest frequency that \(g_\sigma \) possesses. Thus, the trade-off in the upper bound corresponds to the trade-off between the accuracy of the approximation of f by \(g_\sigma \) and the penalty incurred on the regularity of \(g_\sigma \).

The following result, which is a corollary of Theorem 1, provides a rate of convergence for the quadrature error in a misspecified setting. It is derived by assuming certain rates for the quantity \(\sum _{i=1}^n | w_i |\) and the worst-case error \(e_n(P;{{\mathcal {H}}}_{k_r})\).

Corollary 2

Let \(\varOmega \), P, r, s, \(k_r\), and \({{\mathcal {H}}}_{k_r}(\varOmega )\) be the same as in Theorem 1. Suppose that \(\{ (w_i,X_i) \}_{i=1}^n\in ({\mathbb {R}}\times \varOmega )^n\) satisfies \(e_n(P;{{\mathcal {H}}}_{k_r}(\varOmega )) = O(n^{-b})\) and \(\sum _{i=1}^n | w_i | = O(n^c)\) for some \(b > 0\) and \(c \ge 0\), respectively, as \(n \rightarrow \infty \). Then, for any \(f \in C_B^s (\varOmega ) \cap H^s (\varOmega ) \cap L_1(\varOmega )\), we have

$$\begin{aligned} | P_n f - P f | = O( n^{ - bs/r + c (r-s)/r } ) \quad (n \rightarrow \infty ). \end{aligned}$$
(31)

Proof

Let \(\sigma _n := n^\theta > 0\), where \(\theta > 0\) will be determined later. Plugging \(e_n(P;{{\mathcal {H}}}_{k_r}(\varOmega )) = O(n^{-b})\) and \(\sum _{i=1}^n | w_i | = O(n^c)\) into (23) with \(\sigma := \sigma _n\) leads to

$$\begin{aligned} | P_n f - P f| = O( n^{c - \theta s} ) + O( n^{ \theta (r-s) - b } ). \end{aligned}$$

Setting \(\theta = (b+c)/r\) balances the two terms in the right-hand side: solving \(c - \theta s = \theta (r-s) - b\) gives \(\theta = (b+c)/r\), and the common exponent is then \(-bs/r + c(r-s)/r\). This completes the proof. \(\square \)

Remark 4

  • The exponent of the rate in (31) consists of two terms: \(-bs/r\) and \(c(r-s)/r\). The first term \(-bs/r\) corresponds to the original rate \(O(n^{-b})\) degraded by the smoothness ratio s/r, while the second term \(c(r-s)/r\) makes the rate slower. The effect of the second term increases as the constant c or the misspecification gap \(r-s\) becomes larger.

  • The obtained rate recovers \(O(n^{-b})\) for \(r=s\) (well-specified case) regardless of the value of c.

  • Consider the misspecified case \(r>s\). If \(c>0\), the term \(c(r-s)/r\) always makes the rate slower. It is thus better to have \(c=0\), as in this case we have the rate \(O(n^{-bs/r})\) in the misspecified setting. The weights with \(\max _{i=1,\dots ,n} |w_i| = O(n^{-1})\), such as equal weights \(w_i=1/n\), realize \(c=0\).

  • As mentioned earlier, the minimax optimal rate for the worst-case error in the Sobolev space \(H^r(\varOmega )\) with \(\varOmega \) being a cube in \({\mathbb {R}}^d\) and P being the Lebesgue measure on \(\varOmega \) is \(O(n^{-r/d})\) [40, Proposition 1 in Section 1.3.12]. If design points satisfy \(b = r/d\) and \(c = 0\) in this setting, Corollary 2 provides the rate \(O(n^{-s/d})\) for \(f \in H^s(\varOmega ) \cap C_B^s(\varOmega ) \cap L_1(\varOmega )\). This rate is the same as the minimax optimal rate for \(H^s(\varOmega )\) and hence implies some adaptivity to the order of differentiability.

  • The assumption \(\sum _{i=1}^n |w_i| = O(n^c)\) can also be interpreted from a probabilistic viewpoint. Assume that the observations involve noise, \(Y_i := f(X_i) + \varepsilon _i\ (i=1,\dots ,n)\), where the \(\varepsilon _i\) are independent noise variables with \({{\mathbb {E}}}[ \varepsilon _i^2] = \sigma _\mathrm{noise}^2\) (\(\sigma _{\mathrm{noise}} > 0\) is a constant) for \(i=1,\dots ,n\), and that the \(Y_i\) are used for numerical integration. The expected squared error is decomposed as

    $$\begin{aligned} {{\mathbb {E}}}_{\varepsilon _1,\dots ,\varepsilon _n} \left[ \left( \sum _{i=1}^n w_i Y_i - Pf \right) ^2 \right] &= {{\mathbb {E}}}_{\varepsilon _1,\dots ,\varepsilon _n} \left[ \left( P_n f - Pf + \sum _{i=1}^n w_i \varepsilon _i \right) ^2 \right] \\ &= \left| P_n f - Pf \right| ^2 + \sigma _{\mathrm{noise}}^2 \sum _{i=1}^n w_i^2. \end{aligned}$$

    In the last expression, the first term \(\left| P_n f - Pf \right| ^2\) is the squared error in the noiseless case, and the second term \(\sigma _{\mathrm{noise}}^2 \sum _{i=1}^n w_i^2 \) is the error due to noise. Since \(\sum _{i=1}^n w_i^2 \le ( \sum _{i=1}^n |w_i| )^2 = O(n^{2c})\), the error in the second term may be larger as c increases. Hence, quadrature weights having smaller c are preferable in terms of robustness to the existence of noise; this in turn makes the quadrature rule more robust to the misspecification of the degree of smoothness.

Theorem 1 and Corollary 2 require a control on the absolute sum of the quadrature weights \(\sum _{i=1}^n |w_i|\). This is possible with, for instance, equal-weight quadrature rules that seek good design points. However, controlling \(\sum _{i=1}^n |w_i|\) can be difficult for quadrature rules that obtain the weights by optimization based on prefixed design points. This includes the case of Bayesian quadrature, which optimizes the weights without any constraint. To deal with such methods, in the next section we develop theoretical guarantees that do not rely on an assumption on the quadrature weights, but rather on a certain assumption on the design points.

4.2 Convergence Rates Under an Assumption on Design Points

This subsection provides convergence guarantees in a misspecified setting under an assumption on the design points. The assumption is described in terms of the separation radius (6), which is half the minimum distance between distinct design points. The separation radius of points \(X^n := \{ X_1,\dots , X_n \} \subset {\mathbb {R}}^d\) is denoted by \(q_{X^n}\). Note that if \(X^n\subset \varOmega \) for some \(\varOmega \), then the separation radius lower bounds the fill distance, i.e., \(q_{X^n} \le h_{X^n,\varOmega }\).
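For completeness, a direct computation of (6) as a sketch (pairwise distances suffice for moderate n):

```python
import numpy as np

def separation_radius(X):
    """q_{X^n} = (1/2) min_{i != j} ||X_i - X_j||, with X of shape (n, d)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)        # exclude the diagonal i == j
    return 0.5 * float(np.sqrt(d2.min()))
```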

Henceforth, we will consider a bounded domain \(\varOmega \), and without loss of generality, we assume that it satisfies \(\mathrm{diam}(\varOmega ) \le 1\).

Theorem 2

Let \(\varOmega \subset {\mathbb {R}}^d\) be an open bounded set with \(\mathrm{diam}(\varOmega ) \le 1\) such that the boundary is Lipschitz, P be a probability distribution on \({\mathbb {R}}^d\) such that \(\mathrm{supp}(P) \subset \varOmega \), r be a real number with \(r > d/2\), and s be a natural number with \(s \le r\). Let \(k_r\) denote a kernel on \({\mathbb {R}}^d\) satisfying Assumption 1, and \({{\mathcal {H}}}_{k_r}(\varOmega )\) the RKHS of \(k_r\) restricted to \(\varOmega \). For any \(\{ (w_i,X_i) \}_{i=1}^n \in ({\mathbb {R}}\times \varOmega )^n\) and \(f \in C_B^s (\varOmega ) \cap H^s (\varOmega )\), we have

$$\begin{aligned} \left| P_n f - Pf \right| \le C \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) \left( q_{X^n}^{- (r-s) } e_n(P; {{\mathcal {H}}}_{k_r}(\varOmega )) + q_{X^n}^{s} \right) , \end{aligned}$$
(32)

where \(C > 0\) is a constant depending neither on \(\{ (w_i,X_i) \}_{i=1}^n\) nor on the choice of f and \(e_n(P; {{\mathcal {H}}}_{k_r}(\varOmega ))\) is the worst-case error in \({{\mathcal {H}}}_{k_r}(\varOmega )\) for \(\{ (w_i, X_i) \}_{i=1}^n\).

Proof

By the same argument as in the first part of the proof of Theorem 1, there exists an extension of f to \({{\tilde{f}}}\in W_2^s({\mathbb {R}}^d)\cap C_0^s({\mathbb {R}}^d)\) such that

$$\begin{aligned} {{\tilde{f}}}(x)&= f(x), \quad \forall x \in \varOmega , \end{aligned}$$
(33)
$$\begin{aligned} \Vert {{\tilde{f}}}\Vert _{H^s({\mathbb {R}}^d)}&\le C_1 \Vert f \Vert _{H^s(\varOmega )}, \end{aligned}$$
(34)
$$\begin{aligned} \Vert {{\tilde{f}}}\Vert _{C_0^s({\mathbb {R}}^d)}&\le C_2 \Vert f \Vert _{C_B^s(\varOmega )}, \end{aligned}$$
(35)

for some positive constants \(C_i\) (\(i=1,2\)). Note also that \(f\in L_1(\varOmega )\), since \(f\in C^s_B(\varOmega )\) and \(\varOmega \) is bounded. This implies \({{\tilde{f}}}\in L_1({\mathbb {R}}^d)\) [57, p. 181].

From the above inequalities, there is a constant \(C_3>0\) independent of the choice of f such that

$$\begin{aligned} \max \left( \Vert {{\tilde{f}}}\Vert _{C_0^s({\mathbb {R}}^d)}, \Vert {{\tilde{f}}}\Vert _{H^s({\mathbb {R}}^d)} \right) \le C_3 \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) . \end{aligned}$$
(36)

For notational simplicity, write

$$\begin{aligned} \sigma _n := \frac{C_d}{q_{X^n}} \end{aligned}$$
(37)

where \(C_d := 24(\frac{\sqrt{\pi }}{3} \varGamma (\frac{d+2}{2}))^{\frac{2}{d+1}}\) with \(\varGamma \) being the Gamma function. From Theorems A.1 and A.2 in “Appendix A” (which are restatements of Theorems 3.5 and 3.10 of [37]), there exists a function \({{\tilde{f}}}_{\sigma _n} \in H^r ({\mathbb {R}}^d)\) such that

$$\begin{aligned} {{\tilde{f}}}_{\sigma _n}(X_i)&= {{\tilde{f}}}(X_i),\quad (i=1,\dots ,n), \end{aligned}$$
(38)
$$\begin{aligned} \Vert {{\tilde{f}}}- {{\tilde{f}}}_{\sigma _n} \Vert _{L_\infty ({\mathbb {R}}^d)}&\le C_{s,d} \sigma _n^{-s} \max ( \Vert {{\tilde{f}}}\Vert _{C_0^s({\mathbb {R}}^d)}, \Vert {{\tilde{f}}}\Vert _{H^s({\mathbb {R}}^d)}), \end{aligned}$$
(39)

where \(C_{s,d}\) is a constant depending only on s and d. Combining (39) with (36) yields

$$\begin{aligned} \Vert {{\tilde{f}}}- {{\tilde{f}}}_{\sigma _n} \Vert _{L_\infty ({\mathbb {R}}^d)} \le C_4 \sigma _n^{-s} \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) , \end{aligned}$$
(40)

where \(C_4 := C_{s,d} C_3\).

From Assumption 1 and \({{\tilde{f}}}\in C_B^s({\mathbb {R}}^d) \cap H^s({\mathbb {R}}^d) \cap L_1({\mathbb {R}}^d)\), Lemma A.1 (see “Appendix A”) gives

$$\begin{aligned} \Vert {{\tilde{f}}}_{\sigma _n} \Vert _{ {{\mathcal {H}}}_{k_r} ({\mathbb {R}}^d) } \le C_{s,d,k_r} \sigma _n^{r-s} \max ( \Vert {{\tilde{f}}}\Vert _{C_0^s({\mathbb {R}}^d)}, \Vert {{\tilde{f}}}\Vert _{H^s({\mathbb {R}}^d)}), \end{aligned}$$

where \(C_{s,d,k_r}\) is a constant only depending on r, s, d and \(k_r\). It follows from this inequality and (36) that

$$\begin{aligned} \Vert {{\tilde{f}}}_{\sigma _n} \Vert _{ {{\mathcal {H}}}_{k_r} ({\mathbb {R}}^d) } \le C_5 \sigma _n^{r-s} \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) , \end{aligned}$$
(41)

where \(C_5 := C_{s,d,k_r} C_3\).

We are now ready to prove the assertion. In the decomposition

$$\begin{aligned} | P_n f - P f | = | P_n {{\tilde{f}}}- P {{\tilde{f}}}| \le \underbrace{| P_n {{\tilde{f}}}- P_n {{\tilde{f}}}_{\sigma _n} |}_{(A)} + \underbrace{| P_n {{\tilde{f}}}_{\sigma _n} - P {{\tilde{f}}}_{\sigma _n}|}_{(B)} + \underbrace{| P {{\tilde{f}}}_{\sigma _n} - P {{\tilde{f}}}|}_{(C)}, \end{aligned}$$

the term (A) is zero from (38).

Since \(\Vert {{\tilde{f}}}_{\sigma _n} |_\varOmega \Vert _{{{\mathcal {H}}}_{k_r}(\varOmega )} \le \Vert {{\tilde{f}}}_{\sigma _n} \Vert _{{{\mathcal {H}}}_{k_r}({\mathbb {R}}^d)}\) [2, Section 5], the term (B) can be bounded as

$$\begin{aligned} (B)&= \left| \sum _{i=1}^n w_i {{\tilde{f}}}_{\sigma _n}|_\varOmega (X_i) - \int {{\tilde{f}}}_{\sigma _n} |_\varOmega (x) \mathrm{d}P(x) \right| \\&\le \left| \left\langle {{\tilde{f}}}_{\sigma _n} |_{\varOmega }, m_{P_n} - m_P \right\rangle _{{{\mathcal {H}}}_{k_r} (\varOmega )} \right| \quad (\because {{\tilde{f}}}_{\sigma _n} |_\varOmega \in {{\mathcal {H}}}_{k_r} (\varOmega )) \\&\le \left\| {{\tilde{f}}}_{\sigma _n} |_\varOmega \right\| _{{{\mathcal {H}}}_{k_r} (\varOmega )} e_n(P; {{\mathcal {H}}}_{k_r} (\varOmega ) ) \\&\le \left\| {{\tilde{f}}}_{\sigma _n} \right\| _{ {{\mathcal {H}}}_{k_r} ({\mathbb {R}}^d)} e_n(P; {{\mathcal {H}}}_{k_r} (\varOmega ) ) \\&{\mathop {\le }\limits ^{(41)}} C_5 \sigma _n^{r-s} \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) e_n(P; {{\mathcal {H}}}_{k_r} (\varOmega ) ). \end{aligned}$$

The term (C) is upper bounded as

$$\begin{aligned} (C) \le \Vert {{\tilde{f}}}_{\sigma _n} - {{\tilde{f}}}\Vert _{L_\infty ({\mathbb {R}}^d)} {\mathop {\le }\limits ^{(40)}} C_4 \sigma _n^{-s} \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) . \end{aligned}$$

These bounds complete the proof. \(\square \)

Remark 5

  • Since \(q_{X^n} \le h_{X^n,\varOmega }\), the separation radius \(q_{X^n}\) typically converges to zero as \(n\rightarrow \infty \). In the upper bound (32), the factor \(q_{X^n}^{-(r-s)}\) in the first term then diverges to infinity, while the second term goes to zero. Thus, \(q_{X^n}\) should decay to zero at an appropriate speed, depending on the rate of \(e_n(P;{{\mathcal {H}}}_{k_r}(\varOmega ))\), in order to make the quadrature error small in the misspecified setting.

  • Note that as the gap between r and s becomes larger, the effect of the separation radius becomes more severe; this follows from the factor \(q_{X^n}^{-(r-s)}\).

Based on Theorem 2, we establish below a rate of convergence in a misspecified setting by assuming a certain rate of decay for the separation radius as the number of design points increases.

Corollary 3

Let \(\varOmega , P, r, s, k_r, {{\mathcal {H}}}_{k_r}(\varOmega )\) be the same as in Theorem 2. Suppose that \(\{ (w_i,X_i) \}_{i=1}^n \in ({\mathbb {R}}\times \varOmega )^n\) is a quadrature rule such that \(e_n(P;{{\mathcal {H}}}_{k_r}(\varOmega )) = O(n^{-b})\) and \(q_{X^n} = \varTheta (n^{-a})\) as \(n \rightarrow \infty \) for some \(b>0\) and \(a > 0\). Then, for any \(f \in C_B^s (\varOmega ) \cap H^s (\varOmega )\), we have

$$\begin{aligned} | P_n f - P f | = O(n^{- \min ( b - a(r-s), as)} ) \quad (n \rightarrow \infty ). \end{aligned}$$
(42)

In particular, the rate on the right-hand side is optimized by \(a = b/r\), which gives

$$\begin{aligned} | P_n f - P f | = O(n^{-\frac{bs}{r}}) \quad (n \rightarrow \infty ). \end{aligned}$$
(43)

Proof

Plugging \(e_n(P;{{\mathcal {H}}}_{k_r}(\varOmega )) = O(n^{-b})\) and \(q_{X^n} = \varTheta (n^{-a})\) into (32) yields

$$\begin{aligned} \left| P_n f - Pf \right| = O(n^{a (r-s) -b }) + O(n^{-as}) = O(n^{- \min ( b - a(r-s), as)} ), \end{aligned}$$

which proves (42). The second assertion is obvious. \(\square \)

Remark 6

As stated in the assertion, the best rate for the bound is achieved when \(a = b/r\). The resulting rate in (43) coincides with that of Corollary 2 (see (31)) with \(c = 0\). Therefore, observations similar to those for Theorem 1 apply to the rate in (43); the sketch below gives a numerical illustration of the exponent trade-off.
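
The following sketch checks the exponent trade-off of Corollary 3 for illustrative values of b, r and s (chosen by us for this example): the exponent \(\min (b - a(r-s), as)\) is maximized at \(a = b/r\), where it equals bs / r, matching (43).

```python
import numpy as np

# Illustrative values: b = 2, r = 4, s = 2 (so b/r = 0.5 and bs/r = 1).
b, r, s = 2.0, 4.0, 2.0
a = np.linspace(0.0, 1.0, 100_001)

# Rate exponent of (42) as a function of a: the first branch decreases in a,
# the second increases, so the maximum is at their crossing point a = b/r.
exponent = np.minimum(b - a * (r - s), a * s)
print(a[exponent.argmax()], b / r)    # both ~ 0.5
print(exponent.max(), b * s / r)      # both ~ 1.0
```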

5 Bayesian Quadrature in Misspecified Settings

To demonstrate the results of Sect. 4, we derive in this section a rate of convergence for Bayesian quadrature in misspecified settings. To this end, we first provide an upper bound on the integration error of Bayesian quadrature when the smoothness of the integrand is overestimated; this bound is obtained by combining Theorem 2 in Sect. 4 with Proposition 1 in Sect. 3.

Theorem 3

Let \(\varOmega \subset {\mathbb {R}}^d\) be a bounded open set with \(\mathrm{diam}(\varOmega ) \le 1\) such that an interior cone condition is satisfied and the boundary is Lipschitz, P be a probability distribution on \({\mathbb {R}}^d\) with a bounded density function p such that \(\mathrm{supp}(P) \subset \varOmega \), r be a real number with \(\lfloor r \rfloor > d/2\), and s be a natural number with \(s \le r\). Suppose that \(k_r\) is a kernel on \({\mathbb {R}}^d\) satisfying Assumption 1, that \(X^n := \{X_1,\dots ,X_n\} \subset \varOmega \) are design points such that the kernel matrix \(G := (k_r(X_i,X_j))_{i,j=1}^n \in {\mathbb {R}}^{n \times n}\) is invertible, and that \(w_1,\dots ,w_n\) are the Bayesian quadrature weights in (17) based on \(k_r\). Assume that there exist constants \(c_q > 0\) and \(\delta > 0\) independent of \(X^n\), such that \(1- s/r < \delta \le 1\) and

$$\begin{aligned} h_{X^n,\varOmega } \le c_q q_{X^n}^\delta . \end{aligned}$$
(44)

Then, there exist positive constants C and \(h_0\) independent of \(X^n\), such that for any \(f \in C_B^s (\varOmega ) \cap H^s (\varOmega )\), we have

$$\begin{aligned} \left| P_n f - Pf \right| \le C \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) h_{X^n,\varOmega }^{r - (r-s)/\delta }, \end{aligned}$$
(45)

provided that \(h_{X^n,\varOmega } \le h_0\).

Proof

Under the assumptions, Theorem 2 gives that

$$\begin{aligned} \left| P_n f - Pf \right| \le C_1 \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) \left( q_{X^n}^{- (r-s) } e_n(P; {{\mathcal {H}}}_{k_r}(\varOmega )) + q_{X^n}^{s} \right) , \end{aligned}$$
(46)

where \(C_1 > 0\) is a constant and \(e_n(P; {{\mathcal {H}}}_{k_r}(\varOmega ))\) is the worst-case error of \(\{ (w_i,X_i) \}_{i=1}^n\) in \({{\mathcal {H}}}_{k_r}(\varOmega )\). On the other hand, Proposition 1 implies that there exist constants \(C_2 > 0\) and \(h_0 > 0\) independent of the choice of \(X^n\), such that

$$\begin{aligned} e_n(P; {{\mathcal {H}}}_{k_r}(\varOmega )) \le C_2 h_{X^n,\varOmega }^r, \end{aligned}$$
(47)

provided that \(h_{X^n,\varOmega } \le h_0\). Note also that (44) implies that

$$\begin{aligned} q_{X^n}^{-1} \le c_q^{1/\delta } h_{X^n, \varOmega }^{-1/\delta }. \end{aligned}$$
(48)

From \(q_{X^n} \le h_{X^n,\varOmega }\) and the above inequalities, it follows that

$$\begin{aligned}&\left| P_n f - Pf \right| \\&\quad {\mathop {\le }\limits ^{(46)}} C_1 \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) \left( q_{X^n}^{- (r-s) } e_n(P; {{\mathcal {H}}}_{k_r}(\varOmega )) + q_{X^n}^{s} \right) \\&\quad {\mathop {\le }\limits ^{(47)}} C_1 \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) \left( C_2 q_{X^n}^{- (r-s) } h_{X^n,\varOmega }^r + q_{X^n}^{s} \right) \\&\quad {\mathop {\le }\limits ^{(48)}} C_1 \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) \left( C_2 c_q^{(r-s)/\delta } h_{X^n,\varOmega }^{r- (r-s)/\delta } + q_{X^n}^{s} \right) \\&\quad {\mathop {\le }\limits ^{(\star )}} C_1 \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) \left( C_2 c_q^{(r-s)/\delta } h_{X^n,\varOmega }^{r- (r-s)/\delta } + h_{X^n,\varOmega }^{s} \right) \\&\quad {\mathop {\le }\limits ^{(\dagger )}} C_3 \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) h_{X^n,\varOmega }^{r- (r-s)/\delta }, \end{aligned}$$

where \(C_1\), \(C_2\) and \(C_3\) are positive constants independent of the choice of design points \(X^n\); we used \(q_{X^n} \le h_{X^n,\varOmega }\) in \((\star )\), and \(0 < h_{X^n,\varOmega } \le 1\) together with \(0 < r - (r-s)/\delta \le s\) in \((\dagger )\). \(\square \)

Remark 7

  • Condition (44) implies that

    $$\begin{aligned} c' h_{X^n,\varOmega }^{1/\delta } \le q_{X^n} \le h_{X^n,\varOmega }, \end{aligned}$$
    (49)

    where \(c' := c_q^{-1/\delta }\) is independent of \(X^n\). This condition becomes stronger for larger values of \(\delta \), requiring that distinct design points not be too close to each other. Note that the lower bound \(1-s/r<\delta \) is necessary for the error bound (45) to have a positive exponent, while the upper bound \(\delta \le 1\) follows from \(q_{X^n} \le h_{X^n,\varOmega }\), which holds by definition. The constraint \(1-s/r<\delta \) and (49) thus imply that a stronger condition on \(X^n\) is required as the degree of misspecification becomes more serious (i.e., as the ratio s/r becomes smaller).

  • If condition (44) is satisfied with \(\delta = 1\), the design points \(X^n\) are called quasi-uniform [53, Section 7.3]. In this case, the bound in (45) becomes

    $$\begin{aligned} | P_n f - Pf | \le C \max \left( \Vert f \Vert _{C_B^s(\varOmega )}, \Vert f \Vert _{H^s(\varOmega )} \right) h_{X^n,\varOmega }^s. \end{aligned}$$
    (50)

    This is the same order of approximation as that of Proposition 1 with \(r = s\). Proposition 1 provides an error bound for Bayesian quadrature in the well-specified case, where one knows the degree of smoothness s of the integrand. Therefore, (50) suggests that if the design points are quasi-uniform, then Bayesian quadrature can be adaptive to the (unknown) degree of smoothness s of the integrand f, even in a situation where one only knows an upper bound \(r \ge s\); see the sketch after this list for an empirical check of quasi-uniformity.
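
As a small empirical check (with \(\varOmega = (0,1)\) and a grid design, both chosen for illustration), the following sketch verifies that equally spaced points satisfy condition (44) with \(\delta = 1\): the ratio \(h_{X^n,\varOmega }/q_{X^n}\) stays bounded as n grows, so a constant \(c_q\) exists.

```python
import numpy as np

# Dense finite subset of Omega = (0, 1) used to approximate the fill distance.
dense = np.linspace(0.0, 1.0, 2_001)

for n in [11, 101, 1001]:
    X = np.linspace(0.0, 1.0, n)                               # grid design
    q = 0.5 * np.diff(X).min()                                 # separation radius
    h = np.abs(dense[:, None] - X[None, :]).min(axis=1).max()  # fill distance
    print(n, h / q)   # ratio stays ~1, so quasi-uniform with c_q ~ 1
```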

We obtain the following as a corollary of Theorem 3; the proof is straightforward and hence omitted.

Corollary 4

Let \(\varOmega , P, r, s, k_r, X^n, G\) and \(w_i\) (\(i=1,\ldots ,n\)) be the same as in Theorem 3. Assume that there exist constants \(c_q > 0\) and \(\delta > 0\) independent of \(X^n\), such that \(1- s/r < \delta \le 1\) and

$$\begin{aligned} h_{X^n,\varOmega } \le c_q q_{X^n}^\delta , \end{aligned}$$

and further \(h_{X^n,\varOmega } = O(n^{- \alpha })\) as \(n \rightarrow \infty \) for some \(0 < \alpha \le 1/d\). Then, for all \(f \in C_B^s (\varOmega ) \cap H^s (\varOmega )\), we have

$$\begin{aligned} \left| P_n f - Pf \right| = O(n^{ - \alpha [ r - (r-s)/\delta ] }) \quad (n \rightarrow \infty ). \end{aligned}$$
(51)

In particular, the best possible rate on the right-hand side is achieved when \(\delta = 1\) and \(\alpha = 1/d\), giving

$$\begin{aligned} \left| P_n f - Pf \right| = O(n^{ - s/d }) \quad (n \rightarrow \infty ). \end{aligned}$$
(52)

Remark 8

  • The rate \(O(n^{-s/d})\) in (52) matches the minimax optimal rate of deterministic quadrature rules for the worst-case error in the Sobolev space \(H^s(\varOmega )\) with \(\varOmega \) being a cube [40, Proposition 1 in Section 1.3.12]. Therefore, the optimal rate can be achieved by Bayesian quadrature even in the misspecified setting (under the slightly stronger assumption that \(f \in H^s(\varOmega ) \cap C_B^s(\varOmega )\)). In other words, Bayesian quadrature may achieve the optimal rate adaptively, without knowing the degree s of smoothness of the integrand: one only needs to know an upper bound \(r \ge s\).

  • The main assumptions required for the optimal rate (52) are that (i) \(h_{X^n,\varOmega } = O(n^{-1/d})\) and (ii) \(h_{X^n,\varOmega } \le c_q q_{X^n}^\delta \) with \(\delta = 1\). Recall that (i) is the same assumption as required for the optimal rate \(O(n^{-r/d})\) in the well-specified setting \(f \in H^r(\varOmega )\) (Corollary 1), while (ii) is the assumption required for the finite-sample bound in Theorem 3. Both assumptions are satisfied, for instance, if \(X_1,\dots ,X_n\) are grid points in \(\varOmega \).

6 Simulation Experiments

We conducted simulation experiments to empirically assess the obtained theoretical results. MATLAB code for reproducing the results is available at https://github.com/motonobuk/kernel-quadrature. We focus on Bayesian quadrature in these experiments.

6.1 Problem Setting

Domain, distribution and design points The domain is \(\varOmega := [0,1] \subset {\mathbb {R}}\), and the quadrature measure P is the uniform distribution on [0, 1]. For the design points, we consider the following two configurations:

  • Uniform: \(X^n = \{X_1,\dots ,X_n\}\) are equally spaced grid points in [0, 1] with \(X_1 = 0\) and \(X_n = 1\), that is, \(X_i = (i-1) / (n-1)\) for \(i = 1,\dots ,n\).

  • Non-uniform: \(X^n = \{X_1,\dots ,X_n \}\) are non-equally spaced points in [0, 1], such that \(X_i = (i-1)/(n-1)\) if i is odd, and \(X_i = X_{i-1} + (n-1)^{-2}\) if i is even.

For the uniform design points, both the fill distance \(h_{X^n,\varOmega }\) and the separation radius \(q_{X^n}\) decay at the rate \(O(n^{-1})\). For the non-uniform points, the separation radius decays at the rate \(O(n^{-2})\), while the fill distance still decays at the rate \(O(n^{-1})\), as for the uniform points. Using these two sets of design points, we can observe the effect of the separation radius on the performance of kernel quadrature; a sketch generating both configurations is given below.
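
The following sketch generates the two configurations and prints the separation radius as n doubles; the function names are ours. The separation radius roughly halves for the Uniform design and shrinks by a factor of about four for the Non-uniform design, consistent with the rates \(O(n^{-1})\) and \(O(n^{-2})\) stated above.

```python
import numpy as np

def uniform_design(n):
    return np.linspace(0.0, 1.0, n)       # X_i = (i-1)/(n-1)

def nonuniform_design(n):
    X = np.linspace(0.0, 1.0, n)          # odd (1-based) indices keep the grid
    # For even i (1-based), X_i = X_{i-1} + (n-1)^{-2}.
    X[1::2] = X[0::2][: X[1::2].size] + (n - 1.0) ** (-2)
    return X

for n in [101, 201, 401]:
    for name, X in (("Uniform", uniform_design(n)),
                    ("Non-uniform", nonuniform_design(n))):
        q = 0.5 * np.diff(np.sort(X)).min()   # separation radius
        print(name, n, q)
```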

Kernels As before, r denotes the assumed degree of smoothness used for computing quadrature weights, and s denotes the true smoothness of test integrands, both expressed in terms of Sobolev spaces. As kernels of the corresponding Sobolev spaces, we used Wendland kernels [61, Definition 9.11], which are given as follows [61, Corollary 9.14]. Define the following univariate functions:

$$\begin{aligned} \phi _{1,0}(t)&:= (1-t)_+, \quad \phi _{1,1}(t) := (1-t)_+^3 ( 3 t + 1 ), \\ \phi _{1,2}(t)&:= (1-t)_+^5 ( 24 t^2 + 15 t + 3), \\ \phi _{1,3}(t)&:= (1-t)_+^7 ( 315 t^3 + 285 t^2 + 105 t + 15), \quad t \ge 0, \end{aligned}$$

where \((x)_+ := \max (0,x)\). The Wendland kernel \(k_r\) whose RKHS is norm-equivalent to the Sobolev space \(H^r ([0,1])\) of order \(r\ (= 1,2,3,4)\) is then defined by \(k_r(x,y) := \phi _{d,r-1}( | x - y | / \delta )\) for \(x, y \in [0,1]\) [61, Theorem 10.35], where \(\delta \) is a scale parameter, which we set to 0.1.
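
A direct implementation of these kernels is straightforward. The sketch below (function names ours) defines the functions \(\phi _{1,k}\) as displayed above and the kernel \(k_r\), and checks that the resulting kernel matrix is positive definite.

```python
import numpy as np

def wendland_phi(t, k):
    """Univariate Wendland functions phi_{1,k}, with (x)_+ = max(0, x)."""
    b = np.maximum(0.0, 1.0 - t)
    if k == 0:
        return b
    if k == 1:
        return b ** 3 * (3.0 * t + 1.0)
    if k == 2:
        return b ** 5 * (24.0 * t ** 2 + 15.0 * t + 3.0)
    if k == 3:
        return b ** 7 * (315.0 * t ** 3 + 285.0 * t ** 2 + 105.0 * t + 15.0)
    raise ValueError("k must be in {0, 1, 2, 3}")

def k_r(x, y, r, scale=0.1):
    """Kernel whose RKHS is norm-equivalent to H^r([0, 1]), r = 1, 2, 3, 4."""
    return wendland_phi(np.abs(x - y) / scale, r - 1)

# The kernel matrix on a grid should be symmetric positive definite.
X = np.linspace(0.0, 1.0, 21)
G = k_r(X[:, None], X[None, :], r=2)
print(np.linalg.eigvalsh(G).min() > 0.0)   # True
```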

Evaluation measure For each pair of \(r\ (= 1,2,3,4)\) and \(s\ (= 1,2,3,4)\), we first computed quadrature weights \(w_1,\dots ,w_n\) by minimizing the worst-case error in \(H^r([0,1])\) and then evaluated the quadrature rule \((w_i,X_i)_{i=1}^n\) by computing the worst-case error in \(H^s([0,1])\), that is, \(\sup _{\Vert f \Vert _{H^s([0,1])} \le 1} |P_n f- Pf|\). More concretely, we computed the weights \(w_1,\dots ,w_n\) by formula (17) for Bayesian quadrature using the kernel \(k_r\) and then evaluated the worst-case error (12) by computing the square root of (16) using the kernel \(k_s\). In this way, one can evaluate the performance of kernel quadrature under various settings; for instance, the case \(s < r\) is a situation where the true smoothness s is smaller than the assumed smoothness r, the misspecified setting dealt with in this paper. A sketch of this evaluation pipeline is given below.
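
The sketch below outlines the evaluation pipeline, reusing the k_r helper from the previous sketch. The kernel-mean integrals appearing in (16) and (17) are approximated here by Monte Carlo over P = Uniform[0, 1]; in the actual experiments they can be computed exactly, so this is only an illustrative approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bq_weights(kern, X, samples):
    G = kern(X[:, None], X[None, :])                     # kernel matrix
    z = kern(X[:, None], samples[None, :]).mean(axis=1)  # ~ int k(X_i, x) dP(x)
    return np.linalg.solve(G, z)                         # formula (17): w = G^{-1} z

def worst_case_error(kern, X, w, samples):
    G = kern(X[:, None], X[None, :])
    z = kern(X[:, None], samples[None, :]).mean(axis=1)
    PPk = kern(samples[:, None], samples[None, :]).mean()  # ~ double integral of k
    return np.sqrt(max(PPk - 2.0 * w @ z + w @ G @ w, 0.0))  # square root of (16)

X = np.linspace(0.0, 1.0, 50)                # design points
U = rng.random(2000)                         # Monte Carlo sample from P
w = bq_weights(lambda a, b: k_r(a, b, r=3), X, U)   # weights assume H^3
for s in (1, 2, 3):                          # evaluate the rule in H^s
    print(s, worst_case_error(lambda a, b: k_r(a, b, r=s), X, w, U))
```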

6.2 Results

The simulation results are shown in Fig. 1 (Uniform design points) and Fig. 2 (Non-uniform design points). In the figures, we also report the exponents of the empirical rates of the fill distance \(h_{X^n,\varOmega }\), the separation radius \(q_{X^n}\), and the absolute sum of weights \(\sum _{i=1}^n |w_i|\) at the top of each subfigure; see the captions of Figs. 1 and 2 for details. Based on these, we can draw the following observations.

Optimal rates in the well-specified case In both Figs. 1 and 2, the black solid lines are the worst-case errors in the well-specified case \(s = r\). The empirical convergence rates of these worst-case errors are very close to the optimal rates derived in Sect. 3 (see Corollary 1 and its remarks), confirming the theoretical results. Proposition 1 and Corollary 1 also show that the worst-case error in the well-specified case is determined by the fill distance and is independent of the separation radius. The simulation results are consistent with this: in both Figs. 1 and 2 the fill distance decays essentially at the rate \(O(n^{-1})\), while the separation radius decays more quickly in Fig. 2 than in Fig. 1.

Adaptability to lesser smoothness Let us look at Fig. 1 for the misspecified case \(s < r\), i.e., where the true smoothness s is smaller than the assumed smoothness r. For every pair with \(s < r\), the rates are very close to the optimal ones, showing that adaptation to the unknown lesser smoothness indeed occurs. This is consistent with Corollaries 3 and 4, which imply that adaptation occurs if the design points are quasi-uniform. Figure 2 also shows some adaptability, but the rates for \(s = 1\) with \(r > s\) are slower than the optimal one. This is discussed below in the paragraph on the effect of the separation radius.

Fig. 1

Design points are Uniform, i.e., equally spaced grid points in [0, 1]; see Sect. 6.1 for details. The solid lines are the worst-case errors, and the dotted lines are the corresponding linear fits. Subfigures (a)–(d) are, respectively, the results for the weights computed using the kernel \(k_r\) with \(r = 1, 2, 3, 4\). Black lines are the worst-case errors for the well-specified case \(s= r\) (i.e., the worst-case error is evaluated in the same Sobolev space where the weights are obtained). Note that the black lines overlap the corresponding lines for \(s = r\) (e.g., in subfigure (a) the red line for \(s=1\) does not appear, since the black line completely overlaps it). In each legend, we report the exponents of the empirical rates of the worst-case errors; for instance, in subfigure (d), the worst-case error for \(s = 1\) decays at the rate \(O(n^{-1.055})\). At the top of each subfigure, the exponents of the empirical rates of the fill distance \(h_{X^n,\varOmega }\), the separation radius \(q_{X^n}\) and the absolute sum of weights \(\sum _{i=1}^n |w_i|\) are shown; for instance, for subfigure (d), we have \(h_{X^n,\varOmega } = O(n^{-1.01})\), \(q_{X^n} = O(n^{-1.01})\) and \(\sum _{i=1}^n |w_i| = O(n^{0.00})\) (Color figure online)

Fig. 2

Design points are Non-uniform, i.e., non-equally spaced points in [0, 1]; see Sect. 6.1 for details. The solid lines are the worst-case errors, and the dotted lines are the corresponding linear fits. Subfigures (a)–(d) are, respectively, the results for the weights computed using the kernel \(k_r\) with \(r = 1, 2, 3, 4\). Black lines are the worst-case errors for the well-specified case \(s= r\) (i.e., the worst-case error is evaluated in the same Sobolev space where the weights are obtained). Note that the black lines overlap the corresponding lines for \(s = r\) (e.g., in subfigure (a) the red line for \(s=1\) does not appear, since the black line completely overlaps it). In each legend, we report the exponents of the empirical rates of the worst-case errors; for instance, in subfigure (d), the worst-case error for \(s = 1\) decays at the rate \(O(n^{-0.748})\). At the top of each subfigure, the exponents of the empirical rates of the fill distance \(h_{X^n,\varOmega }\), the separation radius \(q_{X^n}\) and the absolute sum of weights \(\sum _{i=1}^n |w_i|\) are shown; for instance, for subfigure (d), we have \(h_{X^n,\varOmega } = O(n^{-1.00})\), \(q_{X^n} = O(n^{-1.98})\) and \(\sum _{i=1}^n |w_i| = O(n^{0.47})\) (Color figure online)

Adaptability to greater smoothness While the case \(s > r\) is not covered by our theoretical analysis, Figs. 1 and 2 show some adaptation to greater smoothness. This phenomenon was also observed by Bach [4, Section 5], who showed (for quadrature weights obtained with regularized matrix inversion) that if \(2r \ge s > r\), then the optimal rate is still attainable in an adaptive way; Bach [4, Section 6] verified this finding in experiments with quadrature weights obtained without regularization. In our experiments, this phenomenon is observed for all cases of \(2 r \ge s > r\) except for the case \(r = 2\) and \(s = 4\), in both Figs. 1 and 2. Note, however, that in [4] the design points are assumed to be randomly generated from a specific proposal distribution, so the results there are not directly applicable to deterministic quadrature rules.

The effect of the separation radius In Fig. 1, the rate for \(s = 1\), that is, \(O(n^{-1.052})\), remains essentially the same across \(r = 1,2,3,4\). This is essentially the optimal rate for \(s = 1\), showing the adaptability of Bayesian quadrature to the unknown lesser smoothness (for \(r = 2, 3, 4\)). On the other hand, in Fig. 2 on non-uniform design points, the rate for \(s = 1\) becomes slower as r increases: the rates are \(O(n^{-1.035})\) for \(r = 1\) (the well-specified case), \(O(n^{-0.945})\) for \(r = 2\), \(O(n^{-0.919})\) for \(r = 3\) and \(O(n^{-0.748})\) for \(r = 4\). This phenomenon may be attributed to the fact that the separation radius of the design points in Fig. 2 decays faster than that in Fig. 1. Corollary 4 shows that the rates in the misspecified case \(s < r\) become slower as the separation radius decays more quickly and/or as the gap \(r-s\) (i.e., the degree of misspecification) increases, which is consistent with the simulation results.

The effect of the weights While the sum of absolute weights \(\sum _{i=1}^n |w_i|\) remains constant in Fig. 1, this quantity increases in Fig. 2. In the notation of Corollary 2, \(\sum _{i=1}^n |w_i| = O(n^c)\) with \(c = 0\) for Fig. 1, while \(c \approx 0.5\) for Fig. 2 with \(r = 2, 3, 4\). Therefore, the observation in the preceding paragraph is also consistent with Corollary 2, which states that larger c makes the rates slower in the misspecified case. Note that the separation radius and the quantity \(\sum _{i=1}^n |w_i|\) are intimately related in the case of Bayesian quadrature: the weights are computed from the inverse of the kernel matrix as in (17) and are thus affected by its smallest eigenvalue, which in turn depends strongly on the separation radius and the smoothness of the kernel; see, e.g., [52] and [61, Section 12] and references therein. The sketch below illustrates this relation.
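
The following sketch (reusing the k_r, nonuniform_design and bq_weights helpers from the sketches above) illustrates this relation on the Non-uniform design: as n grows, the smallest eigenvalue of the kernel matrix is expected to shrink and \(\sum _{i=1}^n |w_i|\) to grow, qualitatively as in Fig. 2.

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.random(2000)   # Monte Carlo sample from P = Uniform[0, 1]

for n in [51, 101, 201]:
    X = nonuniform_design(n)                 # pairs of points at distance (n-1)^-2
    G = k_r(X[:, None], X[None, :], r=3)     # kernel matrix for the assumed H^3
    w = bq_weights(lambda a, b: k_r(a, b, r=3), X, U)
    # Near-duplicate points make G nearly singular, inflating the weights.
    print(n, np.linalg.eigvalsh(G).min(), np.abs(w).sum())
```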

7 Discussion

In this paper, we have discussed the convergence properties of kernel quadrature rules with deterministic design points in misspecified settings. In particular, we have focused on the setting where the weighted points \(\{ (w_i,X_i) \}_{i=1}^n\) of a quadrature rule are generated based on a misspecified assumption on the degree of smoothness of the integrand, that is, the situation where the integrand is less smooth than assumed.

We have revealed conditions on quadrature rules under which adaptation to an unknown lesser degree of smoothness occurs. In particular, we have shown that a kernel quadrature rule is adaptive if the sum of the absolute weights remains bounded, or if the spacing between design points is not too small (as measured by the separation radius). Moreover, focusing on Bayesian quadrature as a working example, we have shown that it can achieve the minimax optimal rate for the unknown degree of smoothness if the design points are quasi-uniform. We expect that this result provides a practical guide for developing kernel quadrature rules that are robust to misspecification of the degree of smoothness; such robustness is important in modern applications of quadrature methods, such as numerical integration in sophisticated Bayesian models, since these typically involve complicated or black-box integrands, for which misspecification is likely to happen.

There are several important topics to be investigated as part of future work.

Other RKHSs This paper has dealt with Sobolev spaces as the RKHSs of kernel quadrature. However, there are many other important RKHSs for which a similar investigation can be carried out. For instance, Gaussian RKHSs (i.e., the RKHSs of Gaussian kernels) have been widely used in the literature on Bayesian quadrature. Such an RKHS consists of functions with an infinite degree of smoothness, which makes theoretical analysis challenging: our analysis relies on the approximation theory developed by Narcowich and Ward [37], which applies only to standard Sobolev spaces. Similarly, the theory of [37] is not applicable to Sobolev spaces of dominating mixed smoothness, which have been popular in the QMC literature. To analyze quadrature rules in these RKHSs, we therefore need to extend the approximation theory of [37] to such spaces; this is an important but challenging theoretical problem. (We also mention that relevant results are available in the follow-up papers [38, 39]. While these do not directly provide the desired generalizations, for the reasons just mentioned, they could still be useful for this purpose.)

Sequential (adaptive) quadrature Another important direction is the analysis of kernel quadrature rules that select design points sequentially. Such methods are also called adaptive, since the selection of the next point \(X_{n+1}\) depends on the function values \(f(X_1), \dots , f(X_n)\) at the already selected points \(X_1,\dots ,X_n\). Note that adaptivity here differs from the usage elsewhere in this paper, where it refers to the adaptability of quadrature to an unknown degree of smoothness. For instance, the WSABI algorithm of [25] is an example of adaptive Bayesian quadrature and is considered state of the art for computing Bayesian model evidence. Adaptive methods are known to be able to outperform non-adaptive methods when the hypothesis space is imbalanced or non-convex (see, e.g., Section 1 of [41]). For the worst-case error, the hypothesis space is the unit ball in the RKHS \({{\mathcal {H}}}\), which is balanced and convex, so adaptation does not help; in fact, it is known that the optimal rate can be achieved without adaptation. However, if the hypothesis space is imbalanced (i.e., f being in the hypothesis space does not imply that \(-f\) is), then adaptive methods may perform better. For instance, the WSABI algorithm focuses on nonnegative integrands, so its hypothesis space is imbalanced and adaptive selection helps. Our analysis in this paper has focused on the worst-case error defined by the unit ball in an RKHS. A future direction is thus to consider imbalanced or non-convex hypothesis spaces, such as one consisting of nonnegative functions, which would enable an analysis of the convergence behavior of sequential or adaptive Bayesian quadrature in misspecified settings.

Random design points We have focused on deterministic quadrature rules in this paper. In the literature, however, the use of random design points has also been popular. For instance, the design points of Bayesian quadrature may be drawn i.i.d. from a proposal distribution or generated as an MCMC sample; likewise, QMC methods usually apply randomization to deterministic design points. Our forthcoming paper will deal with such situations and provide more general results than the current paper.