
Testing serial independence with functional data


Abstract

We consider tests of serial independence for a sequence of functional observations. The new methods are formulated as L2-type criteria based on empirical characteristic functions and are computationally convenient. We derive the asymptotic normality of the proposed test statistics for both discretely and continuously observed functions. In a Monte Carlo study, we show that the new test is sensitive to functional GARCH alternatives, investigate the choice of the necessary tuning parameters, and demonstrate that critical values obtained by resampling lead to a test with good performance in every setup, whereas the asymptotic critical values can be recommended only for a sufficiently fine discretization grid. A finite-sample comparison with a distance (auto)covariance test criterion is also included, and the article concludes with an application to a real data set.
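
For illustration only, the following minimal Python sketch shows one way an L2-type empirical-characteristic-function (ECF) criterion of this kind can be evaluated for discretely observed curves, together with a simple permutation scheme for a resampling critical value. The Gaussian weight, the Monte Carlo approximation of the weighting integral, the function names, and the permutation scheme are assumptions made for this sketch; they do not reproduce the exact test statistic or the resampling procedure studied in the paper.

```python
import numpy as np


def ecf_serial_stat(X, H, n_mc=500, a=1.0, rng=None):
    """L2-type ECF criterion for serial independence at lags 1..H.

    X    : (n, p) array, row j holding curve X_j evaluated on a grid of p points
    H    : maximal lag taken into account
    n_mc : Monte Carlo points approximating the integral over (u, v)
    a    : scale of the Gaussian weight (a tuning parameter)
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    total = 0.0
    for h in range(1, H + 1):
        m = n - h
        # draw (u, v) from the Gaussian weight w(u) w(v)
        U = rng.normal(scale=a, size=(n_mc, p))
        V = rng.normal(scale=a, size=(n_mc, p))
        eU = np.exp(1j * X[:m] @ U.T)               # exp(i u'X_j),     shape (m, n_mc)
        eV = np.exp(1j * X[h:h + m] @ V.T)          # exp(i v'X_{j+h}), shape (m, n_mc)
        joint = (eU * eV).mean(axis=0)              # joint empirical CF at lag h
        marginals = eU.mean(axis=0) * eV.mean(axis=0)  # product of marginal empirical CFs
        # weighted L2 distance between the joint ECF and the product of marginals
        total += m * np.mean(np.abs(joint - marginals) ** 2)
    return total / np.sqrt(H)


def permutation_critical_value(X, H, B=199, alpha=0.05, seed=0):
    """Reference distribution obtained by randomly permuting the time order
    of the curves, which destroys any serial dependence."""
    rng = np.random.default_rng(seed)
    stats = [ecf_serial_stat(rng.permutation(X), H, rng=rng) for _ in range(B)]
    return np.quantile(stats, 1 - alpha)
```

In such a sketch, serial independence would be rejected when the observed criterion exceeds the resampling critical value; the weight scale a plays the role of the tuning parameters examined in the Monte Carlo study.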

References

  • Aneiros G, Cao R, Fraiman R, Vieu P (2019) Special issue on functional data analysis and related topics. J Multivar Anal 146:191

  • Aue A, Horváth L, Pellatt DF (2017) Functional generalized autoregressive conditional heteroskedasticity. J Time Ser Anal 38(1):3–21

  • Billingsley P (1995) Probability and measure. Wiley, London

  • Box GE, Pierce DA (1970) Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. J Am Stat Assoc 65(332):1509–1526

  • Çapar U (1992) Empirical characteristic functional analysis and inference in sequence spaces. In: Probabilistic and stochastic methods in analysis, with applications. Springer, pp 517–534

  • Çapar U (1993) Weak convergence of probability measures along projective systems. Demonst Math 26(2):459–472

  • Cerovecki C, Francq C, Hörmann S, Zakoian JM (2019) Functional GARCH models: the quasi-likelihood approach and its applications. J Econom 209(2):353–375

  • Chen F, Meintanis SG, Zhu L (2019) On some characterizations and multidimensional criteria for testing homogeneity, symmetry and independence. J Multivar Anal 173:125–144

  • Csörgő S (1981) Multivariate empirical characteristic functions. Probab Theory Relat Fields 55(2):203–229

  • Csörgő S (1985) Testing for independence by the empirical characteristic function. J Multivar Anal 16:290–299

  • Cuevas A (2014) A partial overview of the theory of statistics with functional data. J Stat Plan Inf 147:1–23

  • Davidson J (1994) Stochastic limit theory. Oxford University Press, Oxford

  • Davis R, Matsui M, Mikosch T, Wan P (2018) Applications of distance correlation to time series. Bernoulli 24:3087–3116

  • Edelmann D, Fokianos K, Pitsilou M (2020) An updated literature review of distance correlation and its applications to time series. Int Stat Rev 87:237–262

  • Ferraty F, Vieu P (2006) Nonparametric functional data analysis: theory and practice. Springer, Berlin

  • Feuerverger A (1990) An efficiency result for the empirical characteristic function in stationary time-series models. Can J Stat 18(2):155–161

  • Gabrys R, Kokoszka P (2007) Portmanteau test of independence for functional observations. J Am Stat Assoc 102(480):1338–1348

  • Giacomini R, Politis DN, White H (2013) A warp-speed method for conducting Monte Carlo experiments involving bootstrap estimators. Econom Theory 29(3):567–589

  • Goia A, Vieu P (2016) Special issue on statistical models and methods for high or infinite dimensional spaces. J Multivar Anal 170:95

  • Guidoum AC, Boukhetala K (2018) Sim.DiffProc: simulation of diffusion processes. https://cran.r-project.org/package=Sim.DiffProc, R package version 4.3

  • Gusak D, Kukush A, Kulik A, Mishura Y, Pilipenko A (2012) Theory of stochastic processes: with applications to financial mathematics and risk theory. Springer, Berlin

  • Hall P, Van Keilegom I (2007) Two-sample tests in functional data analysis starting from discrete data. Stat Sin 17(4):1511–1531

  • Henze N, Hlávka Z, Meintanis S (2014) Testing for spherical symmetry via the empirical characteristic function. Statistics 48:1282–1296

  • Hörmann S, Horváth L, Reeder R (2013) A functional version of the ARCH model. Econom Theory 29(2):267–288

  • Horváth L, Kokoszka P (2012) Inference for functional data with applications. Springer, Berlin

  • Horváth L, Rice G (2015) Testing for independence between functional time series. J Econom 189(2):371–382

  • Horváth L, Hušková M, Rice G (2013) Test of independence for functional data. J Multivar Anal 117:100–119

  • Jiang Q, Hušková M, Meintanis SG, Zhu L (2019) Asymptotics, finite-sample comparisons and applications for two-sample tests with functional data. J Multivar Anal 170:202–220

  • Kokoszka P, Reimherr M (2017) Introduction to functional data analysis. CRC Press, Boca Raton

  • Laha RG, Rohatgi VK (1979) Probability theory. Wiley, London

  • Ljung GM, Box GE (1978) On a measure of lack of fit in time series models. Biometrika 65(2):297–303

  • Lyons R (2013) Distance covariance in metric spaces. Ann Probab 41:3284–3305

  • Meintanis S (2007) A Kolmogorov-Smirnov type test for skew normal distributions based on the empirical moment generating function. J Stat Plan Inference 137:2681–2688

  • Meyer D, Dimitriadou E, Hornik K, Weingessel A, Leisch F (2014) e1071: Misc functions of the Department of Statistics (e1071), TU Wien. https://CRAN.R-project.org/package=e1071, R package version 1.6-4

  • Nolan J (2013) Multivariate elliptically contoured stable distributions: theory and estimation. Comput Stat 28:2067–2089

  • Prohorov YV (1961) The method of characteristic functionals. In: Proceedings of the fourth Berkeley symposium on mathematical statistics and probability, vol 2. University of California Press, Berkeley, pp 403–419

  • R Core Team (2019) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/

  • Ramsay J, Silverman BW (2005) Functional data analysis, 2nd edn. Springer, Berlin

  • Shen C, Priebe C, Vogelstein J (2020) From distance correlation to multiscale graph correlation. J Am Stat Assoc 115:280–291

  • Székely G, Rizzo M (2009) Brownian distance covariance. Ann Appl Stat 3:1233–1303

  • Székely G, Rizzo M (2013) Energy statistics: a class of statistics based on distances. J Stat Plan Inference 143:1249–1272

  • Székely G, Rizzo M, Bakirov N (2007) Measuring and testing independence by correlation of distances. Ann Stat 35:2769–2794

  • Wang JL, Chiou JM, Müller HG (2016) Functional data analysis. Ann Rev Stat Appl 3:257–295

  • Zhang JT (2013) Analysis of variance for functional data. CRC Press, Boca Raton


Acknowledgements

The work of Z. Hlávka and M. Hušková has been supported by grant number GAČR 18-08888S provided by the Czech Science Foundation.

Author information

Corresponding author

Correspondence to Zdeněk Hlávka.


A Proofs

Proof of Theorem 1

As a shorthand, we set \(T_n:=\varDelta _{n,H; p}/\sqrt{H}\) for the test statistic and use the simplified notation

$$\begin{aligned} \widetilde{g}(\varvec{u},\varvec{v}; \varvec{X}_{j}, \varvec{X}_{j+h})=&g(\varvec{u}, \varvec{v}; \varvec{X}_{j},\varvec{X}_{j+h})-\mathbb {E} \left\{ g(\varvec{u},\varvec{v}; \varvec{X}_{j},\varvec{X}_{j+h})| \varvec{X}_j\right\} \\&- \mathbb {E}\left\{ g( \varvec{u},\varvec{v}; \varvec{X}_{j}, \varvec{X}_{j+h})|\varvec{X}_{j+h}\right\} +\mathbb {E} g(\varvec{u},\varvec{v};\varvec{X}_{j},\varvec{X}_{j+h}),\\ g(\varvec{u}, \varvec{v}; \varvec{X}_{j},\varvec{X}_{j+h})=&\cos (\varvec{u}^{\top } \varvec{X}_j+\varvec{v} ^{\top } \varvec{X}_{j+h}) +\sin (\varvec{u}^{\top } \varvec{X}_j+\varvec{v} ^{\top } \varvec{X}_{j+h}), \end{aligned}$$

and we make repeated use of the simple properties \( \mathbb {E} \widetilde{g}(\varvec{u}, \varvec{v}; \varvec{X}_{j_1}, \varvec{X}_{j_1+h_1})\widetilde{g}(\varvec{u}, \varvec{v};\varvec{X}_{j_2}, \varvec{X}_{j_2+h_2})=0\) if \(j_1\ne j_2\) or if \(j_1=j_2\) and \(h_1\ne h_2\), and \( \mathbb {E} \widetilde{g} ( \varvec{u},\varvec{v}; \varvec{X}_{j}, \varvec{X}_{j+h})=0\).
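
For clarity, these properties can be verified directly under the null hypothesis; as a sketch, using only the independence of the observations,

$$\begin{aligned} \mathbb {E}\left\{ \widetilde{g}(\varvec{u},\varvec{v}; \varvec{X}_{j}, \varvec{X}_{j+h})\,|\,\varvec{X}_j\right\} =&-\mathbb {E}\big [ \mathbb {E}\left\{ g(\varvec{u},\varvec{v}; \varvec{X}_{j}, \varvec{X}_{j+h})|\varvec{X}_{j+h}\right\} \big |\,\varvec{X}_j\big ]\\&+\mathbb {E}\, g(\varvec{u},\varvec{v}; \varvec{X}_{j}, \varvec{X}_{j+h})=0, \end{aligned}$$

because the first two terms in the definition of \(\widetilde{g}\) cancel after conditioning on \(\varvec{X}_j\), and the remaining conditional expectation reduces to the unconditional one since \(\varvec{X}_j\) and \(\varvec{X}_{j+h}\) are independent under the null hypothesis. By symmetry \(\mathbb {E}\{\widetilde{g}\,|\,\varvec{X}_{j+h}\}=0\) and hence \(\mathbb {E}\,\widetilde{g}=0\), while the orthogonality for \(j_1\ne j_2\), or for \(j_1=j_2\) and \(h_1\ne h_2\), follows by conditioning on the observation (if any) shared by the two pairs: given it, the two factors are independent and each has zero conditional mean.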

The test statistic \(T_n\) will be decomposed into several summands, some of them negligible and some others influential. Notice that under the null hypothesis \(T_n\) can be rewritten as:

$$\begin{aligned}&T_n= \frac{1}{\sqrt{H}} \sum _{h=1}^H (n-h)\iint \Bigg |\frac{1}{n-h} \sum _{j=1}^{n-h}\left\{ \exp ( {\mathrm{i}}\varvec{u}^\top \varvec{X}_j) -\varphi _x(\varvec{u})\right\} \left\{ \exp ({\mathrm{i}} \varvec{v}^\top \varvec{X}_{j+h}) -\varphi _x(\varvec{v})\right\} \\&\quad -\frac{1}{(n-h)^2}\sum _{j=1}^{n-h}\left\{ \exp ({\mathrm{i}} \varvec{u}^\top \varvec{X}_j) -\varphi _x(\varvec{u})\right\} \sum _{r=1}^{n-h}\left\{ \exp ({\mathrm{i}} \varvec{v}^\top \varvec{X}_{r+h}) -\varphi _x(\varvec{v}) \right\} \Bigg |^2 w(\varvec{u}) w(\varvec{v}) \mathrm{d}\varvec{u} \mathrm{d}\varvec{v} , \end{aligned}$$
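
This rewriting rests on an elementary algebraic identity; as a sketch, with \(a_j=\exp ({\mathrm{i}}\varvec{u}^\top \varvec{X}_j)\), \(b_j=\exp ({\mathrm{i}}\varvec{v}^\top \varvec{X}_{j+h})\), \(m=n-h\), sample means \(\bar{a}=m^{-1}\sum _j a_j\), \(\bar{b}=m^{-1}\sum _j b_j\), and arbitrary constants \(c,d\),

$$\begin{aligned} \frac{1}{m}\sum _{j=1}^{m} a_j b_j-\bar{a}\,\bar{b} =\frac{1}{m}\sum _{j=1}^{m}(a_j-c)(b_j-d)-(\bar{a}-c)(\bar{b}-d), \end{aligned}$$

applied with \(c=\varphi _x(\varvec{u})\) and \(d=\varphi _x(\varvec{v})\), so that the difference between the joint empirical characteristic function at lag \(h\) and the product of the marginal empirical characteristic functions may equivalently be written in the centered form displayed above,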

where \( \varphi _x(\varvec{u})\) denotes the characteristic function (CF) of \(\varvec{X}_j\). Since

$$\begin{aligned}&\iint \mathbb {E} \Bigg |\frac{1}{n-h} \sum _{j=1}^{n-h}\left\{ \exp ( {\mathrm{i}}\varvec{u}^\top \varvec{X}_j)-\varphi _x(\varvec{u})\right\} \sum _{s=1}^{n-h}\left\{ \exp ({\mathrm{i}} \varvec{v}^\top \varvec{X}_{s+h})-\varphi _x(\varvec{v})\right\} \Bigg |^2 w( \varvec{u}) w(\varvec{v}) \mathrm{d}\varvec{u} \mathrm{d}\varvec{v} \nonumber \\&\quad \le D\iint w( \varvec{u}) w( \varvec{v}) \mathrm{d}\varvec{u} \mathrm{d}\varvec{v}=O(1) \end{aligned}$$
(20)

for some \(D>0\), it follows that

$$\begin{aligned} T_{n}= T_{n,1} + O\big ( H^{1/4} n^{-1/2}\big ), \end{aligned}$$
(21)

where

$$\begin{aligned} T_{n,1}=&\frac{1}{\sqrt{H}} \sum _{h=1}^H (n-h)\iint \Bigg |\frac{1}{n-h} \sum _{j=1}^{n-h}\left\{ \exp ({\mathrm{i}} \varvec{u}^\top \varvec{X}_j) -\varphi _x(\varvec{u})\right\} \left\{ \exp ({\mathrm{i}} \varvec{v}^\top \varvec{X}_{j+h}) -\varphi _x(\varvec{v})\right\} \Bigg |^2 \\&\times w(\varvec{u}) w(\varvec{v}) \mathrm{d}\varvec{u} \mathrm{d}\varvec{v}\\ =&\frac{1}{\sqrt{H}} \sum _{h=1}^H (n-h)\iint \Bigg \{\frac{1}{n-h} \sum _{j=1}^{n-h} \widetilde{g}(\varvec{u},\varvec{v}; \varvec{X}_{j}, \varvec{X}_{j+h})\Bigg \}^2 w(\varvec{u}) w(\varvec{v}) \mathrm{d}\varvec{u} \mathrm{d}\varvec{v}. \end{aligned}$$
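
For orientation, the reduction from the complex-modulus form to the real kernel rests on an elementary trigonometric identity; as a sketch, writing \(\theta _j=\varvec{u}^\top \varvec{X}_j+\varvec{v}^\top \varvec{X}_{j+h}\) and \(g_j=\cos \theta _j+\sin \theta _j\),

$$\begin{aligned} \Big |\sum _{j}{\mathrm{e}}^{{\mathrm{i}}\theta _j}\Big |^2=\sum _{j,k}\cos (\theta _j-\theta _k), \qquad g_j g_k=\cos (\theta _j-\theta _k)+\sin (\theta _j+\theta _k), \end{aligned}$$

and, for a weight \(w\) that is symmetric about the origin, \(\iint \sin (\theta _j+\theta _k)\, w(\varvec{u}) w(\varvec{v})\, \mathrm{d}\varvec{u}\, \mathrm{d}\varvec{v}=0\) because the integrand is odd in \((\varvec{u},\varvec{v})\); the centering terms appearing in \(\widetilde{g}\) are handled in the same way.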

The latter expression for \(T_{n,1}\) is obtained after long but straightforward calculations using elementary properties of trigonometric functions and the assumptions on \(w(\cdot )\). Further, \(T_{n,1}\) can be decomposed as

$$\begin{aligned} T_{n,1}=&\frac{1}{\sqrt{H}} \sum _{h=1}^H \frac{1}{n-h}\iint \sum _{j=1}^{n-h} \widetilde{g}^2(\varvec{u},\varvec{v}; \varvec{X}_{j}, \varvec{X}_{j+h}) w(\varvec{u}) w(\varvec{v}) \mathrm{d}\varvec{u} \mathrm{d}\varvec{v}\\&+ \frac{1}{\sqrt{H}} \sum _{h=1}^H \iint \frac{1}{n-h} \sum _{j_1=1}^{n-h}\sum _{j_2=1}^{n-h} I\{|j_1-j_2|\ge H\} \widetilde{g}(\varvec{u},\varvec{v}; \varvec{X}_{j_1},\varvec{X}_{j_1+h})\\&\quad \times \widetilde{g}(\varvec{u},\varvec{v}; \varvec{X}_{j_2}, \varvec{X}_{j_2+h}) w(\varvec{u}) w(\varvec{v}) \mathrm{d}\varvec{u} \mathrm{d}\varvec{v}\\&+ \frac{1}{\sqrt{H}} \sum _{h=1}^H \frac{1}{n-h}\iint \sum _{j_1=1}^{n-h}\sum _{j_2=1}^{n-h} I\{j_1\ne j_2;|j_1-j_2|< H\} \widetilde{g}(\varvec{u},\varvec{v}; \varvec{X}_{j_1}, \varvec{X}_{j_1+h})\\&\quad \times \widetilde{g}(\varvec{u},\varvec{v};\varvec{X}_{j_2}, \varvec{X}_{j_2+h}) w(\varvec{u}) w(\varvec{v}) \mathrm{d}\varvec{u} \mathrm{d}\varvec{v} := T_{n,11} +T_{n,12}+T_{n,13}, \end{aligned}$$

where \(I\{A\}\) is the indicator of the set A. We study these terms separately and show that \(T_{n,11}\) and \(T_{n,12}\) are influential, while \( T_{n,13}\) is negligible. Their properties are formulated in the next two lemmas.

Lemma 1

Under the assumptions of Theorem 1, it holds that \(\mathbb {E} T_{n,13}=0\), \(\mathbb {E} T^2_{n,13}= O(H^3/n)\), and \( \mathbb {E} T_{n,11}=\sqrt{H}\, \mathbb {E} \iint \widetilde{g}^2(\varvec{u},\varvec{v}; \varvec{X}_{1}, \varvec{X}_{2}) w(\varvec{u}) w(\varvec{v}) \mathrm{d}\varvec{u} \mathrm{d}\varvec{v} =\sqrt{H} \gamma _p\), with \(T_{n,11}=\sqrt{H} \gamma _p +O_P(H n^{-1/2})\).

Proof of Lemma 1

The assertions are obtained by directly calculating expectations and variances. \(\square \)
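
For illustration, the direct computation of the leading term gives (a sketch, under the null hypothesis of serial independence with identically distributed observations)

$$\begin{aligned} \mathbb {E}\, T_{n,11}=\frac{1}{\sqrt{H}}\sum _{h=1}^H \frac{1}{n-h}\sum _{j=1}^{n-h} \mathbb {E}\iint \widetilde{g}^2(\varvec{u},\varvec{v}; \varvec{X}_{j}, \varvec{X}_{j+h})\, w(\varvec{u}) w(\varvec{v})\, \mathrm{d}\varvec{u}\, \mathrm{d}\varvec{v} =\sqrt{H}\,\gamma _p, \end{aligned}$$

since each pair \((\varvec{X}_{j},\varvec{X}_{j+h})\) is then distributed as the independent pair \((\varvec{X}_{1},\varvec{X}_{2})\); the variance bound for \(T_{n,11}\) and the moments of \(T_{n,13}\) are obtained analogously, using the orthogonality properties of \(\widetilde{g}\).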

It remains to study \( T_{n,12}\), which is the most difficult part. The aim is to prove that \(T_{n,12}\) is asymptotically normally distributed with zero mean and finite variance.

Lemma 2

Under the assumptions of Theorem 1, it holds that \( \frac{ T_{n,12}}{ \sqrt{ \nu _p}} {\mathop {\rightarrow }\limits ^{{\mathcal {L}}}} {\mathcal {N}}(0,1) \) as \(n\rightarrow \infty \).

Proof of Lemma 2

It suffices to investigate \(\widehat{T}_{n,12}= \sum _{j_2=H}^{n-H} Q_{j_2,n}\), where

$$\begin{aligned} Q_{j_2,n}=\frac{1}{\sqrt{H}} \sum _{h=1}^H \iint \frac{1}{n-H} \sum _{j_1=1}^{j_2-H} \widetilde{g}(\varvec{u},\varvec{v}; \varvec{X}_{j_1}, \varvec{X}_{j_1+h}) \widetilde{g}(\varvec{u},\varvec{v}; \varvec{X}_{j_2}, \varvec{X}_{j_2+h}) w(\varvec{u}) w(\varvec{v}) \mathrm{d}\varvec{u} \mathrm{d}\varvec{v}. \end{aligned}$$

Since \(\mathbb {E}\big ( Q_{j_2,n}|\varvec{X}_1,\ldots ,\varvec{X}_{j_2-1}\big )=0\) for \(j_2=H,\dots , n-H\), \(\widehat{T}_{n,12}\) is a sum of martingale differences. To establish the asymptotic normality of \(\widehat{T}_{n,12}\), we apply Theorem 24.3 (page 383) in Davidson (1994), which requires showing that, as \(n\rightarrow \infty \),

$$\begin{aligned} \frac{\sum _{j_2=H}^{n-H} Q^2_{j_2,n}}{ \mathbb {E} \sum _{j_2=H}^{n-H} Q^2_{j_2,n}} {\mathop {\rightarrow }\limits ^{P}} 1 \quad {\text {and}}\quad \max _{j_2\le n-H} \frac{ Q^2_{j_2,n}}{\mathbb {E} \sum _{j_2=H}^{n-H} Q^2_{j_2,n}}{\mathop {\rightarrow }\limits ^{P}} 0. \end{aligned}$$

Towards this, we denote \(\widetilde{g}_{i,j,h}=\widetilde{g}(\varvec{u}_i,\varvec{v}_i; \varvec{X}_{j},\varvec{X}_{j+h})\) and investigate

$$\begin{aligned} \sum _{j_2=2}^{n-H} Q^2_{j_2}= & {} \frac{1}{H}\frac{1}{(n-H)^2}\sum _{j_2=2}^{n-H}\sum _{j_1=1}^{j_2-H} \sum _{j_3=1}^{j_2-H} \sum _{h_1=1}^H \sum _{h_2=1}^H \iiiint \widetilde{g}_{1,j_1,h_1} \widetilde{g}_{2,j_3,h_2} \big \{\widetilde{g}_{1,j_2,h_1} \widetilde{g}_{2,j_2,h_2}\nonumber \\&\pm \mathbb {E}\left( \widetilde{g}_{1,j_2,h_1} \widetilde{g}_{2,j_2,h_2}\right) \big \} w(\varvec{u}_1) w(\varvec{v}_1) w(\varvec{u}_2) w(\varvec{v}_2) \mathrm{d}\varvec{u}_1\mathrm{d}\varvec{v}_1 \mathrm{d}\varvec{u}_2 \mathrm{d}\varvec{v}_2, \end{aligned}$$
(22)

which will again be decomposed into several summands, some of them negligible and others influential. In particular, we investigate separately the terms \(L_{1,n},L_{2,n},L_{3,n},L_{4,n}\) defined below. Defining

$$\begin{aligned} L_{1,n}= & {} \frac{1}{H}\frac{1}{(n-H)^2}\sum _{j_2=2}^{n-H} \sum _{j_1=j_3=1}^{j_2-H} \sum _{h_1=1}^H \sum _{h_2=1}^H \iiiint \widetilde{g}_{1,j_1,h_1} \widetilde{g}_{2,j_1,h_2} \big \{\widetilde{g}_{1,j_2,h_1} \widetilde{g}_{2,j_2,h_2}\\&-\mathbb {E}(\widetilde{g}_{1,j_2,h_1} \widetilde{g}_{2,j_2,h_2})\big \} w(\varvec{u}_1) w(\varvec{v}_1) w(\varvec{u}_2) w(\varvec{v}_2) \mathrm{d}\varvec{u}_1\mathrm{d}\varvec{v}_1 \mathrm{d}\varvec{u}_2 \mathrm{d}\varvec{v}_2 \end{aligned}$$

it can be shown by straightforward but long calculations that \(\mathbb {E}L_{1,n}= 0\), \(\mathbb {E}L_{1,n}^2=O\Big ( \frac{H n^2}{ H^2 n^4}\Big )\). Next, for

$$\begin{aligned} L_{2,n}= & {} \frac{1}{H}\frac{1}{(n-H)^2} \sum _{j_2=2}^{n-H}\sum _{j_1=j_3=1}^{j_2-H} \sum _{h=1}^H \iiiint \widetilde{g}_{1,j_1,h} \widetilde{g}_{2,j_1,h} \mathbb {E}(\widetilde{g}_{1,j_2,h} \widetilde{g}_{2,j_2,h})\\&\times w(\varvec{u}_1) w(\varvec{v}_1) w(\varvec{u}_2) w(\varvec{v}_2) \mathrm{d}\varvec{u}_1\mathrm{d}\varvec{v}_1 \mathrm{d}\varvec{u}_2 \mathrm{d}\varvec{v}_2 \end{aligned}$$

we have that

$$\begin{aligned} \mathbb {E} L_{2,n}= & {} \frac{1}{H}\frac{1}{(n-H)^2}\sum _{j_2=2}^{n-H} \sum _{j_1=j_3=1}^{j_2-H} \sum _{h=1}^H \iiiint \mathbb {E} ( \widetilde{g}_{1,j_1,h} \widetilde{g}_{2,j_1,h}) \mathbb {E}(\widetilde{g}_{1,j_2,h} \widetilde{g}_{2,j_2,h})\\&\times w(\varvec{u}_1) w(\varvec{v}_1) w(\varvec{u}_2) w(\varvec{v}_2) \mathrm{d}\varvec{u}_1\mathrm{d}\varvec{v}_1 d \varvec{u}_2 \mathrm{d}\varvec{v}_2,\\ \mathbb {E}( L_{2,n}-\mathbb {E}L_{2,n})^2= & {} O\Big (\frac{1}{H^2n^4}n^3 H\Big )=O\Big (\frac{1}{nH}\Big ). \end{aligned}$$

Likewise for

$$\begin{aligned} L_{3,n}= & {} \frac{1}{H}\frac{1}{(n-H)^2}\sum _{j_2=3}^{n-H}\sum _{j_1=1}^{j_3-1} \sum _{j_3=1}^{j_2-H} \sum _{h_1=1}^H \iiiint \widetilde{g}_{1,j_1,h_1} \widetilde{g}_{2,j_3,h_1} \{\widetilde{g}_{1,j_2,h_1} \widetilde{g}_{2,j_2,h_1}\\&- \mathbb {E} (\widetilde{g}_{1,j_2,h_1} \widetilde{g}_{2,j_2,h_1})\} w(\varvec{u}_1) w(\varvec{v}_1) w(\varvec{u}_2) w(\varvec{v}_2) \mathrm{d}\varvec{u}_1\mathrm{d}\varvec{v}_1 \mathrm{d}\varvec{u}_2 \mathrm{d}\varvec{v}_2, \end{aligned}$$

it may be shown that \(\mathbb {E} L_{3,n}=0\) and \(\mathbb {E} L^2_{3,n} =O\Big ( \frac{1}{H^2 n^4} n^3 H^2\Big )\). Finally for

$$\begin{aligned} L_{4,n}= & {} \frac{1}{H}\frac{1}{(n-H)^2}\sum _{j_2=3}^{n-H} \sum _{j_1=1}^{j_3-1} \sum _{j_3=1}^{j_2-H} \sum _{h=1}^H \iiiint \widetilde{g}_{1,j_1,h} \widetilde{g}_{2,j_3,h} \mathbb {E} (\widetilde{g}_{1,j_2,h} \widetilde{g}_{2,j_2,h})\\&\times w(\varvec{u}_1) w(\varvec{v}_1) w(\varvec{u}_2) w(\varvec{v}_2) \mathrm{d}\varvec{u}_1\mathrm{d}\varvec{v}_1 \mathrm{d}\varvec{u}_2 \mathrm{d}\varvec{v}_2 \end{aligned}$$

we have \(\mathbb {E} L_{4,n}=0\), \(\mathbb {E} L_{4,n}^2 =O\Big (\frac{n^2}{H^2n^4} H n^2\Big )=O(H^{-1})\). More detailed (long but straightforward) calculations give that \(\mathbb {E} L_{2,n}=\frac{1}{2} \nu (1+o(1))\).

Combining the above properties of \(L_{j,n}, \, j=1,\ldots ,4\), we obtain \(\sum _{j_2=H}^{n-H} Q_{j_2}^2= \nu (1+o_P(1))\) and \(\sum _{j_2=H}^{n-H}\mathbb {E} Q_{j_2}^2= \nu (1+o(1))\) and that for any \(c>0\),

$$\begin{aligned}&\max _{j_2\le n-H}\frac{\mathbb {E} Q^2_{j_2,n}}{ \sum _{j_2=H}^{n-H} \mathbb {E} Q^2_{j_2,n}}=o(1),\\&\quad P\Big (\max _{j_2\le n-H}\frac{| Q^2_{j_2,n}- \mathbb {E} Q^2_{j_2,n}| }{ \sum _{j_2=H}^{n-H}\mathbb {E} Q^2_{j_2,n}}\ge c\Big )\\&\quad \le \sum _{j_2=H}^{n-H}\frac{1}{c^2 \big (\sum _{j_2=H}^{n-H} \mathbb {E} Q^2_{j_2,n}\big )^2} \mathbb {E} \big (Q^2_{j_2,n}- \mathbb {E} Q^2_{j_2,n}\big )^2=o(1). \end{aligned}$$

Hence, the two displayed requirements above are fulfilled and, going through the whole proof, we conclude that the assertion of Lemma 2 holds true. \(\square \)

Proof of Theorem 1, continuation

Combining (21) with Lemma 1 and Lemma 2 yields the assertion of Theorem 1. \(\square \)

Proof of Theorem 2

Since, under the assumptions of stationarity and ergodicity (cf. (20)), the following properties hold true

$$\begin{aligned}&\frac{1}{\sqrt{H}}\sum _{h=1}^H (n-h)\iint \mathbb {E}\Big |\frac{1}{n-h} \sum _{j=1}^{n-h}\big \{\exp ({\mathrm{i}} (\varvec{u}^\top \varvec{X}_j +\varvec{v}^\top \varvec{X}_{j+h}))\\&\quad - \mathbb {E}\exp ({\mathrm{i}} (\varvec{u}^\top \varvec{X}_j +\varvec{v}^\top \varvec{X}_{j+h}) )\big \} \Big |^2 w( \varvec{u}) w(\varvec{v}) \mathrm{d}\varvec{u} \mathrm{d}\varvec{v} =O(1), \\&\quad \frac{1}{\sqrt{H}}\sum _{h=1}^H (n-h)\iint \mathbb {E} \Big |\frac{1}{(n-h)^2}\sum _{j=1}^{n-h}\big \{\exp ({\mathrm{i}} \varvec{u}^\top \varvec{X}_j) -\varphi _x(\varvec{u})\big \} \\&\quad \times \sum _{r=1}^{n-h}\big \{\exp ({\mathrm{i}} \varvec{v}^\top \varvec{X}_{r+h}) -\varphi _x(\varvec{v})\big \} \Big |^2 w( \varvec{u}) w( \varvec{v}) \mathrm{d}\varvec{u} \mathrm{d}\varvec{v}=O(1) \end{aligned}$$

and since by assumption (16) we also have

$$\begin{aligned}&\frac{1}{\sqrt{H}} \sum _{h=1}^H (n-h)\iint \Big |\mathbb {E}\exp \{{\mathrm{i}} (\varvec{u}^\top \varvec{X}_j +\varvec{v}^\top \varvec{X}_{j+h}) \}\\&- \varphi _{x_1}(\varvec{u})\varphi _{x_{1+h}} (\varvec{v})\Big |^2 w(\varvec{u}) w( \varvec{v}) \mathrm{d}\varvec{u} \mathrm{d}\varvec{v}=O(n), \end{aligned}$$

the assertion of Theorem 2 directly follows. \(\square \)

Proof of Theorem 3

Going through the proof of Theorem 1 and taking into account the considered setup, we can directly conclude the assertion of Theorem 3. \(\square \)

Proof of Theorem 4

The proof is omitted since it follows the same lines as that of Theorem 2. \(\square \)

Cite this article

Hlávka, Z., Hušková, M. & Meintanis, S.G. Testing serial independence with functional data. TEST 30, 603–629 (2021). https://doi.org/10.1007/s11749-020-00732-0
