
Asymptotic theory for regression models with fractional local to unity root errors


Abstract

This paper develops the asymptotic theory for parametric and nonparametric regression models whose errors have a fractional local to unity root (FLUR) model structure. FLUR models are stationary time series with a semi-long range dependence property: their covariance function resembles that of a long memory model at moderate lags but eventually decays exponentially fast, owing to a decay factor governed by an exponential tempering parameter. When this parameter depends on the sample size, the asymptotic theory for these regression models admits a wide range of stochastic processes, with behavior that includes long, semi-long, and short memory processes.
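
As a concrete illustration of the semi-long range dependence property described above, the following sketch simulates a tempered fractionally integrated series \(X(j)=\sum _{k\ge 0}e^{-\lambda k}b_d(k)\varepsilon (j-k)\) and compares its sample autocovariance with that of its untempered (long memory) counterpart. The weight formula \(b_d(k)=\varGamma (k+d)/(\varGamma (d)\,k!)\), consistent with the asymptotics \(b_d(k)\sim k^{d-1}/\varGamma (d)\) used in the proofs below, and all numerical settings are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import gammaln

rng = np.random.default_rng(0)

def flur(N, d, lam, m=20_000):
    # Tempered fractionally integrated series, MA representation truncated at m lags.
    # Assumed weights: b_d(k) = Gamma(k + d) / (Gamma(d) k!), tempered by exp(-lam * k).
    k = np.arange(m)
    coef = np.exp(gammaln(k + d) - gammaln(d) - gammaln(k + 1) - lam * k)
    return fftconvolve(rng.standard_normal(N + m - 1), coef, mode="valid")  # length N

def acov(x, lags):
    x = x - x.mean()
    return np.array([np.mean(x[: len(x) - h] * x[h:]) for h in lags])

N, d, lags = 200_000, 0.3, [10, 100, 1000]
g_tempered = acov(flur(N, d, lam=0.01), lags)  # semi-long memory (FLUR-type)
g_long = acov(flur(N, d, lam=0.0), lags)       # untempered long memory benchmark
# Close at moderate lags; the tempered autocovariance dies off exponentially
# once lag * lam is appreciable, while the untempered one decays hyperbolically.
print(g_tempered / g_long)
```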


References

  • Abramowitz M, Stegun I (1965) Handbook of mathematical functions, 9th edn. Dover, New York

  • Baillie R (1996) Long-memory processes and fractional integration in econometrics. J Econ 73:5–59

  • Barndorff-Nielsen O (1998) Processes of normal inverse Gaussian type. Finance Stoch 2:41–68

  • Beran J, Feng Y (2013) Optimal convergence rates in non-parametric regression with fractional time series errors. J Time Ser Anal 34:30–39

  • Beran J, Weiershäuser A, Galizia C, Rein J, Smith B, Strauch M (2014) On piecewise polynomial regression under general dependence conditions, with an application to calcium-imaging data. Sankhya B 76:49–81

  • Billingsley P (1968) Convergence of probability measures. Wiley, Hoboken

  • Bobkoski M (2011) Hypothesis testing in nonstationary time series. Dissertation, University of Wisconsin

  • de Boor C (2001) A practical guide to splines. Springer, Berlin

  • Brockwell P, Davis R (2012) Time series: theory and methods, 2nd edn. Springer, Berlin

  • Chan N, Wei C (1987) Asymptotic inference for nearly nonstationary AR(1) processes. Ann Statist 15:1050–1063

  • Csörgő M, Horváth L (1997) Limit theorems in change-point analysis. Wiley, Hoboken

  • Csörgő S, Mielniczuk J (1995) Close short-range dependent sums and regression estimation. Acta Sci Math (Szeged) 60:177–196

  • Csörgő S, Mielniczuk J (1995) Nonparametric regression under long-range dependent normal errors. Ann Statist 23:1000–1014

  • Csörgő S, Mielniczuk J (1995) Distant long-range dependent sums and regression estimation. Stochastic Process Appl 59:143–155

  • Dacorogna M, Müller U, Nagler R, Olsen R, Pictet O (1993) A geographical model for the daily and weekly seasonal volatility in the foreign exchange market. J Int Money Finance 12:413–443

  • De Brabanter K, Cao F, Gijbels I, Opsomer J (2018) Local polynomial regression with correlated errors in random design and unknown correlation structure. Biometrika 105:681–690

  • Deo R (1997) Nonparametric regression with long-memory errors. Stat Probab Lett 33:89–94

  • Feller W (1971) An introduction to probability theory and its applications, 2nd edn. Wiley, Hoboken

  • Giraitis L, Kokoszka P, Leipus R (2000) Stationary ARCH models: dependence structure and central limit theorem. Econ Theory 16:3–22

  • Giraitis L, Kokoszka P, Leipus R, Teyssière G (2003) On the power of R/S-type tests under contiguous and semi-long memory alternatives. Acta Appl Math 78:285–299

  • Giraitis L, Koul H, Surgailis D (2012) Large sample inference for long memory processes. Imperial College Press, London

  • Granger C, Joyeux R (1980) An introduction to long-memory time series models and fractional differencing. J Time Ser Anal 1:15–29

  • Granger C, Ding Z (1996) Varieties of long memory models. J Econ 73:61–77

  • Guo H, Koul H (2007) Nonparametric regression with heteroscedastic long memory errors. J Statist Plann Inference 137:379–404

  • Hall P, Hart J (1990) Nonparametric regression with long-range dependence. Stochast Process Appl 36:339–351

  • Hosking J (1981) Fractional differencing. Biometrika 68:165–176

  • Hosking J (1984) Modeling persistence in hydrological time series using fractional differencing. Water Resour Res 20:1898–1908

  • McLeod AI, Meerschaert MM, Sabzikar F (2016) Artfima v1.5. https://cran.r-project.org/web/packages/artfima/artfima.pdf

  • Meerschaert M, Sabzikar F (2014) Stochastic integration with respect to tempered fractional Brownian motion. Stochast Process Appl 124:2363–2387

  • Müller U, Watson M (2014) Measuring uncertainty about long-run predictions. Rev Econ Stud 83:1711–1740

  • Phillips P (1987) Time series regression with a unit root. Econometrica 55:277–301

  • Phillips P, Yu J (2011) Dating the timeline of financial bubbles during the subprime crisis. Quantitat Econ 2:455–491

  • Phillips P, Shi S, Yu J (2015) Testing for multiple bubbles: historical episodes of exuberance and collapse in the S&P 500. Int Econ Rev 56:1042–1076

  • Phillips P, Shi S, Yu J (2015) Testing for multiple bubbles: limit theory of real time detectors. Int Econ Rev 56:1077–1131

  • Pipiras V, Taqqu M (1997) Asymptotic theory for certain regression models with long memory errors. J Time Ser Anal 18:385–393

  • Pipiras V, Taqqu M (2000) Convergence of weighted sums of random variables with long range dependence. Stochast Process Appl 90:157–174

  • Priestley M, Chao M (1972) Nonparametric function fitting. J R Stat Soc Ser B Stat Methodol 34:385–392

  • Ray B, Tsay R (1997) Bandwidth selection for kernel regression with long-range dependent errors. Biometrika 84:791–802

  • Robinson P (1997) Large sample inference for nonparametric regression with dependent errors. Ann Stat 25:2054–2083

  • Sabzikar F, Surgailis D (2018) Tempered fractional Brownian and stable motions of second kind. Stat Probab Lett 132:17–27

  • Sabzikar F, Surgailis D (2018) Invariance principles for tempered fractionally integrated processes. Stochast Process Appl 128:3419–3438

  • Samorodnitsky G, Taqqu M (1994) Stable non-Gaussian random processes: stochastic models with infinite variance. Chapman and Hall, Boca Raton

  • Seber G, Wild C (1989) Nonlinear regression. Wiley, Hoboken

  • Wahba G (1990) Spline models for observational data. Society for Industrial and Applied Mathematics (SIAM), New Delhi

  • Weiershäuser A (2012) Piecewise polynomial regression with fractional residuals for the analysis of calcium imaging data. Dissertation. http://kops.uni-konstanz.de/handle/123456789/18867, University of Konstanz

Author information

Corresponding author

Correspondence to Kris De Brabanter.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


Proofs

Before we prove the main results of the paper, we first state two technical lemmas upon which our results are based. Lemmas 1 and 2 play an important role in establishing the asymptotic results in Sect. 4. Next, we introduce some notation that will be used in Lemma 2.

For the function f and \(m\in \mathbb {N}\cup \{\infty \}\), we define the approximation

$$\begin{aligned}&f^{+}_{N,m}(y)=\sum _{j=0}^{m} f\Big (\frac{j}{N}\Big )1_{[\frac{j}{N},\frac{j+1}{N}]}(y), \qquad f^{-}_{N,m}(y)=\sum _{j=-m}^{-1} f\Big (\frac{j}{N}\Big )1_{[\frac{j}{N},\frac{j+1}{N}]}(y), \\&f^{+}_{N}=f^{+}_{N,\infty }, \qquad f^{-}_{N}=f^{-}_{N,\infty },\qquad f_{N}=f^{+}_{N}+f^{-}_{N}. \end{aligned}$$

Lemma 1

Let \(b_d(k)\) be defined as in (4). Then for any \(y \in \mathbb {R}\)

$$\begin{aligned} \biggl (\lambda _N + i\frac{y}{N}\biggr )^{d} \sum _{k=0}^{\infty } e^{- (\lambda _N + i\frac{y}{N})k}\, b_{d}(k) \sim 1 \end{aligned}$$
(43)

as \(N\rightarrow \infty \).

Proof of Lemma 1

For \(d>0\), since \(\sum _{k=0}^N b_d(k) \sim N^d/(d\varGamma (d))\) as \(N\rightarrow \infty \) according to (4) and \(|e^{-(\lambda _N+i\frac{y}{N})}| \le 1\), the Tauberian theorem for power series (Feller 1971, p. 447, Theorem 5), which transfers the asymptotics of the partial sums of the \(b_d(k)\) to those of their generating function, gives

$$\begin{aligned} \sum _{k=0}^{\infty } e^{- (\lambda _N + i\frac{y}{N})k}\, b_{d}(k) \sim \bigl (1-e^{- (\lambda _N + i\frac{y}{N})}\bigr )^{-d} \text { as } N\rightarrow \infty \end{aligned}$$

and consequently \(\bigl (\lambda _N + i\frac{y}{N}\bigr )^{d}\big /\bigl (1-e^{- (\lambda _N + i\frac{y}{N})}\bigr )^{d}\sim 1\) as \(\lambda _N \rightarrow 0\) and \(N \rightarrow \infty \), proving (43). For \(-1<d<0\), define \(\widetilde{b}_d(k) = \sum _{i=k}^\infty b_d(i) \sim -k^{\widetilde{d}-1}/(d\varGamma (d))\) with \(\widetilde{d}=d+1 \in (0,1)\). Next, we have that

$$\begin{aligned} \widetilde{b}_d(0) = \sum _{i=0}^\infty b_d(i) = \sum _{i=0}^{k-1} b_d(i) + \sum _{i=k}^\infty b_d(i) = 0 \end{aligned}$$
(44)

and therefore \(\sum _{i=0}^{k-1} b_d(i) = -\sum _{i=k}^\infty b_d(i)\). Using summation by parts (Giraitis et al., 2012, p. 32, Eq. 2.5.8) yields

$$\begin{aligned} \lim _{s\rightarrow \infty } \sum _{j=0}^s e^{- (\lambda _N + i\frac{y}{N})j}\, b_{d}(j)= & {} \lim _{s\rightarrow \infty } \biggl [e^{-(\lambda _N +i\frac{y}{N})s} \sum _{j=0}^s b_d(j)\\&+ \sum _{j=0}^{s-1}\bigl \{e^{-(\lambda _N +i\frac{y}{N})j}-e^{-(\lambda _N +i\frac{y}{N})(j+1)}\bigr \} \sum _{i=0}^j b_d(i)\biggr ] \\= & {} 0 + \{1- e^{- (\lambda _N + i\frac{y}{N})}\}\lim _{s\rightarrow \infty } \sum _{j=0}^{s-1}e^{-(\lambda _N +i\frac{y}{N})j} \sum _{i=0}^j b_d(i). \end{aligned}$$

By setting \(j=k-1\) and using (44) we have

$$\begin{aligned}&\lim _{s\rightarrow \infty } \sum _{j=0}^s e^{- (\lambda _N + i\frac{y}{N})j}\, b_{d}(j) = \{1- e^{- (\lambda _N + i\frac{y}{N})}\}\lim _{s\rightarrow \infty } \sum _{k=1}^{s-1}e^{-(\lambda _N +i\frac{y}{N})(k-1)} \sum _{i=0}^{k-1} b_d(i) \\&\quad = -e^{(\lambda _N + i\frac{y}{N})} \{1- e^{- (\lambda _N + i\frac{y}{N})}\}\lim _{s\rightarrow \infty }\sum _{k=1}^{s-1}e^{-(\lambda _N +i\frac{y}{N})k}\sum _{i=k}^\infty b_d(i) \\&\quad = -e^{(\lambda _N + i\frac{y}{N})} \{1- e^{- (\lambda _N + i\frac{y}{N})}\}\lim _{s\rightarrow \infty }\sum _{k=1}^{s-1}e^{-(\lambda _N +i\frac{y}{N})k}\widetilde{b}_d(k). \end{aligned}$$

Application of the Tauberian theorem for power series (Feller 1971, p. 447, Theorem 5) yields

$$\begin{aligned} \sum _{k=0}^{\infty } e^{- (\lambda _N + i\frac{y}{N})k}\, b_{d}(k) \sim \{1- e^{- (\lambda _N + i\frac{y}{N})}\}^{1-\widetilde{d}} = \{1- e^{- (\lambda _N + i\frac{y}{N})}\}^{-d} \end{aligned}$$

as \(N\rightarrow \infty \), proving (43). In the general case \(-j< d < -j +1\), \(j = 1,2,\ldots \), (43) follows similarly by applying summation by parts j times. For \(d=0\), the same result holds under an additional assumption on the sums of the \(b_d(k)\); see Sabzikar and Surgailis (2018b). \(\square \)
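
As a quick numerical illustration of (43), the sketch below evaluates the left-hand side for a large N. The weights are assumed to take the standard fractional-integration form \(b_d(k)=\varGamma (k+d)/(\varGamma (d)\,k!)\), which is consistent with the asymptotics \(\sum _{k=0}^N b_d(k)\sim N^d/(d\varGamma (d))\) used above; all numerical settings are arbitrary.

```python
import numpy as np
from scipy.special import gammaln

def b_d(k, d):
    # assumed weights b_d(k) = Gamma(k + d) / (Gamma(d) k!), valid for d > 0
    return np.exp(gammaln(k + d) - gammaln(d) - gammaln(k + 1))

N, d, lam_star, y = 10_000, 0.3, 2.0, 1.5
lam_N = lam_star / N        # so that N * lambda_N -> lambda_* in (0, infinity)
s = lam_N + 1j * y / N
k = np.arange(5_000_000)    # truncation; e^{-lambda_N k} makes the tail negligible
lhs = s**d * np.sum(np.exp(-s * k) * b_d(k, d))
print(abs(lhs))             # approaches 1 as N grows, as (43) asserts
```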

Lemma 2

Let \(\{ X_{d,\lambda _N}(j)\}_{j\in \mathbb {Z}}\) be the tempered linear process given by (3), where \(N\lambda _N\rightarrow \lambda _*\in (0,\infty )\) and \(d>-1/2\). Let \({\mathcal {A}}_{d,\lambda _*}\) be the class of functions defined by (26) and let

$$\begin{aligned} {\mathbf {Condition\ A}:}\quad f,f^{\pm }_{N}\in {\mathcal {A}}_{d,\lambda _N}, \qquad \Vert f^{\pm }_{N} - f^{\pm }_{N,m}\Vert _{ {\mathcal {A}}_{d,\lambda _N} }\rightarrow 0\ \text { as }\ m\rightarrow \infty , \qquad \Vert f-f_N\Vert _{ {\mathcal {A}}_{d,\lambda _*} }\rightarrow 0\ \text { as }\ N\rightarrow \infty \end{aligned}$$

be satisfied. Then

$$\begin{aligned} \frac{1}{N^{d+1/2}}\sum _{j=-\infty }^{\infty }f\Big (\frac{j}{N}\Big )X_{d,\lambda _N}(j) {\mathop {\longrightarrow }\limits ^{\mathrm{f.d.d.}}}\int _{\mathbb {R}}f(u)\ dB^{II}_{d,\lambda _*}(u) \end{aligned}$$
(45)

as \(N\rightarrow \infty \).

Proof of Lemma 2

We first note that \(\{X_{d,\lambda _N}(j)\}_{j\in \mathbb {Z}}\) can be written as

$$\begin{aligned} X_{d,\lambda _N}(j) = \frac{1}{\sqrt{2\pi }} \int _{-\pi }^{\pi }e^{i\omega j}\sum _{k=0}^{\infty } e^{-i\omega k} e^{-\lambda _N k} b_{d}(k) \hat{B}(d\omega ), \end{aligned}$$
(46)

where \(\hat{B}(d \omega ) \) is a complex-valued Gaussian noise with \(\mathbb {E}|\hat{B}(d \omega )|^2 = d \omega \); see Brockwell and Davis (2012, Sects. 4.6–4.7). Define

$$\begin{aligned} U_{N}=\frac{1}{N^{ d+ \frac{1}{2}}} \sum _{j= -\infty }^{\infty } f\Big (\frac{j}{N}\Big ) X_{d,\lambda _N}(j), \qquad U=\int _{\mathbb {R}}\ f(u)\ dB^{II}_{d,\lambda _*}(u). \end{aligned}$$
(47)

The Wiener integral U is well-defined since \(f\in {\mathcal {A}}_{d,\lambda _*}\). To show that the series \(U_{N}\) is well-defined in \(L^{2}(\varOmega )\), first apply the spectral representation of \(\{X_{d,\lambda _N}(j)\}_{j\in \mathbb {Z}}\) given by (46):

$$\begin{aligned} \begin{aligned}&\frac{1}{ {N}^{d+ \frac{1}{2}} } \sum _{j=0}^{m} f\Big (\frac{j}{N}\Big ) X_{d,\lambda _N}(j) = \frac{1}{ {N}^{d+ \frac{1}{2}} } \int _{-\pi }^{\pi } \Bigg [\sum _{j=0}^{m} \frac{1}{\sqrt{2\pi }} f\Big (\frac{j}{N}\Big ) e^{ij\omega }\Bigg ]\\&\qquad \sum _{k=0}^{\infty } e^{ -(\lambda _N + i\omega )k } b_{d}(k)\ d\widehat{B}(\omega )\\&\quad =\frac{1}{ {N}^{d+ \frac{1}{2}} } \int _{\mathbb {R}}\Bigg [\sum _{j=0}^{m} \frac{1}{\sqrt{2\pi }} f\Big (\frac{j}{N}\Big ) e^{\frac{i j y}{N}} \Bigg ]\mathbf{1}_{[-N\pi ,N\pi ]}(y)\\&\qquad \times \sum _{k=0}^{\infty } e^{ -(\lambda _N + i\frac{y}{N} )k } b_{d}(k) d\hat{B}\big (N^{-1}y\big )\\&\quad =\frac{1}{N^{d-\frac{1}{2}}}\int _{\mathbb {R}} \Bigg [ \sum _{j=0}^{m} \frac{1}{\sqrt{2\pi }} f\Big (\frac{j}{N}\Big ) \frac{e^{\frac{i(j+1)y}{N}}-e^{\frac{ijy}{N}}}{iy} \Bigg ]\\&\qquad \times \frac{\frac{iy}{N}}{e^{\frac{iy}{N}}-1} 1_{[-N\pi ,N\pi ]}(y) \sum _{k=0}^{\infty } e^{ -(\lambda _N + i\frac{y}{N} )k } b_{d}(k) d\hat{B}\big (N^{-1}y\big )\\&\quad =\frac{1}{N^{d-\frac{1}{2}}}\int _{\mathbb {R}} \widehat{f_{N,m}}(y) \frac{\frac{iy}{N}}{e^{\frac{iy}{N}}-1} \sum _{k=0}^{\infty } e^{ -(\lambda _N + i\frac{y}{N} )k } b_{d}(k) d\hat{B}\big (N^{-1}y\big ), \end{aligned} \end{aligned}$$
(48)

where \(\widehat{f_{N,m}}(y)=\sum _{j=0}^{m} f\Big (\frac{j}{N}\Big ) \frac{1}{\sqrt{2\pi }} \int _{\mathbb {R}}e^{i\omega y}\mathbf {1}_{(\frac{j}{N},\frac{j+1}{N})}(\omega )\ d\omega \) is the Fourier transform of \({f_{N,m}}\). We note

$$\begin{aligned} \Big | \sum _{k=0}^{\infty } e^{- (\lambda _N + \frac{iy}{N})k} b_{d}(k) \Big | \le C \,\Big |\lambda _N + \frac{iy}{N}\Big |^{-d}, \end{aligned}$$
(49)

for \(d>-\frac{1}{2}\) and a constant C by Lemma 1. Using (48) and (49), we have

$$\begin{aligned}&\mathbb {E}\Bigg | \frac{1}{ {N}^{d+ \frac{1}{2}} }\sum _{j=0}^{m}f\Big (\frac{j}{N}\Big ) {X}_{d,\lambda _N}(j)\Bigg |^{2}\nonumber \\&\quad =\int _{\mathbb {R}}\Big | \widehat{f_{N,m}}(y)\Big |^{2} \Bigg | \frac{\frac{iy}{N}}{e^{\frac{iy}{N}}-1} \Bigg |^{2} \frac{1}{ {N}^{2d} }\ \!\!\Big | \sum _{k=0}^{\infty } e^{- (\lambda _N + \frac{iy}{N})k} b_{d}(k) \Big |^{2}\ dy \nonumber \\&\quad \le \frac{\pi ^2}{4}\, C \int _{\mathbb {R}}\Big | \widehat{f_{N,m}}(y)\Big |^{2} \Big [ (N\lambda _N)^2 + y^2 \Big ]^{-d}\ dy \nonumber \\&\quad = C' \Vert f_{N,m}\Vert ^{2}_{{\mathcal {A}}_{d,N\lambda _N}}, \end{aligned}$$
(50)

where \(C'\) is another constant. Now, for \(m_2>m_1\ge 1\), we have

$$\begin{aligned} \mathbb {E}\Bigg |\frac{1}{ {N}^{d+ \frac{1}{2}} }\sum _{j=m_1+1}^{m_2}f\Big (\frac{j}{N}\Big ) X_{d,\lambda _N}(j)\Bigg |^{2}\le C'\Vert {f}^+_{N,m_2}-{f}^+_{N,m_1}\Vert ^{2}_{{\mathcal {A}}_{d,N\lambda _N}}\rightarrow 0 \end{aligned}$$

as \(m_1,m_2\rightarrow \infty \), which shows that the series is well-defined. The following remark explains the inequality used in (50).

Remark 5

In (50) we used Lemma 1 and the bound \(\Big |\frac{\frac{iy}{N}}{e^{\frac{iy}{N}}-1} \Big |^{2} \le \frac{\pi ^2}{4}\) for \(y \in [-N\pi ,N\pi ]\). The latter can be seen as follows:

$$\begin{aligned} \Bigg |\frac{\frac{iy}{N}}{e^{\frac{iy}{N}}-1} \Bigg |^{2} = \frac{\frac{y^2}{N^2}}{|\cos \frac{y}{N}+i\sin \frac{y}{N}-1|^2} = \frac{\frac{y^2}{N^2}}{2(1-\cos \frac{y}{N})} = \frac{1}{4} \frac{\frac{y^2}{N^2}}{\sin ^2 \frac{y}{2N}}. \end{aligned}$$

Since \(t\mapsto t/\sin t\) is increasing on \((0,\pi /2]\), the right-hand side is maximal at the endpoints \(y=\pm N\pi \), where it equals

$$\begin{aligned} \frac{1}{4} \frac{\pi ^2}{\sin ^2 \big (\pm \frac{\pi }{2}\big )}= \frac{\pi ^2}{4}. \end{aligned}$$
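
A short numerical check of this bound (illustrative values only):

```python
import numpy as np

N = 50
y = np.linspace(-N * np.pi, N * np.pi, 100_000)   # even count, so y = 0 is avoided
ratio2 = np.abs((1j * y / N) / (np.exp(1j * y / N) - 1)) ** 2
print(ratio2.max() <= np.pi**2 / 4 + 1e-12)       # True; the maximum sits at y = +/- N*pi
```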

Next, we show that \(U_{N}\) converges in distribution to U as \(N\rightarrow \infty \). As in Meerschaert and Sabzikar (2014, Theorem 3.15), the set of elementary functions is dense in \({\mathcal {A}}_{d,\lambda _*}\), so there exists a sequence of elementary functions \(f^{l}\) such that \(\Vert f-f^{l}\Vert _{{\mathcal {A}}_{d,\lambda _*}}\rightarrow 0\) as \(l\rightarrow \infty \). Now, define

$$\begin{aligned} U^{l}_{N}=\frac{1}{N^{ d+ \frac{1}{2}}}\sum _{j=-\infty }^{\infty } f^{l}\Big (\frac{j}{N}\Big ) X_{d,\lambda _N}(j),\quad \ U^{l}=\varGamma ^{-1}(d+1)\int _{\mathbb {R}}f^{l}(u)\ dB^{II}_{d,\lambda _*}(u). \end{aligned}$$
(51)

Observe that \(U^{l}_{N}\) is well-defined, since \(U^{l}_{N}\) has a finite number of terms and the elementary function \(f^{l}\) is in \({\mathcal {A}}_{d,\lambda _*}\). According to Billingsley (1968, Theorem 4.2), the series \(U_{N}\) converges in distribution to U if

Step 1:

\(U^{l}{\mathop {\longrightarrow }\limits ^{d}}U\), as \(l\rightarrow \infty \),

Step 2:

for all \(l\in \mathbb {N}\), \(U^{l}_{N}{\mathop {\longrightarrow }\limits ^{d}}U^{l}\), as \(N\rightarrow \infty \),

Step 3:

\(\limsup _{l\rightarrow \infty }\limsup _{N\rightarrow \infty }\mathbb {E}\Big |U^{l}_{N}-U_{N}\Big |^{2}=0\).

Step 1: The random variables \(U^{l}\) and U are normally distributed with mean zero and variances \(\Vert f^{l}\Vert ^{2}_{{\mathcal {A}}_{d,\lambda _*}}\) and \(\Vert f\Vert ^{2}_{{\mathcal {A}}_{d,\lambda _*}}\), respectively, since f and \(f^{l}\) are in \({{\mathcal {A}}_{d,\lambda _*}}\). Therefore \(\mathbb {E}\Big |U^{l}-U\Big |^{2}=\Vert f^{l}-f\Vert ^{2}_{{\mathcal {A}}_{d,\lambda _*}}\rightarrow 0\) as \(l\rightarrow \infty \).

Step 2: Note that \(f^{l}\) is an elementary function and hence \(U^{l}_{N}\), given by (51), can be written as \(U^{l}_{N}=\frac{1}{ N^{d+\frac{1}{2} } }\int _{\mathbb {R}} f^{l}(u) dS_{d,\lambda _N}(u)\). Now, apply part (iii) of Theorem 4.3 in Sabzikar and Surgailis (2018b) to see that \(\frac{S_{d,\lambda _N}(u)}{ N^{d+\frac{1}{2} } }{\mathop {\longrightarrow }\limits ^{\mathrm{f.d.d.}}}\varGamma ^{-1}(d+1) B^{II}_{d,\lambda _*}(u)\), as \(N\rightarrow \infty \), and this implies that \(U^{l}_{N}{\mathop {\longrightarrow }\limits ^{\mathrm{f.d.d.}}}U^{l}\), as \(N\rightarrow \infty \).

Step 3: By arguments similar to those leading to (49) and (50), we have

$$\begin{aligned} \begin{aligned} \mathbb {E}\Big |U^{l}_{N}-U_{N}\Big |^{2}&=\int _{\mathbb {R}} \Big | \widehat{f^{l}_{N}}(y)-\widehat{f_N}(y)\Big |^{2} \Big |\frac{\frac{iy}{N}}{e^{\frac{iy}{N}}-1} \Big |^{2} \frac{1}{N^{2d}}\ \Big |1-e^{-(\lambda _N+\frac{iy}{N})}\Big |^{-2d}\ dy\\&\le C \int _{\mathbb {R}} \Big | \widehat{f^{l}_{N}}(y)-\widehat{f_N}(y)\Big |^{2} \Big [ (N \lambda _N)^{2} + y^2 \Big ]^{-d}\ dy, \end{aligned} \end{aligned}$$
(52)

where \(\widehat{f^{l}_{N}}(y)\) and \(\widehat{f_N}(y)\) are the Fourier transforms of

$$\begin{aligned} f^{l}_{N}(u):=\sum _{j=0}^{\infty }f^{l}\Big (\frac{j}{N}\Big )\mathbf{1}_{ \big (\frac{j}{N},\frac{j+1}{N}\big )}(u) \end{aligned}$$

and \(f_{N}(u):=\sum _{j=0}^{\infty }f\Big (\frac{j}{N}\Big ) \mathbf{1}_{ \big (\frac{j}{N},\frac{j+1}{N}\big ) }(u)\), respectively. Note that \(f^{l}\) is an elementary function and therefore \(\widehat{f^{l}_{N}}\) converges to \(\widehat{f^{l}}\) at every point and \(\Big |\widehat{f^{l}_{N}}(\omega )-\widehat{f^{l}}(\omega )\Big |\le \widehat{g^{l}}(\omega )\) uniformly in N, for some function \(\widehat{g^{l}}(\omega )\) that is bounded by the minimum of \(C_1\) and \(C_2|\omega |^{-1}\) for all \(\omega \in \mathbb {R}\) (see Theorem 3.2 in Pipiras and Taqqu (2000) for more details). Let \(\mu _{d,\lambda }(d\omega )= (\lambda ^2+\omega ^2)^{-d}\ d\omega \) be the measure on the real line for \(d >-\frac{1}{2}\); then \(\widehat{g^{l}}(\omega )\in L^{2}(\mathbb {R},\mu _{d,\lambda })\). Now apply the dominated convergence theorem to see that

$$\begin{aligned} \Vert f^{l}_{N}-f^{l}\Vert ^{2}_{{\mathcal {A}}_{d,\lambda _*}}=\Vert \widehat{f^{l}_{N}}-\widehat{f^{l}}\Vert ^{2}_{L^{2}(\mathbb {R},\mu _{d,\lambda _*})}\rightarrow 0, \end{aligned}$$
(53)

as \(N\rightarrow \infty \). From (48) and (53), we have

$$\begin{aligned} \begin{aligned} \mathbb {E}\Big |U^{l}_{N}-U_{N}\Big |^{2}&\le C\Vert f^l_N - f_N \Vert ^{2}_{{\mathcal {A}}_{d,\lambda _*}}\\&\le C\Big [\Vert f^l_N - f^l \Vert ^{2}_{{\mathcal {A}}_{d,\lambda _*}}+\Vert f - f_N \Vert ^{2}_{{\mathcal {A}}_{d,\lambda _*}}+\Vert f^l -f\Vert ^{2}_{{\mathcal {A}}_{d,\lambda _*}}\Big ]. \end{aligned} \end{aligned}$$

The first two terms tend to zero as \(N\rightarrow \infty \) because of (53) and Condition A, respectively, and the last term tends to zero as \(l\rightarrow \infty \) (see Step 1); this completes the proof of Step 3. \(\square \)

Proof of Theorem 1

We prove only part (c); the proofs of parts (a) and (b) are similar and therefore omitted. We first show that

$$\begin{aligned} \frac{1}{(Nh)^{d+\frac{1}{2}}}\sum _{j=1}^{N} K\Big (\frac{Nx-j}{Nh}\Big ) X_{d,\lambda _N}(j){\mathop {\longrightarrow }\limits ^{\mathrm{f.d.d.}}}\frac{1}{\varGamma (d+1)}\int _{0}^{2}K^{'}(1-t) B^{II}_{d,\lambda _*}(t) dt, \end{aligned}$$
(54)

where \(B^{II}_{d,\lambda _*}(t)\) is TFBMII. Starting from the left-hand side of (54), passing from Riemann sums to integrals and integrating by parts yields

$$\begin{aligned} \begin{aligned}&\frac{1}{(Nh)^{d+\frac{1}{2}}} \sum _{j=1}^{N} K\Big (\frac{Nx-j}{Nh}\Big ) X_{d,\lambda _N}(j) = \frac{1}{(Nh)^{d+\frac{1}{2}}} \int _{0}^{1} K\Big (\frac{x-y}{h}\Big ) dS_{d,\lambda _N}(y)\\&\quad = \frac{1}{h(Nh)^{d+\frac{1}{2}}} \int _{0}^{1} K^{'}\Big (\frac{x-y}{h}\Big ) S_{d,\lambda _N}(y) dy\\&\quad = \frac{1}{(Nh)^{d+\frac{1}{2}}} \int _{-1}^{1} K^{'}(u) S_{d,\lambda _N}(x-hu) du\\&\quad = \frac{1}{(Nh)^{d+\frac{1}{2}}} \int _{-1}^{1} K^{'}(u) \sum _{j=\lfloor N(x-h)\rfloor }^{\lfloor N(x-hu)\rfloor } X_{d,\lambda _N}(j) du, \end{aligned} \end{aligned}$$
(55)

where we used

$$\begin{aligned} S_{d,\lambda _N}(x-hu) =\sum _{j=1}^{\lfloor N(x-h)\rfloor -1}X_{d,\lambda _N}(j)+ \sum _{j=\lfloor N(x-h)\rfloor }^{\lfloor N(x-hu)\rfloor } X_{d,\lambda _N}(j) \end{aligned}$$

and the assumptions on the kernel function K to see that

$$\begin{aligned} \frac{\sum _{j=1}^{\lfloor N(x-h)\rfloor -1}X_{d,\lambda _N}(j)}{(Nh)^{d+\frac{1}{2}}} \int _{-1}^{1} K^{'}(u) du=0. \end{aligned}$$

Next, by stationarity of \(X_{d,\lambda _N}(j)\) and a change of variable we have

$$\begin{aligned} \begin{aligned} \sum _{j=\lfloor N(x-h)\rfloor }^{\lfloor N(x-hu)\rfloor } X_{d,\lambda _N}(j)&= \sum _{j=1}^{ \lfloor N(x-hu)\rfloor - \lfloor N(x-h)\rfloor +1} X_{d,\lambda _N}(j+ \lfloor N(x-h)\rfloor -1)\\&{\mathop {=}\limits ^{f.d.d.}}\sum _{j=1}^{ \lfloor N(x-hu)\rfloor - \lfloor N(x-h)\rfloor +1} X_{d,\lambda _N}(j)= \sum _{j=1}^{ l_{x}(u) } X_{d,\lambda _N}(j), \end{aligned} \end{aligned}$$
(56)

where \(l_{x}(u)= \lfloor N(x-hu)\rfloor - \lfloor N(x-h)\rfloor +1\). Consequently, from (55) and (56), we get

$$\begin{aligned}&\frac{1}{(Nh)^{d+\frac{1}{2}}} \int _{-1}^{1} K^{'}(u)\sum _{j=\lfloor N(x-h)\rfloor }^{\lfloor N(x-hu)\rfloor }X_{d,\lambda _N}(j)\ du\nonumber \\&\quad {\mathop {=}\limits ^{f.d.d.}}\frac{1}{(Nh)^{d+\frac{1}{2}}} \int _{-1}^{1}K^{'}(u)\sum _{j=1}^{ l_{x}(u) } X_{d,\lambda _N}(j)\ du \nonumber \\&\quad =\frac{1}{(Nh)^{d+\frac{1}{2}}} \int _{0}^{2}K^{'}(1-t)\sum _{j=1}^{ \lfloor Nht\rfloor } X_{d,\lambda _N}(j) dt + o_{p}(1). \end{aligned}$$
(57)

According to Sabzikar and Surgailis (2018b, Theorem 4.3) we have

$$\begin{aligned} \frac{1}{(Nh)^{d+\frac{1}{2}}} \sum _{j=1}^{ \lfloor Nh t\rfloor } X_{d,\lambda _N}(j){\mathop {\longrightarrow }\limits ^{\mathrm{f.d.d.}}}\frac{1}{\varGamma (d+1)}B^{II}_{d,\lambda _*}(t) \end{aligned}$$
(58)

in D[0, 2] provided \(Nh\rightarrow \infty \). Now, the desired result (54) follows from (57), (58), and the continuous mapping theorem. \(\square \)
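
The normalization in (54) can be eyeballed numerically. The sketch below simulates FLUR errors, forms the kernel sum on the left-hand side of (54) for a Priestley–Chao-type estimator, and checks that its spread stabilizes across sample sizes. The weight formula \(b_d(k)=\varGamma (k+d)/(\varGamma (d)\,k!)\), the Epanechnikov kernel, the bandwidth \(h=N^{-0.2}\), and all other settings are illustrative assumptions rather than the paper's specification.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import gammaln

rng = np.random.default_rng(1)

def flur(N, d, lam, m=10_000):
    # tempered fractionally integrated errors; assumed b_d(k) = Gamma(k+d)/(Gamma(d) k!)
    k = np.arange(m)
    coef = np.exp(gammaln(k + d) - gammaln(d) - gammaln(k + 1) - lam * k)
    return fftconvolve(rng.standard_normal(N + m - 1), coef, mode="valid")

def kernel_sum(N, d, lam_star, x=0.5):
    h = N ** (-0.2)                              # bandwidth with Nh -> infinity
    j = np.arange(1, N + 1)
    u = (N * x - j) / (N * h)
    K = 0.75 * (1 - u**2) * (np.abs(u) <= 1)     # Epanechnikov kernel on [-1, 1]
    X = flur(N, d, lam_star / N)                 # lambda_N = lambda_* / N
    return np.sum(K * X) / (N * h) ** (d + 0.5)  # left-hand side of (54)

d, lam_star = 0.2, 2.0
for N in (2_000, 20_000, 200_000):
    draws = [kernel_sum(N, d, lam_star) for _ in range(200)]
    print(N, np.std(draws))  # roughly stable in N, consistent with a nondegenerate limit
```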

Proof of Theorem 2

The proof follows from Proposition 1 and Theorem 1, and hence we omit the details. \(\square \)

Proof of Theorem 3

For brevity, we restrict the proof to \(k=2\). Moreover, we prove only part (c) since the proofs of the other two cases are similar. We first note that for each \(0<x_i<1\),

$$\begin{aligned} \widehat{A}_{N,i}= (Nh)^{\frac{1}{2}-d}[\hat{m}(x_i)-\mathbb {E}\hat{m}(x_i)]=\frac{1}{(Nh)^{d+\frac{1}{2}}}\sum _{1\le s \le N}K\Big (\frac{Nx_i -s}{Nh}\Big )X_{d,\lambda _N}(s). \end{aligned}$$
(59)

Let \(j_i\) be integers such that \(|Nx_i-j_i|\le 1\) for \(i=1,2\) and, similarly to Deo (1997), define

$$\begin{aligned} \widetilde{A}_{Ni}= \frac{1}{(Nh)^{d+\frac{1}{2}}}\sum _{s=|j_i|-\lfloor Nh \rfloor }^{|j_i|+\lfloor Nh \rfloor } K\Big (\frac{j_i -s}{Nh}\Big )X_{d,\lambda _N}(s) \end{aligned}$$
(60)

for \(i=1, 2\). Since the kernel function K vanishes on \(\mathbb {R}\setminus [-1,1]\) and \(|K^{'}(x)|\le C\) for all \(x\in [-1,1]\), it follows that

$$\begin{aligned} \widehat{A}_{N,i}-\widetilde{A}_{N,i}= o_{p}(1) \end{aligned}$$
(61)

for \(i=1,2\). By the changes of variable \(s=\nu +j_1 -Nh\) and \(s=\nu +j_2 -Nh-\lfloor N\delta \rfloor \), with \(\delta = x_2-x_1\), in \(\widetilde{A}_{N,1}\) and \(\widetilde{A}_{N,2}\) respectively, the stationarity of \(X_{d,\lambda _N}\), and the bound \(|K^{'}(x)|\le C\), we see that

$$\begin{aligned} \Big ( \widetilde{A}_{N,1}, \widetilde{A}_{N,2} \Big ){\mathop {=}\limits ^{f.d.d.}}\Big ( A^{*}_{N,1}, A^{*}_{N,2} \Big )+ o_{p}(1), \end{aligned}$$
(62)

where

$$\begin{aligned} A^{*}_{N,1} = \sum _{\nu =1}^{2\lfloor Nh\rfloor } K\Big (\frac{\nu }{Nh}-1\Big ) X_{d,\lambda _N}(\nu ) \end{aligned}$$
(63)

and

$$\begin{aligned} A^{*}_{N,2} = \sum _{\nu =\lfloor N\delta \rfloor }^{ \lfloor N\delta \rfloor + 2\lfloor Nh\rfloor } K\Big (\frac{\nu - \lfloor N\delta \rfloor }{Nh}-1\Big ) X_{d,\lambda _N}(\nu ). \end{aligned}$$
(64)

We use the partial sums \(A^{*}_{N,i}\), \(i=1,2\), to establish the functional limit theorems. Let \(\{ K_m\}\) be a sequence of elementary functions such that \(K_m\rightarrow K\) in \(L^{2}\) as \(m\rightarrow \infty \). Define \(A^{*}_{m,N,i}\) as in (63) and (64) with K replaced by \(K_m(x)=\sum _{i=1}^{m} a_i \mathbf{1}_{( t_{i-1},t_{i} )}(x)\), where the \(a_i\) are constants and \(-1\le t_i \le 1\) for \(i=0, \ldots , m\). We can rewrite \(A^{*}_{m,N,i}\) as

$$\begin{aligned} A^{*}_{m,N,i} = 2^{d+\frac{1}{2}} \int _{0}^{2} K_{m}(u-1) dS^{*}_{Ni}(u) + o_{p}(1), \end{aligned}$$
(65)

where

$$\begin{aligned} S^{*}_{N,1}(s) = \frac{1}{(2Nh)^{d+\frac{1}{2}}}\sum _{t=1}^{\lfloor \lfloor Nh\rfloor s\rfloor } X_{d,\lambda _N}(t) \end{aligned}$$
(66)

and

$$\begin{aligned} S^{*}_{N,2}(s) = \frac{1}{(2Nh)^{d+\frac{1}{2}}}\sum _{t=1}^{\lfloor \lfloor Nh\rfloor s\rfloor } X_{d,\lambda _N}(t+ \lfloor N\delta \rfloor ). \end{aligned}$$
(67)

Using Sabzikar and Surgailis (2018b, Theorem 4.3) and the continuous mapping theorem yields

$$\begin{aligned} A^{*}_{m,N,i} {\mathop {\longrightarrow }\limits ^{\mathrm{f.d.d.}}}A^{*}_{m} = \int _{0}^{2} K_{m}(u-1) dB^{II}_{d,\lambda _*}(u), \end{aligned}$$
(68)

as \(N\rightarrow \infty \) and hence

$$\begin{aligned} \begin{aligned} \sigma ^{2}_{ii}&= \int _{0}^{2} \int _{0}^{2} K_{m}(u-1) K_{m}(v-1) \mathrm{Cov}\Big ( B^{II}_{d,\lambda _*}(u), B^{II}_{d,\lambda _*}(v) \Big ) du\ dv\\&= \int _{0}^{2} \int _{0}^{2} K_{m}(u-1) K_{m}(v-1) |u-v|^{d-\frac{1}{2}} K_{d-\frac{1}{2}}(\lambda _* |u-v|) du\ dv. \end{aligned} \end{aligned}$$
(69)

Next, we need to show that \(A^{*}_{m,N,1}\) and \(A^{*}_{m,N,2}\) are asymptotically independent (i.e. \(\sigma ^{2}_{12} = \sigma ^{2}_{21} =0\) ). Observe that

$$\begin{aligned} A^{*}_{m,N,1} = 2^{d+\frac{1}{2}} \sum _{j=1}^{m} a_{j} [ S^{*}_{N1}(t_{j}) - S^{*}_{N1}(t_{j-1})] + o_{P}(1) \end{aligned}$$
(70)

and

$$\begin{aligned} S^{*}_{N,1}(t_j) - S^{*}_{N,1}(t_{j-1}) = \frac{1}{(2Nh)^{d+\frac{1}{2}}}\sum _{s=\lfloor \lfloor Nh \rfloor t_{j-1}\rfloor }^{\lfloor \lfloor Nh\rfloor t_j\rfloor } X_{d,\lambda _N}(s) =\sum _{p=-\infty }^{\infty } d_{pN}\zeta (p), \end{aligned}$$
(71)

where

$$\begin{aligned} d_{pN}=\frac{1}{(2Nh)^{d+\frac{1}{2}}} \sum _{t= \lfloor \lfloor Nh\rfloor t_{j-1}\rfloor }^{\lfloor \lfloor Nh\rfloor t_j\rfloor } b_{d}(p-t)e^{-\lambda _N (p-t)}. \end{aligned}$$
(72)

Since \(b_{d}(j)e^{-\lambda _N j}\sim C j^{d-1} e^{-\lambda _N j}\) for large lags j (see (4)), for \(p> {\lfloor \lfloor Nh\rfloor t_j\rfloor }\) we have

$$\begin{aligned} |d_{pN}| \le C (Nh)^{-(d+\frac{1}{2})}\, Nh\, (p-{\lfloor \lfloor Nh\rfloor t_j\rfloor })^{d-1} e^{-\lambda _N (p-{\lfloor \lfloor Nh\rfloor t_j\rfloor })}, \end{aligned}$$
(73)

where C is a constant. Therefore, we get

$$\begin{aligned} \lim _{N \rightarrow \infty }\sum _{|p|>M} d^2_{pN}= 0, \end{aligned}$$
(74)

since \(h\log (Nh) \rightarrow 0\), where \(M=Nh\log (Nh)\). Consequently,

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathbb {E}\Bigg [ S^{*}_{N1}(t_j) - S^{*}_{N1}(t_{j-1}) - \sum _{|p|\le M} d_{pN} \zeta (p) \Bigg ]^{2} = 0 \end{aligned}$$
(75)

and by a similar argument

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathbb {E}\Bigg [ S^{*}_{N2}(t_j) - S^{*}_{N2}(t_{j-1}) - \sum _{|p|\le M} d_{pN} \zeta (p+\lfloor N\delta \rfloor ) \Bigg ]^{2} = 0. \end{aligned}$$
(76)

From (75), (76), and \( \lfloor N\delta \rfloor - 2M\rightarrow \infty \), we conclude that \( S^{*}_{N,1}(t_j) - S^{*}_{N,1}(t_{j-1})\) and \( S^{*}_{N,2}(t_{j^{'}}) - S^{*}_{N,2}(t_{{ j^{'} } -1})\) are asymptotically independent for all \(j, j^{'}\), and this implies that \(A^{*}_{m,N,1}\) and \(A^{*}_{m,N,2}\) are asymptotically independent. Thus

$$\begin{aligned} \Big (A^{*}_{m,N,1}, A^{*}_{m,N,2}\Big ){\mathop {\longrightarrow }\limits ^{\mathrm{f.d.d.}}}N_{2}\Big ( 0, \varSigma \Big ), \end{aligned}$$
(77)

where \(\sigma ^{2}_{ii}\) is given by (69) and \(\sigma ^{2}_{12}=\sigma ^{2}_{21}=0\). Since

$$\begin{aligned} \lim _{m\rightarrow \infty }\sigma ^{2}_{ii} = \int _{0}^{2} \int _{0}^{2} K(u-1) K(v-1) |u-v|^{d-\frac{1}{2}} K_{d-\frac{1}{2}}(\lambda _* |u-v|) du\ dv \end{aligned}$$
(78)

and by Pipiras and Taqqu (1997, Theorem 2), one obtains

$$\begin{aligned} \lim _{m\rightarrow \infty }\lim _{N\rightarrow \infty } \mathrm{Var}(A^{*}_{m,N,i}- A^{*}_{N,i})=0\quad \mathrm{for}\ i=1, 2, \end{aligned}$$
(79)

the desired result follows from (61), (62), (78), and (79). \(\square \)

Proof of Theorem 4

The proofs of parts (a) and (b) are similar, so we prove only part (b), for \(\lambda _*\in (0,\infty )\). The triangle inequality yields

$$\begin{aligned}&\mathbb {P}\Big ( N^{\frac{1}{2} - d}\big \Vert \hat{\theta }-\theta - ({{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {M}}\,}}_{N+})^{-1} {{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {e}}\,}}_{N}\big \Vert>\varDelta \Big ) \\&\quad \le \mathbb {P}\Big ( N^{\frac{1}{2} - d}\Big \Vert \hat{\theta }-\theta \Big \Vert>\frac{\varDelta }{2} \Big ) +\mathbb {P}\Big ( N^{\frac{1}{2} - d}\big \Vert ({{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {M}}\,}}_{N+})^{-1} {{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {e}}\,}}_{N}\big \Vert >\frac{\varDelta }{2} \Big ). \end{aligned}$$

For the first term, we have by Weiershäuser (2012, p. 95 Theorem 5.1.9), (3) and Markov’s inequality,

$$\begin{aligned} \mathbb {P}\big ( N^{\frac{1}{2} - d}\big \Vert \hat{\theta }-\theta \big \Vert >\frac{\varDelta }{2} \Big ) \le \frac{4\mathbb {E}\big \Vert \hat{\theta }-\theta \big \Vert ^2}{\varDelta ^2 N^{1-2d}} = \frac{O[\min (N^{-\beta }N^{2d-1},N^{4d-2})]}{\varDelta ^2}. \end{aligned}$$

Since the bound tends to zero for \(d < 1/2\), we obtain \(\lim _{N\rightarrow \infty }\mathbb {P}\big ( N^{\frac{1}{2} - d}\big \Vert \hat{\theta }-\theta \big \Vert >\frac{\varDelta }{2} \big ) = 0\) for every \(\varDelta >0\). For the second term, again by Markov’s inequality,

$$\begin{aligned} \mathbb {P}\Big ( N^{\frac{1}{2} - d}\big \Vert ({{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {M}}\,}}_{N+})^{-1} {{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {e}}\,}}_{N}\big \Vert >\frac{\varDelta }{2} \Big )\le & {} \frac{4\mathbb {E}\big \Vert ({{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {M}}\,}}_{N+})^{-1} {{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {e}}\,}}_{N}\big \Vert ^2}{\varDelta ^2 N^{1-2d}}. \nonumber \\ \end{aligned}$$
(80)

Let \(\varOmega = ({{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {M}}\,}}_{N+})^{-1} {{\,\mathrm{\mathbf {M}}\,}}_{N+}^T\); then \(\mathbb {E}\Vert \varOmega {{\,\mathrm{\mathbf {e}}\,}}_N\Vert ^2 = {{\,\mathrm{\text {tr}}\,}}(\varOmega \varSigma _{{{\,\mathrm{\mathbf {e}}\,}}_N{{\,\mathrm{\mathbf {e}}\,}}_N}\varOmega ^T) + \Vert \varOmega \,\mathbb {E}({{\,\mathrm{\mathbf {e}}\,}}_N)\Vert ^2\), where \({{\,\mathrm{\text {tr}}\,}}(A)\) and \(\varSigma _{{{\,\mathrm{\mathbf {e}}\,}}_N{{\,\mathrm{\mathbf {e}}\,}}_N}\) denote the trace of the matrix A and the variance-covariance matrix of \({{\,\mathrm{\mathbf {e}}\,}}_N\), respectively. Since \(\{ X_{d,\lambda }(t) \}_{t\in \mathbb {Z}}\) is a tempered mean zero linear process, we have \(\mathbb {E}\Vert \varOmega {{\,\mathrm{\mathbf {e}}\,}}_N\Vert ^2 = {{\,\mathrm{\text {tr}}\,}}(\varOmega \varSigma _{{{\,\mathrm{\mathbf {e}}\,}}_N{{\,\mathrm{\mathbf {e}}\,}}_N}\varOmega ^T)\). Further, the variance-covariance matrix of \({{\,\mathrm{\mathbf {e}}\,}}_N\) is finite; see (17). Consequently, the numerator of (80) is finite since \({{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {M}}\,}}_{N+}\) has full rank for large N, and hence the second term tends to zero for \(d<1/2\). \(\square \)
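
As an illustrative sanity check of the identity \(\mathbb {E}\Vert \varOmega {{\,\mathrm{\mathbf {e}}\,}}_N\Vert ^2 = {{\,\mathrm{\text {tr}}\,}}(\varOmega \varSigma \varOmega ^T)\) for mean-zero errors (a standard fact; the dimensions and matrices below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, reps = 3, 6, 500_000
Omega = rng.standard_normal((p, n))
A = rng.standard_normal((n, n))
Sigma = A @ A.T                                  # variance-covariance of e_N
e = np.linalg.cholesky(Sigma) @ rng.standard_normal((n, reps))  # mean-zero errors
mc = np.mean(np.sum((Omega @ e) ** 2, axis=0))   # Monte Carlo E||Omega e||^2
print(mc, np.trace(Omega @ Sigma @ Omega.T))     # the two values agree
```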

Proof of Theorem 5

The proofs of parts (a) and (b) are similar, hence we give the proof only for part (b). We first note that \(\int _\mathbb {R}\mu _{(i+)}(u) dB^{II}_{d,\lambda _*}(u) = \int _{\mathbb {R}} {\mathbb {I}}^{d,\lambda _*}_{-}\mu _{(i+)}(u) dB(u) \). Observe that \(\mu _{(i+)}\in L^{p}(\mathbb {R})\) for \(p\ge 1\) and hence \({\mathbb {I}}^{d,\lambda _*}_{-}\mu _{(i+)}\in L^{p}\). In particular, take \(p=2\) and apply the Itô isometry to conclude that \(\int _{\mathbb {R}} {\mathbb {I}}^{d,\lambda _*}_{-}\mu _{(i+)}(u) dB(u)\) is well-defined. By Theorem 4, it remains to show that

$$\begin{aligned} N^{\frac{1}{2} - d} ({{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {M}}\,}}_{N+})^{-1} {{\,\mathrm{\mathbf {M}}\,}}_{N+}^T {{\,\mathrm{\mathbf {e}}\,}}_N {\mathop {\longrightarrow }\limits ^{\mathrm{f.d.d.}}}{{\varvec{\Lambda }}} \Big [ \int _\mathbb {R}\mu _{(i+)}(u)\ dB^{II}_{d,\lambda _*}(u) \Big ]_{i= 1, \ldots , p+1} \end{aligned}$$
(81)

as \(N\rightarrow \infty \). Observe that \(N ({{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {M}}\,}}_{N+})^{-1} \rightarrow \varLambda \) as \(N\rightarrow \infty \). Therefore (81) follows if we show

$$\begin{aligned} \frac{1}{N^{ d+ \frac{1}{2} } } {{\,\mathrm{\mathbf {M}}\,}}_{N+}^T {{\,\mathrm{\mathbf {e}}\,}}_N {\mathop {\longrightarrow }\limits ^{\mathrm{f.d.d.}}}\Big [\int _{\mathbb {R}} \mu _{(i+)}(s)\ dB^{II}_{d,\lambda _*}(s)\Big ]_{i=1,\ldots , p+1}. \end{aligned}$$

This is equivalent to showing that

$$\begin{aligned} \frac{1}{N^{ d+ \frac{1}{2} } } {\langle \alpha , {{\,\mathrm{\mathbf {M}}\,}}_{N+}^T {{\,\mathrm{\mathbf {e}}\,}}_N \rangle } \rightarrow {\Bigl \langle \alpha , \Big [ \int _\mathbb {R}\mu _{(i+)}(u)\ dB^{II}_{d,\lambda _*}(u) \Big ]_{i= 1, \ldots , p+1} \Bigr \rangle }, \alpha \in \mathbb {R}^{p+1}. \end{aligned}$$
(82)

Note that

$$\begin{aligned} {\langle \alpha , {{\,\mathrm{\mathbf {M}}\,}}_{N+}^T {{\,\mathrm{\mathbf {e}}\,}}_N \rangle } = \sum _{i=1}^{p+1}\sum _{j=1}^{N} \alpha _i \mu _{(i+)}\big (\frac{j}{N}\big ) X_{d,\lambda _N}(j) \end{aligned}$$
(83)

and by Lemma 2

$$\begin{aligned} N^{-(d+1/2)}\sum _{i=1}^{p+1}\sum _{j=1}^{N} \alpha _i \mu _{(i+)}(\frac{j}{N}) X_{d,\lambda _N}(j){\mathop {\longrightarrow }\limits ^{\mathrm{f.d.d.}}}\int _{\mathbb {R}} m_{\alpha }(u) dB^{II}_{d,\lambda _*}(u) \end{aligned}$$

where \(m_{\alpha }(u) := \sum _{i=1}^{p+1} \alpha _i \mu _{(i+)}(u) \), and this completes the proof. \(\square \)

Proof of Theorem 6

We prove only part (c) since the other parts follow by a similar argument. Let \(\varXi \) be the random vector

$$\begin{aligned} \varXi = \Big [\int _{\mathbb {R}} \mu _{(i+)}(s) dB^{II}_{d,\lambda _*}(s)\Big ]_{i=1,\ldots , p+1}. \end{aligned}$$
(84)

Then we can write

$$\begin{aligned} \int _\mathbb {R}\mu _{(i+)}(s) dB^{II}_{d,\lambda _*}(s) = \int _\mathbb {R}\Big ( {\mathbb {I}}^{d,\lambda _*}_{-}\mu _{(i+)} \big )(s) dB(s) \end{aligned}$$
(85)

for \(i=1,\ldots , p+1\). We observe that \(\int _\mathbb {R}\big ( {\mathbb {I}}^{d,\lambda _*}_{-}\mu _{(i+)} \big )(s) dB(s)\) is a mean-zero Gaussian random variable with finite variance \(\int _\mathbb {R}\big | {\mathbb {I}}^{d,\lambda _*}_{-}\mu _{(i+)}(s) \big |^2 \ ds\). Using the Itô isometry for Wiener integrals, one sees that \(\varXi \) has the covariance matrix

$$\begin{aligned} \varSigma _{0}=\Bigg [\int _{\mathbb {R}} \Big ( {\mathbb {I}}^{d,\lambda _*}_{-} \mu _{(i+)} \Big )(s) \Big ( {\mathbb {I}}^{d,\lambda _*}_{-} \mu _{(k+)} \Big )(s)\ ds\Bigg ]_{i,k=1,\ldots ,p+1} \end{aligned}$$
(86)

and consequently \(\varLambda \varXi \) has a normal distribution with covariance matrix \(\varLambda \varSigma _0 \varLambda \), which completes the proof of the first part. Next, we have

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}} \Big ( {\mathbb {I}}^{d,\lambda _*}_{-} \mu _{(i+)} \Big )(s) \Big ( {\mathbb {I}}^{d,\lambda _*}_{-} \mu _{(k+)} \Big )(s)\ ds= \int _{\mathbb {R}} \mathcal {F}[{\mathbb {I}}^{d,\lambda _*}_{-} \mu _{(i+)}](\omega ) \overline{\mathcal {F}[{\mathbb {I}}^{d,\lambda _*}_{-} \mu _{(k+)}](\omega )} \ d\omega \\&\quad = \int _\mathbb {R}\widehat{\mu _{(i+)}}(\omega ) \overline{\widehat{\mu _{(k+)}}(\omega )} (\lambda _*^2 + \omega ^2)^{-d}d\omega \\&\quad =\int _\mathbb {R}\int _\mathbb {R}\mu _{(i+)}(t) \mu _{(k+)}(s) \int _{\mathbb {R}} e^{i\omega (t-s)} (\lambda _*^2 + \omega ^2)^{-d} \ d\omega \ ds\ dt\\&\quad =2\int _\mathbb {R}\int _\mathbb {R}\mu _{(i+)}(t) \mu _{(k+)}(s) \int _{0}^{\infty } \cos (\omega (t-s)) (\lambda _*^2 + \omega ^2)^{-d} d\omega \ ds\ dt\\&\quad =C \int _\mathbb {R}\int _\mathbb {R}\mu _{(i+)}(t) \mu _{(k+)}(s) |t-s|^{d-\frac{1}{2}} K_{d-\frac{1}{2}}(\lambda _*|t-s|) ds \ dt, \end{aligned} \end{aligned}$$
(87)

where \(C = \frac{2\sqrt{\pi }}{\varGamma (d) (2\lambda _*)^{d-\frac{1}{2}}}\) and we used

$$\begin{aligned} \int _{0}^{\infty } \frac{\cos (\omega x)}{(\lambda ^2+\omega ^2)^{\nu +\frac{1}{2}} } \ d\omega = \frac{\sqrt{\pi }}{\varGamma (\nu +\frac{1}{2})} \Big ( \frac{|x|}{2\lambda } \Big )^{\nu } K_{\nu }(\lambda |x|) \end{aligned}$$
(88)

for \(\nu > -\frac{1}{2}\) and \(\lambda >0\), applied in (87) with \(\nu = d-\frac{1}{2}\) and \(\lambda = \lambda _*\). This completes the proof of the second part and of the theorem. \(\square \)
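
A quick numerical verification of (88), with arbitrary test values satisfying \(\nu >-\frac{1}{2}\) and \(\lambda >0\) (scipy exposes \(K_\nu \) as `scipy.special.kv`):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, kv

nu, lam, x = 0.25, 2.0, 1.3
# left-hand side: oscillatory integral, computed with quad's Fourier ("cos") weight
lhs, _ = quad(lambda w: (lam**2 + w**2) ** -(nu + 0.5), 0, np.inf, weight="cos", wvar=x)
rhs = np.sqrt(np.pi) / gamma(nu + 0.5) * (abs(x) / (2 * lam)) ** nu * kv(nu, lam * abs(x))
print(lhs, rhs)   # the two values agree to quadrature accuracy
```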


Cite this article

De Brabanter, K., Sabzikar, F. Asymptotic theory for regression models with fractional local to unity root errors. Metrika 84, 997–1024 (2021). https://doi.org/10.1007/s00184-021-00812-7
