
Fixed accuracy estimation of parameters in a threshold autoregressive model

Published in: Annals of the Institute of Statistical Mathematics

Abstract

For parameters in a threshold autoregressive process, the paper proposes a sequential modification of the least squares estimates with a specific stopping rule for collecting the data for each parameter. In the case of normal residuals, these estimates are exactly normally distributed in a wide range of unknown parameters. On the base of these estimates, a fixed-size confidence ellipsoid covering true values of parameters with prescribed probability is constructed. In the i.i.d. case with unspecified error distributions, the sequential estimates are asymptotically normally distributed uniformly in parameters belonging to any compact set in the ergodicity parametric region. Small-sample behavior of the estimates is studied via simulation data.
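The model under study is the first-order threshold autoregression \(x_k=\theta _1 x_{k-1}^+ +\theta _2 x_{k-1}^- +\varepsilon _k\), with \(x^+=\max (x,0)\) and \(x^-=\min (x,0)\). As a minimal sketch of the underlying model and the ordinary least squares estimates that the paper's sequential procedure modifies (parameter values and sample size are illustrative, not from the paper), one can simulate a path and estimate both regime parameters:

```python
import numpy as np

def simulate_tar1(theta1, theta2, n, rng):
    """Simulate x_k = theta1*x_{k-1}^+ + theta2*x_{k-1}^- + eps_k
    with i.i.d. standard normal noise (x^+ = max(x,0), x^- = min(x,0))."""
    x = np.zeros(n + 1)
    eps = rng.standard_normal(n)
    for k in range(1, n + 1):
        xp, xm = max(x[k - 1], 0.0), min(x[k - 1], 0.0)
        x[k] = theta1 * xp + theta2 * xm + eps[k - 1]
    return x

def ls_estimates(x):
    """Ordinary least squares estimates of (theta1, theta2)."""
    xp = np.maximum(x[:-1], 0.0)
    xm = np.minimum(x[:-1], 0.0)
    y = x[1:]
    # The regressors x^+ and x^- have disjoint supports, so LS decouples
    # into two one-dimensional regressions.
    t1 = np.dot(xp, y) / np.dot(xp, xp)
    t2 = np.dot(xm, y) / np.dot(xm, xm)
    return t1, t2

rng = np.random.default_rng(0)
x = simulate_tar1(0.5, -0.3, 20000, rng)
print(ls_estimates(x))  # close to (0.5, -0.3)
```

The ergodicity region used throughout is the one of Petruccelli and Woolford (1984); the sequential modification replaces the fixed sample size above by data-driven stopping times.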

References

  • Chan, K. S. (1993). Consistency and limiting distribution of the least squares estimator of a threshold autoregressive model. The Annals of Statistics, 21(1), 520–533.

  • Chigansky, P., Kutoyants, Y. A. (2013). Estimation in threshold autoregressive models with correlated innovations. Annals of the Institute of Statistical Mathematics, 65(5), 959–992.

  • Freedman, D. (1983). Markov chains. New York: Springer.

  • Galtchouk, L., Pergamenshchikov, S. (2014). Geometric ergodicity for classes of homogenous Markov chains. Stochastic Processes and Their Applications, 124(10), 3362–3391.

  • Gao, J., Tjostheim, D., Yin, J. (2013). Estimation in threshold autoregressive models with a stationary and a unit root regime. Journal of Econometrics, 172(1), 1–13.

  • Lai, T., Siegmund, D. (1983). Fixed accuracy estimation of an autoregressive parameter. Annals of Statistics, 11(2), 478–485.

  • Lai, T., Wei, C. (1983). Asymptotic properties of general autoregressive models and strong consistency of least-squares estimates of their parameters. Journal of Multivariate Analysis, 13(1), 1–23.

  • Lee, S., Sriram, T. (1999). Sequential point estimation of parameters in a threshold AR(1) model. Stochastic Processes and Their Applications, 84(2), 343–355.

  • Li, D., Ling, S. (2012). On the least squares estimation of multiple-regime threshold autoregressive models. Journal of Econometrics, 167(1), 240–253.

  • Pergamenshchikov, S. (1992). Asymptotic properties for estimating the parameter in a first-order autoregression. Theory of Probability and Its Applications, 36(1), 36–46.

  • Petruccelli, J., Woolford, S. (1984). A threshold AR(1) model. Journal of Applied Probability, 21(2), 270–286.

  • Pham, D., Chan, K., Tong, H. (1991). Strong consistency of the least squares estimator for a non-ergodic threshold autoregressive model. Statistica Sinica, 1(2), 361–369.

  • Shiryaev, A. (1996). Probability (2nd ed.). New York: Springer.

  • Sriram, T., Iaci, R. (2014). Editor’s special invited paper: Sequential estimation for time series models. Sequential Analysis, 33(2), 136–157.

  • Tong, H. (1978). On a threshold model. In Chen, C. (Ed.), Pattern recognition and signal processing, pp. 575–586. Alphen aan den Rijn, The Netherlands: Sijthoff & Noordhoff.

  • Yau, C. Y., Tang, C. M., Lee, T. C. M. (2015). Estimation of multiple-regime threshold autoregressive models with structural breaks. Journal of the American Statistical Association, 110(511), 1175–1186.


Acknowledgements

Research was supported by RSF, Project No 20-61-47043.

Author information


Corresponding author

Correspondence to Sergey E. Vorobeychikov.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Proof of Theorem 1

Let the filtration \(\{\mathcal{F}_n\}_{n\ge 0}\) be given by (17). We will show that the characteristic function of the vector \(\xi =(\xi _1,\xi _2)\), with coordinates defined in (6), has the form

$$\begin{aligned} \varphi _{\xi }(u)=Ee^{i(u,\xi )}=Ee^{i(u_1 \xi _1+u_2 \xi _2)}=e^{-\frac{u_1^2}{2}}e^{-\frac{u_2^2}{2}}, \end{aligned}$$

\(u=(u_1,u_2),\ \ -\infty<u_j<\infty , \ j=1,2.\) Taking into account (11), we introduce two sequences

$$\begin{aligned} \xi _1(N)= & {} \frac{1}{\sqrt{h}}\sum _{k=1}^N \chi _{(k\le \tau _1(h))}\beta _{1,k}x_{k-1}^+\varepsilon _k, \\ \xi _2(N)= & {} \frac{1}{\sqrt{h}}\sum _{j=1}^N \chi _{(j\le \tau _2(h))}\beta _{2,j}x_{j-1}^-\varepsilon _j, \ \ N\ge 1. \end{aligned}$$

Consider the characteristic function of the vector \(\xi (N)=(\xi _1(N),\xi _2(N))\):

$$\begin{aligned} \varphi _{\xi (N)}(u)=Ee^{i(u,\xi (N))}=E\exp \left( \sum _{k=1}^N \frac{i}{\sqrt{h}}y_{k-1}\varepsilon _k \right) \end{aligned}$$

where

$$\begin{aligned} y_{k-1}=\chi _{(k\le \tau _1(h))}\beta _{1,k}u_1x_{k-1}^+\ +\ \chi _{(k\le \tau _2(h))}\beta _{2,k}u_2x_{k-1}^-. \end{aligned}$$

Since

$$\begin{aligned} \lim _{N\rightarrow \infty }\xi _1(N)=\xi _1, \ \ \ \lim _{N\rightarrow \infty }\xi _2(N)=\xi _2, \end{aligned}$$

we have

$$\begin{aligned} \varphi _{\xi }(u)=\lim _{N\rightarrow \infty }\varphi _{\xi (N)}(u). \end{aligned}$$

Now, we represent \(\varphi _{\xi (N)}(u)\) as

$$\begin{aligned}&\varphi _{\xi (N)}(u)=E\exp \left( \left( \sum _{k=1}^N \frac{i}{\sqrt{h}}y_{k-1}\varepsilon _k +\frac{1}{2h}y_{k-1}^2\right) - \sum _{k=1}^N \frac{1}{2h}y_{k-1}^2\right) \nonumber \\&\quad =\exp \left( -\frac{u_1^2}{2}-\frac{u_2^2}{2}\right) E\exp \left( \sum _{k=1}^N \frac{i}{\sqrt{h}}y_{k-1}\varepsilon _k +\frac{1}{2h}y_{k-1}^2\right) \ +\ R_N, \end{aligned}$$
(52)

where

$$\begin{aligned} R_N= & {} E\left[ \exp (\eta _N)\cdot S_N\right] , \\ \eta _N= & {} \sum _{k=1}^N \left( \frac{i}{\sqrt{h}}y_{k-1}\varepsilon _k +\frac{1}{2h}y_{k-1}^2\right) , \\ S_N= & {} \exp \left( -\sum _{k=1}^N \frac{1}{2h}y_{k-1}^2\right) -\exp \left( -\frac{u_1^2}{2}-\frac{u_2^2}{2}\right) . \end{aligned}$$

Taking conditional expectations repeatedly yields

$$\begin{aligned}&E e^{\eta _N}=E \left( E \left( e^{\eta _N}|\mathcal{F}_{N-1}\right) \right) \nonumber \\&\quad =E\left[ \exp \left( \sum _{k=1}^{N-1} \frac{i}{\sqrt{h}}y_{k-1}\varepsilon _k +\sum _{k=1}^{N}\frac{1}{2h}y_{k-1}^2\right) E\left( \exp \left( \frac{i}{\sqrt{h}}y_{N-1}\varepsilon _N\right) \Big |\mathcal{F}_{N-1}\right) \right] \nonumber \\&\quad =E\left[ \exp \left( \sum _{k=1}^{N-1} \left( \frac{i}{\sqrt{h}}y_{k-1}\varepsilon _k +\frac{1}{2h}y_{k-1}^2\right) \right) \right] = E e^{\eta _{N-1}}=\cdots =1. \end{aligned}$$
(53)

Further, we note that

$$\begin{aligned}&\sum _{k=1}^{N}\frac{1}{2h}y_{k-1}^2 \le \frac{1}{2}\left( u_1^2+u_2^2\right) , \\&\lim _{N\rightarrow \infty }\sum _{k=1}^{N}\frac{1}{2h}y_{k-1}^2 =\frac{1}{2}\left( u_1^2+u_2^2\right) . \end{aligned}$$

Using the bound

$$\begin{aligned} \left| e^{\eta _N}\right| \le \exp \left( \frac{1}{2}\left( u_1^2+u_2^2\right) \right) \end{aligned}$$

and applying the dominated convergence theorem, one gets

$$\begin{aligned} \lim _{N\rightarrow \infty }R_N=0. \end{aligned}$$

Substituting (53) into (52) and letting \(N\rightarrow \infty \), we arrive at the desired result. Thus, Theorem 1 is proved. \(\square \)
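The exact standard normality of \(\xi _1\) established above can be checked by simulation: with Gaussian noise, the sequentially weighted sum with the overshoot correction has mean 0 and variance 1 for every threshold \(h\). Below is a hedged Monte Carlo sketch of this construction (the stopping rule follows the pattern of (16); parameter values, the cap `max_n`, and the function name `xi1` are illustrative):

```python
import numpy as np

def xi1(theta1, theta2, h, rng, max_n=100_000):
    """One draw of xi_1 = h^{-1/2} * sum_{k<=tau_1} beta_{1,k} x_{k-1}^+ eps_k,
    where tau_1(h) is the first time sum (x_{k-1}^+)^2 reaches h and the last
    summand carries the weight sqrt(alpha) chosen so the weighted sum of
    squares equals h exactly."""
    x = 0.0
    s = 0.0       # running sum of (x_{k-1}^+)^2
    acc = 0.0     # running sum of beta * x^+ * eps
    for _ in range(max_n):
        xp = max(x, 0.0)
        eps = rng.standard_normal()
        if s + xp * xp >= h:               # stopping time tau_1(h) reached
            alpha = (h - s) / (xp * xp)    # overshoot correction factor
            acc += np.sqrt(alpha) * xp * eps
            return acc / np.sqrt(h)
        s += xp * xp
        acc += xp * eps
        x = theta1 * xp + theta2 * min(x, 0.0) + eps

rng = np.random.default_rng(1)
draws = np.array([xi1(0.5, -0.3, 50.0, rng) for _ in range(2000)])
print(draws.mean(), draws.var())  # approximately 0 and 1
```

Conditionally on the path, the weighted sum is Gaussian with variance exactly \(h\), which is why the normality here is exact rather than asymptotic.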

Proof of Lemma 2

Noting that

$$\begin{aligned} |x_k|\ \le \ \lambda |x_{k-1}|+|\varepsilon _k|, \ \ k\ge 1, \end{aligned}$$

and applying this inequality repeatedly, one gets

$$\begin{aligned} |x_n|\ \le \ \lambda ^n|x_{0}|+\sum _{j=1}^n \lambda ^{n-j}|\varepsilon _j|. \end{aligned}$$

This implies the following estimate for \(N<n\):

$$\begin{aligned}&\frac{|x_n|}{\sqrt{n}}\ \le \ \frac{\lambda ^n}{\sqrt{n}}|x_{0}|+\frac{1}{\sqrt{n}}\sum _{j=1}^N \lambda ^{n-j}|\varepsilon _j|+\frac{1}{\sqrt{n}}\sum _{j=N+1}^n \lambda ^{n-j}|\varepsilon _j| \\&\quad \le \frac{\lambda ^n}{\sqrt{n}}|x_{0}|+\frac{1}{\sqrt{n}}\sum _{j=1}^N \lambda ^{n-j}|\varepsilon _j|+\frac{1}{1-\lambda }\sup _{j>N} \frac{|\varepsilon _j|}{\sqrt{j}}. \end{aligned}$$

Letting \(n\rightarrow \infty \) and then \(N\rightarrow \infty \), and using the strong law of large numbers, one arrives at (31). \(\square \)
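Conclusion (31) of Lemma 2, namely that \(|x_n|/\sqrt{n}\rightarrow 0\) a.s. in the contractive regime, can be checked numerically through the dominating recursion \(|x_k|\le \lambda |x_{k-1}|+|\varepsilon _k|\) above (a sketch with the illustrative choice \(\lambda =0.9\)):

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 0.9                                  # contraction bound |theta_i| <= lam < 1
eps = np.abs(rng.standard_normal(1_000_000))
x, m = 0.0, 0.0
checkpoints = {}
for k, e in enumerate(eps, start=1):
    x = lam * x + e                        # dominating chain: lam*|x_{k-1}| + |eps_k|
    m = max(m, x)                          # running maximum of the dominating chain
    if k in (10_000, 100_000, 1_000_000):
        checkpoints[k] = m / np.sqrt(k)
print(checkpoints)  # the ratios shrink as n grows, consistent with (31)
```

Since the maximum of the dominating chain grows only logarithmically while \(\sqrt{n}\) grows polynomially, the ratios decrease toward zero.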

Proof of Lemma 5

For each \(\theta \in \varTheta _{\lambda }\), the process \(M_n'\) in decomposition (25) is a square integrable martingale subject to the strong law of large numbers:

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{M_n'}{n}= 0\ \ (P_{\theta }\text{-a.s.}). \end{aligned}$$

Moreover, this convergence is uniform in \(\theta \in \varTheta _{\lambda }\), i.e., for any \(\mu >0\)

$$\begin{aligned} \sup _{\theta \in \varTheta _{\lambda }}P_{\theta }\left\{ \sup _{n\ge m}\frac{|M_n'|}{n}\ge \mu \right\} \rightarrow \ 0\ \ \text{ as }\ m\rightarrow \infty . \end{aligned}$$
(54)

This can be checked by making use of the inequality (see, e.g., Shiryaev (1996))

$$\begin{aligned}&\mu ^2 P_{\theta }\left\{ \sup _{n\ge m}\frac{|M_n'|}{n}\ge \mu \right\} = \mu ^2 \lim _{l\rightarrow \infty }P_{\theta }\left\{ \max _{m\le n\le l}\frac{(M_n')^2}{n^2}\ge \mu ^2 \right\} \nonumber \\&\quad \le \frac{1}{m^2}E_{\theta }(M_m')^2\ +\ \sum _{n\ge m+1} \frac{1}{n^2}E_{\theta }\left( (M_n')^2-(M_{n-1}')^2 \right) \nonumber \\&\quad =\ \sum _{n\ge m+1} \left( \frac{1}{(n-1)^2}-\frac{1}{n^2}\right) E_{\theta }(M_{n-1}')^2. \end{aligned}$$
(55)

From the definition of \(M_n'\), it follows that

$$\begin{aligned} E_{\theta }(M_j')^2=\sum _{i=1}^j E_{\theta }(\varDelta M_i')^2\le j\cdot E\varepsilon _1^4. \end{aligned}$$

Using this estimate in (55), one gets

$$\begin{aligned} \sup _{\theta \in \varTheta _{\lambda }}P_{\theta }\left\{ \sup _{n\ge m}\frac{|M_n'|}{n}\ge \mu \right\} \ \le \ \frac{2E \varepsilon _1^4}{\mu ^2m}. \end{aligned}$$
(56)

This inequality gives the rate of convergence in (54). Now we are ready to show (39). It remains to note that, thanks to Lemma 2 and the strong law of large numbers, the numerator of (38) tends to zero uniformly in \(\theta \in \varTheta _{\lambda }\), while, in view of (54), the denominator of (38) is bounded from below by a positive constant uniformly in \(\theta \in \varTheta _{\lambda }\). Thus, we arrive at (39). Lemma 5 is proved. \(\square \)

Proof of Proposition 2

As in Lai and Siegmund (1983), we need the following martingale central limit theorem from Freedman (1983), pages 90–92.

Lemma 6

Let \(0<\delta <1\) and \(r>0.\) Assume that \((u_n,\mathcal{F}_n)_{n\ge 0}\) is a martingale difference sequence satisfying

$$\begin{aligned} |u_n|\le \delta \ \ \text{ for } \text{ all }\ n \end{aligned}$$

and

$$\begin{aligned} \sum _{n\ge 1} E\left( u_n^2|\mathcal{F}_{n-1}\right) >r \ \ \text{ a.s. } \end{aligned}$$

Let

$$\begin{aligned} \tau (h)=\inf \left\{ n\ge 1: \ \sum _{k=1}^n E\left( u_k^2|\mathcal{F}_{k-1}\right) \ge r \right\} . \end{aligned}$$

There exists a function \(\rho :\ (0,\infty ) \rightarrow \left[ 0,2\right] \), not depending on the distribution of the martingale difference sequence, such that \(\rho (x)\rightarrow 0\) as \(x\rightarrow 0\) and

$$\begin{aligned} \sup \limits _{x\in R} \left| P\left( \sum _{k=1}^{\tau (h)} u_k\le x\right) -\varPhi \left( \frac{x}{\sqrt{r}}\right) \right| \le \rho \left( \frac{\delta }{\sqrt{r}}\right) . \end{aligned}$$
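Freedman's normal approximation for stopped martingales can be illustrated by simulation: a bounded martingale difference sequence summed until the accumulated conditional variance reaches \(r\) is approximately \(N(0,r)\). The sketch below uses an artificial sequence \(u_n=c_ns_n\) with predictable scales \(c_n\le \delta \) and fair random signs \(s_n\) (all choices illustrative, not from the paper):

```python
import numpy as np

def stopped_sum(delta, r, rng):
    """Sum a bounded martingale difference sequence up to the first time
    the accumulated conditional variance reaches r (the setting of Lemma 6).
    Here u_n = c_n * s_n with a random scale c_n in (0, delta] drawn before
    the fair sign s_n, so |u_n| <= delta and E(u_n^2 | F_{n-1}) = c_n^2."""
    total, var = 0.0, 0.0
    while var < r:
        c = delta * rng.uniform(0.1, 1.0)               # predictable scale
        total += c * (1.0 if rng.uniform() < 0.5 else -1.0)
        var += c * c                                     # conditional variance
    return total

rng = np.random.default_rng(3)
delta, r = 0.1, 1.0
draws = np.array([stopped_sum(delta, r, rng) for _ in range(3000)])
print(draws.mean(), draws.std())  # approximately 0 and sqrt(r) = 1
```

The overshoot of the accumulated variance past \(r\) is at most \(\delta ^2\), which is why the approximation sharpens as \(\delta \rightarrow 0\), exactly as the bound \(\rho (\delta /\sqrt{r})\) indicates.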

Proof of Proposition 2.

For each \(0<\delta <1\), we define truncated versions for both processes \(\{x_k^+\}_{k\ge 0}\) and \(\{x_k^-\}_{k\ge 0}\):

$$\begin{aligned}&{\tilde{x}}_k^+ = \left\{ \begin{array}{lll} x_k^+ &{} \text{ if } &{} (x_k^+)^2\le \delta ^2 h,\\ \delta \sqrt{h}&{} \text{ if } &{} (x_k^+)^2>\delta ^2 h; \end{array} \right. \\&{\tilde{x}}_k^- = \left\{ \begin{array}{lll} x_k^- &{} \text{ if } &{} (x_k^-)^2\le \delta ^2 h,\\ -\delta \sqrt{h}&{} \text{ if } &{} (x_k^-)^2>\delta ^2 h. \end{array} \right. \end{aligned}$$

Then, we introduce the counterparts of stopping times (16) as

$$\begin{aligned}&T_1(h)=\inf \left\{ n\ge 1: \sum _{k=1}^n \left( {\tilde{x}}^+_{k-1} \right) ^2\ge h \right\} , \nonumber \\&T_2(h)=\inf \left\{ n\ge 1: \sum _{k=1} ^n\left( {\tilde{x}}^-_{k-1} \right) ^2\ge h \right\} , \nonumber \\&T(h)=T_1(h)\vee T_2(h). \end{aligned}$$
(57)

Let \({\tilde{\alpha }}_{1,T_1}\) and \({\tilde{\alpha }}_{2,T_2}\) be the correction factors compensating the overshoots in (57), computed from the equations

$$\begin{aligned}&\sum _{k=1}^{T_1(h) -1} \left( x_{k-1}^+ \right) ^2+{\tilde{\alpha }}_{1,T_1(h)}\left( x_{T_1(h)-1}^+ \right) ^2=h, \\&\sum _{k=1}^{T_2(h) -1} \left( x_{k-1}^- \right) ^2+{\tilde{\alpha }}_{2,T_2(h)}\left( x_{T_2(h)-1}^- \right) ^2=h. \end{aligned}$$

Denote

$$\begin{aligned} {\tilde{y}}_{k-1}={\tilde{\beta }}_{1,k}u_1{\tilde{x}}_{k-1}^+\ +\ {\tilde{\beta }}_{2,k}u_2{\tilde{x}}_{k-1}^-, \ \ 1\le k \le T(h), \end{aligned}$$

where

$$\begin{aligned} {\tilde{\beta }}_{i,k}=\left\{ \begin{array}{lll} 1 &{} \text{ if } &{} k<T_i(h),\\ \sqrt{{\tilde{\alpha }}_{i,T_i}}&{} \text{ if } &{} k=T_i(h), \\ 0 &{} \text{ if } &{} k>T_i(h); \ \ i=1,2, \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} {\tilde{\varepsilon }}_k=\varepsilon _k\chi _{(|\varepsilon _k|\le 1/\sqrt{\delta })}, \ \ \tilde{{\tilde{\varepsilon }}}_k=\varepsilon _k-{\tilde{\varepsilon }}_k. \end{aligned}$$

Then, under \(P_{\theta }\), the sequence \(\left\{ \frac{1}{\sqrt{h}}{\tilde{y}}_{k-1}\left( {\tilde{\varepsilon }}_k- E {\tilde{\varepsilon }}_k\right) ,\ \mathcal{F}_k\right\} _{k\ge 0}\) is a martingale difference such that

$$\begin{aligned}&\left| \frac{1}{\sqrt{h}}{\tilde{y}}_{k-1}\left( {\tilde{\varepsilon }}_k- E {\tilde{\varepsilon }}_k\right) \right| \\&\quad =\left| \frac{1}{\sqrt{h}}\left( {\tilde{\beta }}_{1,k}u_1{\tilde{x}}_{k-1}^+\ +\ {\tilde{\beta }}_{2,k}u_2{\tilde{x}}_{k-1}^-\right) \left( {\tilde{\varepsilon }}_k- E {\tilde{\varepsilon }}_k\right) \right| \\&\quad \le \frac{1}{\sqrt{h}}2\delta \sqrt{h}\frac{2}{\sqrt{\delta }}=4\sqrt{\delta }. \end{aligned}$$

By Lemma 6

$$\begin{aligned} \left| P_{\theta }\left( \frac{1}{\sqrt{h}}\sum _{k=1}^{T(h)} {\tilde{y}}_{k-1}\left( {\tilde{\varepsilon }}_k- E {\tilde{\varepsilon }}_k\right) \le t\right) -\varPhi \left( \frac{t}{\sqrt{v_{\theta }(\delta )}}\right) \right| \le \rho \left( 4\frac{\delta }{\sqrt{v_{\theta }(\delta )}}\right) , \end{aligned}$$
(58)

where \(v_{\theta }(\delta )=\text{ Var}_{\theta }{\tilde{\varepsilon }}_1\rightarrow 1\) uniformly in \(\varTheta \) as \(\delta \rightarrow 0\). We need the following sets:

$$\begin{aligned} \varOmega _{1,h}= & {} \left\{ x_k^+={\tilde{x}}_k^+ \ \text{ for } \text{ all } \ k<\tau _1(h)\right\} , \\ \varOmega _{2,h}= & {} \left\{ x_k^-={\tilde{x}}_k^- \ \text{ for } \text{ all } \ k<\tau _2(h)\right\} , \\ \varOmega _{h}= & {} \varOmega _{1,h}\bigcap \varOmega _{2,h}. \end{aligned}$$

We will show that

$$\begin{aligned} \lim \limits _{h\rightarrow \infty }\sup \limits _{\theta \in \varTheta }P_{\theta }\left( \varOmega _{h}^c\right) =0. \end{aligned}$$

It suffices to check that

$$\begin{aligned} \lim \limits _{h\rightarrow \infty }\sup \limits _{\theta \in \varTheta }P_{\theta }\left( \varOmega _{i,h}^c\right) =0, \ \ i=1,2. \end{aligned}$$
(59)

For all \(\theta \) and \(h>0\), one has the inequality

$$\begin{aligned}&P_{\theta }\left( \varOmega _{1,h}^c\right) = P_{\theta }\left\{ x_k^+\ne {\tilde{x}}_k^+ \ \text{ for } \text{ some } \ k<\tau _1(h)\right\} \\&\quad \le \sum _{k=1}^m P_{\theta }\left\{ \left( x_{k-1}^+\right) ^2>\delta ^2 h\right\} \ + \ P_{\theta }\left\{ x_k^+\ne {\tilde{x}}_k^+ \ \text{ for } \text{ some } \ m\le k<\tau _1(h)\right\} \\&\quad \le \sum _{k=1}^m P_{\theta }\left\{ \left( x_{k-1}^+\right) ^2>\delta ^2 h\right\} \ + \ P_{\theta }\left\{ \left( x_n^+\right) ^2\ge \delta ^2\sum _{k=1}^n \left( x_{k-1}^+\right) ^2 \ \text{ for } \text{ some } \ n\ge m\right\} . \end{aligned}$$

From here, it follows

$$\begin{aligned}&\sup \limits _{\theta \in \varTheta }P_{\theta }\left( \varOmega _{1,h}^c\right) \ \le \sum _{k=1}^m \sup \limits _{\theta \in \varTheta }P_{\theta }\left\{ \left( x_{k-1}^+\right) ^2>\delta ^2 h\right\} \\&\quad + \sup \limits _{\theta \in \varTheta }\ P_{\theta }\left\{ \left( x_n^+\right) ^2\ge \delta ^2\sum _{k=1}^n \left( x_{k-1}^+\right) ^2 \ \text{ for } \text{ some } \ n\ge m\right\} . \end{aligned}$$

Letting \(h\rightarrow \infty \) and then \(m\rightarrow \infty \), and taking into account conditions (4) and (6), one arrives at (59) with \(i=1\). Similarly, one obtains (59) for \(i=2\).

Note that on the set \(\varOmega _h\) one has \(y_{k-1}={\tilde{y}}_{k-1}\), \(\tau (h)=T(h)\), and hence

$$\begin{aligned} \frac{1}{\sqrt{h}}\sum _{k=1}^{\tau (h)}y_{k-1}\varepsilon _k = \frac{1}{\sqrt{h}}\sum _{k=1}^{T(h)}{\tilde{y}}_{k-1}\varepsilon _k. \end{aligned}$$

This implies the equation

$$\begin{aligned} P_{\theta } \left( \frac{1}{\sqrt{h}}\sum _{k=1}^{\tau (h)}y_{k-1}\varepsilon _k \le t \right) = P_{\theta } \left( \frac{1}{\sqrt{h}}\sum _{k=1}^{T(h)}{\tilde{y}}_{k-1}\varepsilon _k\le t \right) \ +\ r_{\theta }(h) \end{aligned}$$

where \(r_{\theta }(h)\) is such that

$$\begin{aligned} \sup \limits _{\theta \in \varTheta }|r_{\theta }(h)|\ \le \ \sup \limits _{\theta \in \varTheta }P_{\theta }\left( \varOmega _{h}^c\right) \rightarrow \ 0\ \ \text{ as } \ \ h\rightarrow \infty . \end{aligned}$$

Using the representation

$$\begin{aligned} \frac{1}{\sqrt{h}}\sum _{k=1}^{T(h)}{\tilde{y}}_{k-1}\varepsilon _k = \xi _h\ + \ \eta _h \end{aligned}$$

where

$$\begin{aligned} \xi _h= & {} \frac{1}{\sqrt{h}}\sum _{k=1}^{T(h)}{\tilde{y}}_{k-1}\left( {\tilde{\varepsilon }}_k- E {\tilde{\varepsilon }}_k\right) , \\ \eta _h= & {} \frac{1}{\sqrt{h}}\sum _{k=1}^{T(h)}{\tilde{y}}_{k-1}\left( \tilde{{\tilde{\varepsilon }}}_k- E \tilde{{\tilde{\varepsilon }}}_k\right) \end{aligned}$$

one can show that

$$\begin{aligned}&P_{\theta }\left( \xi _h+\eta _h \le t \right) \ \le P_{\theta }\left( \xi _h \le t+\varDelta \right) \ + \ P_{\theta }\left( |\eta _h|\ge \varDelta \right) , \\&P_{\theta }\left( \xi _h+\eta _h \le t \right) \ \ge P_{\theta }\left( \xi _h \le t-\varDelta \right) \ - \ P_{\theta }\left( |\eta _h|\ge \varDelta \right) , \end{aligned}$$

where \(\varDelta >0.\) Taking into account (58), one gets

$$\begin{aligned}&P_{\theta }\left( \xi _h+\eta _h \le t \right) \ - \ \varPhi \left( \frac{t}{\sqrt{v_{\theta }(\delta )}}\right) \ \le \varPhi \left( \frac{t+\varDelta }{\sqrt{v_{\theta }(\delta )}}\right) \ - \varPhi \left( \frac{t}{\sqrt{v_{\theta }(\delta )}}\right) \ \\&\qquad +\ P_{\theta }\left( \xi _h \le t+\varDelta \right) \ - \varPhi \left( \frac{t+\varDelta }{\sqrt{v_{\theta }(\delta )}}\right) \ \ + \ P_{\theta }\left( |\eta _h|\ge \varDelta \right) \\&\quad \le \omega \left( \varPhi ;\frac{\varDelta }{\sqrt{v_{\theta }(\delta )}}\right) \ +\ \rho \left( \frac{4\delta }{\sqrt{v_{\theta }(\delta )}}\right) \ +\ P_{\theta }\left( |\eta _h|\ge \varDelta \right) \end{aligned}$$

where \(\omega \left( \varPhi ;\delta \right) \) denotes the oscillation of the function \(\varPhi \) over intervals of radius \(\delta \). Similarly, one derives

$$\begin{aligned}&P_{\theta }\left( \xi _h+\eta _h \le t \right) \ - \ \varPhi \left( \frac{t}{\sqrt{v_{\theta }(\delta )}}\right) \ \ge \ -\omega \left( \varPhi ;\frac{\varDelta }{\sqrt{v_{\theta }(\delta )}}\right) \ -\ \rho \left( \frac{4\delta }{\sqrt{v_{\theta }(\delta )}}\right) \ \\&\quad -\ P_{\theta }\left( |\eta _h|\ge \varDelta \right) . \end{aligned}$$

Combining these inequalities yields

$$\begin{aligned}&\left| P_{\theta }\left( \xi _h+\eta _h \le t \right) \ - \ \varPhi \left( \frac{t}{\sqrt{v_{\theta }(\delta )}}\right) \right| \ \le \ \omega \left( \varPhi ;\frac{\varDelta }{\sqrt{v_{\theta }(\delta )}}\right) \ +\ \rho \left( \frac{4\delta }{\sqrt{v_{\theta }(\delta )}}\right) \ \\&\quad +\ P_{\theta }\left( |\eta _h|\ge \varDelta \right) , \end{aligned}$$

where

$$\begin{aligned} P_{\theta }\left( |\eta _h|\ge \varDelta \right) \ \le \ \frac{1}{\varDelta ^2}E_{\theta }\eta _h^2 \le \frac{1}{\varDelta ^2} \left( 1-v_{\theta }(\delta )\right) . \end{aligned}$$

Therefore,

$$\begin{aligned}&\left| P_{\theta }\left( \frac{1}{\sqrt{h}}\sum _{k=1}^{\tau (h)} y_{k-1}\varepsilon _k\le t\right) \ - \ \varPhi (t)\right| \ \le \ \omega \left( \varPhi ;\frac{\varDelta }{\sqrt{v_{\theta }(\delta )}}\right) \ +\ \rho \left( \frac{4\delta }{\sqrt{v_{\theta }(\delta )}}\right) \ \\&\quad +\ |r_{\theta }(h)|\ +\ P_{\theta }\left( |\eta _h|\ge \varDelta \right) +\ \sup \limits _{t\in R} \left| \varPhi \left( \frac{t}{\sqrt{v_{\theta }(\delta )}}\right) \ - \varPhi (t)\right| . \end{aligned}$$

Taking the supremum over \(\theta \) on both sides of this inequality and letting \(h\rightarrow \infty \), \(\delta \rightarrow 0\), and then \(\varDelta \rightarrow 0\), one arrives at the desired result. \(\square \)

About this article


Cite this article

Konev, V.V., Vorobeychikov, S.E. Fixed accuracy estimation of parameters in a threshold autoregressive model. Ann Inst Stat Math 74, 685–711 (2022). https://doi.org/10.1007/s10463-021-00812-4
