
Inference on a structural break in trend with mildly integrated errors

Research Article · Published in the Journal of the Korean Statistical Society

Abstract

In this paper, we study a regression model with a break in the trend regressor, in which the model errors are assumed to be mildly integrated. To be precise, we suppose the model errors are generated by an AR(1) process with autoregressive coefficient \(\rho _{T}=1+{c}/{k_{T}}\), where T is the sample size, c is a negative constant, and \(\{k_T, T\in {\mathbb {N}}\}\) is a sequence of positive constants diverging to infinity such that \(k_T=o(T)\). We estimate the break date/break fraction and the other parameters in the model by the least squares method. The asymptotic properties of the estimates, including consistency, rates of convergence, and limiting distributions, are examined. The results derived in this paper bridge the findings of Perron and Zhu (Journal of Econometrics 129:65–119, 2005), who estimated the break date/break fraction in the trend regressor under I(0) and I(1) model errors. We also show that a phase transition for the estimation error of the least squares estimate of the break date occurs when \(k_{T}\) has the same order of magnitude as \(T^{1/2}\). Monte Carlo simulations and an empirical study illustrate the finite-sample performance of the estimates.
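The model and estimator studied in the paper can be illustrated with a small simulation. The following is a minimal Python sketch (our own illustration, not the authors' code), assuming a joined broken trend \(y_t=\mu +\beta t+\beta _b(t-T_1)^{+}+u_t\) with hypothetical parameter values and the admissible choice \(k_T=T^{1/3}\); the break fraction is estimated by minimizing the sum of squared residuals over a trimmed grid of candidate break dates.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T, lam0=0.5, mu=1.0, beta=0.5, beta_b=1.0, c=-2.0, sigma=1.0):
    """Joined broken linear trend with mildly integrated AR(1) errors."""
    k_T = T ** (1 / 3)           # k_T diverges but k_T = o(T)
    rho = 1 + c / k_T            # mildly integrated: rho -> 1, slower than 1 + O(1/T)
    eps = rng.normal(0.0, sigma, T)
    u = np.empty(T)
    u[0] = eps[0]
    for s in range(1, T):
        u[s] = rho * u[s - 1] + eps[s]
    t = np.arange(1, T + 1)
    T1 = int(lam0 * T)           # true break date
    return mu + beta * t + beta_b * np.maximum(t - T1, 0) + u

def ls_break(y, trim=0.15):
    """Least squares break-fraction estimate: minimize SSR over a trimmed grid."""
    T = len(y)
    t = np.arange(1, T + 1)
    ssr = {}
    for T1 in range(int(trim * T), int((1 - trim) * T) + 1):
        X = np.column_stack([np.ones(T), t, np.maximum(t - T1, 0)])
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        ssr[T1] = np.sum((y - X @ beta_hat) ** 2)
    return min(ssr, key=ssr.get) / T
```

With a slope break of this magnitude, the estimated break fraction `ls_break(simulate(400))` should be close to the true value 0.5, in line with the fast convergence rates established below.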


Figures 1–7 (not shown).


References

  • Bai, J. (1994). Least squares estimation of a shift in linear processes. Journal of Time Series Analysis, 15(5), 453–472.


  • Bai, J. (1997). Estimation of a change point in multiple regressions. Review of Economics and Statistics, 79(4), 551–563.


  • Bai, J., & Perron, P. (1998). Estimating and testing linear models with multiple structural changes. Econometrica, 66(1), 47–78.


  • Billingsley, P. (1999). Convergence of probability measures (2nd ed.). New York: Wiley.


  • Bolt, J., & van Zanden, J. L. (2020). Maddison style estimates of the evolution of the world economy: A new 2020 update. Maddison Project Database, version 2020.

  • Chan, N. H., & Wei, C. Z. (1987). Asymptotic inference for nearly nonstationary AR(1) processes. The Annals of Statistics, 15(3), 1050–1063.


  • Chang, S. Y., & Perron, P. (2016). Inference on a structural break in trend with fractionally integrated errors. Journal of Time Series Analysis, 37(4), 555–574.


  • Chong, T. T. L. (2001). Structural change in AR(1) models. Econometric Theory, 17(1), 87–155.


  • Enikeeva, F., & Harchaoui, Z. (2019). High-dimensional change-point detection under sparse alternatives. The Annals of Statistics, 47(4), 2051–2079.


  • Fryzlewicz, P., & Rao, S. S. (2014). Multiple-change-point detection for auto-regressive conditional heteroscedastic processes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(5), 903–924.


  • Halunga, A. G., & Osborn, D. R. (2012). Ratio-based estimators for a change point in persistence. Journal of Econometrics, 171(1), 24–31.


  • Hansen, B. E. (2001). The new econometrics of structural change: Dating breaks in US labor productivity. Journal of Economic Perspectives, 15(4), 117–128.


  • Harvey, D. I., Leybourne, S. J., & Taylor, A. M. R. (2006). Modified tests for a change in persistence. Journal of Econometrics, 134(2), 441–469 (Corrigendum, Journal of Econometrics, 168(2):407).


  • Iacone, F., Leybourne, S. J., & Taylor, A. M. R. (2019). Testing the order of fractional integration of a time series in the possible presence of a trend break at an unknown point. Econometric Theory, 35, 1201–1233.


  • Kejriwal, M., & Lopez, C. (2013). Unit roots, level shifts, and trend breaks in per capita output: A robust evaluation. Econometric Reviews, 32(8), 892–927.


  • Kejriwal, M., Perron, P., & Zhou, J. (2013). Wald tests for detecting multiple structural changes in persistence. Econometric Theory, 29(2), 289–323.


  • Kim, D. (2011). Estimating a common deterministic time trend break in large panels with cross sectional dependence. Journal of Econometrics, 164(2), 310–330.


  • Kim, D., & Perron, P. (2009). Unit root tests allowing for a break in the trend function at an unknown time under both the null and alternative hypotheses. Journal of Econometrics, 148(1), 1–13.


  • Kim, J., & Pollard, D. (1990). Cube root asymptotics. The Annals of Statistics, 18(1), 191–219.


  • Lee, S., Seo, M. H., & Shin, Y. (2016). The lasso for high dimensional regression with a possible change point. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(1), 193–210.


  • Pang, T., Chong, T. T. L., Zhang, D., & Liang, Y. (2018). Structural change in nonstationary AR(1) models. Econometric Theory, 34(5), 985–1017.


  • Perron, P., & Yabu, T. (2009). Testing for shifts in trend with an integrated or stationary noise component. Journal of Business and Economic Statistics, 27(3), 369–396.


  • Perron, P., & Zhu, X. (2005). Structural breaks with deterministic and stochastic trends. Journal of Econometrics, 129(1), 65–119.


  • Phillips, P. C. B. (1987). Towards a unified asymptotic theory for autoregression. Biometrika, 74(3), 535–547.


  • Phillips, P. C. B., & Magdalinos, T. (2007). Limit theory for moderate deviations from a unit root. Journal of Econometrics, 136(1), 115–130.


  • Phillips, P. C. B., & Shi, S. P. (2018). Financial bubble implosion and reverse regression. Econometric Theory, 34(4), 705–753.


  • Phillips, P. C. B., Shi, S., & Yu, J. (2015). Testing for multiple bubbles: Historical episodes of exuberance and collapse in the S&P 500. International Economic Review, 56(4), 1043–1078.


  • Phillips, P. C. B., Wu, Y., & Yu, J. (2011). Explosive behavior in the 1990s Nasdaq: When did exuberance escalate asset values? International Economic Review, 52(1), 201–226.


  • Stock, J. (1991). Confidence intervals for the largest autoregressive root in US macroeconomic time series. Journal of Monetary Economics, 28(3), 435–459.


  • Wang, T., & Samworth, R. J. (2018). High dimensional change point estimation via sparse projection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(1), 57–83.



Author information


Corresponding author

Correspondence to Xu Zhu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The study is partially supported by the National Natural Science Foundation of China (No. 11871425), the Zhejiang Provincial Natural Science Foundation of China (No. LY19A010022), and the Fundamental Research Funds for the Central Universities (No. 2021XZZX002).

Appendix


In this section, we provide the proofs of the results in Sect. 2. To begin, we state a lemma on the asymptotic properties of mildly integrated AR(1) processes, which is of independent interest and may have applications in other work.

Lemma 6.1

Under Assumption 1, the following results hold jointly:

  1. \(\frac{1}{k_{T}^{1 / 2}} u_{\lfloor T s \rfloor } \Rightarrow \sigma \int _{0}^{\infty } \exp (c r) d W(r)\), \(0< s\le 1\);

  2. \(\frac{1}{k_{T}T^{1 / 2}} \sum _{t=1}^{\lfloor T s \rfloor } u_{t} \Rightarrow -\frac{\sigma }{c} W(s)\), \(0\le s\le 1\);

  3. \(\frac{1}{k_{T}T^{3 / 2}} \sum _{t=1}^{\lfloor T s \rfloor } {t}u_{t} \Rightarrow -\frac{\sigma }{c}\int _{0}^{s} {r} d W(r)\), \(0\le s\le 1\).

Proof

Note that part (1) is taken from Pang et al. (2018), and we only need to prove parts (2) and (3).

We first prove part (2). It is easy to see that \(u_t-u_{t-1}=-(1-\rho _T)u_{t-1}+\varepsilon _t\), which is equivalent to

$$\begin{aligned} (1-\rho _T)u_{t-1}=u_{t-1}-u_t+\varepsilon _t. \end{aligned}$$
(5)

Since \(u_{0}=o_{p}(\sqrt{k_{T}})\), using (5) and the fact \(u_{\lfloor T s \rfloor }=O_p(\sqrt{k_T})\) shown in part (1), one has

$$\begin{aligned} (1-\rho _{T})\frac{1}{\sqrt{T}} \sum _{t=1}^{\lfloor T s \rfloor } u_{t}&=(1-\rho _{T})\frac{1}{\sqrt{T}} \left( \sum _{t=1}^{\lfloor T s \rfloor }u_{t-1}-u_{0}+u_{\lfloor T s \rfloor }\right) \\&=\frac{1}{\sqrt{T}}\sum _{t=1}^{\lfloor T s \rfloor }(u_{t-1}-u_{t})+\frac{1}{\sqrt{T}}\sum _{t=1}^{\lfloor T s \rfloor } \varepsilon _{t}+o_p(1)\\&=\frac{1}{\sqrt{T}}\sum _{t=1}^{\lfloor T s \rfloor } \varepsilon _{t}+o_p(1). \end{aligned}$$

Then, applying the functional central limit theorem to the sequence \(\{\varepsilon _t, t\ge 1\}\) leads to

$$\begin{aligned} \frac{1}{k_{T}T^{1 / 2}} \sum _{t=1}^{\lfloor T s \rfloor } u_{t} \Rightarrow -\frac{\sigma }{c} W(s),~~0\le s\le 1, \end{aligned}$$

as desired.

We next prove part (3). Denote \(S_T(r)=\frac{1}{k_{T}T^{1/2}}\sum _{t=1}^{{\lfloor T r \rfloor }}u_{t}\), \(0\le r\le 1\). Then, we have

$$\begin{aligned} \frac{1}{k_{T}T^{3 / 2}} \sum _{t=1}^{\lfloor T s \rfloor } {t}u_{t}=&\sum _{t=1}^{\lfloor T s \rfloor } \frac{t}{T}\frac{u_{t}}{k_{T}T^{1/2}}=\sum _{t=1}^{\lfloor T s \rfloor }\int _{(t-1)/T}^{t/T} \frac{t}{T}d S_T(r)\nonumber \\ =&\sum _{t=1}^{\lfloor T s \rfloor }\int _{(t-1)/T}^{t/T} \frac{\lfloor T r \rfloor }{T}d S_T(r)\cdot (1+o_p(1))\nonumber \\ =&\int _{0}^{s} \frac{\lfloor T r \rfloor }{T}d S_T(r)\cdot (1+o_p(1))\nonumber \\ \Rightarrow&-\frac{\sigma }{c}\int _{0}^{s} {r} d W(r),~~0\le s\le 1 \end{aligned}$$

by part (2) just proved and the continuous mapping theorem. The proof is complete. \(\square\)
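Part (2) of the lemma is easy to check by Monte Carlo. The following is a rough numerical sketch (ours, not part of the paper), with assumed values \(c=-2\), \(\sigma =1\) and the admissible choice \(k_T=T^{1/2}\): at \(s=1\), the normalized partial sum \(\frac{1}{k_{T}T^{1/2}}\sum _{t=1}^{T}u_t\) should be approximately distributed as \(-\frac{\sigma }{c}W(1)\sim N(0,\sigma ^2/c^2)\), i.e. with variance 0.25 here.

```python
import numpy as np

rng = np.random.default_rng(1)
T, c, sigma, reps = 2000, -2.0, 1.0, 2000
k_T = T ** 0.5                  # one admissible choice with k_T = o(T)
rho = 1 + c / k_T

# simulate `reps` independent mildly integrated AR(1) paths at once
eps = rng.normal(0.0, sigma, (reps, T))
u = np.empty((reps, T))
u[:, 0] = eps[:, 0]
for t in range(1, T):
    u[:, t] = rho * u[:, t - 1] + eps[:, t]

# normalized partial sum at s = 1; the lemma gives the limit -(sigma/c) W(1)
S = u.sum(axis=1) / (k_T * np.sqrt(T))
# so S should be approximately N(0, sigma^2 / c^2) = N(0, 0.25)
```

The sample mean of `S` should be near 0 and its sample variance near 0.25, up to Monte Carlo and finite-sample error.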

Next, we introduce an inequality which is taken from Perron and Zhu (2005) and plays an essential role in the proof of asymptotic theory for Model (2). Since \(\varvec{P}_{T_{1}^{0}} \varvec{X}_{T_{1}^{0}}=\varvec{X}_{T_{1}^{0}}\) and \(\varvec{X}_{{\hat{T}}_{1}}'(\varvec{I}-\varvec{P}_{{\hat{T}}_{1}})=0\), it is true for all \({\hat{T}}_1\) that

$$\begin{aligned}&{\text {SSR}}({\hat{\lambda }})- {\text {SSR}}(\lambda ^{0}) \nonumber \\&\quad =\varvec{Y}'(\varvec{I}-\varvec{P}_{{\hat{T}}_{1}}) \varvec{Y} - \varvec{Y}'(\varvec{I}-\varvec{P}_{T_{1}^{0}}) \varvec{Y}\nonumber \\&\quad = (\gamma ^{0 \prime } \varvec{X}_{T_{1}^{0}}'+\varvec{U}')(\varvec{P}_{T_{1}^{0}}-\varvec{P}_{{\hat{T}}_{1}})(\varvec{X}_{T_{1}^{0}} \gamma ^{0}+\varvec{U})\nonumber \\&\quad = \gamma ^{0 \prime } \varvec{X}_{T_{1}^{0}}'(\varvec{P}_{T_{1}^{0}}-\varvec{P}_{{\hat{T}}_{1}}) \varvec{X}_{T_{1}^{0}} \gamma ^{0}+2 \gamma ^{0 \prime } \varvec{X}_{T_{1}^{0}}'(\varvec{P}_{T_{1}^{0}}-\varvec{P}_{{\hat{T}}_{1}}) \varvec{U}+\varvec{U}'(\varvec{P}_{T_{1}^{0}}-\varvec{P}_{{\hat{T}}_{1}}) \varvec{U}\nonumber \\&\quad = \gamma ^{0 \prime }(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{{\hat{T}}_{1}})'(\varvec{I}-\varvec{P}_{{\hat{T}}_{1}})(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{{\hat{T}}_{1}}) \gamma ^{0}+2 \gamma ^{0 \prime }(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{{\hat{T}}_{1}})'(\varvec{I}-\varvec{P}_{{\hat{T}}_{1}}) \varvec{U} \nonumber \\&\qquad +\varvec{U}'(\varvec{P}_{T_{1}^{0}}-\varvec{P}_{{\hat{T}}_{1}}) \varvec{U}\le 0. \end{aligned}$$
(6)

Define

$$\begin{aligned} S_{X X}&=\gamma ^{0 \prime }(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}})'(\varvec{I}-\varvec{P}_{T_{1}})(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}}) \gamma ^{0}, \\ S_{{\hat{X}} {\hat{X}}}&=\gamma ^{0 \prime }(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{{\hat{T}}_{1}})'(\varvec{I}-\varvec{P}_{{\hat{T}}_{1}})(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{{\hat{T}}_{1}}) \gamma ^{0},\\ S_{X U}&=\gamma ^{0 \prime }(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}})'(\varvec{I}-\varvec{P}_{T_{1}}) \varvec{U}, \\ S_{{\hat{X}} {\hat{U}}}&=\gamma ^{0 \prime }(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{{\hat{T}}_{1}})'(\varvec{I}-\varvec{P}_{{\hat{T}}_{1}}) \varvec{U}, \\ S_{U U}&=\varvec{U}'(\varvec{P}_{T_{1}^{0}}-\varvec{P}_{T_{1}}) \varvec{U},\\ S_{{\hat{U}} {\hat{U}}}&=\varvec{U}'(\varvec{P}_{T_{1}^{0}}-\varvec{P}_{{\hat{T}}_{1}}) \varvec{U}. \end{aligned}$$

Inequality (6) implies that for all \({\hat{T}}_1\),

$$\begin{aligned} S_{{\hat{X}} {\hat{X}}}+2S_{{\hat{X}} {\hat{U}}}+S_{{\hat{U}} {\hat{U}}}\le 0. \end{aligned}$$
(7)
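The algebraic identity behind (6) and the inequality (7) can be verified numerically for the least squares minimizer. Below is a small sketch (our own check, with assumed values \(T=200\), \(\lambda ^0=0.5\), \(\gamma ^0=(1,0.5,1)'\) and mildly integrated AR(1) noise) that builds the projection matrices explicitly and confirms that \(S_{{\hat{X}} {\hat{X}}}+2S_{{\hat{X}} {\hat{U}}}+S_{{\hat{U}} {\hat{U}}}\) equals \({\text {SSR}}({\hat{\lambda }})-{\text {SSR}}(\lambda ^0)\) and is nonpositive.

```python
import numpy as np

rng = np.random.default_rng(2)
T, T1_0 = 200, 100
t = np.arange(1, T + 1)

def X_mat(T1):
    # regressors: constant, linear trend, joined broken trend
    return np.column_stack([np.ones(T), t, np.maximum(t - T1, 0)])

def proj(X):
    # projection matrix onto the column space of X
    return X @ np.linalg.solve(X.T @ X, X.T)

# data: broken trend plus mildly integrated AR(1) noise
k_T = T ** 0.5
rho = 1 - 2.0 / k_T
eps = rng.normal(size=T)
U = np.empty(T)
U[0] = eps[0]
for s in range(1, T):
    U[s] = rho * U[s - 1] + eps[s]
gamma0 = np.array([1.0, 0.5, 1.0])
Y = X_mat(T1_0) @ gamma0 + U

# least squares break date over a trimmed candidate range
cands = list(range(30, 171))
ssr = [Y @ (np.eye(T) - proj(X_mat(T1))) @ Y for T1 in cands]
T1_hat = cands[int(np.argmin(ssr))]

# the three terms of the decomposition, evaluated at T1_hat
dX = (X_mat(T1_0) - X_mat(T1_hat)) @ gamma0
M = np.eye(T) - proj(X_mat(T1_hat))
S_XX = dX @ M @ dX
S_XU = dX @ M @ U
S_UU = U @ (proj(X_mat(T1_0)) - proj(X_mat(T1_hat))) @ U
lhs = S_XX + 2 * S_XU + S_UU    # should equal SSR(lam_hat) - SSR(lam0) <= 0
diff = min(ssr) - Y @ (np.eye(T) - proj(X_mat(T1_0))) @ Y
```

Since \(T_1^0\) lies in the candidate set, the minimized SSR cannot exceed \({\text {SSR}}(\lambda ^0)\), so `diff` (and hence `lhs`) is nonpositive by construction.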

Proof of Theorem 2.1

We deduce the conclusion by using a contradiction argument. Recalling the definition of \({\hat{T}}_1\) in (1), it is true that

$$\begin{aligned} {\hat{T}}_{1}=\underset{T_{1} \in [\pi T, (1-\pi )T]}{\arg \min }[\mathrm{SSR}({\lambda })-\mathrm{SSR}({\lambda }^{0})] \end{aligned}$$

since \(\mathrm{SSR}({\lambda }^{0})=\varvec{Y}^{\prime }(\varvec{I}-\varvec{P}_{T_{1}^{0}}) \varvec{Y}\) is independent of \(T_1\). Note that

$$\begin{aligned} \mathrm{SSR}({\lambda })-\mathrm{SSR}({\lambda }^{0})=S_{XX}+2S_{XU}+S_{UU}. \end{aligned}$$

In what follows, we only consider the case where \(T_{1}\ge T_{1}^{0}\) since the case where \(T_{1}<T_{1}^{0}\) can be handled similarly.

For \(T_{1}\ge T_{1}^{0}\), we define

$$\begin{aligned} {\tilde{\iota }}_{b}(t; T_1)=\left\{ \begin{array}{ll} {0,} &{} { 1 \le t \le T_{1}^{0}}, \\ {\frac{t-T_{1}^{0}}{T_{1}-T_{1}^{0}},} &{} { T_{1}^{0}<t\le T_{1}}, \\ {1,} &{} { T_{1}< t \le T}. \end{array} \right. \end{aligned}$$

Note that when \(T_1=T_{1}^{0}\), \({\tilde{\iota }}_{b}(t; T_1)={\tilde{\iota }}_{b}(t; T_1^0)\) is understood as

$$\begin{aligned} {\tilde{\iota }}_{b}(t; T_1^0)=\left\{ \begin{array}{ll} {0,} &{} {1 \le t \le T_{1}^{0}},\\ {1,} &{} { T_{1}^{0}<t \le T}. \end{array} \right. \end{aligned}$$

Denote

$$\begin{aligned} {\tilde{\iota }}_{b}=({\tilde{\iota }}_{b}(1; T_1),\ldots ,{\tilde{\iota }}_{b}(T; T_1))'. \end{aligned}$$

It is clear that \({\tilde{\iota }}_{b}(\lfloor T r\rfloor ; T_1)\) converges to a continuous function \(f_{{\tilde{\iota }}_{b}}(r)\) over [0, 1], where for \(\lambda >\lambda ^{0}\),

$$\begin{aligned} f_{{\tilde{\iota }}_{b}}(r)=\left\{ \begin{array}{ll} {0,} &{} { 0 \le r \le \lambda ^{0}}, \\ {\frac{r-\lambda ^{0}}{\lambda -\lambda ^{0}},} &{} { \lambda ^{0}<r\le \lambda }, \\ {1,} &{} { \lambda < r \le 1}, \end{array} \right. \end{aligned}$$

and for \(\lambda =\lambda ^{0}\),

$$\begin{aligned} f_{{\tilde{\iota }}_{b}}(r)=\left\{ \begin{array}{ll} {0,} &{} { 0 \le r \le \lambda ^{0}}, \\ {1,} &{} { \lambda ^{0}<r \le 1}. \end{array} \right. \end{aligned}$$

Next, we shall deal with \(S_{XX}\), \(S_{XU}\) and \(S_{UU}\) separately, and aim to find the dominating term/terms among them.

We first analyze the term \(S_{XX}\). Observing that

$$\begin{aligned} (\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}})\gamma ^{0}=\beta _{b}^{0}(\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})=\beta _{b}^{0}(T_1-T_{1}^{0}) {\tilde{\iota }}_{b}, \end{aligned}$$
(8)

one has, uniformly in \(\lambda \in (0,1)\),

$$\begin{aligned} S_{X X}&=\gamma ^{0\prime }(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}})'(\varvec{I}-\varvec{P}_{T_{1}})(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}}) \gamma ^{0}\nonumber \\&=(T_{1}-T_{1}^{0})^{2}(\beta _{b}^{0})^{2}{\tilde{\iota }}_{b}'(\varvec{I}-\varvec{P}_{T_{1}}) {\tilde{\iota }}_{b}\nonumber \\&=(T_{1}-T_{1}^{0})^{2} O(T) \end{aligned}$$
(9)

since \({\tilde{\iota }}_{b}'(\varvec{I}-\varvec{P}_{T_{1}}) {\tilde{\iota }}_{b}=O(T)\) (cf. Perron and Zhu 2005, p. 97).

Next, consider the term \(S_{X U}\). First, applying (8) leads to

$$\begin{aligned} S_{X U}=\gamma ^{0\prime }(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}})'(\varvec{I}-\varvec{P}_{T_{1}}) \varvec{U}=(T_{1}-T_{1}^{0}) \beta _{b}^{0}{\tilde{\iota }}_{b}'(\varvec{I}-\varvec{P}_{T_{1}}) \varvec{U}. \end{aligned}$$
(10)

Define \(f_{{\tilde{\iota }}_{b}}^{*}(r)\) as the projection residual of a least squares regression of \(f_{{\tilde{\iota }}_{b}}(r)\) on \((1, r, f_{B}(r))\), where \(f_{B}(r)=(r-\lambda )I\{r\ge \lambda \}\). By the continuous mapping theorem and part (2) of Lemma 6.1, we have

$$\begin{aligned} \frac{1}{k_{T}T^{1/2}}{\tilde{\iota }}_{b}^{\prime }(\varvec{I}-\varvec{P}_{T_{1}}) \varvec{U} {\mathop {\rightarrow }\limits ^{d}}-\frac{\sigma }{c} \int _{0}^{1} f_{{\tilde{\iota }}_{b}}^{*}(r) d W(r). \end{aligned}$$
(11)

Similar to the proof of Lemma 1.a in Perron and Zhu (2005), we have \(\int _{0}^{1} f_{{\tilde{\iota }}_{b}}^{*}(r) d r=\int _{0}^{1}(f_{{\tilde{\iota }}_{b}}(r)-{\hat{\alpha }}-{\hat{\beta }} r-{\hat{\psi }} f_{B}(r)) d r=O(1)\), where \({\hat{\alpha }}\), \({\hat{\beta }}\) and \({\hat{\psi }}\) are the estimated coefficients of the regression model mentioned above, and \(\int _{0}^{1}(f_{{\tilde{\iota }}_{b}}^{*}(r))^{2} d r=O(1)\) uniformly in \(\lambda \in (0,1)\). Therefore, it is easy to deduce that \(E\left( \int _{0}^{1}f_{{\tilde{\iota }}_{b}}^{*}(r) d W(r)\right) =0\) and \(Var\left( \int _{0}^{1} f_{{\tilde{\iota }}_{b}}^{*}(r) d W(r)\right) =\int _{0}^{1}(f_{{\tilde{\iota }}_{b}}^{*}(r))^{2} d r=O(1)\). The above arguments imply that \(\int _{0}^{1} f_{{\tilde{\iota }}_{b}}^{*}(r)d W(r)= O_p(1)\), which together with (11) further imply that \({\tilde{\iota }}_{b}'(\varvec{I}-\varvec{P}_{T_{1}}) \varvec{U}=O_p(k_{T}T^{1/2})\). Thus,

$$\begin{aligned} S_{X U}=|T_{1}-T_{1}^{0}| O_p(k_{T}T^{1/2}) \end{aligned}$$
(12)

uniformly in \(\lambda \in (0,1)\).

Next, we consider the term \(S_{U U}\). Define

$$\begin{aligned} \varvec{D}_{T}=\text{ diag }(T, T^{3}, T^{3}). \end{aligned}$$

We have

$$\begin{aligned} S_{U U}&=\varvec{U}'(\varvec{P}_{T_{1}^{0}}-\varvec{P}_{T_{1}})\varvec{U}\nonumber \\&= \varvec{U}'\left[ \varvec{X}_{T_{1}^{0}}(\varvec{X}_{T_{1}^{0}}' \varvec{X}_{T_{1}^{0}})^{-1} \varvec{X}_{T_{1}^{0}}'-\varvec{X}_{T_{1}}(\varvec{X}_{T_{1}}' \varvec{X}_{T_{1}})^{-1} \varvec{X}_{T_{1}}'\right] \varvec{U}\nonumber \\&= \varvec{U}'(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}}) \varvec{D}_{T}^{-1/2}(\varvec{D}_{T}^{-1/2} \varvec{X}_{T_{1}^{0}}' \varvec{X}_{T_{1}^{0}} \varvec{D}_{T}^{-1/2})^{-1} \varvec{D}_{T}^{-1/2} \varvec{X}_{T_{1}^{0}}' \varvec{U}\nonumber \\&\quad + \varvec{U}' \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1/2}(\varvec{D}_{T}^{-1/2} \varvec{X}_{T_{1}}' \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1/2})^{-1} \varvec{D}_{T}^{-1/2}(\varvec{X}_{T_{1}}' \varvec{X}_{T_{1}}-\varvec{X}_{T_{1}^{0}}' \varvec{X}_{T_{1}^{0}}) \varvec{D}_{T}^{-1/2}\nonumber \\&\quad (\varvec{D}_{T}^{-1/2} \varvec{X}_{T_{1}^{0}}' \varvec{X}_{T_{1}^{0}} \varvec{D}_{T}^{-1/2})^{-1} \varvec{D}_{T}^{-1/2} \varvec{X}_{T_{1}^{0}}' \varvec{U}\nonumber \\&\quad + \varvec{U}' \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1/2}(\varvec{D}_{T}^{-1/2} \varvec{X}_{T_{1}}' \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1/2})^{-1} \varvec{D}_{T}^{-1/2}(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}})' \varvec{U}. \end{aligned}$$
(13)

Applying Lemma 6.1, we have

$$\begin{aligned} \left\{ \begin{array}{l} {k_{T}^{-1}T^{-1 / 2}} \sum _{t=1}^{T} u_{t} {\mathop {\rightarrow }\limits ^{d}}-\frac{\sigma }{c} W(1), \\ {k_{T}^{-1}T^{-3 / 2}} \sum _{t=1}^{T} {t}u_{t} {\mathop {\rightarrow }\limits ^{d}}-\frac{\sigma }{c}\int _{0}^{1} {r} d W(r), \\ {k_{T}^{-1}T^{-3 / 2}} \sum _{t=T_{1}+1}^{T} (t-T_{1}) u_{t} \Rightarrow -\frac{\sigma }{c}\int _{\lambda }^{1} (r-\lambda )d W(r),~0<\lambda <1. \end{array} \right. \end{aligned}$$
(14)

Additionally, it is easy to see that, uniformly in \(\lambda \in (0,1)\),

$$\begin{aligned} \left\{ \begin{array}{l} T^{-3} \sum _{t=T_{1}+1}^{T}(t-T_{1})^{2} \rightarrow \int _{\lambda }^{1}(r-\lambda )^{2} d r, \\ T^{-3} \sum _{t=T_{1}+1}^{T}(t-T_{1}) t \rightarrow \int _{\lambda }^{1}(r-\lambda ) r d r, \\ T^{-2} \sum _{t=T_{1}+1}^{T}(t-T_{1}) \rightarrow \int _{\lambda }^{1}(r-\lambda ) d r. \end{array} \right. \end{aligned}$$
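These deterministic Riemann sum limits are straightforward to confirm numerically; for instance, a quick sketch of the first one (with assumed values \(T=10^4\) and \(\lambda =0.4\)):

```python
import numpy as np

T, lam = 10_000, 0.4
T1 = int(lam * T)
t = np.arange(T1 + 1, T + 1)

# T^{-3} * sum_{t=T1+1}^{T} (t - T1)^2  vs  the limiting integral
lhs = ((t - T1) ** 2).sum() / T ** 3
rhs = (1 - lam) ** 3 / 3        # closed form of  int_lam^1 (r - lam)^2 dr
```

The discrepancy between `lhs` and `rhs` is of order \(O(T^{-1})\), as expected for a Riemann sum.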

Then, we have the following consequences:

  1. \(\varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}}' \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1 / 2}\) is of order O(1) uniformly in \(\lambda \in (0,1)\), and \(\varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}^{0}}^{\prime } \varvec{X}_{T_{1}^{0}} \varvec{D}_{T}^{-1 / 2}\) is of order O(1); cf. Perron and Zhu (2005), p. 98.

  2. \(\varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}}' \varvec{U}\) is of order \(O_p(k_{T})\) uniformly in \(\lambda \in (0,1)\), and \(\varvec{U}^{\prime } \varvec{X}_{T_{1}^{0}} \varvec{D}_{T}^{-1 / 2}\) is of order \(O_p(k_{T})\), since it follows from (14) that

    $$\begin{aligned} k_{T}^{-1}\varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}}^{\prime } \varvec{U}= \left( \begin{array}{c} {k_T^{-1}T^{-1 / 2} \sum _{t=1}^{T} u_{t}} \\ {k_T^{-1}T^{-3 / 2} \sum _{t=1}^{T} t u_{t}} \\ {k_T^{-1}T^{-3 / 2} \sum _{t=T_{1}+1}^{T}(t-T_{1}) u_{t}} \end{array} \right) =O_p(1), \end{aligned}$$
    (15)

    and \(k_{T}^{-1}\varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}^0}^{\prime } \varvec{U}=O_p(1)\) by similar arguments.

  3. The order of \(\varvec{D}_{T}^{-1/2}(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}})' \varvec{U}\) is not higher than \(|T_{1}-T_{1}^{0}| O_{p}(k_{T}T^{-1})\) uniformly in \(\lambda \in (0,1)\). The reason is as follows. Since the first two columns of \(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}}\) are zero, we only need to consider the third column. First, we write

    $$\begin{aligned}&T^{-3 / 2} (\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})'\varvec{U}\nonumber \\&\quad = T^{-3 / 2}(T_{1}-T_{1}^{0}){\tilde{\iota }}'_{b}\varvec{U}\nonumber \\&\quad =T^{-3 / 2} \sum _{t=T_{1}^{0}+1}^{T_{1}}(t-T_{1}^{0}) u_{t} + T^{-3 / 2}(T_{1}-T_{1}^{0}) \sum _{t=T_{1}+1}^{T} u_{t}. \end{aligned}$$
    (16)

    Then, we shall show that the stochastic order of \(\sum _{t=T_{1}^{0}+1}^{T_{1}}(t-T_{1}^{0}) u_{t}\) is not higher than that of \(\sum _{t=T_{1}^{0}+1}^{T_{1}}(T_{1}-T_{1}^{0}) u_{t}\), that is,

    $$\begin{aligned} \frac{\sum _{t=T_{1}^{0}+1}^{T_{1}}(t-T_{1}^{0}) u_{t}}{\sum _{t=T_{1}^{0}+1}^{T_{1}}(T_{1}-T_{1}^{0}) u_{t}}\le O_p(1). \end{aligned}$$
    (17)

    Write

    $$\begin{aligned} \sum _{t=T_{1}^{0}+1}^{T_{1}}(t-T_{1}^{0}) u_{t}=&\sum _{k=1}^{T_{1}-T_{1}^{0}} k u_{T_{1}^{0}+k}\nonumber \\ =&\sum _{k=1}^{T_{1}-T_{1}^{0}} k \rho _{T}^{k} u_{T_{1}^{0}}+\sum _{k=1}^{T_{1}-T_{1}^{0}} k\sum _{j=T_{1}^{0}+1}^{T_{1}^{0}+k} \rho _{T}^{T_{1}^{0}+k-j} \varepsilon _{j}\nonumber \\ =&\sum _{k=1}^{T_{1}-T_{1}^{0}} k \rho _{T}^{k} u_{T_{1}^{0}}+\sum _{j=T_{1}^{0}+1}^{T_{1}}\left( \sum _{k=j-T_{1}^{0}}^{T_{1}-T_{1}^{0}} k \rho _{T}^{T_{1}^{0}+k-j}\right) \varepsilon _{j}, \end{aligned}$$

    and

    $$\begin{aligned} \sum _{t=T_{1}^{0}+1}^{T_{1}}(T_{1}-T_{1}^{0}) u_{t} =&\sum _{k=1}^{T_{1}-T_{1}^{0}} (T_{1}-T_{1}^{0}) u_{T_{1}^{0}+k} \nonumber \\ =&\sum _{k=1}^{T_{1}-T_{1}^{0}} (T_{1}-T_{1}^{0}) \rho _{T}^{k} u_{T_{1}^{0}}+(T_{1}-T_{1}^{0})\sum _{k=1}^{T_{1}-T_{1}^{0}} \sum _{j=T_{1}^{0}+1}^{T_{1}^{0}+k} \rho _{T}^{T_{1}^{0}+k-j} \varepsilon _{j} \nonumber \\ =&\sum _{k=1}^{T_{1}-T_{1}^{0}}(T_{1}-T_{1}^{0}) \rho _{T}^{k} u_{T_{1}^{0}}+\sum _{j=T_{1}^{0}+1}^{T_{1}}\left( \sum _{k=j-T_{1}^{0}}^{T_{1}-T_{1}^{0}} (T_{1}-T_{1}^{0}) \rho _{T}^{T_{1}^{0}+k-j}\right) \varepsilon _{j}. \end{aligned}$$

    It is easy to see that

    $$\begin{aligned} 0\le \sum _{k=1}^{T_{1}-T_{1}^{0}} k \rho _{T}^{k}\le \sum _{k=1}^{T_{1}-T_{1}^{0}}(T_{1}-T_{1}^{0}) \rho _{T}^{k} \end{aligned}$$

    and

    $$\begin{aligned} \sum _{j=T_{1}^{0}+1}^{T_{1}}\left( \sum _{k=j-T_{1}^{0}}^{T_{1}-T_{1}^{0}} k \rho _{T}^{T_{1}^{0}+k-j}\right) ^2\le \sum _{j=T_{1}^{0}+1}^{T_{1}}\left( \sum _{k=j-T_{1}^{0}}^{T_{1}-T_{1}^{0}} (T_{1}-T_{1}^{0}) \rho _{T}^{T_{1}^{0}+k-j}\right) ^2. \end{aligned}$$

    Thus, the stochastic orders of \(\sum _{k=1}^{T_{1}-T_{1}^{0}} k \rho _{T}^{k} u_{T_{1}^{0}}\) and \(\sum _{j=T_{1}^{0}+1}^{T_{1}}(\sum _{k=j-T_{1}^{0}}^{T_{1}-T_{1}^{0}} k \rho _{T}^{T_{1}^{0}+k-j}) \varepsilon _{j}\) are not higher than those of \(\sum _{k=1}^{T_{1}-T_{1}^{0}} (T_{1}-T_{1}^{0}) \rho _{T}^{k} u_{T_{1}^{0}}\) and \(\sum _{j=T_{1}^{0}+1}^{T_{1}}(\sum _{k=j-T_{1}^{0}}^{T_{1}-T_{1}^{0}} (T_{1}-T_{1}^{0}) \rho _{T}^{T_{1}^{0}+k-j}) \varepsilon _{j}\) respectively. That is, (17) is true. Next, we consider the following term:

    $$\begin{aligned} T^{-3 / 2} \sum _{t=T_{1}^{0}+1}^{T_{1}}(T_{1}-T_{1}^{0}) u_{t} + T^{-3 / 2}(T_{1}-T_{1}^{0}) \sum _{t=T_{1}+1}^{T} u_{t}=T^{-3 / 2} \sum _{t=T_{1}^{0}+1}^{T}(T_{1}-T_{1}^{0}) u_{t}. \end{aligned}$$

    Recalling part (2) of Lemma 6.1, we have

    $$\begin{aligned}&T^{-3 / 2}(T_{1}-T_{1}^{0}) \sum _{t=T_{1}^{0}+1}^{T} u_{t}\nonumber \\&\quad =T^{-3 / 2}(T_{1}-T_{1}^{0}) \sum _{t=1}^{T} u_{t}-T^{-3 / 2}(T_{1}-T_{1}^{0}) \sum _{t=1}^{T_{1}^{0}} u_{t}\nonumber \\&\quad =|T_{1}-T_{1}^{0}| O_{p}(k_T T^{-1}). \end{aligned}$$

    This implies that the stochastic order of \(\varvec{D}_{T}^{-1/2}(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}})' \varvec{U}\) is not higher than \(|T_{1}-T_{1}^{0}|O_{p}(k_{T}T^{-1})\) uniformly in \(\lambda \in (0,1)\), as desired.

  4. \(\varvec{D}_{T}^{-1 / 2}(\varvec{X}_{T_{1}}' \varvec{X}_{T_{1}}-\varvec{X}_{T_{1}^{0}}' \varvec{X}_{T_{1}^{0}}) \varvec{D}_{T}^{-1 / 2}\) is of order \(|T_{1}-T_{1}^{0}| O(T^{-1})\) uniformly in \(\lambda \in (0,1)\); cf. Perron and Zhu (2005), pp. 98–99.

Combining (13) and the above results (1)–(4) together, we have

$$\begin{aligned} S_{U U} \le |T_{1}-T_{1}^{0}|O_p(k_{T}^{2}T^{-1}) \end{aligned}$$
(18)

uniformly in \(\lambda \in (0,1)\).

Results (9), (12) and (18) imply that, for the estimate \({\hat{T}}_{1}\), we have

$$\begin{aligned} \left\{ \begin{array}{l} S_{{\hat{X}} {\hat{X}}} =({\hat{T}}_{1}-T_{1}^{0})^{2} O(T), \\ S_{{\hat{X}} {\hat{U}}} =|{\hat{T}}_{1}-T_{1}^{0}| O_{p}(k_{T}T^{1 / 2}), \\ S_{{\hat{U}} {\hat{U}}} \le |{\hat{T}}_{1}-T_{1}^{0}| O_{p}(k_{T}^{2}T^{-1}). \end{array} \right. \end{aligned}$$

Suppose \({\hat{\lambda }}\) does not converge in probability to \(\lambda ^{0}\). Then \(S_{{\hat{X}}{\hat{X}}}=O_p(T^{3})\), \(S_{{\hat{X}}{\hat{U}}}=O_p(k_{T}T^{3/2})\) and \(S_{{\hat{U}}{\hat{U}}}\le O_p(k_{T}^{2})\). As a result, for T large enough, the nonnegative term \(S_{{\hat{X}}{\hat{X}}}\) dominates the other two, so the left-hand side of (7) is positive with probability approaching one, which contradicts the fact that inequality (7) holds for all T. Thus, we have \({\hat{\lambda }} {\mathop {\rightarrow }\limits ^{p}} \lambda ^{0}\). The proof is complete. \(\square\)

Proof of Theorem 2.2

Given a small \(\epsilon >0\), we define \(V(\epsilon )=\lbrace T_{1}:~|T_{1}-T_{1}^{0}|<\epsilon T \rbrace\). It follows from Theorem 2.1 that \({\text {Pr}}({\hat{T}}_{1} \in V(\epsilon )) \rightarrow 1\). Moreover, given a large \(C>0\), we define

$$\begin{aligned} V(C,\epsilon )=\left\{ T_{1}:~| T_{1}-T_{1}^{0}| <\epsilon T,~|T_{1}-T_{1}^{0}|>C k_{T} T^{-1/2} \right\} . \end{aligned}$$

Then, to prove the theorem, it suffices to show that

$$\begin{aligned}&{\text {Pr}}\left( \min _{T_{1} \in V(C,\epsilon )}\left[ {\text {SSR}}(\lambda )-{\text {SSR}}(\lambda ^{0})\right] \le 0\right) \nonumber \\&\quad ={\text {Pr}}\left( \min _{T_{1} \in V(C,\epsilon )}(S_{X X}+2S_{X U}+S_{U U}) \le 0\right) \rightarrow 0. \end{aligned}$$
(19)

Recalling (9), (12) and (18), it is not hard to see that for each \(T_{1}\) falling into the set \(V(C,\epsilon )\), we have

$$\begin{aligned} \left\{ \begin{array}{l} \frac{S_{X X}}{|T_{1}-T_{1}^{0}| k_{T} T^{1/2}}=\frac{|T_{1}-T_{1}^{0}|^{2} O(T)}{|T_{1}-T_{1}^{0}| k_{T}T^{1/2}}>\frac{C k_{T}T^{-1/2} O(T)}{k_{T}T^{-1/2} T}>aC+o(1),\\ \frac{S_{X U}}{|T_{1}-T_{1}^{0}| k_{T}T^{1/2}}=\frac{|T_{1}-T_{1}^{0}| O_{p}(k_{T}T^{1/2})}{|T_{1}-T_{1}^{0}| k_{T}T^{1/2}}=O_{p}(1),\\ \frac{S_{U U}}{|T_{1}-T_{1}^{0}| k_{T}T^{1/2}}\le \frac{|T_{1}-T_{1}^{0}| O_{p}(k_{T}^{2} T^{-1})}{|T_{1}-T_{1}^{0}| k_{T}T^{1/2}}=o_{p}(1), \end{array} \right. \end{aligned}$$

where a is a positive constant. Therefore, we can choose C large enough to have

$$\begin{aligned} \frac{S_{X X}+2S_{X U}+S_{U U}}{|T_{1}-T_{1}^{0}| k_{T}T^{1/2}}\ge aC/2+o_p(1), \end{aligned}$$

which implies (19). The proof is complete. \(\square\)

Proof of Theorem 2.3

Define the set

$$\begin{aligned} D(C)=\{T_{1}:~|T_{1}-T_{1}^{0}|<C k_{T}T^{-1/2}\} \end{aligned}$$

for some positive constant C, and

$$\begin{aligned} m_{T}=k_{T}^{-1}T^{1/2}|T_{1}-T_{1}^{0}|. \end{aligned}$$

We shall derive the limiting distribution by analyzing \(\underset{T_{1} \in D(C)}{\arg \min }[{\text {SSR}}(\lambda )-{\text {SSR}}(\lambda ^{0})]\). For any \(T_{1} \in D(C)\), we have \(|T_{1}-T_{1}^{0}|=O(k_{T}T^{-1/2})\). Hence, \(S_{X X}=|T_{1}-T_{1}^{0}|^{2} O(T)=O(k_{T}^{2})\), \(S_{X U}=|T_{1}-T_{1}^{0}| O_{p}(k_{T}T^{1/2})=O_{p}(k_{T}^{2})\) and \(S_{U U} \le |T_{1}-T_{1}^{0}| O_{p}(k_{T}^{2}T^{-1})=O_{p}(k_{T}^{3}T^{-3/2})\). Then,

$$\begin{aligned} \underset{T_{1} \in D(C)}{\arg \min }\left[ {\text {SSR}}(\lambda )-{\text {SSR}}(\lambda ^{0})\right]&=\underset{T_{1} \in D(C)}{\arg \min }[S_{X X}+2S_{X U}+S_{U U}] / k_{T}^{2}\\&=\underset{T_{1} \in D(C)}{\arg \min }\left[ S_{X X} / k_{T}^{2}+2S_{X U} / k_{T}^{2}+o_{p}(1)\right] . \end{aligned}$$

Therefore, we only need to concentrate on the terms \(S_{X X} / k_{T}^{2}\) and \(2S_{X U} / k_{T}^{2}\).

Consider the term \(S_{X X} / k_{T}^{2}\) first. Using \(|\lambda -\lambda ^{0}|=O(k_{T}T^{-3/2})\), it is true that

$$\begin{aligned} \varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}}' \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1 / 2}&=\left( \begin{array}{ccc}{1} &{} {\frac{1}{2}} &{} {\frac{(1-\lambda ^{0})^{2}}{2}} \\ {\frac{1}{2}} &{} {\frac{1}{3}} &{} {\frac{(1-\lambda ^{0})^{2}(\lambda ^{0}+2)}{6}} \\ {\frac{(1-\lambda ^{0})^{2}}{2}} &{} {\frac{(1-\lambda ^{0})^{2}(\lambda ^{0}+2)}{6}} &{} {\frac{(1-\lambda ^{0})^{3}}{3}} \end{array}\right) +o(1)\\&=:\Sigma _{a}+o(1), \end{aligned}$$

and

$$\begin{aligned} (\varvec{D}_{T}^{-1/2} \varvec{X}_{T_{1}}^{\prime } \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1/2})^{-1}=\Sigma _{a}^{-1}+o(1) \end{aligned}$$
(20)

with

$$\begin{aligned} \Sigma _{a}^{-1}=\left( \begin{array}{ccc} {\frac{\lambda ^{0}+3}{\lambda ^{0}}} &{} {-\frac{3(\lambda ^{0}+1)}{(\lambda ^{0})^{2}}} &{} {\frac{3}{(\lambda ^{0})^{2}(1-\lambda ^{0})}} \\ {-\frac{3(\lambda ^{0}+1)}{(\lambda ^{0})^{2}}} &{} {\frac{3(3 \lambda ^{0}+1)}{(\lambda ^{0})^{3}}} &{} {-\frac{3(2 \lambda ^{0}+1)}{(\lambda ^{0})^{3}(1-\lambda ^{0})}} \\ {\frac{3}{(\lambda ^{0})^{2}(1-\lambda ^{0})}} &{} {-\frac{3(2 \lambda ^{0}+1)}{(\lambda ^{0})^{3}(1-\lambda ^{0})}} &{} {\frac{3}{(\lambda ^{0})^{3}(1-\lambda ^{0})^{3}}} \end{array}\right) . \end{aligned}$$

Note that the above equations have been derived in Perron and Zhu (2005), p. 100. Recalling the first equation in (9), we have

$$\begin{aligned} S_{X X}/k_T^2&=(\beta _{b}^{0})^{2}\left[ k_T^{-2}(\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})'(\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})\right. \nonumber \\&\quad \left. -k_{T}^{-2}(\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})' \varvec{X}_{T_{1}}(\varvec{X}_{T_{1}}' \varvec{X}_{T_{1}})^{-1} \varvec{X}_{T_{1}}' (\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})\right] . \end{aligned}$$
(21)

Consider the second term on the right-hand side of (21) first. By some simple algebra, we have

$$\begin{aligned} k_{T}^{-1}(\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})'\varvec{X}_{T_{1}} \varvec{D}_{T}^{-1 / 2}&=k_{T}^{-1}T^{1/2}|T_{1}-T_{1}^{0}|T^{-1/2}{\tilde{\iota }}_{b}'\varvec{X}_{T_{1}}\varvec{D}_{T}^{-1/2}\nonumber \\&=m_T T^{-1/2}{\tilde{\iota }}_{b}'\varvec{X}_{T_{1}}\varvec{D}_{T}^{-1/2}\nonumber \\&= m_{T}\left( 1-\lambda ^{0}, \frac{1-(\lambda ^{0})^{2}}{2}, \frac{(1-\lambda ^{0})^{2}}{2}\right) +o(1), \end{aligned}$$
(22)

which, together with (20), implies that

$$\begin{aligned}&k_{T}^{-1}(\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})' \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1 / 2}(\varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}}' \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1 / 2})^{-1}\nonumber \\&\quad = m_{T}\left( -\frac{1-\lambda ^{0}}{2}, \frac{3(1-\lambda ^{0})}{2 \lambda ^{0}}, \frac{3(2 \lambda ^{0}-1)}{2 \lambda ^{0}(1-\lambda ^{0})}\right) +o(1). \end{aligned}$$
(23)

Combining (22) and (23) together leads to

$$\begin{aligned}&k_{T}^{-2}(\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})' \varvec{X}_{T_{1}}(\varvec{X}_{T_{1}}' \varvec{X}_{T_{1}})^{-1} \varvec{X}_{T_{1}}' (\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})\nonumber \\&\quad =k_{T}^{-1}(\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})' \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1 / 2}(\varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}}' \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1 / 2})^{-1} \varvec{D}_{T}^{-1 / 2}\varvec{X}_{T_{1}}'(\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})k_{T}^{-1}\nonumber \\&\quad =\frac{(1-\lambda ^{0})(4-\lambda ^{0})}{4} m_{T}^{2}+o(m_T)+o(1); \end{aligned}$$
(24)

cf. Perron and Zhu (2005), p. 101. For the first term on the right-hand side of (21), we have

$$\begin{aligned} k_{T}^{-2}(\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})'(\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})= k_{T}^{-2}|T_{1}-T_{1}^{0}|^{2}{\tilde{\iota }}_{b}'{\tilde{\iota }}_{b}= (1-\lambda ^{0}) m_{T}^{2}+o(1). \end{aligned}$$
(25)

Therefore, inserting (24) and (25) into (21), we have

$$\begin{aligned} S_{X X}/k_T^2=(\beta _{b}^{0})^{2}\frac{(1-\lambda ^{0}) \lambda ^{0}}{4} m_{T}^{2}+o(m_T)+o(1). \end{aligned}$$
(26)

Next, consider the term \(S_{X U}/k_T^2\). First, using (10), we have

$$\begin{aligned} S_{X U}&= \beta _{b}^{0}|T_{1}-T_{1}^{0}|{{\tilde{\iota }}}_{b}^{\prime }(\varvec{I}-\varvec{P}_{T_{1}}) \varvec{U}\nonumber \\&= \beta _{b}^{0}k_{T}^{2}(|T_{1}-T_{1}^{0}|T^{1/2}k_{T}^{-1})(T^{-1/2}k_{T}^{-1}){{\tilde{\iota }}}_{b}'(\varvec{I}-\varvec{P}_{T_{1}}) \varvec{U}\nonumber \\&= \beta _{b}^{0}k_{T}^{2}m_T\left[ (T^{-1/2}k_{T}^{-1}){{\tilde{\iota }}}_{b}' \varvec{U}\right. \nonumber \\&\quad -\left. T^{-1/2}{{\tilde{\iota }}}_{b}' \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1/2}(\varvec{D}_{T}^{-1/2} \varvec{X}_{T_{1}}' \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1/2})^{-1} \varvec{D}_{T}^{-1/2}\varvec{X}_{T_{1}}' \varvec{U}k_{T}^{-1}\right] . \end{aligned}$$
(27)

Applying Lemma 6.1, we have

$$\begin{aligned} (T^{-1/2}k_{T}^{-1}){{\tilde{\iota }}}_{b}' \varvec{U}= k_{T}^{-1}T^{-1 / 2} \sum _{t=T_{1}^{0}+1}^{T} u_{t}+o_p(1){\mathop {\rightarrow }\limits ^{d}}-\frac{\sigma }{c}\int _{\lambda ^{0}}^{1} d W(r), \end{aligned}$$
(28)

and

$$\begin{aligned} k_{T}^{-1}\varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}}^{\prime } \varvec{U}=&\left( \begin{array}{c} {k_T^{-1}T^{-1 / 2} \sum _{t=1}^{T} u_{t}} \\ {k_T^{-1}T^{-3 / 2} \sum _{t=1}^{T} t u_{t}} \\ {k_T^{-1}T^{-3 / 2} \sum _{t=T_{1}^{0}+1}^{T}(t-T_{1}^{0}) u_{t}+o_p(1)} \end{array} \right) \nonumber \\ {\mathop {\rightarrow }\limits ^{d}}&\left( \begin{array}{c} -\frac{\sigma }{c} W(1) \\ -\frac{\sigma }{c} \int _{0}^{1} r d W(r) \\ -\frac{\sigma }{c} \int _{\lambda ^{0}}^{1}(r-\lambda ^{0}) d W(r) \end{array} \right) . \end{aligned}$$
(29)

It is easy to check that (28) and (29) hold jointly. Then, from (20), (22), and (27)-(29), it follows that

$$\begin{aligned}&S_{X U}/(k_{T}^{2}m_T)\nonumber \\&\quad {\mathop {\rightarrow }\limits ^{d}}\beta _{b}^{0}\left( -\frac{\sigma }{c}\int _{\lambda ^{0}}^{1} d W(r)-\left( 1-\lambda ^{0}, \frac{1-(\lambda ^{0})^{2}}{2}, \frac{(1-\lambda ^{0})^{2}}{2}\right) \Sigma _{a}^{-1}\left( \begin{array}{c} -\frac{\sigma }{c} W(1) \\ -\frac{\sigma }{c} \int _{0}^{1} r d W(r) \\ -\frac{\sigma }{c} \int _{\lambda ^{0}}^{1}(r-\lambda ^{0}) d W(r) \end{array} \right) \right) \nonumber \\&\quad =\beta _{b}^{0}(-\frac{\sigma }{c})\zeta \end{aligned}$$
(30)

with

$$\begin{aligned} \zeta =\int _{0}^{\lambda ^{0}} \frac{\lambda ^{0}-(\lambda ^{0})^{2}-3r+3r\lambda ^{0}}{2 \lambda ^{0}} d W(r)+ \int _{\lambda ^{0}}^{1}\frac{\lambda ^{0}(2+\lambda ^{0}-3r)}{2(1-\lambda ^{0})}d W(r)\sim N\left( 0,\frac{\lambda ^{0}(1-\lambda ^{0})}{4}\right) . \end{aligned}$$
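The stated variance of \(\zeta\) can be verified by the Itô isometry. Writing the first integrand as \((1-\lambda ^{0})(\lambda ^{0}-3r)/(2\lambda ^{0})\), a direct calculation gives

$$\begin{aligned} \mathrm{Var}(\zeta )&=\int _{0}^{\lambda ^{0}}\frac{(1-\lambda ^{0})^{2}(\lambda ^{0}-3r)^{2}}{4(\lambda ^{0})^{2}}\,dr+\int _{\lambda ^{0}}^{1}\frac{(\lambda ^{0})^{2}(2+\lambda ^{0}-3r)^{2}}{4(1-\lambda ^{0})^{2}}\,dr\\&=\frac{(1-\lambda ^{0})^{2}}{4(\lambda ^{0})^{2}}\cdot (\lambda ^{0})^{3}+\frac{(\lambda ^{0})^{2}}{4(1-\lambda ^{0})^{2}}\cdot (1-\lambda ^{0})^{3}\\&=\frac{\lambda ^{0}(1-\lambda ^{0})^{2}+(\lambda ^{0})^{2}(1-\lambda ^{0})}{4}=\frac{\lambda ^{0}(1-\lambda ^{0})}{4}, \end{aligned}$$

since \(\int _{0}^{\lambda ^{0}}(\lambda ^{0}-3r)^{2}\,dr=(\lambda ^{0})^{3}\) and \(\int _{\lambda ^{0}}^{1}(2+\lambda ^{0}-3r)^{2}\,dr=(1-\lambda ^{0})^{3}\).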

Therefore, it follows from (26) and (30) that

$$\begin{aligned} k_{T}^{-1}T^{3/2}({\hat{\lambda }}-\lambda ^{0})=&m_{T}^{*}=\underset{m_{T} \in D(C)}{\arg \min }\left[ S_{X X}/k_{T}^{2}+2S_{X U}/k_{T}^{2}+o_{p}(1)\right] \nonumber \\ {\mathop {\rightarrow }\limits ^{d}}&\frac{4 \sigma \zeta }{c \beta _{b}^{0} \lambda ^{0}(1-\lambda ^{0})} \end{aligned}$$
(31)

by some simple algebra and the continuous mapping theorem for \(\arg \max / \arg \min\) functions, cf. Kim and Pollard (1990). The proof is complete. \(\square\)
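The least squares estimation scheme analyzed above can be illustrated by a small simulation. The sketch below (in Python; all parameter values, and the choice \(k_T=T^{1/2}\), are illustrative rather than taken from the paper) generates the trend-break model with mildly integrated AR(1) errors and computes the break date by grid-search least squares:

```python
import numpy as np

def simulate_break_model(T=400, lam0=0.5, mu1=1.0, beta1=0.5, beta_b=2.0,
                         c=-1.0, alpha=0.5, seed=0):
    """Simulate y_t = mu1 + beta1*t + beta_b*B_{T1}(t) + u_t, where
    B_{T1}(t) = (t - T1) * 1{t > T1} and u_t is a mildly integrated AR(1)
    process with rho_T = 1 + c/k_T, k_T = T**alpha."""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / T ** alpha
    eps = rng.standard_normal(T)
    u = np.empty(T)
    u[0] = eps[0]
    for t in range(1, T):
        u[t] = rho * u[t - 1] + eps[t]
    T1 = int(lam0 * T)
    t_idx = np.arange(1, T + 1)
    B = np.where(t_idx > T1, t_idx - T1, 0.0)
    return mu1 + beta1 * t_idx + beta_b * B + u, T1

def ls_break_estimate(y, trim=0.15):
    """Grid-search least squares estimate of the break date: for each
    candidate T1, regress y on [1, t, B_{T1}(t)] and minimize the SSR."""
    T = len(y)
    t_idx = np.arange(1, T + 1)
    best_ssr, best_T1 = np.inf, None
    for T1 in range(int(trim * T), int((1 - trim) * T)):
        B = np.where(t_idx > T1, t_idx - T1, 0.0)
        X = np.column_stack([np.ones(T), t_idx, B])
        resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        ssr = resid @ resid
        if ssr < best_ssr:
            best_ssr, best_T1 = ssr, T1
    return best_T1

y, T1_true = simulate_break_model()
T1_hat = ls_break_estimate(y)
```

With a sizable slope change relative to the error scale, the grid minimizer recovers the break date up to an \(O_{p}(k_{T}T^{-1/2})\) error, in line with (31).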

Proof of Theorem 2.4

First, we rewrite \({\hat{\gamma }}\) as follows:

$$\begin{aligned} {\hat{\gamma }}&=(\varvec{X}_{{\hat{T}}_{1}}' \varvec{X}_{{\hat{T}}_{1}})^{-1} \varvec{X}_{{\hat{T}}_{1}}' Y\\&=(\varvec{X}_{{\hat{T}}_{1}}' \varvec{X}_{{\hat{T}}_{1}})^{-1} \varvec{X}_{{\hat{T}}_{1}}' \varvec{X}_{T_{1}^{0}} \gamma ^{0}+(\varvec{X}_{{\hat{T}}_{1}}' \varvec{X}_{{\hat{T}}_{1}})^{-1} \varvec{X}_{{\hat{T}}_{1}}' \varvec{U}\\&=\gamma ^{0}+\varvec{D}_{T}^{-1 / 2}(\varvec{D}_{T}^{-1 / 2} \varvec{X}_{{\hat{T}}_{1}}' \varvec{X}_{{\hat{T}}_{1}} \varvec{D}_{T}^{-1/2})^{-1} \varvec{D}_{T}^{-1 / 2} \varvec{X}_{{\hat{T}}_{1}}'(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{{\hat{T}}_{1}}) \gamma ^{0}\\&\quad +\varvec{D}_{T}^{-1 / 2}(\varvec{D}_{T}^{-1/2} \varvec{X}_{{\hat{T}}_{1}}' \varvec{X}_{{\hat{T}}_{1}} \varvec{D}_{T}^{-1 / 2})^{-1} \varvec{D}_{T}^{-1 / 2} \varvec{X}_{{\hat{T}}_{1}}' \varvec{U}. \end{aligned}$$

This yields

$$\begin{aligned}&\varvec{D}_{T}^{1/2}({\hat{\gamma }}-\gamma ^{0})\nonumber \\&\quad =(\varvec{D}_{T}^{-1/2} \varvec{X}_{{\hat{T}}_{1}}^{\prime } \varvec{X}_{{\hat{T}}_{1}} \varvec{D}_{T}^{-1/2})^{-1}[ \varvec{D}_{T}^{-1/2} \varvec{X}_{{\hat{T}}_{1}}'(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{{\hat{T}}_{1}}) \gamma ^{0}+\varvec{D}_{T}^{-1 / 2} \varvec{X}_{{\hat{T}}_{1}}' \varvec{U} ]. \end{aligned}$$
(32)

It follows from (22), (29), (31) and the fact that they hold jointly that

$$\begin{aligned}&k_{T}^{-1} [\varvec{D}_{T}^{-1/2} \varvec{X}_{{\hat{T}}_{1}}'(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{{\hat{T}}_{1}}) \gamma ^{0}+\varvec{D}_{T}^{-1/2} \varvec{X}_{{\hat{T}}_{1}}' \varvec{U}] \nonumber \\&\quad =k_{T}^{-1}\varvec{D}_{T}^{-1/2} \varvec{X}_{{\hat{T}}_{1}}' \beta _{b}^{0}|{\hat{T}}_{1}-T_{1}^{0}| {\tilde{\iota }}_{b}+k_{T}^{-1} \varvec{D}_{T}^{-1/2} \varvec{X}_{{\hat{T}}_{1}}' \varvec{U}\nonumber \\&\quad =|{\hat{T}}_{1}-T_{1}^{0}| k_{T}^{-1}T^{1/2}\beta _{b}^{0}T^{-1/2}\varvec{D}_{T}^{-1/2} \varvec{X}_{{\hat{T}}_{1}}' {\tilde{\iota }}_{b}+k_{T}^{-1}\varvec{D}_{T}^{-1/2} \varvec{X}_{{\hat{T}}_{1}}' \varvec{U}\nonumber \\&\quad = m_{T}^{*} \beta _{b}^{0} T^{-1/2} \varvec{D}_{T}^{-1/2} \varvec{X}_{{\hat{T}}_{1}}' {\tilde{\iota }}_{b}+k_{T}^{-1}\varvec{D}_{T}^{-1/2} \varvec{X}_{{\hat{T}}_{1}}' \varvec{U} \nonumber \\&\qquad {\mathop {\rightarrow }\limits ^{d}}\frac{4 \sigma \zeta }{c\lambda ^{0}(1-\lambda ^{0})} \left( \begin{array}{c} {1-\lambda ^{0}} \\ {\frac{1-(\lambda ^{0})^{2}}{2}} \\ {\frac{(1-\lambda ^{0})^{2}}{2}} \end{array}\right) +(-\frac{\sigma }{c}) \left( \begin{array}{c} {\int _{0}^{1} d W(r)} \\ {\int _{0}^{1} r d W(r)} \\ {\int _{\lambda ^{0}}^{1}(r-\lambda ^{0}) d W(r)} \end{array}\right) =:\xi . \end{aligned}$$
(33)

Then, by recalling \((\varvec{D}_{T}^{-1/2} \varvec{X}_{{\hat{T}}_{1}}' \varvec{X}_{{\hat{T}}_{1}} \varvec{D}_{T}^{-1/2})^{-1} {\mathop {\rightarrow }\limits ^{p}}\Sigma _{a}^{-1}\) in (20) and applying (32) and (33), we have \(k_{T}^{-1} \varvec{D}_{T}^{1/2}({\hat{\gamma }}-\gamma ^{0}){\mathop {\rightarrow }\limits ^{d}}\Sigma _{a}^{-1} \xi\). The proof is complete. \(\square\)

Proof of (4)

Recall that the residuals are

$$\begin{aligned} {\hat{u}}_t=y_t-{\hat{y}}_t=(\mu _{1}^0+\beta _{1}^0 t+\beta _{b}^0 B_{T_{1}^0}(t)+u_{t})-({\hat{\mu }}_{1}+{\hat{\beta }}_{1} t+{\hat{\beta }}_{b} B_{{\hat{T}}_{1}}(t)),\quad t=1,\ldots , T. \end{aligned}$$

Note that, under the case where \(k_T=o(T^{1/2})\), we have

$$\begin{aligned}&\frac{1}{T}\sum _{t=1}^T (\mu _{1}^0-{\hat{\mu }}_{1})^2=O_p(k_T^2/T)=o_p(1), \\&\frac{1}{T}\sum _{t=1}^T (\beta _{1}^0-{\hat{\beta }}_{1})^2 t^2=(\beta _{1}^0-{\hat{\beta }}_{1})^2 \cdot O(T^2)= O_p(k_T^2/T^3)\cdot O(T^2)=o_p(1), \\&\frac{1}{T}\sum _{t=1}^T (\beta _{b}^0-{\hat{\beta }}_{b})^2 B^2_{T_{1}^0}(t)=(\beta _{b}^0-{\hat{\beta }}_{b})^2 \cdot O(T^2)= O_p(k_T^2/T^3)\cdot O(T^2)=o_p(1), \end{aligned}$$
$$\begin{aligned}&\frac{1}{T}\sum _{t=1}^T {\hat{\beta }}^2_{b}(B_{T_{1}^0}(t)-B_{{\hat{T}}_{1}}(t))^2\nonumber \\&\quad =O_p(1)\cdot \frac{1}{T}\sum _{t=1}^T (B_{T_{1}^0}(t)-B_{{\hat{T}}_{1}}(t))^2\nonumber \\&\quad =\left\{ \begin{array}{ll} O_p(1)\cdot \frac{1}{T}\left[ \sum _{t=T_1^0+1}^{{\hat{T}}_1}(t-T_1^0)^2+\sum _{t={\hat{T}}_1+1}^{T}({\hat{T}}_1-T_1^0)^2\right] , &{}\qquad \mathrm{if}~ {\hat{T}}_1\ge T_1^0 \\ O_p(1)\cdot \frac{1}{T}\left[ \sum _{t={\hat{T}}_1+1}^{T_1^0}(t-{\hat{T}}_1)^2+\sum _{t=T_1^0+1}^{T}({\hat{T}}_1-T_1^0)^2\right] , &{}\qquad \mathrm{if}~ {\hat{T}}_1< T_1^0 \end{array} \right. \nonumber \\&\quad =O_p(1)\cdot \frac{1}{T} [o_p(1)+T\cdot o_p(1)]=o_p(1) \end{aligned}$$

and

$$\begin{aligned} \frac{1}{T}\sum _{t=1}^T u_t^2{\mathop {\rightarrow }\limits ^{p}}\sigma ^2, \end{aligned}$$

by the results in Theorems 2.3 and 2.4 and the law of large numbers. Therefore, it is easy to see that

$$\begin{aligned} {\hat{\sigma }}^2&=\frac{1}{T}\sum _{t=1}^T {\hat{u}}_t^2\nonumber \\&=\frac{1}{T}\sum _{t=1}^T [(\mu _{1}^0-{\hat{\mu }}_{1})+(\beta _{1}^0-{\hat{\beta }}_{1})t+(\beta _{b}^0-{\hat{\beta }}_{b})B_{T_{1}^0}(t)+{\hat{\beta }}_{b}(B_{T_{1}^0}(t)-B_{{\hat{T}}_{1}}(t))+u_t]^2 \end{aligned}$$

is a consistent estimate of \(\sigma ^2\), by the Cauchy–Schwarz inequality. Moreover, it is not hard to show that

$$\begin{aligned} \max _{1\le t\le T}|{\hat{u}}_t-u_t|=O_p(k_T/\sqrt{T}), \end{aligned}$$

which leads to

$$\begin{aligned} {\hat{\rho }}_T=\frac{\sum _{t=2}^T{\hat{u}}_t{\hat{u}}_{t-1}}{\sum _{t=2}^T{\hat{u}}_{t-1}^2}=\frac{\sum _{t=2}^T u_t u_{t-1}}{\sum _{t=2}^T u_{t-1}^2}\cdot (1+o_p(1))=(\rho _T+o_p(1))\cdot (1+o_p(1))=\rho _T+o_p(1). \end{aligned}$$

Based on the above arguments, it follows from Theorems 2.3 and 2.4 that

$$\begin{aligned} \frac{({\hat{\rho }}_T-1){\hat{\beta }}_b\sqrt{{\hat{\lambda }}(1-{\hat{\lambda }})}}{2{\hat{\sigma }}}\sqrt{T}({\hat{T}}_1-T_1^{0}) {\mathop {\rightarrow }\limits ^{d}}N(0,1), \end{aligned}$$

as desired. \(\square\)
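In practice, the pivotal statistic in (4) can be inverted to form a confidence interval for the break date, with all unknowns replaced by their estimates. The following is a minimal sketch (the function name and the plug-in values in the example are hypothetical, not from the paper's empirical study):

```python
from math import sqrt

Z_975 = 1.959963984540054  # standard normal 97.5% quantile

def break_date_ci(T1_hat, rho_hat, beta_b_hat, sigma_hat, T, z=Z_975):
    """95% confidence interval for the true break date T1^0, inverting the
    asymptotically standard normal pivot
    (rho_hat - 1) * beta_b_hat * sqrt(lam_hat*(1-lam_hat)) / (2*sigma_hat)
        * sqrt(T) * (T1_hat - T1^0)."""
    lam_hat = T1_hat / T
    half = z * 2 * sigma_hat / (abs(rho_hat - 1) * abs(beta_b_hat)
                                * sqrt(lam_hat * (1 - lam_hat)) * sqrt(T))
    return T1_hat - half, T1_hat + half
```

For instance, with \({\hat{T}}_1=200\), \({\hat{\rho }}_T=0.95\), \({\hat{\beta }}_b=1\), \({\hat{\sigma }}=1\) and \(T=400\), the interval is roughly \(200\pm 7.84\); a larger \(|{\hat{\rho }}_T-1|\) (i.e., errors further from a unit root) or a larger break magnitude shrinks the interval.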


Zhu, X., Pang, T. Inference on a structural break in trend with mildly integrated errors. J. Korean Stat. Soc. 51, 282–307 (2022). https://doi.org/10.1007/s42952-021-00140-6
