Abstract
In this paper, we study a regression model with a break in the trend regressor, in which the model errors are assumed to be mildly integrated. To be precise, we suppose the model errors are generated by an AR(1) process with autoregressive coefficient \(\rho _{T}=1+{c}/{k_{T}}\), where T is the sample size, c is a negative constant, and \(\{k_T, T\in {\mathbb {N}}\}\) is a sequence of positive constants diverging to infinity such that \(k_T=o(T)\). We estimate the break date/break fraction and the other parameters in the model by least squares. The asymptotic properties of the estimates, including consistency, rates of convergence, and limiting distributions, are examined. The results derived in this paper bridge the findings of Perron and Zhu (Journal of Econometrics 129:65–119, 2005), who estimated the break date/break fraction in the trend regressor under I(0) and I(1) model errors. We also show that a phase transition in the estimation error of the least squares estimate of the break date occurs when \(k_{T}\) has the same order of magnitude as \(T^{1/2}\). Monte Carlo simulations and an empirical study illustrate the finite-sample performance of the estimates.
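To make the setup concrete, the following Python sketch (an illustration, not the authors' code; the function names, parameter values, and the choice \(k_T=T^{1/2}\) are assumptions) simulates a broken linear trend \(y_t=\mu +\beta t+\delta (t-T_{1}^{0})_{+}+u_t\) with mildly integrated AR(1) errors and recovers the break date by a least squares grid search over candidate break dates:

```python
import numpy as np

def simulate_broken_trend(T, mu=1.0, beta=0.5, delta=1.0, lam0=0.5,
                          c=-1.0, seed=0):
    """Simulate y_t = mu + beta*t + delta*(t - T1)_+ + u_t, where the
    errors follow u_t = rho_T * u_{t-1} + eps_t with rho_T = 1 + c/k_T."""
    rng = np.random.default_rng(seed)
    k_T = np.sqrt(T)                      # assumed rate: k_T = T^{1/2} = o(T)
    rho = 1.0 + c / k_T                   # mildly integrated: c < 0
    eps = rng.standard_normal(T)
    u = np.zeros(T)
    for t in range(T):
        u[t] = (rho * u[t - 1] if t > 0 else 0.0) + eps[t]
    t_idx = np.arange(1, T + 1)
    T1 = int(lam0 * T)                    # true break date T_1^0
    y = mu + beta * t_idx + delta * np.maximum(t_idx - T1, 0) + u
    return y, T1

def ls_break_date(y, trim=0.05):
    """Least squares grid search: for each candidate T1, regress y on
    (1, t, (t - T1)_+) and return the T1 that minimizes the SSR."""
    T = len(y)
    t_idx = np.arange(1, T + 1)
    best = (np.inf, None)
    for T1 in range(int(trim * T), int((1 - trim) * T)):
        X = np.column_stack([np.ones(T), t_idx, np.maximum(t_idx - T1, 0)])
        resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        best = min(best, (resid @ resid, T1))
    return best[1]

y, T1_true = simulate_broken_trend(T=400)
T1_hat = ls_break_date(y)
```

With a sizable slope change the estimated break date should land close to the true \(T_1^0\); the accuracy of the break fraction estimate depends on how fast \(k_T\) grows relative to \(T^{1/2}\), in line with the phase transition described above.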
References
Bai, J. (1994). Least squares estimation of a shift in linear processes. Journal of Time Series Analysis, 15(5), 453–472.
Bai, J. (1997). Estimation of a change point in multiple regressions. Review of Economics and Statistics, 79(4), 551–563.
Bai, J., & Perron, P. (1998). Estimating and testing linear models with multiple structural changes. Econometrica, 66(1), 47–78.
Billingsley, P. (1999). Convergence of probability measures (2nd ed.). New York: Wiley.
Bolt, J., & van Zanden, J. L. (2020). Maddison style estimates of the evolution of the world economy. A new 2020 update. Maddison Project Database, version 2020.
Chan, N. H., & Wei, C. Z. (1987). Asymptotic inference for nearly nonstationary AR(1) processes. The Annals of Statistics, 15(3), 1050–1063.
Chang, S. Y., & Perron, P. (2016). Inference on a structural break in trend with fractionally integrated errors. Journal of Time Series Analysis, 37(4), 555–574.
Chong, T. T. L. (2001). Structural change in AR(1) models. Econometric Theory, 17(1), 87–155.
Enikeeva, F., & Harchaoui, Z. (2019). High-dimensional change-point detection under sparse alternatives. The Annals of Statistics, 47(4), 2051–2079.
Fryzlewicz, P., & Rao, S. S. (2014). Multiple-change-point detection for auto-regressive conditional heteroscedastic processes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(5), 903–924.
Halunga, A. G., & Osborn, D. R. (2012). Ratio-based estimators for a change point in persistence. Journal of Econometrics, 171(1), 24–31.
Hansen, B. E. (2001). The new econometrics of structural change: Dating breaks in US labor productivity. Journal of Economic Perspectives, 15(4), 117–128.
Harvey, D. I., Leybourne, S. J., & Taylor, A. M. R. (2006). Modified tests for a change in persistence. Journal of Econometrics, 134(2), 441–469 (Corrigendum, Journal of Econometrics, 168(2):407).
Iacone, F., Leybourne, S. J., & Taylor, A. M. R. (2019). Testing the order of fractional integration of a time series in the possible presence of a trend break at an unknown point. Econometric Theory, 35, 1201–1233.
Kejriwal, M., & Lopez, C. (2013). Unit roots, level shifts, and trend breaks in per capita output: A robust evaluation. Econometric Reviews, 32(8), 892–927.
Kejriwal, M., Perron, P., & Zhou, J. (2013). Wald tests for detecting multiple structural changes in persistence. Econometric Theory, 29(2), 289–323.
Kim, D. (2011). Estimating a common deterministic time trend break in large panels with cross sectional dependence. Journal of Econometrics, 164(2), 310–330.
Kim, D., & Perron, P. (2009). Unit root tests allowing for a break in the trend function at an unknown time under both the null and alternative hypotheses. Journal of Econometrics, 148(1), 1–13.
Kim, J., & Pollard, D. (1990). Cube root asymptotics. The Annals of Statistics, 18(1), 191–219.
Lee, S., Seo, M. H., & Shin, Y. (2016). The lasso for high dimensional regression with a possible change point. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(1), 193–210.
Pang, T., Chong, T. T. L., Zhang, D., & Liang, Y. (2018). Structural change in nonstationary AR(1) models. Econometric Theory, 34(5), 985–1017.
Perron, P., & Yabu, T. (2009). Testing for shifts in trend with an integrated or stationary noise component. Journal of Business and Economic Statistics, 27(3), 369–396.
Perron, P., & Zhu, X. (2005). Structural breaks with deterministic and stochastic trends. Journal of Econometrics, 129(1), 65–119.
Phillips, P. C. B. (1987). Towards a unified asymptotic theory for autoregression. Biometrika, 74(3), 535–547.
Phillips, P. C. B., & Magdalinos, T. (2007). Limit theory for moderate deviations from a unit root. Journal of Econometrics, 136(1), 115–130.
Phillips, P. C. B., & Shi, S. P. (2018). Financial bubble implosion and reverse regression. Econometric Theory, 34(4), 705–753.
Phillips, P. C. B., Shi, S., & Yu, J. (2015). Testing for multiple bubbles: Historical episodes of exuberance and collapse in the S&P 500. International Economic Review, 56(4), 1043–1078.
Phillips, P. C. B., Wu, Y., & Yu, J. (2011). Explosive behavior in the 1990s Nasdaq: When did exuberance escalate asset values? International Economic Review, 52(1), 201–226.
Stock, J. (1991). Confidence intervals for the largest autoregressive root in US macroeconomic time series. Journal of Monetary Economics, 28(3), 435–459.
Wang, T., & Samworth, R. J. (2018). High dimensional change point estimation via sparse projection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(1), 57–83.
The study is partially supported by the National Natural Science Foundation of China (No. 11871425), the Zhejiang Provincial Natural Science Foundation of China (No. LY19A010022), and the Fundamental Research Funds for the Central Universities (No. 2021XZZX002).
Appendix
In this section, we provide the proofs of the results in Sect. 2. To begin, we present a lemma on the asymptotic properties of mildly integrated AR(1) processes, which is of independent interest and may be useful in other work.
Lemma 6.1
Under Assumption 1, the following results hold jointly:
(1) \(\frac{1}{k_{T}^{1 / 2}} u_{\lfloor T s \rfloor } \Rightarrow \sigma \int _{0}^{\infty } \exp (c r)\, d W(r)\), \(0< s\le 1\);

(2) \(\frac{1}{k_{T}T^{1 / 2}} \sum _{t=1}^{\lfloor T s \rfloor } u_{t} \Rightarrow -\frac{\sigma }{c} W(s)\), \(0\le s\le 1\);

(3) \(\frac{1}{k_{T}T^{3 / 2}} \sum _{t=1}^{\lfloor T s \rfloor } {t}u_{t} \Rightarrow -\frac{\sigma }{c}\int _{0}^{s} {r}\, d W(r)\), \(0\le s\le 1\).
Proof
Part (1) is taken from Pang et al. (2018), so we only need to prove parts (2) and (3).
We first prove part (2). It is easy to see that \(u_t-u_{t-1}=-(1-\rho _T)u_{t-1}+\varepsilon _t\), which is equivalent to
Since \(u_{0}=o_{p}(\sqrt{k_{T}})\), using (5) and the fact that \(u_{\lfloor T s \rfloor }=O_p(\sqrt{k_T})\), shown in part (1), one has
Then, applying the functional central limit theorem to the sequence \(\{\varepsilon _t, t\ge 1\}\) leads to
as desired.
We now prove part (3). Denote \(S_T(r)=\frac{1}{k_{T}T^{1/2}}\sum _{t=1}^{{\lfloor T r \rfloor }}u_{t}\), \(0\le r\le 1\). Then, we have
by part (2) just proved and the continuous mapping theorem. The proof is complete. \(\square\)
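The normalization in part (2) of Lemma 6.1 can be checked numerically. The Monte Carlo sketch below is an illustration under assumed values \(c=-1\), \(\sigma =1\), \(k_T=T^{1/2}\) (not part of the paper): at \(s=1\) the statistic \(\frac{1}{k_T T^{1/2}}\sum _{t=1}^{T} u_t\) should be approximately distributed as \(-\frac{\sigma }{c}W(1)\sim N(0,(\sigma /c)^2)=N(0,1)\).

```python
import numpy as np

# Monte Carlo check of Lemma 6.1(2): (k_T T^{1/2})^{-1} sum_{t<=T} u_t
# should be approximately N(0, (sigma/c)^2) at s = 1.
# Assumed illustrative values: c = -1, sigma = 1, k_T = T^{1/2}.
T, reps, c = 2000, 500, -1.0
k_T = np.sqrt(T)
rho = 1.0 + c / k_T                      # mildly integrated AR(1) coefficient
rng = np.random.default_rng(42)

eps = rng.standard_normal((reps, T))     # innovations with sigma = 1
u = np.zeros(reps)                       # u_0 = 0 satisfies u_0 = o_p(k_T^{1/2})
acc = np.zeros(reps)
for t in range(T):                       # u_t = rho * u_{t-1} + eps_t
    u = rho * u + eps[:, t]
    acc += u                             # running partial sum of u_t

stat = acc / (k_T * np.sqrt(T))          # normalized partial sums, one per path
mean_hat, var_hat = stat.mean(), stat.var()
```

Across replications the sample mean should be near 0 and the sample variance near \((\sigma /c)^2=1\), up to finite-sample and Monte Carlo error.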
Next, we introduce an inequality which is taken from Perron and Zhu (2005) and plays an essential role in the proof of asymptotic theory for Model (2). Since \(\varvec{P}_{T_{1}^{0}} \varvec{X}_{T_{1}^{0}}=\varvec{X}_{T_{1}^{0}}\) and \(\varvec{X}_{{\hat{T}}_{1}}'(\varvec{I}-\varvec{P}_{{\hat{T}}_{1}})=0\), it is true for all \({\hat{T}}_1\) that
Define
Inequality (6) implies that for all \({\hat{T}}_1\),
Proof of Theorem 2.1
We prove the result by contradiction. Recalling the definition of \({\hat{T}}_1\) in (1), it is true that
since \(\mathrm{SSR}({\lambda }^{0})=\varvec{Y}^{\prime }(\varvec{I}-\varvec{P}_{T_{1}^{0}}) \varvec{Y}\) is independent of \(T_1\). Note that
In what follows, we only consider the case where \(T_{1}\ge T_{1}^{0}\) since the case where \(T_{1}<T_{1}^{0}\) can be handled similarly.
For \(T_{1}\ge T_{1}^{0}\), we define
Note that when \(T_1=T_{1}^{0}\), \({\tilde{\iota }}_{b}(t; T_1)={\tilde{\iota }}_{b}(t; T_1^0)\) is understood as
Denote
It is clear that \({\tilde{\iota }}_{b}(\lfloor T r\rfloor ; T_1)\) converges to a continuous function \(f_{{\tilde{\iota }}_{b}}(r)\) over [0, 1], where for \(\lambda >\lambda ^{0}\),
and for \(\lambda =\lambda ^{0}\),
Next, we shall deal with \(S_{XX}\), \(S_{XU}\) and \(S_{UU}\) separately, and aim to find the dominating term/terms among them.
We first analyze the term \(S_{XX}\). Observing that
one has, uniformly in \(\lambda \in (0,1)\),
since \({\tilde{\iota }}_{b}'(\varvec{I}-\varvec{P}_{T_{1}}) {\tilde{\iota }}_{b}=O(T)\) (cf. Perron and Zhu 2005, p. 97).
Consider the term \(S_{X U}\). Firstly, applying (8) leads to
Define \(f_{{\tilde{\iota }}_{b}}^{*}(r)\) as the projection residual of a least squares regression of \(f_{{\tilde{\iota }}_{b}}(r)\) on \((1, r, f_{B}(r))\), where \(f_{B}(r)=(r-\lambda )I\{r\ge \lambda \}\). By the continuous mapping theorem and part (2) of Lemma 6.1, we have
Similar to the proof of Lemma 1.a in Perron and Zhu (2005), we have \(\int _{0}^{1} f_{{\tilde{\iota }}_{b}}^{*}(r) d r=\int _{0}^{1}(f_{{\tilde{\iota }}_{b}}(r)-{\hat{\alpha }}-{\hat{\beta }} r-{\hat{\psi }} f_{B}(r)) d r=O(1)\), where \({\hat{\alpha }}\), \({\hat{\beta }}\) and \({\hat{\psi }}\) are the estimated coefficients of the regression model mentioned above, and \(\int _{0}^{1}(f_{{\tilde{\iota }}_{b}}^{*}(r))^{2} d r=O(1)\) uniformly in \(\lambda \in (0,1)\). Therefore, it is easy to deduce that \(E\left( \int _{0}^{1}f_{{\tilde{\iota }}_{b}}^{*}(r) d W(r)\right) =0\) and \(Var\left( \int _{0}^{1} f_{{\tilde{\iota }}_{b}}^{*}(r) d W(r)\right) =\int _{0}^{1}(f_{{\tilde{\iota }}_{b}}^{*}(r))^{2} d r=O(1)\). The above arguments imply that \(\int _{0}^{1} f_{{\tilde{\iota }}_{b}}^{*}(r)d W(r)= O_p(1)\), which together with (11) further implies that \({\tilde{\iota }}_{b}'(\varvec{I}-\varvec{P}_{T_{1}}) \varvec{U}=O_p(k_{T}T^{1/2})\). Thus,
uniformly in \(\lambda \in (0,1)\).
Next, we consider the term \(S_{U U}\). Define
We have
Applying Lemma 6.1, we have
Additionally, it is easy to see that, uniformly in \(\lambda \in (0,1)\),
Then, we have the following consequences:
(1) \(\varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}}' \varvec{X}_{T_{1}} \varvec{D}_{T}^{-1 / 2}\) is of order O(1) uniformly in \(\lambda \in (0,1)\), and \(\varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}^{0}}^{\prime } \varvec{X}_{T_{1}^{0}} \varvec{D}_{T}^{-1 / 2}\) is of order O(1); cf. Perron and Zhu (2005), p. 98.
(2) \(\varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}}' \varvec{U}\) is of order \(O_p(k_{T})\) uniformly in \(\lambda \in (0,1)\), and \(\varvec{U}^{\prime } \varvec{X}_{T_{1}^{0}} \varvec{D}_{T}^{-1 / 2}\) is of order \(O_p(k_{T})\), since it follows from (14) that
$$\begin{aligned} k_{T}^{-1}\varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}}^{\prime } \varvec{U}= \left( \begin{array}{c} {k_T^{-1}T^{-1 / 2} \sum _{t=1}^{T} u_{t}} \\ {k_T^{-1}T^{-3 / 2} \sum _{t=1}^{T} t u_{t}} \\ {k_T^{-1}T^{-3 / 2} \sum _{t=T_{1}+1}^{T}(t-T_{1}) u_{t}} \end{array} \right) =O_p(1), \end{aligned}$$(15)and \(k_{T}^{-1}\varvec{D}_{T}^{-1 / 2} \varvec{X}_{T_{1}^0}^{\prime } \varvec{U}=O_p(1)\) by similar arguments.
(3) The order of \(\varvec{D}_{T}^{-1/2}(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}})' \varvec{U}\) is not higher than \(|T_{1}-T_{1}^{0}| O_{p}(k_{T}T^{-1})\) uniformly in \(\lambda \in (0,1)\), for the following reason. Since the first two columns of \(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}}\) are zero, we only need to consider the third column. Firstly, we write
$$\begin{aligned}&T^{-3 / 2} (\varvec{B}_{T_{1}^{0}}-\varvec{B}_{T_{1}})'\varvec{U}\nonumber \\&\quad = T^{-3 / 2}(T_{1}-T_{1}^{0}){\tilde{\iota }}'_{b}\varvec{U}\nonumber \\&\quad =T^{-3 / 2} \sum _{t=T_{1}^{0}+1}^{T_{1}}(t-T_{1}^{0}) u_{t} + T^{-3 / 2}(T_{1}-T_{1}^{0}) \sum _{t=T_{1}+1}^{T} u_{t}. \end{aligned}$$(16)Then, we shall show that the stochastic order of \(\sum _{t=T_{1}^{0}+1}^{T_{1}}(t-T_{1}^{0}) u_{t}\) is not higher than that of \(\sum _{t=T_{1}^{0}+1}^{T_{1}}(T_{1}-T_{1}^{0}) u_{t}\), that is,
$$\begin{aligned} \frac{\sum _{t=T_{1}^{0}+1}^{T_{1}}(t-T_{1}^{0}) u_{t}}{\sum _{t=T_{1}^{0}+1}^{T_{1}}(T_{1}-T_{1}^{0}) u_{t}}\le O_p(1). \end{aligned}$$(17)Write
$$\begin{aligned} \sum _{t=T_{1}^{0}+1}^{T_{1}}(t-T_{1}^{0}) u_{t}=&\sum _{k=1}^{T_{1}-T_{1}^{0}} k u_{T_{1}^{0}+k}\nonumber \\ =&\sum _{k=1}^{T_{1}-T_{1}^{0}} k \rho _{T}^{k} u_{T_{1}^{0}}+\sum _{k=1}^{T_{1}-T_{1}^{0}} k\sum _{j=T_{1}^{0}+1}^{T_{1}^{0}+k} \rho _{T}^{T_{1}^{0}+k-j} \varepsilon _{j}\nonumber \\ =&\sum _{k=1}^{T_{1}-T_{1}^{0}} k \rho _{T}^{k} u_{T_{1}^{0}}+\sum _{j=T_{1}^{0}+1}^{T_{1}}\left( \sum _{k=j-T_{1}^{0}}^{T_{1}-T_{1}^{0}} k \rho _{T}^{T_{1}^{0}+k-j}\right) \varepsilon _{j}, \end{aligned}$$and
$$\begin{aligned} \sum _{t=T_{1}^{0}+1}^{T_{1}}(T_{1}-T_{1}^{0}) u_{t} =&\sum _{k=1}^{T_{1}-T_{1}^{0}} (T_{1}-T_{1}^{0}) u_{T_{1}^{0}+k} \nonumber \\ =&\sum _{k=1}^{T_{1}-T_{1}^{0}} (T_{1}-T_{1}^{0}) \rho _{T}^{k} u_{T_{1}^{0}}+(T_{1}-T_{1}^{0})\sum _{k=1}^{T_{1}-T_{1}^{0}} \sum _{j=T_{1}^{0}+1}^{T_{1}^{0}+k} \rho _{T}^{T_{1}^{0}+k-j} \varepsilon _{j} \nonumber \\ =&\sum _{k=1}^{T_{1}-T_{1}^{0}}(T_{1}-T_{1}^{0}) \rho _{T}^{k} u_{T_{1}^{0}}+\sum _{j=T_{1}^{0}+1}^{T_{1}}\left( \sum _{k=j-T_{1}^{0}}^{T_{1}-T_{1}^{0}} (T_{1}-T_{1}^{0}) \rho _{T}^{T_{1}^{0}+k-j}\right) \varepsilon _{j}. \end{aligned}$$It is easy to see that
$$\begin{aligned} 0\le \sum _{k=1}^{T_{1}-T_{1}^{0}} k \rho _{T}^{k}\le \sum _{k=1}^{T_{1}-T_{1}^{0}}(T_{1}-T_{1}^{0}) \rho _{T}^{k} \end{aligned}$$and
$$\begin{aligned} \sum _{j=T_{1}^{0}+1}^{T_{1}}\left( \sum _{k=j-T_{1}^{0}}^{T_{1}-T_{1}^{0}} k \rho _{T}^{T_{1}^{0}+k-j}\right) ^2\le \sum _{j=T_{1}^{0}+1}^{T_{1}}\left( \sum _{k=j-T_{1}^{0}}^{T_{1}-T_{1}^{0}} (T_{1}-T_{1}^{0}) \rho _{T}^{T_{1}^{0}+k-j}\right) ^2. \end{aligned}$$Thus, the stochastic orders of \(\sum _{k=1}^{T_{1}-T_{1}^{0}} k \rho _{T}^{k} u_{T_{1}^{0}}\) and \(\sum _{j=T_{1}^{0}+1}^{T_{1}}(\sum _{k=j-T_{1}^{0}}^{T_{1}-T_{1}^{0}} k \rho _{T}^{T_{1}^{0}+k-j}) \varepsilon _{j}\) are not higher than those of \(\sum _{k=1}^{T_{1}-T_{1}^{0}} (T_{1}-T_{1}^{0}) \rho _{T}^{k} u_{T_{1}^{0}}\) and \(\sum _{j=T_{1}^{0}+1}^{T_{1}}(\sum _{k=j-T_{1}^{0}}^{T_{1}-T_{1}^{0}} (T_{1}-T_{1}^{0}) \rho _{T}^{T_{1}^{0}+k-j}) \varepsilon _{j}\) respectively. That is, (17) is true. Next, we consider the following term:
$$\begin{aligned} T^{-3 / 2} \sum _{t=T_{1}^{0}+1}^{T_{1}}(T_{1}-T_{1}^{0}) u_{t} + T^{-3 / 2}(T_{1}-T_{1}^{0}) \sum _{t=T_{1}+1}^{T} u_{t}=T^{-3 / 2} \sum _{t=T_{1}^{0}+1}^{T}(T_{1}-T_{1}^{0}) u_{t}. \end{aligned}$$Recalling part (2) of Lemma 6.1, we have
$$\begin{aligned}&T^{-3 / 2}(T_{1}-T_{1}^{0}) \sum _{t=T_{1}^{0}+1}^{T} u_{t}\nonumber \\&\quad =T^{-3 / 2}(T_{1}-T_{1}^{0}) \sum _{t=1}^{T} u_{t}-T^{-3 / 2}(T_{1}-T_{1}^{0}) \sum _{t=1}^{T_{1}^{0}} u_{t}\nonumber \\&\quad =|T_{1}-T_{1}^{0}| O_{p}(k_T T^{-1}). \end{aligned}$$This implies that the stochastic order of \(\varvec{D}_{T}^{-1/2}(\varvec{X}_{T_{1}^{0}}-\varvec{X}_{T_{1}})' \varvec{U}\) is not higher than \(|T_{1}-T_{1}^{0}|O_{p}(k_{T}T^{-1})\) uniformly in \(\lambda \in (0,1)\), as desired.
(4) \(\varvec{D}_{T}^{-1 / 2}(\varvec{X}_{T_{1}}' \varvec{X}_{T_{1}}-\varvec{X}_{T_{1}^{0}}' \varvec{X}_{T_{1}^{0}}) \varvec{D}_{T}^{-1 / 2}\) is of order \(|T_{1}-T_{1}^{0}| O(T^{-1})\) uniformly in \(\lambda \in (0,1)\); cf. Perron and Zhu (2005), pp. 98–99.
Combining (13) and the above results (1)–(4) together, we have
uniformly in \(\lambda \in (0,1)\).
Results (9), (12) and (18) imply that, for the estimate \({\hat{T}}_{1}\), we have
Suppose \({\hat{\lambda }}\) does not converge in probability to \(\lambda ^{0}\). Then \(S_{{\hat{X}}{\hat{X}}}=O_p(T^{3})\), \(S_{{\hat{X}}{\hat{U}}}=O_p(k_{T}T^{3/2})\) and \(S_{{\hat{U}}{\hat{U}}}\le O_p(k_{T}^{2})\). As a result, for large enough T, the term \(S_{{\hat{X}}{\hat{X}}}\) dominates the other two terms, so inequality (7) fails with probability approaching one, since \(S_{{\hat{X}}{\hat{X}}}\ge 0\) almost surely. However, inequality (7) holds for all T, a contradiction. Thus, we have \({\hat{\lambda }} {\mathop {\rightarrow }\limits ^{p}} \lambda ^{0}\). The proof is complete. \(\square\)
Proof of Theorem 2.2
Given a small \(\epsilon >0\), we define \(V(\epsilon )=\lbrace T_{1}:~|T_{1}-T_{1}^{0}|<\epsilon T \rbrace\). It follows from Theorem 2.1 that \({\text {Pr}}({\hat{T}}_{1} \in V(\epsilon )) \rightarrow 1\). Moreover, given a large \(C>0\), we define
Then, to prove the theorem it suffices to show that
Recalling (9), (12) and (18), it is not hard to see that for each \(T_{1}\) falling into the set \(V(C,\epsilon )\), we have
where a is a positive constant. Therefore, we can choose C large enough to have
which implies (19). The proof is complete. \(\square\)
Proof of Theorem 2.3
Define the set
for some positive constant C, and
We shall derive the limiting distribution by analyzing \(\underset{T_{1} \in D(C)}{\arg \min }[{\text {SSR}}(\lambda )-{\text {SSR}}(\lambda ^{0})]\). For any \(T_{1} \in D(C)\), we have \(|T_{1}-T_{1}^{0}|=O(k_{T}T^{-1/2})\). Hence, \(S_{X X}=|T_{1}-T_{1}^{0}|^{2} O(T)=O(k_{T}^{2})\), \(S_{X U}=|T_{1}-T_{1}^{0}| O_{p}(k_{T}T^{1/2})=O_{p}(k_{T}^{2})\) and \(S_{U U} \le |T_{1}-T_{1}^{0}| O_{p}(k_{T}^{2}T^{-1})=O_{p}(k_{T}^{3}T^{-3/2})\). Then,
Therefore, we only need to concentrate on the terms \(S_{X X} / k_{T}^{2}\) and \(2S_{X U} / k_{T}^{2}\).
Consider the term \(S_{X X} / k_{T}^{2}\) first. Using \(|\lambda -\lambda ^{0}|=O(k_{T}T^{-3/2})\), it is true that
and
with
Note that the above equations have been derived in Perron and Zhu (2005), p. 100. Recalling the first equation in (9), we have
Consider the second term on the right-hand side of (21) first. By some simple algebra, we have
which together with (20) imply that
Combining (22) and (23) together leads to
cf. Perron and Zhu (2005), p. 101. For the first term on the right-hand side of (21), we have
Therefore, inserting (24) and (25) into (21), we have
Consider the term \(S_{X U}/k_T^2\). Firstly, using (10), we have
Applying Lemma 6.1, we have
and
It is easy to check that (28) and (29) hold jointly. Then, from (20), (22), (27)-(29), it is true that
with
Therefore, it follows from (26) and (30) that
by some simple algebra and the continuous mapping theorem for \(\arg \max / \arg \min\) functions, cf. Kim and Pollard (1990). The proof is complete. \(\square\)
Proof of Theorem 2.4
Firstly, we rewrite \({\hat{\gamma }}\) as follows:
This yields
It follows from (22), (29), (31) and the fact that they hold jointly that
Then, by recalling \((\varvec{D}_{T}^{-1/2} \varvec{X}_{{\hat{T}}_{1}}' \varvec{X}_{{\hat{T}}_{1}} \varvec{D}_{T}^{-1/2})^{-1} {\mathop {\rightarrow }\limits ^{p}}\Sigma _{a}^{-1}\) in (20) and applying (32) and (33), we have \(k_{T}^{-1} \varvec{D}_{T}^{1/2}({\hat{\gamma }}-\gamma ^{0}){\mathop {\rightarrow }\limits ^{d}}\Sigma _{a}^{-1} \xi\). The proof is complete. \(\square\)
Proof of (4)
Recall that the residuals
Note that, under the case where \(k_T=o(T^{1/2})\), we have
and
by the results in Theorems 2.3 and 2.4 and the law of large numbers. Therefore, it is easy to see that
is a consistent estimate of \(\sigma ^2\) by the Cauchy–Schwarz inequality. Moreover, it is not hard to show that
which leads to
Based on the above arguments, it follows from Theorems 2.3 and 2.4 that
as desired. \(\square\)
Zhu, X., Pang, T. Inference on a structural break in trend with mildly integrated errors. J. Korean Stat. Soc. 51, 282–307 (2022). https://doi.org/10.1007/s42952-021-00140-6