A generalized mixture integer-valued GARCH model

  • Original Paper
Statistical Methods & Applications

Abstract

We propose a generalized mixture integer-valued generalized autoregressive conditional heteroscedastic model to provide a more flexible modeling framework. This model includes many mixture integer-valued models with different distributions already studied in the literature. The conditional and unconditional moments are discussed and the necessary and sufficient first- and second-order stationary conditions are derived. We also investigate the theoretical properties such as strict stationarity and ergodicity for the mixture process. The conditional maximum likelihood estimators via the EM algorithm are derived and the performances of the estimators are studied via simulation. The model can be selected in terms of both the number of mixture regimes and the number of orders in each regime by several different criteria. A real-life data example is also given to assess the performance of the model.
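For readers who want to experiment with the model class described above, the following minimal sketch simulates a mixture integer-valued GARCH process. It is an assumption-laden illustration, not the paper's code: it specializes to Poisson conditional distributions and regimes of order (1, 1), and the function name and argument layout are ours.

```python
import numpy as np

def simulate_mixture_ingarch(n, alpha, omega, a, b, seed=0):
    """Simulate a K-regime mixture Poisson INGARCH(1,1) process: at each t
    a regime k is drawn with probability alpha_k, X_t is Poisson(lambda_kt),
    and every regime intensity is updated as
    lambda_{k,t+1} = omega_k + a_k * X_t + b_k * lambda_{kt}."""
    rng = np.random.default_rng(seed)
    alpha, omega, a, b = map(np.asarray, (alpha, omega, a, b))
    lam = omega / (1.0 - a - b)               # crude regime-wise starting values
    x = np.zeros(n, dtype=int)
    for t in range(n):
        k = rng.choice(len(alpha), p=alpha)   # active regime at time t
        x[t] = rng.poisson(lam[k])            # count drawn from that regime
        lam = omega + a * x[t] + b * lam      # update all regime intensities
    return x

# example: two regimes with mixing weights 0.6 / 0.4
x = simulate_mixture_ingarch(500, [0.6, 0.4], [1.0, 2.0], [0.2, 0.3], [0.3, 0.2])
```

All regime intensities are updated at every step (only the drawn regime generates the count), matching the latent-regime structure of mixture INGARCH models.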



Acknowledgements

We are very grateful to the Editor and the anonymous referee for providing several exceptionally helpful comments which led to a significant improvement of the paper.

Author information

Corresponding author

Correspondence to Yan Cui.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work is supported by National Natural Science Foundation of China (Nos. 11871027, 11731015), Science and Technology Developing Plan of Jilin Province (No. 20170101057JC), and Cultivation Plan for Excellent Young Scholar Candidates of Jilin University.

Appendix

Proof of Theorem 1

Let \(\mu _t=\mathbb {E}(X_t)=\sum _{k=1}^K \alpha _k\mathbb {E}(\lambda _{kt})\) for all \(t\in \mathbb {Z}\). Following the idea of Lemma 1 in Doukhan et al. (2018), under Assumption 1 the mean process \(\lambda _{kt}\) admits an infinite representation in terms of \(X_t\), i.e.,

$$\begin{aligned} \lambda _{kt}&=C_{k0}+\sum _{i=1}^{L}\alpha _{ki}X_{t-i}+ \sum _{l=1}^{\infty }\sum _{j_1,\ldots ,j_{l+1}=1}^{L}\alpha _{kj_{l+1}}\beta _{kj_1} \cdots \beta _{kj_l}X_{t-j_1-\cdots -j_{l+1}} \nonumber \\&=C_{k0}+\sum _{l=0}^{\infty }\sum _{j_1,\ldots ,j_{l+1}=1}^{L} \alpha _{kj_{l+1}}\beta _{kj_1}\cdots \beta _{kj_l}X_{t-j_1-\cdots -j_{l+1}}, \end{aligned}$$

where \(C_{k0}=\alpha _{k0}+\sum _{l=1}^{\infty }\sum _{j_1,\ldots ,j_l=1}^{L} \alpha _{k0}\beta _{kj_1}\cdots \beta _{kj_l}= \alpha _{k0}/\left( 1-\sum _{j=1}^{L}\beta _{kj}\right) .\) Therefore, we have

$$\begin{aligned} \mu _t=&\sum _{k=1}^{K}\alpha _{k}C_{k0}+\sum _{i=1}^{L}\sum _{k=1}^{K}\alpha _{k}\alpha _{ki} \mu _{t-i} \\&+\sum _{l=1}^{\infty }\sum _{j_1,\ldots ,j_{l+1}=1}^{L}\sum _{k=1}^{K}\alpha _{k} \alpha _{kj_{l+1}}\beta _{kj_1}\cdots \beta _{kj_l}\mu _{t-j_1-\cdots -j_{l+1}} \\ =&\sum _{k=1}^{K}\alpha _{k}C_{k0}+\sum _{l=0}^{\infty }\sum _{j_1,\ldots ,j_{l+1}=1}^{L} \sum _{k=1}^{K}\alpha _{k}\alpha _{kj_{l+1}}\beta _{kj_1}\cdots \beta _{kj_l} \mu _{t-j_1-\cdots -j_{l+1}}. \end{aligned}$$
(A.1)

The necessary and sufficient condition for a non-homogeneous difference equation (A.1) to have a stable solution, which is finite and independent of t, is that all roots of the equation

$$\begin{aligned} 1-\sum _{k=1}^K\alpha _k\left( \sum _{i=1}^{p_k}\alpha _{ki}z^{-i}\right) \sum _{l=0}^\infty \left( \sum _{j=1}^{q_k}\beta _{kj}z^{-j}\right) ^l=0 \end{aligned}$$

should lie inside the unit circle. Since Assumption 1 guarantees that \(\sum _{j=1}^{q_k}\beta _{kj}<1\) for \(k=1,\ldots ,K\), Eq. (2) follows.
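The root condition can be checked numerically. The sketch below is a specialization under stated assumptions, not the paper's code: it takes K regimes of order (1, 1). Writing \(x=z^{-1}\), summing the geometric series in \(l\) and clearing denominators turns the characteristic equation into the polynomial \(\prod _k(1-\beta _{k1}x)-\sum _k\alpha _k\alpha _{k1}x\prod _{m\ne k}(1-\beta _{m1}x)=0\), and first-order stationarity requires every root to satisfy \(|x|>1\) (the extra roots \(x=1/\beta _{m1}\) introduced by clearing denominators already satisfy this under Assumption 1). The function name is illustrative.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def first_order_stationary(alpha, a, b):
    """Check the Theorem-1 root condition for K mixture INGARCH(1,1) regimes.

    alpha : mixing weights; a, b : regime-specific ARCH/GARCH coefficients.
    With x = z^{-1}, builds prod_k (1 - b_k x)
    - sum_k alpha_k a_k x prod_{m != k} (1 - b_m x)
    and requires all of its roots to satisfy |x| > 1.
    """
    K = len(alpha)
    poly = np.array([1.0])
    for bk in b:
        poly = P.polymul(poly, [1.0, -bk])            # prod_k (1 - b_k x)
    for k in range(K):
        term = np.array([0.0, alpha[k] * a[k]])       # alpha_k a_k x
        for m in range(K):
            if m != k:
                term = P.polymul(term, [1.0, -b[m]])  # times prod_{m!=k}(1 - b_m x)
        poly = P.polysub(poly, term)
    return bool(np.all(np.abs(P.polyroots(poly)) > 1.0))
```

For two regimes with \(\alpha =(0.5,0.5)\), \(a=(0.2,0.3)\), \(b=(0.3,0.2)\) the polynomial roots are 2 and 4, so the condition holds; pushing \(a=(0.9,0.9)\), \(b=(0.3,0.3)\) produces a root inside the unit disk and the check fails.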

Proof of Theorem 2

Let \(\gamma _{it}=\mathbb {E}(X_tX_{t-i})\) for \(i=0,1,\ldots ,L\), and consider the conditional second moment:

$$\begin{aligned} \gamma _{0t}=\sum _{k=1}^{K }\alpha _{k}v_{k0}\mathbb {E}(\lambda _{kt})+\sum _{k=1}^{K }\alpha _{k}(1+v_{k1})\mathbb {E}(\lambda _{kt}^2). \end{aligned}$$

For \(k=1,\ldots ,K\), we have

$$\begin{aligned} \mathbb {E}(\lambda _{kt}^2)=\alpha _{k0}\mathbb {E}(\lambda _{kt})+\sum _{i=1}^{L}\alpha _{ki} \mathbb {E}(X_{t-i}\lambda _{kt})+\sum _{j=1}^{L}\beta _{kj}\mathbb {E}(\lambda _{k(t-j)}\lambda _{kt}). \end{aligned}$$

Recalling the infinite representation of \(\lambda _{kt}\) under Assumption 1, we can calculate the expectations of \(X_{t-i}\lambda _{kt}\) and \(\lambda _{k(t-j)}\lambda _{kt}\) as follows:

$$\begin{aligned}&\sum _{i=1}^{L}\alpha _{ki}\mathbb {E}(X_{t-i}\lambda _{kt})\\&\quad = C_{k0}\mu \sum _{i=1}^{L}\alpha _{ki}+\mathbb {E}\left( \sum _{l=0}^{\infty }\sum _{j_1,\ldots ,j_{l+2}=1}^{L}\alpha _{kj_{l+1}}\alpha _{kj_{l+2}} \beta _{kj_1}\cdots \beta _{kj_l}X_{t-j_1-\cdots -j_{l+1}}X_{t-j_{l+2}}\right) \\&\quad = C_{k0}\mu \sum _{i=1}^{L}\alpha _{ki} +\sum _{l=0}^{\infty }\sum _{i=1}^{L}\left( \sum _{j_1+\cdots +j_{l+1}=i}^{L} \alpha _{kj_{l+1}}\alpha _{ki}\beta _{kj_1}\cdots \beta _{kj_l}\gamma _{0(t-i)}\right. \\&\qquad +\, \left. \sum _{v=1}^{L-1}\sum _{j_1+\cdots +j_{l+1}\ne i}^{L}\alpha _{kj_{l+1}}\alpha _{ki}\beta _{kj_1}\cdots \beta _{kj_l}\gamma _{vt}\right) \\&\quad = C_{k0}\mu \sum _{i=1}^{L}\alpha _{ki}+\sum _{i=1}^{L}\left( \sum _{l=0}^{\infty } \sum _{j_1+\cdots +j_{l+1}=i}^{L}\alpha _{kj_{l+1}}\alpha _{ki}\beta _{kj_1}\cdots \beta _{kj_l}\right) \gamma _{0(t-i)}\\&\qquad + \sum _{v=1}^{L-1}\left( \sum _{l=0}^{\infty }\sum _{j_1+\cdots +j_{l+1}\ne i}^{L}\alpha _{kj_{l+1}}\alpha _{ki}\beta _{kj_1}\cdots \beta _{kj_l}\right) \gamma _{vt}. \end{aligned}$$

Similarly, we can get

$$\begin{aligned} \sum _{j=1}^{L}&\beta _{kj}\mathbb {E}(\lambda _{k(t-j)}\lambda _{kt})= C_{k0}^2\sum _{j=1}^{L}\beta _{kj}+2C_{k0}\mu \sum _{l=0}^{\infty }\sum _{j_1,\ldots ,j_{l+2}=1}^{L}\alpha _{kj_{l+2}}\beta _{kj_1} \cdots \beta _{kj_{l+1}}+\\ \sum _{i=1}^{L}&\left( \sum _{l,l^{'}=0}^{\infty }\sum _{j_1+\cdots +j_{l+2}=j_1^{'}+\cdots +j_{l^{'}+1}^{'}=i}^{L}\alpha _{kj_{l+2}}\beta _{kj_1} \cdots \beta _{kj_{l+1}}\alpha _{kj^{'}_{l^{'}+1}}\beta _{kj^{'}_1} \cdots \beta _{kj^{'}_{l^{'}}}\right) \gamma _{0(t-i)}+\\ \sum _{v=1}^{L-1}&\left( \sum _{l,l^{'}=0}^{\infty } \sum _{|j_1+\cdots +j_{l+2}-j_1^{'}-\cdots -j_{l^{'}+1}^{'}|=v}^{L} \alpha _{kj_{l+2}}\beta _{kj_1}\cdots \beta _{kj_{l+1}} \alpha _{kj^{'}_{l^{'}+1}}\beta _{kj^{'}_1} \cdots \beta _{kj^{'}_{l^{'}}}\right) \gamma _{vt}. \end{aligned}$$

Hence

$$\begin{aligned} \mathbb {E}(\lambda _{kt}^2)=C_k+\sum _{i=1}^{L}\Delta _{ki} \gamma _{0(t-i)}+\sum _{v=1}^{L-1}\Lambda _{kv}\gamma _{vt}, \end{aligned}$$

where \(C_{k0}=\alpha _{k0}+\sum _{l=1}^{\infty }\sum _{j_1,\ldots ,j_l=1}^{L}\alpha _{k0} \beta _{kj_1}\cdots \beta _{kj_l}\), and \(C_k=\alpha _{k0}\mathbb {E}(\lambda _{kt})+C_{k0}\mu \sum _{i=1}^{L}\alpha _{ki}+C_{k0}^2 \sum _{j=1}^{L}\beta _{kj}+2C_{k0}\mu \sum _{l=0}^{\infty }\sum _{j_1,\ldots ,j_{l+2}=1}^{L} \alpha _{kj_{l+2}}\beta _{kj_1}\cdots \beta _{kj_{l+1}}\) is a constant independent of t under the first-order stationarity condition,

$$\begin{aligned}&\Delta _{ki}=\sum _{l=0}^{\infty }\sum _{j_1+\cdots +j_{l+1}=i}^{L}\alpha _{kj_{l+1}} \alpha _{ki}\beta _{kj_1}\cdots \beta _{kj_l}+\\&\sum _{l,l^{'}=0}^{\infty }\sum _{j_1+\cdots +j_{l+2}=j_1^{'}+\cdots +j_{l^{'}+1}^{'}=i}^{L} \alpha _{kj_{l+2}}\beta _{kj_1}\cdots \beta _{kj_{l+1}}\alpha _{kj^{'}_{l^{'}+1}}\beta _{kj^{'}_1} \cdots \beta _{kj^{'}_{l^{'}}}, \end{aligned}$$
$$\begin{aligned}&\Lambda _{kv}=\sum _{l=0}^{\infty }\sum _{j_1+\cdots +j_{l+1}\ne i}^{L}\alpha _{kj_{l+1}}\alpha _{ki}\beta _{kj_1}\cdots \beta _{kj_l}+\\&\sum _{l,l^{'}=0}^{\infty }\sum _{|j_1+\cdots +j_{l+2}-j_1^{'}-\cdots -j_{l^{'}+1}^{'}|=v}^{L} \alpha _{kj_{l+2}}\beta _{kj_1}\cdots \beta _{kj_{l+1}}\alpha _{kj^{'}_{l^{'}+1}}\beta _{kj^{'}_1} \cdots \beta _{kj^{'}_{l^{'}}}. \end{aligned}$$

Then

$$\begin{aligned} \gamma _{0t}=\sum _{k=1}^{K }\alpha _{k}v_{k0}\mathbb {E}(\lambda _{kt})+\sum _{k=1}^{K }\alpha _{k}(1+v_{k1})\left( C_k+\sum _{i=1}^{L}\Delta _{ki}\gamma _{0(t-i)}+\sum _{v=1}^{L-1} \Lambda _{kv}\gamma _{vt}\right) . \end{aligned}$$

The conditional mean of the process is \(\mathbb {E}(X_t|\mathcal {F}_{t-1})=\sum _{k=1}^{K}\alpha _{k}\lambda _{kt}\), which is the same as that of the model proposed by Diop et al. (2016). After some tedious calculations, we get

$$\begin{aligned} \gamma _{it}=K_1+\sum _{l=0}^{\infty }\sum _{k=1}^{K}\alpha _{k}\delta _{i0kl}\gamma _{0(t-i)}+ \sum _{l=0}^{\infty }\sum _{k=1}^{K}\sum _{v=1}^{L-1}\alpha _{k}\delta _{ivkl}\gamma _{vt}, \end{aligned}$$

where \(\delta _{ivkl}=\sum _{|i-j_1-\cdots -j_{l+1}|=v}\alpha _{kj_{l+1}}\beta _{kj_1}\cdots \beta _{kj_{l}}\) and \(K_1=\mu \sum _{l=0}^{\infty }\sum _{k=1}^{K}\sum _{j_1,\ldots ,j_l=1}^{L}\alpha _{k0}\alpha _{k} \beta _{kj_1}\cdots \beta _{kj_{l}}\).

We obtain, for \(i=1,\ldots ,L\),

$$\begin{aligned} K_1+\omega _{i0}\gamma _{0(t-i)}+\sum _{v=1}^{L-1}\omega _{iv}\gamma _{vt}=0, \end{aligned}$$

where \(\omega _{i0}=\sum _{l=0}^{\infty }\sum _{k=1}^{K}\alpha _{k}\delta _{i0kl}\), \(\omega _{iv}=\sum _{l=0}^{\infty }\sum _{k=1}^{K}\alpha _{k}\delta _{ivkl}\) \((i\ne v)\) and \(\omega _{ii}=\sum _{l=0}^{\infty }\sum _{k=1}^{K}\alpha _{k}\delta _{iikl}-1\). Let \(\Gamma =(\omega _{ij})_{i,j=1}^{L-1}\) and let \(\Gamma ^{-1}=(b_{ij})_{i,j=1}^{L-1}\) denote its inverse, whose existence is a consequence of first-order stationarity (for more details, see Appendix B of Diop et al. 2016). Then \(\gamma _{vt}=-K_1\sum _{u=1}^{L-1}b_{vu}-\sum _{u=1}^{L-1}b_{vu}\omega _{u0}\gamma _{0(t-u)}\). Finally,
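Numerically, the \((L-1)\times (L-1)\) linear system above is solved directly rather than by forming \(\Gamma ^{-1}\). The sketch below is illustrative (the names and argument shapes are ours); it assumes \(K_1\), the \(\omega _{i0}\), \(\Gamma \), and the lagged values \(\gamma _{0(t-i)}\) are already available:

```python
import numpy as np

def solve_cross_moments(K1, omega0, Gamma, gamma0_lags):
    """Solve K1 + omega_{i0} * gamma_{0,t-i} + sum_v omega_{iv} * gamma_{vt} = 0
    for the cross-moments gamma_{vt}, i, v = 1, ..., L-1.

    omega0      : (L-1,) vector of omega_{i0}
    Gamma       : (L-1, L-1) matrix of omega_{iv}
    gamma0_lags : (L-1,) lagged second moments gamma_{0,t-i}
    """
    rhs = -(K1 + np.asarray(omega0) * np.asarray(gamma0_lags))
    return np.linalg.solve(Gamma, rhs)   # gamma_vt = -Gamma^{-1} (K1 + omega0*gamma0)
```

Solving the system directly is numerically preferable to inverting \(\Gamma \), and the residual of the defining equations can be checked to validate the solution.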

$$\begin{aligned}&\gamma _{0t}\\&\quad = \sum _{k=1}^{K }\alpha _{k}v_{k0}\mathbb {E}(\lambda _{kt})+ \sum _{k=1}^{K}\alpha _{k}(1+v_{k1})\left( C_k+\sum _{i=1}^{L} \Delta _{ki}\gamma _{0(t-i)}+\sum _{v=1}^{L-1}\Lambda _{kv}\gamma _{vt}\right) \\&\quad = \sum _{k=1}^{K }\alpha _{k}v_{k0}\mathbb {E}(\lambda _{kt}) +\sum _{k=1}^{K}\alpha _{k}(1+v_{k1})\\&\qquad \times \,\left[ C_k+\sum _{u=1}^{L}\Delta _{ku}\gamma _{0(t-u)} +\sum _{v=1}^{L-1}\Lambda _{kv}\left( -K_1\sum _{u=1}^{L-1}b_{vu}- \sum _{u=1}^{L-1}b_{vu}\omega _{u0}\gamma _{0(t-u)}\right) \right] . \end{aligned}$$

Hence

$$\begin{aligned} \gamma _{0t}=c_0+\sum _{k=1}^{K}\alpha _{k}(1+v_{k1}) \left[ \sum _{u=1}^{L-1}\left( \Delta _{ku}-\sum _{v=1}^{L-1} \Lambda _{kv}b_{vu}\omega _{u0}\right) \gamma _{0(t-u)}+ \Delta _{kL}\gamma _{0(t-L)}\right] , \end{aligned}$$
(A.2)

where \(c_0=\sum _{k=1}^K\alpha _kv_{k0}\mathbb {E}(\lambda _{kt})+\sum _{k=1}^K\alpha _k (1+v_{k1})C_k-K_1\sum _{k=1}^K\alpha _k(1+v_{k1})\sum _{v=1}^{L-1} \Lambda _{kv}\sum _{u=1}^{L-1}b_{vu}\). Let \(c_u=\sum _{k=1}^K\alpha _k(1+v_{k1}) \left( \Delta _{ku}-\sum _{v=1}^{L-1}\Lambda _{kv}b_{vu}\omega _{u0}\right) \) for \(u=1,\ldots ,L-1\), and \(c_L=\sum _{k=1}^K\alpha _k(1+v_{k1})\Delta _{kL}\). Then Eq. (A.2) is equivalent to:

$$\begin{aligned} \gamma _{0t}=c_0+\sum _{u=1}^{L}c_u\gamma _{0(t-u)}. \end{aligned}$$
(A.3)

Therefore, the non-homogeneous difference equation (A.3) has a stable solution if all roots of the equation \(1-c_1z^{-1}-\cdots -c_Lz^{-L}=0\) lie inside the unit circle.
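This stability condition is straightforward to verify numerically: multiplying the equation by \(z^L\) gives \(z^L-c_1z^{L-1}-\cdots -c_L=0\), whose roots must all lie inside the unit circle. A minimal sketch (function name ours, coefficients assumed given):

```python
import numpy as np

def has_stable_solution(c):
    """Check whether gamma_{0t} = c_0 + sum_u c_u gamma_{0,t-u} admits a
    stable (finite, t-independent) solution.

    c : coefficients (c_1, ..., c_L) of the difference equation (A.3).
    Requires all roots of z^L - c_1 z^{L-1} - ... - c_L = 0 to satisfy |z| < 1.
    """
    coeffs = np.concatenate(([1.0], -np.asarray(c, dtype=float)))
    roots = np.roots(coeffs)   # np.roots expects highest-degree coefficient first
    return bool(np.all(np.abs(roots) < 1.0))
```

For example, \((c_1,c_2)=(0.4,0.3)\) passes the check, while \((0.8,0.5)\) produces a root outside the unit circle and fails.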

Proof of Theorem 3

The proof is based on checking Condition 3.1 of Doukhan and Wintenberger (2008), since Condition 3.2 is assumed and Condition 3.3 of the same article holds trivially. As discussed before, the NB-MINGARCH model can be approximated by an NB-MARCH(\(\infty \)) model under Assumption 1. Thus, similar to (3), it can also be written in the following form:

$$\begin{aligned} X_t&=\mathrm{MN}_t(0,Z_{kt})= F(X_{t-1},X_{t-2},\ldots ;\epsilon _t),\\ \lambda _{kt}&=f_k(X_{t-1},X_{t-2},\ldots ), \end{aligned}$$

where \(\epsilon _t=(\widetilde{N}_t,\eta _t)\) is the noise sequence and \(\widetilde{N}_t,~\eta _t\) and \(Z_{kt}\) are independent for \(t\in \mathbb {Z}\). Set \(\mathbf {x}=(x_1,x_2,\ldots )\) and \(\tilde{\mathbf {x}}=(\tilde{x}_1,\tilde{x}_2,\ldots )\in \mathbb {N}^\infty \). Then, we have

$$\begin{aligned} \mathbb {E}|F(\mathbf {x},\epsilon _t)-F(\tilde{\mathbf {x}},\epsilon _t)|&= \mathbb {E}\{\mathbb {E}(|\mathrm{MN}_t(0,Z_{kt}f_k(\mathbf {x}))-\mathrm{MN}_t(0,Z_{kt}f_k(\tilde{\mathbf {x}}))|/\eta _t)\}\\&=\sum _{k=1}^K\alpha _k \mathbb {E}\{\mathbb {E}(|\widetilde{N}_t(0,Z_{kt}f_k(\mathbf {x}))- \widetilde{N}_t(0,Z_{kt}f_k(\tilde{\mathbf {x}}))|/Z_{kt})\}\\&=\sum _{k=1}^K\alpha _k \mathbb {E}(Z_{kt}|f_k(\mathbf {x})-f_k(\tilde{\mathbf {x}})|)\\&=\sum _{k=1}^K\alpha _k |f_k(\mathbf {x})-f_k(\tilde{\mathbf {x}})|\\&\le \sum _{k=1}^K\alpha _k\sum _{i=1}^\infty c_{ki}|x_i-\tilde{x}_i|, \end{aligned}$$

where \(\{c_{ki}\}\) is a sequence of coefficients defined in Sect. 2.2. The first and second equalities hold by conditioning, the third equality follows from the properties of the Poisson process \(\widetilde{N}_t\), the fourth equality holds because the random variable \(Z_{kt}\) is positive with mean 1, and the last inequality holds by Lemma 2.1 in Doukhan et al. (2018). Hence, by Theorem 3.1 of Doukhan and Wintenberger (2008), there exists a unique \(\tau \)-dependent solution of the model, which is a stationary and ergodic process.
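In practice, the contraction requirement behind Theorem 3.1 of Doukhan and Wintenberger (2008) reduces here to \(\sum _{k=1}^K\alpha _k\sum _i c_{ki}<1\). A sketch of the check, under stated assumptions (the coefficient array is truncated to finitely many lags, and the function name is illustrative):

```python
import numpy as np

def dw_contraction(alpha, c):
    """Check the Doukhan-Wintenberger contraction condition implied by the
    Lipschitz bound in the proof: sum_k alpha_k sum_i c_ki < 1.

    alpha : (K,) mixing weights
    c     : (K, I) Lipschitz coefficients c_ki, truncated to I lags
    """
    lip = np.sum(np.asarray(alpha)[:, None] * np.asarray(c))  # sum_k alpha_k sum_i c_ki
    return bool(lip < 1.0)
```

Truncating the lag index is harmless for a numerical check when the \(c_{ki}\) decay geometrically, as they do under Assumption 1.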


Cite this article

Mao, H., Zhu, F. & Cui, Y. A generalized mixture integer-valued GARCH model. Stat Methods Appl 29, 527–552 (2020). https://doi.org/10.1007/s10260-019-00498-2
