Dependence on a collection of Poisson random variables

  • Original Paper
  • Published in Statistical Methods & Applications

Abstract

We propose two novel ways of introducing dependence among Poisson counts through the use of latent variables in a three-level hierarchical model. Marginal distributions of the random variables of interest are Poisson, with strict stationarity as a special case. Order-p dependence is described in detail for a temporal sequence of random variables. Full Bayesian inference for the models is described, and model performance is illustrated with a numerical analysis of maternal mortality in Mexico. Extensions to seasonal, periodic, spatial or spatio-temporal dependencies, as well as ways of coping with overdispersion, are also discussed.



Acknowledgements

The author acknowledges support from Asociación Mexicana de Cultura, A.C.

Author information


Correspondence to Luis E. Nieto-Barajas.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary material 1 (csv 3 KB)

Appendix


This appendix gives the full conditional distributions for the model parameters \(\varvec{\theta }\) and latent variables \((\mathbf{Y},\mathbf{W})\) needed to perform posterior inference for the type A and type B models. For simplicity we assume that \(Y_t=0\), \(W_t=0\) and \(\alpha _t=0\) for \(t\le 0\). In the sequel, we use \(I_{\mathcal {X}}(x)\) to denote the indicator function that takes the value of one if \(x\in \mathcal {X}\) and zero otherwise.

For type A model, the required full conditional distributions are:

  1. (i)

    For \(Y_t\), \(t=1,\ldots ,T\)

    $$\begin{aligned} f(y_t\mid \text{ rest})\propto \frac{\left[ \alpha _t\mu ^{-p}\left\{ \prod _{j=0}^p\left( 1-\sum _{i=0}^p\alpha _{t+j-i}\right) \right\} ^{-1}\right] ^{y_t}}{y_t!\prod _{j=0}^p\left( x_{t+j}-\sum _{i=0}^p y_{t+j-i}\right) !}I_{\{0,\ldots ,c_t\}}(y_t), \end{aligned}$$

    with \(c_t=\min _{j=0,\ldots ,p}\{x_{t+j}-\sum _{i=0,i\ne j}^p y_{t+j-i}\}\)

  2. (ii)

    For \(\alpha _t\), \(t=1,\ldots ,T\)

    $$\begin{aligned} f(\alpha _t\mid \text{ rest})\propto & {} \alpha _t^{a_\alpha +y_t-1}(1-\alpha _t)^{b_\alpha -1}e^{p\mu \alpha _t}\\&\prod _{j=0}^p\left( 1-\sum _{i=0}^p\alpha _{t+j-i}\right) ^{x_{t+j}-\sum _{i=0}^p y_{t+j-i}}I_{(0,d_t)}(\alpha _t) \end{aligned}$$

    where \(d_t=\min _{j=0,\ldots ,p}\left\{ 1-\sum _{i=0,i\ne j}^p\alpha _{t+j-i}\right\} \)

  3. (iii)

    For \(\mu \)

    $$\begin{aligned} f(\mu \mid \text{ rest})=\text{ Ga }\left( \mu \left| a_\mu +\sum _{t=1}^T x_t-\sum _{t=1}^T\sum _{i=1}^p y_{t-i},b_\mu +T+\sum _{t=1}^T\sum _{i=1}^p\alpha _{t-i}\right. \right) \end{aligned}$$

Since (i) is a discrete distribution with bounded support, we simply evaluate the unnormalized density at all points of the support and normalize to obtain the probability mass function, from which we sample a new \(y_t^{(l)}\) at iteration l. To sample from (ii) we implement a Metropolis-Hastings (MH) step with a random walk proposal distribution. If \(\alpha _t^{(l)}\) is the current state of the chain, we sample \(\alpha _t^*\mid \alpha _t^{(l)}\sim \text{ Un }\left( \max (0,\alpha _t^{(l)}-\delta _\alpha ),\min (d_t,\alpha _t^{(l)}+\delta _\alpha )\right) \), a continuous uniform distribution, and accept the proposed value with probability \(\min \{1,f(\alpha _t^*\mid \text{ rest})/f(\alpha _t^{(l)}\mid \text{ rest})\}\). Sampling from (iii) is direct since it has a standard form.
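The two sampling moves just described can be sketched generically. In the sketch below, `log_f` is a placeholder for the unnormalized log full conditional (it is not defined in the paper), and the tuning value `delta` plays the role of \(\delta _\alpha \); the truncated-uniform proposal is accepted with the plain target ratio, exactly as stated above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bounded_discrete(log_f, c_t):
    """Sample from a discrete full conditional supported on {0,...,c_t},
    as in step (i): evaluate the unnormalized log-density at every support
    point, normalize, and draw from the resulting probabilities."""
    support = np.arange(c_t + 1)
    logp = np.array([log_f(y) for y in support])
    logp -= logp.max()            # stabilize before exponentiating
    p = np.exp(logp)
    p /= p.sum()
    return rng.choice(support, p=p)

def mh_uniform_step(log_f, current, delta, upper):
    """One MH step with a uniform random-walk proposal truncated to
    (0, upper), as in step (ii) with upper = d_t."""
    lo = max(0.0, current - delta)
    hi = min(upper, current + delta)
    proposal = rng.uniform(lo, hi)
    # accept with probability min{1, f(proposal)/f(current)}
    if np.log(rng.uniform()) < log_f(proposal) - log_f(current):
        return proposal
    return current
```

In practice one such draw is made for each \(t=1,\ldots ,T\) within every Gibbs iteration, plugging in the bounds \(c_t\) and \(d_t\) computed from the current state of the chain.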

For type B model, the required full conditional distributions are:

  1. (iv)

    For \(Y_t\), \(t=1,\ldots ,T\)

    $$\begin{aligned} f(y_t\mid \text{ rest})\propto \frac{\left\{ \alpha _t\mu ^{-1}(1-\alpha _t)^{-2}\right\} ^{y_t}}{(x_t-y_t)!y_t!\left( \sum _{i=0}^p w_{t-i}-y_t\right) !}I_{\{0,\ldots ,m_t\}}(y_t), \end{aligned}$$

    with \(m_t=\min \{x_{t},\sum _{i=0}^p w_{t-i}\}\)

  2. (v)

    For \(W_t\), \(t=1,\ldots ,T\)

    $$\begin{aligned} f(w_t\mid \text{ rest})\propto & {} \left\{ \prod _{j=0}^p \binom{\sum _{i=0}^p w_{t+j-i}}{y_{t+j}}\right\} \left\{ \frac{\mu }{p+1}\prod _{j=0}^p(1-\alpha _{t+j})\right\} ^{w_t}\\&\times \frac{1}{w_t!}I_{\{h_t,h_t+1,\ldots \}}(w_t), \end{aligned}$$

    where \(h_t=\max _{j=0,\ldots ,p}\{y_{t+j}-\sum _{i=0,i\ne j}^p w_{t+j-i}\}\)

  3. (vi)

    For \(\alpha _t\), \(t=1,\ldots ,T\)

    $$\begin{aligned} f(\alpha _t\mid \text{ rest})\propto \alpha _t^{a_\alpha +y_t-1}(1-\alpha _t)^{b_\alpha +x_t+\sum _{i=0}^p w_{t-i}-2y_t-1}e^{\mu \alpha _t}I_{(0,1)}(\alpha _t) \end{aligned}$$
  4. (vii)

    For \(\mu \)

    $$\begin{aligned} f(\mu \mid \text{ rest})=\text{ Ga }\left( \mu \left| a_\mu +\sum _{t=1}^T (x_t+w_t-y_t),b_\mu +T\left( \frac{p+2}{p+1}\right) -\sum _{t=1}^T\alpha _{t}\right. \right) \end{aligned}$$

Again, since (iv) is a discrete distribution with bounded support, we proceed as for (i). To sample from (v) we note that the support is discrete but unbounded, so we implement an MH step with a random walk proposal that is discrete uniform on \(\{\max (h_t,w_t^{(l)}-\delta _w),\ldots ,w_t^{(l)}+\delta _w\}\), where \(w_t^{(l)}\) is the current state of the chain, and accept the proposed value with probability \(\min \{1,f(w_t^*\mid \text{ rest})/f(w_t^{(l)}\mid \text{ rest})\}\). To sample from (vi) we proceed as for (ii) but with proposal \(\alpha _t^*\mid \alpha _t^{(l)}\sim \text{ Un }\left( \max (0,\alpha _t^{(l)}-\delta _\alpha ),\min (1,\alpha _t^{(l)}+\delta _\alpha )\right) \). Finally, sampling from (vii) is direct. In all cases, \(\delta _\alpha \) and \(\delta _w\) are tuning parameters that control the acceptance probability.
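The move for (v), a random walk on an unbounded discrete support with lower bound \(h_t\), can be sketched as follows. As before, `log_f` stands in for the unnormalized log full conditional and is a placeholder, not part of the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(1)

def mh_discrete_step(log_f, current, delta_w, h_t):
    """One MH step for an unbounded discrete variable such as W_t:
    propose uniformly on {max(h_t, w - delta_w), ..., w + delta_w}
    and accept with probability min{1, f(w*)/f(w)}."""
    lo = max(h_t, current - delta_w)
    hi = current + delta_w
    proposal = int(rng.integers(lo, hi + 1))   # upper bound inclusive
    if np.log(rng.uniform()) < log_f(proposal) - log_f(current):
        return proposal
    return current
```

Because the proposal window is truncated at \(h_t\) but not above, very small values of \(\delta _w\) slow mixing while very large values lower the acceptance rate; monitoring the acceptance probability, as the text suggests, guides the choice.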


Cite this article

Nieto-Barajas, L.E. Dependence on a collection of Poisson random variables. Stat Methods Appl 31, 21–39 (2022). https://doi.org/10.1007/s10260-021-00561-x
