
Inference using an exact distribution of test statistic for random-effects meta-analysis

Published in: Annals of the Institute of Statistical Mathematics

Abstract

Random-effects meta-analysis serves to integrate the results of multiple studies, and methods such as moment estimation and likelihood estimation have been proposed for it. These existing methods are based on asymptotic normality with respect to the number of studies. However, tests and interval estimates deviate from the nominal significance level when only a small number of studies are integrated. Although a method for constructing more conservative intervals has recently been proposed, the exact distribution of the test statistic for the overall treatment effect is not well known. In this paper, we provide an almost-exact distribution of the test statistic in random-effects meta-analysis and propose tests and interval estimation based on this almost-exact distribution. Simulations demonstrate the accuracy of the proposed estimation, together with an application to an existing meta-analysis. With known variance parameters, estimation using the almost-exact distribution always achieves the nominal significance level regardless of the number of studies and the degree of heterogeneity. We also propose methods to construct conservative interval estimates even when the variance parameters are unknown, and present their performance via simulations and an application to an Alzheimer’s disease meta-analysis.


References

  • Akkouchi, M. (2005). On the convolution of gamma distributions. Soochow Journal of Mathematics, 31, 205–211.

  • Biggerstaff, B., Jackson, D. (2008). The exact distribution of Cochran’s heterogeneity statistic in one-way random effects meta-analysis. Statistics in Medicine, 27, 6093–6110.

  • Box, G. (1954). Some theorems on quadratic forms applied in the study of analysis of variance problems, I. Effect of inequality of variance in the one-way classification. The Annals of Mathematical Statistics, 25, 290–302.

  • Chen, R., Chan, P., Chu, H. (2017). Treatment effects between monotherapy of donepezil versus combination with memantine for Alzheimer disease: A meta-analysis. PLOS ONE, 12, 1–14.

  • Cornell, J., Mulrow, C., Localio, R. (2014). Random-effects meta-analysis of inconsistent effects: A time for change. Annals of Internal Medicine, 160, 267–270.

  • DerSimonian, R., Kacker, R. (2007). Random-effects model for meta-analysis of clinical trials: An update. Contemporary Clinical Trials, 28, 105–114.

  • DerSimonian, R., Laird, N. (1986). Meta-analysis in clinical trials. Controlled Clinical Trials, 7, 177–188.

  • Follmann, D., Proschan, M. (1999). Valid inference in random effects meta-analysis. Biometrics, 55, 732–737.

  • Hardy, R., Thompson, S. (1996). A likelihood approach to meta-analysis with random effects. Statistics in Medicine, 15, 619–629.

  • Hartung, J. (1999). An alternative method for meta-analysis. Biometrical Journal, 41, 901–916.

  • Higgins, J., Thompson, S. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21, 1539–1558.

  • Imhof, J. (1961). Computing the distribution of quadratic forms in normal variables. Biometrika, 48, 419–426.

  • Jackson, D., Bowden, J., Baker, R. (2010). How does the DerSimonian and Laird procedure for random effects meta-analysis compare with its more efficient but harder to compute counterparts? Journal of Statistical Planning and Inference, 140, 961–970.

  • Knapp, G., Hartung, J. (2003). Improved tests for a random effects meta-regression with a single covariate. Statistics in Medicine, 22, 2693–2710.

  • Li, M., Liu, Y., Li, P. (2022). Empirical likelihood meta-analysis with publication bias correction under Copas-like selection model. Annals of the Institute of Statistical Mathematics, 74(1), 93–112.

  • Michael, H., Thornton, S., Xie, M. (2019). Exact inference on the random-effects model for meta-analyses with few studies. Biometrics, 75, 485–493.

  • Noma, H., Nagashima, K., Furukawa, T. (2020). Permutation inference methods for multivariate meta-analysis. Biometrics, 76, 337–347.

  • Paule, R., Mandel, J. (1982). Consensus values and weighting factors. Journal of Research of the National Bureau of Standards, 87, 377–385.

  • Röver, C., Knapp, G., Friede, T. (2015). Hartung-Knapp-Sidik-Jonkman approach and its modification for random-effects meta-analysis with few studies. BMC Medical Research Methodology, 15, 99.

  • Sánchez-Meca, J., Marín-Martínez, F. (2008). Confidence intervals for the overall effect size in random-effects meta-analysis. Psychological Methods, 13, 31–48.

  • Stijnen, T., Hamza, T., Ozdemir, P. (2010). Random effects meta-analysis of event outcome in the framework of the generalized linear mixed model with applications in sparse data. Statistics in Medicine, 29, 3046–3067.

  • Thompson, S., Sharp, S. (1999). Explaining heterogeneity in meta-analysis: A comparison of methods. Statistics in Medicine, 18, 2693–2708.

  • Whitehead, A., Whitehead, J. (1991). A general parametric approach to the meta-analysis of randomized clinical trials. Statistics in Medicine, 10, 1665–1677.


Acknowledgements

We thank the editor, the associate editor and the two anonymous referees for their constructive comments. This work was supported by JSPS KAKENHI (Grant number JP21K11783).

Corresponding author

Correspondence to Tomoyuki Sugimoto.


Supplementary Information

Supplementary file 1 (pdf 947 KB)

A Theoretical details and numerical calculation


A.1 Proof of Theorem 1

First, we should note that \({\hat{\theta }}_{DL}\) and \({\hat{\tau }}^2_{\mathrm{u}}\) are not mutually independent. This situation is considerably more complicated than that of the sample mean and unbiased variance of normal data with equal error variances, as in Student’s t test. For this reason, in order to derive the distribution of \(T_{DL}\), we consider the marginalization

$$\begin{aligned} f_{T_{DL}}(t) = \int _{-\infty }^{\infty } f_{T_{DL}|{\hat{\tau }}^2_{\mathrm{u}}}(t|x) f_{{\hat{\tau }}^2_{\mathrm{u}}}(x) dx \end{aligned}$$

by decomposing the joint PDF of \((T_{DL},{\hat{\tau }}^2_{\mathrm{u}})\). Next, we discuss the mean and variance of \(T_{DL}|{\hat{\tau }}^2_{\mathrm{u}}\) (i.e., the conditional random variable of \(T_{DL}\) given \({\hat{\tau }}^2_{\mathrm{u}}\)). Given \({\hat{\tau }}^2_{\mathrm{u}} = x\), we have the constraint condition

$$\begin{aligned} \sum _{k=1}^K \frac{({\hat{\theta }}_k - {\bar{\theta }}_0)^2}{\sigma _k^2} = \left( \gamma _0^{(1)} - \frac{\gamma _0^{(2)}}{\gamma _0^{(1)}}\right) x + (K-1) \end{aligned}$$
(7)

imposed on \({\hat{\theta }}_k\), \(k=1,\dots ,K\). Solving (7) for \({\hat{\theta }}_{k'}\) (\(k'=1,\dots ,K\)) yields the two cases

$$\begin{aligned} {\hat{\theta }}_{k'}&={\bar{\theta }}_{0,- k'}\pm \xi _{k'}({\bar{\theta }}_{0,- k'},v^2_{- k'};x) \end{aligned}$$

where \({\bar{\theta }}_{y,- k'}\) and \(v^2_{- k'}\) are, respectively, an extended inverse-variance weighted average and its squared-value analogue, each computed without the \(k'\)-th element \({\hat{\theta }}_{k'}\); that is,

$$\begin{aligned} {\bar{\theta }}_{y,- k'} = \frac{\sum _{\{k:k\ne k'\}}(\sigma _k^{2}+y)^{-1}{\hat{\theta }}_k}{\sum _{\{k:k\ne k'\}}(\sigma _k^{2}+y)^{-1}}\,\,\,\,\,\,\mathrm{and}\,\,\,\,\,\, v^2_{- k'} = \frac{\sum _{\{k:k\ne k'\}}\sigma _k^{-2}{\hat{\theta }}_k^2}{\sum _{\{k:k\ne k'\}} \sigma _k^{-2}}, \end{aligned}$$

and \(\xi _{k'}({\bar{\theta }}_{0,- k'},v^2_{- k'};x)\) is the quantity

$$\begin{aligned} \xi _{k'}({\bar{\theta }}_{0,- k'},v^2_{- k'};x) =\sqrt{ {\bar{\theta }}_{0,- k'}^{2} +\sigma ^2_{k'}\gamma ^{(1)}_{0,- k'} \left\{ {\bar{\theta }}_{0,- k'}^{2}- \frac{\gamma _0^{(1)}}{\gamma _{0,- k'}^{(1)}}v^2_{- k'} +\left( \frac{(\gamma _0^{(1)})^2-\gamma _0^{(2)}}{(\gamma _{0,- k'}^{(1)})^2}\right) x + (K-1)\displaystyle \frac{\gamma _0^{(1)}}{(\gamma ^{(1)}_{0,- k'})^2} \right\} }. \end{aligned}$$

See Supplemental material Sect. 5 for further details of solving (7).

For simplicity, let \(k' = K\). The PDF of \({\hat{\theta }}_{k'}|{\hat{\tau }}_u^2\) can be expressed as

$$\begin{aligned}&f_{{\hat{\theta }}_{K}|{\hat{\tau }}_u^2} (z_{K}-\theta |x) \\&= \int \cdots \int _{\Omega } f_{{\hat{\theta }}_1, \cdots , {\hat{\theta }}_{K-2}, {\hat{\tau }}_u^2, {\hat{\theta }}_K} (z_1-\theta , \dots , z_{K-2}-\theta , x, z_K-\theta ) dz_1 \cdots dz_{K-2} / f_{{\hat{\tau }}_u^2} (x) \\&= \frac{\sigma _{K-1}^2 ((\gamma ^{(1)}_{0})^2 - \gamma ^{(2)}_{0}) x}{ f_{{\hat{\tau }}_u^2} (x)} \int \cdots \int _{\Omega } \left( \prod _{k=1}^{K-2} \phi _k(z_k-\theta ) \right) \phi _K(z_K-\theta ) \\&\times \left\{ \phi _{K-1} ({\bar{z}}_{-(K-1)} + \xi _{-(K-1)}({\mathbf {z}})) + \phi _{K-1} ({\bar{z}}_{-(K-1)} - \xi _{-(K-1)}({\mathbf {z}})) \right\} \xi _{-(K-1)}^{-1}({\mathbf {z}}) dz_1 \cdots dz_{K-2} \\&= \frac{\sigma _{K-1}^2 ((\gamma ^{(1)}_{0})^2 - \gamma ^{(2)}_{0}) x}{ f_{{\hat{\tau }}_u^2} (x)} \int \cdots \int _{\Omega } \left( \prod _{k=1}^{K-2} \phi _k(-z_k-\theta ) \right) \phi _K(-z_K-\theta ) \\&\times \left\{ \phi _{K-1} ({\bar{z}}_{-(K-1)} + \xi _{-(K-1)}({\mathbf {z}})) + \phi _{K-1} ({\bar{z}}_{-(K-1)} - \xi _{-(K-1)}({\mathbf {z}})) \right\} \xi _{-(K-1)}^{-1}({\mathbf {z}}) dz_1 \cdots dz_{K-2} \\&= f_{{\hat{\theta }}_{K}|{\hat{\tau }}_u^2} (-z_{K} - \theta |x), \end{aligned}$$

where \(\Omega = \{ ({\hat{\theta }}_1, \dots , {\hat{\theta }}_K) \in {\mathbb {R}}^K| {\hat{\tau }}_u^2=x \}\), \({\bar{z}}_{-(K-1)} = {\bar{\theta }}_{0,-(K-1)}(z_1, \dots , z_{K-2}, z_K)\), \(\xi _{-(K-1)}({\mathbf {z}}) = \xi _{-(K-1)}(z_1, \dots , z_{K-2},x, z_K)\) and \(\phi _k\) is the PDF of \(N(\theta , \sigma _k^2+\tau ^2)\). See Supplemental material Sect. 6 for further derivation details of \(f_{{\hat{\theta }}_{K}|{\hat{\tau }}_u^2}\). We have \(E[{\hat{\theta }}_{K}|{\hat{\tau }}^2_u] = \theta\) because the distribution of \({\hat{\theta }}_{K}|{\hat{\tau }}_u^2\) is symmetric. That is, the two solutions for \({\hat{\theta }}_{K}\) obtained under the constraint condition (7) occur with equal probability. Hence, \(T_{DL}|{\hat{\tau }}^2_{\mathrm{u}}\) can be written as

$$\begin{aligned}&T_{DL}|_{{\hat{\tau }}_{\mathrm{u}}^2=x} = Y({\bar{\theta }}_{x,- k'},{\bar{\theta }}_{0,- k'};x)+U\varepsilon ({\bar{\theta }}_{0,- k'},v^2_{- k'};x) \end{aligned}$$

in which U takes the values 1 and \(-1\) with equal probability, where

$$\begin{aligned}&Y({\bar{\theta }}_{x,- k'},{\bar{\theta }}_{0,- k'};x) ={\bar{\theta }}_{x,- k'}\frac{\gamma ^{(1)}_{x,- k'}}{\sqrt{\gamma ^{(1)}_{x}}}+\frac{{\bar{\theta }}_{0,- k'}}{(\sigma _{k'}^2+x)}\frac{1}{\sqrt{\gamma ^{(1)}_{x}}},\\&\varepsilon ({\bar{\theta }}_{0,- k'},v^2_{- k'};x) =\frac{\xi _{- k'}(x,{\bar{\theta }}_{0,- k'},v^2_{- k'})}{(\sigma _{k'}^2+x)}\frac{1}{\sqrt{\gamma ^{(1)}_{x}}}. \end{aligned}$$

The distribution of \(\varepsilon ({\bar{\theta }}_{0,- k'},v^2_{- k'};x)\) is determined by the two quantities \({\bar{\theta }}_{0,- k'}\) and \(v^2_{- k'}\), but cannot be expressed in terms of a known distribution. On the other hand, because \(Y({\bar{\theta }}_{x,- k'},{\bar{\theta }}_{0,- k'};x)\) is a linear combination of \({\bar{\theta }}_{x,- k'}\) and \({\bar{\theta }}_{0,- k'}\), it is normally distributed, and we find that it has mean \(m_{- k'}(x)\) and variance \(v^2_{- k'}(x)\). Replacing \({\hat{\theta }}_k\) with \({\hat{\theta }}_k=\theta +\sqrt{\sigma ^2_k+\tau ^2}z_k\), where \(z_1\), \(\dots\), \(z_K\sim N(0,1)\) are mutually independent, the three quantities \({\bar{\theta }}_{x,- k'}\), \({\bar{\theta }}_{0,- k'}\) and \(v^2_{- k'}\) that determine the distribution of \(T_{DL}|_{{\hat{\tau }}_{\mathrm{u}}^2=x}\) are written as

$$\begin{aligned} {\bar{\theta }}_{x,- k'}= & {} \theta +{\gamma ^{(1)}_{x}}^{-1}\textstyle \sum _{\{k:k\ne k'\}}z_k\sqrt{\sigma ^2_k+\tau ^2}/(\sigma ^2_k+x)\\ v^2_{- k'}= & {} (\gamma ^{(1)}_{0,- k'})^{-1} \left[ \theta ^2 \gamma ^{(1)}_{0,- k'} +2\theta \sum _{\{k:k\ne k'\}}z_k\sqrt{\sigma ^2_k+\tau ^2}/\sigma ^2_k +\sum _{\{k:k\ne k'\}}z_k^2(\sigma ^2_k+\tau ^2)/\sigma ^2_k \right] . \end{aligned}$$

We obtain the result of this theorem by replacing \({\bar{\theta }}_{0,- k'}\) and \(v^2_{- k'}\) in \(\xi _{k'}({\bar{\theta }}_{0,- k'},v^2_{- k'};x)\) with

$$\begin{aligned} {\bar{\theta }}_{0,- k'} = \frac{T_{1(0)}}{\gamma ^{(1)}_{0,- k'}} + \theta \,\,\,\,\mathrm{and}\,\,\,\, v^2_{- k'} = \frac{T_2}{\gamma ^{(1)}_{0,- k'}} + 2\theta T_{1(0)} + \theta ^2 \end{aligned}$$

for \(k'=K\) and using the relation \(\gamma _0^{(1)}-\gamma ^{(1)}_{0,- K}=\sigma _K^{-2}\). \(\square\)

A.2 Proof of Theorem 2

The idea behind Theorem 2 is to approximate the distribution of \(T_{DL}|_{{\hat{\tau }}^2_\mathrm{u}=x}\) by a normal distribution, because the exact conditional distribution is complicated but symmetric. Concretely, the mean and variance of \(T_{DL}|_{{\hat{\tau }}^2_\mathrm{u}=x}\) are, respectively, obtained as

$$\begin{aligned} \mu (x) = E[T_{DL}|{\hat{\tau }}_{\mathrm{u}}^2=x] = \frac{\sum _{k=1}^K \frac{E[{\hat{\theta }}_k|{\hat{\tau }}^2_{\mathrm{u}}=x]}{\sigma _k^2+x} }{\sqrt{\sum _{k=1}^K \frac{1}{\sigma _k^2+x}}} = \theta \sqrt{\sum _{k=1}^K \frac{1}{\sigma _k^2+x}}, \end{aligned}$$

and

$$\begin{aligned} \nu ^2(x) = V[T_{DL}|{\hat{\tau }}_{\mathrm{u}}^2=x] = \frac{1}{\sum _{j=1}^K (\sigma _j^2+x)^{-1}}\sum _{k=1}^K\sum _{k'=1}^K\frac{E[{\hat{\theta }}_k {\hat{\theta }}_{k'}|{\hat{\tau }}_{\mathrm{u}}^2=x]}{(\sigma _k^2+x)(\sigma _{k'}^2+x)}, \end{aligned}$$

where the form of \(E[{\hat{\theta }}_i {\hat{\theta }}_j|{\hat{\tau }}_{\mathrm{u}}^2=x]\) is

$$\begin{aligned} E[{\hat{\theta }}_i {\hat{\theta }}_j|{\hat{\tau }}_{\mathrm{u}}^2=x] = \int _{-\infty }^{\infty } \cdots \int _{-\infty }^{\infty } t_i t_j \frac{f_{({\hat{\theta }}_1, \dots , {\hat{\theta }}_{K-1}, {\hat{\tau }}^2)}(t_1, \dots , t_{K-1}, x)}{f_{{\hat{\tau }}^2}(x)} dt_1 \cdots dt_{K-1}, \end{aligned}$$
(8)

and the joint PDF of \(({\hat{\theta }}_1, \dots , {\hat{\theta }}_{K-1}, {\hat{\tau }}^2_{\mathrm{u}})\) is written as

$$\begin{aligned}&f_{({\hat{\theta }}_1, \dots , {\hat{\theta }}_{K-1}, {\hat{\tau }}^2_{\mathrm{u}})}(t_1, \dots , t_{K-1},x) \\&= \frac{ \sigma _K^2 (\gamma _0^{(1)2}-\gamma _0^{(2)}) }{ 4 \gamma _{0,-K}^{(1)} \xi _{K}({\bar{t}}_{- K},\overline{t^2}_{- K};x) } \prod _{k=1}^{K-1} \phi \left( t_k; \theta , \sigma _k^2+\tau ^2 \right) \\&\quad \times \left\{ \phi \left( {\bar{t}}_{- K} + \xi _{K}({\bar{t}}_{- K},\overline{t^2}_{- K};x); \theta , \sigma _K^2+\tau ^2 \right) \right. \\&\quad \left. + \phi \left( {\bar{t}}_{- K} - \xi _{K}({\bar{t}}_{- K},\overline{t^2}_{- K};x); \theta , \sigma _K^2+\tau ^2\right) \right\} , \end{aligned}$$

where \({\bar{t}}_{- K} = \left( \sum _{k=1}^{K-1} t_k/\sigma _k^2\right) /\gamma _{0,- K}^{(1)}\) and \(\overline{t^2}_{- K} = \left( \sum _{k=1}^{K-1} t^2_k/\sigma _k^2\right) /\gamma _{0,- K}^{(1)}\). In this manner, the conditional mean \(\mu (x)\) and variance \(\nu ^2(x)\) in Theorem 2 are obtained.
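As a quick numerical sketch (not part of the proof), the closed-form conditional mean \(\mu (x) = \theta \sqrt{\sum _k 1/(\sigma _k^2+x)}\) can be evaluated directly. The study variances below are hypothetical, and \(\nu ^2(x)\) is left as an input because it requires the Monte Carlo step of Appendix A.3.

```python
import math

def conditional_mean(theta, sigma2, x):
    """mu(x) = theta * sqrt(sum_k 1/(sigma_k^2 + x))."""
    return theta * math.sqrt(sum(1.0 / (s2 + x) for s2 in sigma2))

def normal_approx_pdf(t, theta, sigma2, x, nu2):
    """Density of the approximating N(mu(x), nu^2(x)) for T_DL | tau_u^2-hat = x.

    nu2 must be supplied separately (e.g. via the Monte Carlo step of A.3).
    """
    mu = conditional_mean(theta, sigma2, x)
    return math.exp(-0.5 * (t - mu) ** 2 / nu2) / math.sqrt(2 * math.pi * nu2)

# Hypothetical within-study variances for K = 5 studies
sigma2 = [0.5, 0.8, 1.0, 1.2, 1.5]
mu = conditional_mean(theta=0.3, sigma2=sigma2, x=1.0)
```

Note that \(\mu (x) = 0\) whenever \(\theta = 0\), which is why the test under the null hypothesis does not depend on the unknown \(\theta\).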

Finally, we show that the distribution of \(T_{DL}|{\hat{\tau }}^2_{\mathrm{u}}\) can be approximated by a normal distribution. For simplicity, assume \(\sigma _1^2 \le \cdots \le \sigma _K^2\), so that \(\sum _{k=1}^K (\sigma _k^2 + x)^{-1} \ge K (\sigma _K^2 + x)^{-1}\). Denote by \({\tilde{T}}_{DL}|_{{\hat{\tau }}_{\mathrm{u}}^2=x}\) the following sum of normally distributed random variables, which approximates \(T_{DL}|_{{\hat{\tau }}_{\mathrm{u}}^2=x}\):

$$\begin{aligned} {\tilde{T}}_{DL}|_{{\hat{\tau }}_{\mathrm{u}}^2=x}=\frac{\sum _{k=1}^{K-1}{\hat{\theta }}_k / (\sigma _k^2+x)}{\sqrt{\sum _{k=1}^K 1 / (\sigma _k^2+x)}}. \end{aligned}$$

Then, we have

$$\begin{aligned} \left| T_{DL}|_{{\hat{\tau }}_{\mathrm{u}}^2=x} -{\tilde{T}}_{DL}|_{{\hat{\tau }}_{\mathrm{u}}^2=x} \right|&= \left| \frac{\{{\bar{\theta }}_{0,- K}\pm \xi _{K}({\bar{\theta }}_{0,- K},v^2_{- K};x)\}/ (\sigma _K^2+x)}{\sqrt{\sum _{k=1}^K 1 / (\sigma _k^2+x)}} \right| \\&\le |c K^{-1/2}| \rightarrow 0 \,\,\,\,\, (K \rightarrow \infty ), \end{aligned}$$

where c is a constant. Hence, in Euclidean distance, \(T_{DL}|_{{\hat{\tau }}^2_{\mathrm{u}}}\) converges to \({\tilde{T}}_{DL}|_{{\hat{\tau }}_{\mathrm{u}}^2}\) as \(K \rightarrow \infty\). By the continuity of \(f_{T_{DL}|_{{\hat{\tau }}_{\mathrm{u}}^2}}(t|x)\), this implies \(\lim _{K \rightarrow \infty }|f_{T_{DL}|_{{\hat{\tau }}_{\mathrm{u}}^2}}(t|x)-f_{{\tilde{T}}_{DL}|_{{\hat{\tau }}_{\mathrm{u}}^2}}(t|x)|=0\) for the PDFs of the two random variables. Moreover, \(f_{{\tilde{T}}_{DL}|_{{\hat{\tau }}_{\mathrm{u}}^2}}(t|x)\) and \(\phi (t;\mu (x),\nu ^2(x))\) coincide for sufficiently large K, since they have identical means and variances. This completes the proof of Theorem 2. \(\square\)

A.3 Numerical calculation for Theorem 2

Two problems arise in the numerical calculation for Theorem 2. One is calculating the PDF of the between-study variance estimate \({\hat{\tau }}^2_{\mathrm{u}}\); the other is calculating the variance of \(T_{DL}|{\hat{\tau }}^2_{\mathrm{u}}\). By Lemma 1, the between-study variance estimate \({\hat{\tau }}^2_{\mathrm{u}}\) follows a weighted sum of chi-square distributions. Numerical methods for the PDF of such a weighted sum have been proposed by many authors, for example, Imhof (1961) and Akkouchi (2005). However, these methods are not necessarily accurate when the number of studies K is small. Therefore, we calculate the PDF by simulation, generating random numbers that follow the chi-square distribution with 1 degree of freedom. That is, if \(x_k(\ell ), k=1, \dots , K, \ell =1, \dots , L\) are random numbers that independently follow a chi-square distribution with 1 degree of freedom, then L random draws \(y_1, \dots , y_L\) of the between-study variance estimate are given by

$$\begin{aligned} y_{\ell } = \frac{\sum _{k=1}^K \lambda _k x_k(\ell ) - (K-1)}{\gamma _0^{(1)}-\gamma _0^{(2)}/\gamma _0^{(1)}}. \end{aligned}$$

Thus, the cumulative distribution function of the between-study variance estimate \({\hat{\tau }}^2_{\mathrm{u}}\) is

$$\begin{aligned} F_{{\hat{\tau }}^2_{\mathrm{u}}}(x) = \frac{1}{L} \sum _{\ell =1}^L \mathbbm {1}\{y_{\ell } \le x \}. \end{aligned}$$

The PDF \(f_{{\hat{\tau }}^2_{\mathrm{u}}}\) is then calculated by numerical differentiation, \(f_{{\hat{\tau }}^2_{\mathrm{u}}}(x) = (F_{{\hat{\tau }}^2_{\mathrm{u}}}(x+h)-F_{{\hat{\tau }}^2_{\mathrm{u}}}(x))/h\), where h is a sufficiently small constant. In our simulation, we set \(L = 100,000\) for calculating \(f_{{\hat{\tau }}^2_{\mathrm{u}}}\).
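The simulation just described can be sketched as follows. The weights \(\lambda _k\) and the constants \(\gamma _0^{(1)}, \gamma _0^{(2)}\) passed in the usage line are hypothetical placeholders for the quantities given by Lemma 1.

```python
import bisect
import random

def tau2_cdf_pdf(lam, gamma1, gamma2, L=100_000, h=1e-3, seed=0):
    """Empirical CDF and finite-difference PDF of tau_u^2-hat, simulated
    from its weighted chi-square representation (Lemma 1)."""
    rng = random.Random(seed)
    K = len(lam)
    denom = gamma1 - gamma2 / gamma1
    # x_k(l) ~ chi^2_1 is the square of a standard normal draw
    y = sorted(
        (sum(l_k * rng.gauss(0.0, 1.0) ** 2 for l_k in lam) - (K - 1)) / denom
        for _ in range(L)
    )
    def cdf(x):
        # proportion of simulated values <= x
        return bisect.bisect_right(y, x) / L
    def pdf(x):
        # forward-difference derivative of the empirical CDF, as in the text
        return (cdf(x + h) - cdf(x)) / h
    return cdf, pdf

# Hypothetical weights and constants for K = 5 studies
cdf, pdf = tau2_cdf_pdf(lam=[1.2, 1.0, 0.9, 0.8, 0.6],
                        gamma1=5.0, gamma2=5.5, L=20_000)
```

With a Monte Carlo CDF, the step size h should not be taken too small relative to 1/L, or the forward difference becomes dominated by simulation noise.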

Fig. 3

Simulation histogram of conditional test statistic \(T_{DL}|{\hat{\tau }}^2_u=x\) and the almost-exact PDF (red line)

For the numerical calculation of the variance of \(T_{DL}|{\hat{\tau }}^2_{\mathrm{u}}\), Monte Carlo integration with importance sampling is a handy tool. That is, if \(z_{k}^{(\ell )}, \ell =1,\dots , L, k=1, \dots , K-1\) are random numbers sampled from the standard normal distribution N(0, 1), the mean of \({\hat{\theta }}_i {\hat{\theta }}_j|{\hat{\tau }}_{\mathrm{u}}^2\) can be approximated as

$$\begin{aligned}&E[{\hat{\theta }}_i {\hat{\theta }}_j|{\hat{\tau }}_{\mathrm{u}}^2=x] \nonumber \\&\doteq \frac{1}{L} \sum _{\ell =1}^L \frac{ \sigma _K^2 ((\gamma _0^{(1)})^2-\gamma _0^{(2)}) }{ 4 \gamma _{0,-K}^{(1)} \xi _{K}({\bar{z}}_{- K}^{(\ell )},\overline{z^{2}}^{(\ell )}_{- K};x) } \prod _{k=1}^{K-1} \phi \left( \theta +\sqrt{\sigma _k^2+\tau ^2}z_k^{(\ell )}; \theta , \sigma _k^2+\tau ^2 \right) \nonumber \\&\quad \times \left\{ \phi \left( {\bar{z}}_{- K}^{(\ell )} + \xi _{K}({\bar{z}}_{- K}^{(\ell )},\overline{z^{2}}^{(\ell )}_{- K};x); \theta , \sigma _K^2+\tau ^2 \right) \right. \nonumber \\&\quad \left. + \phi \left( {\bar{z}}_{- K}^{(\ell )} - \xi _{K}({\bar{z}}_{- K}^{(\ell )},\overline{z^2}^{(\ell )}_{- K};x); \theta , \sigma _K^2+\tau ^2 \right) \right\} , \end{aligned}$$
(9)

where

$$\begin{aligned} {\bar{z}}_{- K}^{(\ell )}&= \theta + \gamma _{0,-K}^{(1)-1} \left( \sum _{k=1}^{K-1} \sqrt{\sigma _k^2+\tau ^2}z_k^{(\ell )}/\sigma _k^2\right) \\ \overline{z^2}^{(\ell )}_{- K}&= \gamma _{0,-K}^{(1)-1}\left( \sum _{k=1}^{K-1}(\theta + \sqrt{\sigma _k^2+\tau ^2}z_k^{(\ell )})^2/\sigma _k^2\right) . \end{aligned}$$

Thus, the conditional variance \(\nu ^2(x)\) is calculated from (9) for sufficiently large L. In our simulation, we set \(L = 10,000\) for calculating \(E[{\hat{\theta }}_i {\hat{\theta }}_j|{\hat{\tau }}_{\mathrm{u}}^2=x]\). Using the two PDFs \(f_{{\hat{\tau }}^2_{\mathrm{u}}}\) and \(f_{T_{DL}|{\hat{\tau }}^2_{\mathrm{u}}}\), the PDF (5) of the test statistic \(T_{DL}\) can be calculated, where the trapezoidal or Simpson’s rule is useful for the numerical integration of these PDFs.
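The final marginalization step, \(f_{T_{DL}}(t)=\int f_{T_{DL}|{\hat{\tau }}^2_{\mathrm{u}}}(t|x)\, f_{{\hat{\tau }}^2_{\mathrm{u}}}(x)\, dx\), can be sketched with the trapezoidal rule as below; the two densities passed in are assumed to have been computed beforehand, and the toy densities in the check are placeholders, not the paper's actual inputs.

```python
import math

def marginal_pdf(t, x_grid, f_cond, f_tau2):
    """Trapezoidal-rule approximation of f_T(t) = int f_cond(t|x) f_tau2(x) dx.

    x_grid:  increasing grid covering the support of tau_u^2-hat;
    f_cond:  conditional density f(t|x), e.g. the normal phi(t; mu(x), nu^2(x));
    f_tau2:  density of tau_u^2-hat, e.g. from the simulation above.
    """
    vals = [f_cond(t, x) * f_tau2(x) for x in x_grid]
    area = 0.0
    for i in range(len(x_grid) - 1):
        area += 0.5 * (vals[i] + vals[i + 1]) * (x_grid[i + 1] - x_grid[i])
    return area

# Toy check: a conditional density that ignores x, integrated against a
# uniform density on [0, 1], just returns the standard normal density at t.
grid = [i / 200 for i in range(201)]
std_normal = lambda t, x: math.exp(-0.5 * t * t) / math.sqrt(2 * math.pi)
print(marginal_pdf(0.0, grid, std_normal, lambda x: 1.0))  # ~0.3989
```

Simpson's rule can be substituted for the inner loop when the grid is uniform with an even number of intervals.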

A.4 Simulation of conditional test statistic \(T_{DL}|{\hat{\tau }}_u^2=x\)

We compare the exact conditional distribution of \(T_{DL}|{\hat{\tau }}_u^2=x\) with its normal approximation for several values of x. The settings of \(\theta\) and \(\sigma _k^2\) are the same as in Sect. 4.2. The number of studies is set to the small case \(K=5\), and the heterogeneity is set to small and large values, \(I^2=0.3, 0.9\). We set x to 1 and 2 to compare various situations. We sample 100,000 draws of \((z_1, \dots , z_{K-1})\) from the standard normal distribution and calculate the exact distribution of \(T_{DL}|{\hat{\tau }}_u^2=x\). The almost-exact distribution of \(T_{DL}|{\hat{\tau }}_u^2=x\) is calculated by the method in Appendix A.3.

Figure 3 shows the simulation histogram based on the exact conditional distribution and the corresponding almost-exact PDF. The almost-exact distribution of \(T_{DL}|{\hat{\tau }}_u^2=x\) approximates the exact distribution well even with a small number of studies. This indicates that it is reasonable to use the almost-exact distribution of \(T_{DL}|{\hat{\tau }}_u^2=x\) in our simulations.

Cite this article

Hanada, K., Sugimoto, T. Inference using an exact distribution of test statistic for random-effects meta-analysis. Ann Inst Stat Math 75, 281–302 (2023). https://doi.org/10.1007/s10463-022-00844-4