Original Article

Robustness of Statistical Power in Group-Randomized Studies of Mediation Under an Optimal Sampling Framework

Published Online: https://doi.org/10.1027/1614-2241/a000169

Abstract. When planning group-randomized studies probing mediation, effective and efficient sample allocation is governed by several parameters including treatment-mediator and mediator-outcome path coefficients and the mediator and outcome intraclass correlation coefficients. In the design stage, these parameters are typically approximated using information from prior research and these approximations are likely to deviate from the true values eventually realized in the study. This study investigates the robustness of statistical power under an optimal sampling framework to misspecified parameter values in group-randomized designs with group- or individual-level mediators. The results suggest that estimates of statistical power are robust to misspecified parameter values across a variety of conditions and tests. Relative power remained above 90% in most conditions when the incorrect parameter value ranged between 50% and 150% of the true parameter.
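The robustness pattern described in the abstract can be illustrated with a minimal analytic sketch. The code below compares the power achieved by a design optimized for a *misspecified* intraclass correlation against the power of the truly optimal design, using the classic optimal per-cluster sample size for cluster-randomized trials under a cost constraint, n* = sqrt((c_cluster/c_person)·(1−ρ)/ρ) (Raudenbush, 1997). All numeric inputs (effect size, costs, budget, ICC values) are illustrative assumptions, and the single-path power formula is a simplification of the mediation designs studied in the article, which involve two path coefficients and separate mediator and outcome ICCs.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sided(delta, se, z_crit=1.96):
    """Power of a two-sided z-test for effect `delta` with standard error `se`."""
    z = abs(delta) / se
    return 1.0 - normal_cdf(z_crit - z) + normal_cdf(-z_crit - z)

def optimal_n(rho, c_cluster, c_person):
    """Raudenbush (1997) optimal persons per cluster under a cost constraint."""
    return math.sqrt((c_cluster / c_person) * (1.0 - rho) / rho)

def power_cluster_trial(delta, rho, J, n):
    """Power for a standardized effect in a balanced two-arm
    cluster-randomized trial with J clusters of n persons
    (outcome variance normalized to 1)."""
    se = math.sqrt(4.0 * (rho + (1.0 - rho) / n) / J)
    return power_two_sided(delta, se)

def relative_power(delta, rho_true, rho_assumed, budget, c_cluster, c_person):
    """Power of the design optimized for an assumed (possibly wrong) ICC,
    evaluated under the true ICC, relative to the truly optimal design.
    J is left fractional; this is an analytic illustration, not a plan."""
    def design(rho):
        n = optimal_n(rho, c_cluster, c_person)
        J = budget / (c_cluster + c_person * n)  # clusters affordable
        return n, J
    n_opt, J_opt = design(rho_true)
    n_mis, J_mis = design(rho_assumed)
    return (power_cluster_trial(delta, rho_true, J_mis, n_mis)
            / power_cluster_trial(delta, rho_true, J_opt, n_opt))

# Assumed ICC is 50% of the true value (the lower bound examined above).
rr = relative_power(delta=0.4, rho_true=0.2, rho_assumed=0.1,
                    budget=10000, c_cluster=200, c_person=10)
print(f"Relative power: {rr:.3f}")
```

Even with the ICC assumed at half its true value, the relative power in this toy setting stays well above 0.9, mirroring the robustness reported in the abstract: the power surface near the optimum is flat, so moderately misallocated designs lose little power.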
