Robustness of Statistical Power in Group-Randomized Studies of Mediation Under an Optimal Sampling Framework
Abstract
When planning group-randomized studies probing mediation, effective and efficient sample allocation depends on several parameters, including the treatment-mediator and mediator-outcome path coefficients and the mediator and outcome intraclass correlation coefficients (ICCs). At the design stage, these parameters are typically approximated using information from prior research, and the approximations are likely to deviate from the values eventually realized in the study. This study investigates the robustness of statistical power under an optimal sampling framework to misspecified parameter values in group-randomized designs with group- or individual-level mediators. The results suggest that power estimates are robust to parameter misspecification across a variety of conditions and tests: relative power remained above 90% in most conditions when the incorrect parameter value ranged between 50% and 150% of its true value.