Exploring the use of network meta-analysis in education: examining the correlation between ORF and text complexity measures

Annals of Dyslexia

Abstract

Calls for empirical investigations of the Common Core State Standards (CCSS) for English Language Arts have been widespread, particularly in the area of text complexity in the primary grades (e.g., Hiebert & Mesmer, Educational Researcher, 42(1), 44–51, 2013). The CCSS state that qualitative methods (such as Fountas and Pinnell) and quantitative methods (such as Lexiles) can be used to gauge text complexity (CCSS Initiative, 2010). However, researchers have questioned the validity of these tools for several decades (e.g., Hiebert & Pearson, 2010). In an effort to establish the criterion validity of these tools, individual studies have compared how well they correlate with actual student reading performance measures, most commonly reading comprehension and/or oral reading fluency (ORF). ORF is a key aspect of reading success and, as such, is often used for progress monitoring. However, to date, studies have not been able to evaluate different text complexity tools and their relation to reading outcomes across studies. This is challenging because the pairwise meta-analytic model cannot synthesize several independent variables that differ both within and across studies. It is therefore unable to answer pressing research questions in education, such as: which text complexity tool is most strongly correlated with student ORF (and is thus a good measure of text difficulty)? This question is timely given that the Common Core State Standards explicitly mention various text complexity tools, yet the validity of such tools has been repeatedly questioned by researchers. This article provides preliminary evidence to answer that question using an approach borrowed from the field of medicine: network meta-analysis (NMA; Lumley, Statistics in Medicine, 21, 2313–2324, 2002). A systematic search yielded 5 studies using 19 different text complexity tools with ORF as the measured reading outcome. Both a frequentist and a Bayesian NMA were conducted to pool the correlations of a given text complexity tool with students' ORF. While the results differed slightly across the two approaches, there is preliminary evidence supporting the hypothesis that text complexity tools that incorporate more fine-grained sub-lexical variables are more strongly correlated with student outcomes. While the results of this example cannot be generalized due to the small sample size, this article shows how NMA is a promising new analytic tool for synthesizing educational research.
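
To make the pooling step described above concrete, the following is a minimal, hypothetical sketch of how correlations between one text complexity tool and ORF could be combined across studies using Fisher's z transformation and inverse-variance weighting. The correlation values and sample sizes are invented for illustration, and this simple fixed-effect pooling is only the most basic building block of such a synthesis; it is not the frequentist or Bayesian NMA model reported in the article (which relies on the R packages netmeta and pcnetmeta cited in the references).

    import math

    # Hypothetical data: correlations (r) between one text complexity tool's
    # difficulty estimates and students' ORF, with each study's sample size (n).
    # These values are invented for illustration only.
    studies = [(0.55, 40), (0.62, 25), (0.48, 60)]

    def fisher_z(r):
        """Fisher's z transformation; stabilizes the sampling variance of r."""
        return 0.5 * math.log((1 + r) / (1 - r))

    def inverse_fisher_z(z):
        """Back-transform a pooled z to the correlation scale."""
        return math.tanh(z)

    # Inverse-variance (fixed-effect) pooling: the sampling variance of z is
    # approximately 1 / (n - 3), so each study is weighted by (n - 3).
    weights = [n - 3 for _, n in studies]
    zs = [fisher_z(r) for r, _ in studies]
    pooled_z = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"pooled r = {inverse_fisher_z(pooled_z):.3f}")
    print(f"95% CI = [{inverse_fisher_z(pooled_z - 1.96 * pooled_se):.3f}, "
          f"{inverse_fisher_z(pooled_z + 1.96 * pooled_se):.3f}]")

An NMA extends this idea by borrowing strength from indirect comparisons: tools that were never examined in the same study can still be compared through studies that share a common comparator, and a random-effects structure accounts for between-study heterogeneity.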


References

References with an asterisk (*) were included in the network meta-analysis.

  • Allington, R. L. (2013). What really matters when working with struggling readers. The Reading Teacher, 66, 520–530.

  • Amendum, S. J., Conradi, K., & Hiebert, E. (2017). Does text complexity matter in the elementary grades? A research synthesis of text difficulty and elementary students’ reading fluency and comprehension. Educational Psychology Review, 30, 121–151.

  • Anderson, R. C. (1990). Microanalysis of classroom reading instruction. Paper presented at the annual Conference on Reading Research, Atlanta.

  • *Ardoin, S. P., Suldo, S. M., Witt, J., Aldrich, S., & McDonald, E. (2005). Accuracy of readability estimates' predictions of CBM performance. School Psychology Quarterly, 20, 1–22.

  • *Ardoin, S. P., Williams, J. C., Christ, T. J., Klubnik, C., & Wellborn, C. (2010). Examining readability estimates' predictions of students' oral reading rate: Spache, Lexile, and Forcast. School Psychology Review, 39, 277–285.

  • Biancarosa, G., & Snow, C. E. (2004). Reading next: A vision for action and research in middle and high school literacy: a report from Carnegie Corporation of New York. Washington, DC: Alliance for Excellent Education.

  • Cain, K., Oakhill, J., & Elbro, C. (2014). Understanding and teaching reading comprehension: a handbook. Abingdon-on-Thames: Routledge.

  • Chall, J. S., & Dale, E. (1995). Readability revisited: the new Dale-Chall readability formula. Northampton: Brookline Books.

  • Cheatham, J. P., & Allor, J. H. (2012). The influence of decodability in early reading text on reading achievement: a review of the evidence. Reading and Writing, 25, 2223–2246.

  • Cipriani, A., Higgins, J. P., Geddes, J. R., & Salanti, G. (2013). Conceptual and technical challenges in network meta-analysis. Annals of Internal Medicine, 159, 130–137.

  • Common Core State Standards Initiative (2010). Common Core State Standards for English language arts & literacy in history/social studies, science, and technical subjects. Washington, DC: CCSSO & National Governors Association.

  • *Compton, D. L., Appleton, A. C., & Hosp, M. K. (2004). Exploring the relationship between text-leveling systems and reading accuracy and fluency in second-grade students who are average and poor decoders. Learning Disabilities Research and Practice, 19, 176–184.

  • Council for Exceptional Children (2014). Council for exceptional children standards for evidence-based practices in special education. Exceptional Children, 80, 504–511.

  • Cunningham, J. W., Spadorcia, S. A., Erickson, K. A., Koppenhaver, D. A., Sturm, J. M., & Yoder, D. E. (2005). Investigating the instructional supportiveness of leveled texts. Reading Research Quarterly, 40, 410–427.

  • DerSimonian, R., & Laird, N. (1986). Meta-analysis in clinical trials. Controlled Clinical Trials, 7, 177–188.

  • Donovan, C. A., Smolkin, L. B., & Lomax, R. G. (2000). Beyond the independent-level text: considering the reader–text match in first graders' self-selections during recreational reading. Reading Psychology, 21, 309–333.

  • Eason, S. H., Sabatini, J., Goldberg, L., Bruce, K., & Cutting, L. E. (2013). Examining the relationship between word reading efficiency and oral reading rate in predicting comprehension among different types of readers. Scientific Studies of Reading, 17(3), 199–223.

  • Flesch, R. (1948). A new readability yardstick. The Journal of Applied Psychology, 32, 221–233.

  • Fountas, I. C., & Pinnell, G. S. (1999). Matching books to readers: using leveled books in guided reading. Portsmouth: Heinemann.

  • Fry, E. (1968). A readability formula that saves time. Journal of Reading, 11, 513–516.

  • Gilpin, A. R. (1993). Table for conversion of Kendall's tau to Spearman's rho within the context of measures of magnitude of effect for meta-analysis. Educational and Psychological Measurement, 53, 87–92.

  • Gunning, R. (1952). The technique of clear writing. New York: McGraw-Hill.

  • Hiebert, E. H., & Mesmer, H. A. E. (2013). Upping the ante of text complexity in the common core state standards: examining its potential impact on young readers. Educational Researcher, 42(1), 44–51.

  • Hiebert, E. H., & Pearson, P. D. (2010). An examination of current text difficulty indices with early reading texts (reading research report no. 10-01). Santa Cruz: TextProject, Inc.

  • Higgins, J. P., Thompson, S. G., Deeks, J. J., & Altman, D. G. (2003). Measuring inconsistency in meta-analyses. BMJ [British Medical Journal], 327, 557–560.

  • Hoffman, J. V., McCarthey, S. J., Abbott, J., Christian, C., Corman, L., Curry, C., Dressman, M., Elliott, B., Matherne, D., & Stahle, D. (1994). So what's new in the new basals? A focus on first grade. Journal of Reading Behavior, 26, 47–73.

  • *Hoffman, J. V., Roser, N. L., Salas, R., Patterson, E., & Pennington, J. (2001). Text leveling and “little books” in first-grade reading. Journal of Literacy Research, 33, 507–528.

  • Jackson, D., White, I. R., & Riley, R. D. (2013). A matrix-based method of moments for fitting the multivariate random effects model for meta-analysis and meta-regression. Biometrical Journal, 55, 231–245.

  • Jansen, J. P., Fleurence, R., Devine, B., Itzler, R., Barrett, A., Hawkins, N., Lee, K., Boersma, C., Annemans, L., & Cappelleri, J. C. (2011). Interpreting indirect treatment comparisons and network meta-analysis for health-care decision making: report of the ISPOR task force on indirect treatment comparisons good research practices: part 1. Value in Health, 14(4), 417–428.

  • Leucht, S., Chaimani, A., Cipriani, A. S., Davis, J. M., Furukawa, T. A., & Salanti, G. (2016). Network meta-analyses should be the highest level of evidence in treatment guidelines. European Archives of Psychiatry and Clinical Neuroscience, 266, 477–480.

  • Lin, L., Zhang, J., & Chu, H. (2014). pcnetmeta: methods for patient-centered network meta-analysis. R package version 1.2.

  • Lumley, T. (2002). Network meta-analysis for indirect treatment comparisons. Statistics in Medicine, 21, 2313–2324.

  • McLaughlin, G. H. (1969). SMOG grading: a new readability formula. Journal of Reading, 22, 639–646.

  • Menon, S., & Hiebert, E. H. (1999). Literature anthologies: the task for first grade readers (Ciera report no. 1-009). Ann Arbor: Center for the Improvement of Early Reading Achievement.

  • Mesmer, H. A., Cunningham, J. W., & Hiebert, E. H. (2012). Toward a theoretical model of text complexity for the early grades: learning from the past, anticipating the future. Reading Research Quarterly, 47(3), 235–258.

  • Mesmer, H. A. E. (2008). Tools for matching readers to texts: research-based practices. New York: Guilford Press.

  • Messick, S. (1995). Standards of validity and the validity of standards in performance assessment. Educational Measurement: Issues and Practice, 14, 5–8.

  • Mills, E. J., Thorlund, K., & Ioannidis, J. P. (2013). Demystifying trial networks and network meta-analysis. BMJ, 346, f2914.

  • National Reading Panel (U.S.) & National Institute of Child Health and Human Development. (2000). Report of the National Reading Panel: teaching children to read: an evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: reports of the subgroups. Washington, D.C.: National Institute of Child Health and Human Development, National Institutes of Health.

  • Peterson, B. (1991). Selecting books for beginning readers and children's literature suitable for young readers. In D. E. DeFord, C. A. Lyons, & G. S. Pinnell (Eds.), Bridges to literacy: learning from reading recovery (pp. 119–147). Portsmouth: Heinemann.

  • *Powell-Smith, K. A., & Bradley-Klug, K. L. (2001). Another look at the “C” in CBM: does it really matter if curriculum-based measurement reading probes are curriculum-based? Psychology in the Schools, 38, 299–312.

  • Powers, R. D., Sumners, W. A., & Kearl, B. E. (1958). A recalculation of four adult readability formulas. Journal of Educational Psychology, 49, 99–105.

  • Riley, R. D., Jackson, D., Salanti, G., Burke, D. L., Price, M., Kirkham, J., & White, I. R. (2017). Multivariate and network meta-analysis of multiple outcomes and multiple treatments: rationale, concepts, and examples. British Medical Journal, 358(j3932), 1–13.

  • Rücker, G., Schwarzer, G., Krahn, U., & König, J. (2015). netmeta: network meta-analysis using frequentist methods. R package version 0.8-0.

  • Schmidt, F. L., & Hunter, J. E. (2014). Methods of meta-analysis: correcting error and bias in research findings. Thousand Oaks: Sage Publications.

  • Schulze, R. (2004). Meta-analysis: a comparison of approaches. Göttingen: Hogrefe Publishing.

  • Shaywitz, S. E., Morris, R., & Shaywitz, B. A. (2008). The education of dyslexic children from childhood to young adulthood. Annual Review of Psychology, 59, 451–475.

  • Spache, G. (1953). A new readability formula for primary-grade reading materials. The Elementary School Journal, 53, 410–413.

  • Spache, G. D. (1968). Good reading for poor readers. Champaign: Garrard Publishing Company.

  • Spinelli, D., De Luca, M., Di Filippo, G., Mancini, M., Martelli, M., & Zoccolotti, P. (2005). Length effect in word naming in reading: role of reading experience and reading deficit in Italian readers. Developmental Neuropsychology, 27, 217–235.

  • Sticht, T. G. (1973). Research toward the design, development and evaluation of a job-functional literacy program for the US Army. Literacy Discussion, 4, 339–369.

  • Stenner, A. J., Smith, D. R., Horabin, I., & Smith, M., III. (1987). Fit of the Lexile theory to sequenced units from eleven basal series. Durham: MetaMetrics, Inc. Retrieved January 30, 2006.

  • Storkel, H. L., & Lee, S. Y. (2011a). The independent effects of phonotactic probability and neighbourhood density on lexical acquisition by preschool children. Language & Cognitive Processes, 26, 191–211.

  • Storkel, H. L., & Lee, S. Y. (2011b). The independent effects of phonotactic probability and neighbourhood density on lexical acquisition by preschool children. Language & Cognitive Processes, 26, 191–211.

  • Stuebing, K. K., Barth, A. E., Trahan, L. H., Reddy, R. R., Miciak, J., & Fletcher, J. M. (2015). Are child cognitive characteristics strong predictors of responses to intervention? A meta-analysis. Review of Educational Research, 85, 395–429.

  • Tonin, F. S., Rotta, I., Mendes, A. M., & Pontarolo, R. (2017). Network meta-analysis: a technique to gather evidence from direct and indirect comparisons. Pharmacy Practice (Granada), 15(1).

  • Torgesen, J. K., Rashotte, C. A., & Alexander, A. W. (2001). Principles of fluency instruction in reading: relationships with established empirical outcomes. In M. Wolf (Ed.), Dyslexia, fluency, and the brain (pp. 333–355). Timonium: York Press.

  • Vadasy, P. F., Sanders, E. A., & Peyton, J. A. (2005). Relative effectiveness of reading practice or word-level instruction in supplemental tutoring: how text matters. Journal of Learning Disabilities, 38, 364–380.

  • Valencia, S. W., Smith, A. T., Reece, A. M., Li, M., Wixson, K. K., & Newman, H. (2010). Oral reading fluency assessment: issues of construct, criterion, and consequential validity. Reading Research Quarterly, 45, 270–291.

  • Vellutino, F. R., Fletcher, J. M., Snowling, M. J., & Scanlon, D. M. (2004). Specific reading disability (dyslexia): what have we learned in the past four decades? Journal of Child Psychology and Psychiatry, 45, 2–40.

  • Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36, 1–48. URL: http://www.jstatsoft.org/v36/i03/. Accessed Apr 2018.

  • Yoder, P. J., Lloyd, B. P., & Symons, F. R. (2018). Observational measurement of behavior. Baltimore: Brookes Publishing.

  • Ziegler, J. C., Perry, C., Ma-Wyatt, A., Ladner, D., & Schulte-Körne, G. (2003). Developmental dyslexia in different languages: language-specific or universal? Journal of Experimental Child Psychology, 86, 169–193.

Author information

Corresponding author

Correspondence to Neena Saha.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Saha, N., Cutting, L. Exploring the use of network meta-analysis in education: examining the correlation between ORF and text complexity measures. Ann. of Dyslexia 69, 335–354 (2019). https://doi.org/10.1007/s11881-019-00180-y
