Average Effect Sizes in Developer-Commissioned and Independent Evaluations
Journal of Research on Educational Effectiveness (IF 2.217). Pub Date: 2020-04-06. DOI: 10.1080/19345747.2020.1726537
Rebecca Wolf, Jennifer Morrison, Amanda Inns, Robert Slavin, Kelsey Risman

Abstract

Rigorous evidence of program effectiveness has become increasingly important with the 2015 passage of the Every Student Succeeds Act (ESSA). One question that has not yet been fully explored is whether program evaluations carried out or commissioned by developers produce larger effect sizes than evaluations conducted by independent third parties. Using study data from the What Works Clearinghouse, we find evidence of a “developer effect,” where program evaluations carried out or commissioned by developers produced average effect sizes that were substantially larger than those identified in evaluations conducted by independent parties. We explore potential reasons for the existence of a “developer effect” and provide evidence that interventions evaluated by developers were not simply more effective than those evaluated by independent parties. We conclude by discussing plausible explanations for this phenomenon as well as providing suggestions for researchers to mitigate potential bias in evaluations moving forward.




Updated: 2020-04-06