Reevaluating the SIBTEST Classification Heuristics for Dichotomous Differential Item Functioning
Educational and Psychological Measurement (IF 2.1), Pub Date: 2021-06-02, DOI: 10.1177/00131644211017267
James D. Weese, Ronna C. Turner, Allison Ames, Brandon Crawford, Xinya Liang

A simulation study was conducted to investigate the heuristics of the SIBTEST procedure and how it compares with the ETS classification guidelines used with the Mantel–Haenszel procedure. Prior heuristics have been used for nearly 25 years, but they are based on a simulation study that was restricted by the computing limitations of its time and that modeled item parameters from estimates of ACT and ASVAB tests from 1987 and 1984, respectively. Further, the suggested heuristics for data fitting a two-parameter logistic (2PL) model have essentially gone unused since their original presentation. This simulation study incorporates a wide range of data conditions to recommend heuristics for both 2PL and three-parameter logistic (3PL) data that correspond with ETS's Mantel–Haenszel heuristics. For 2PL data, agreement between the new SIBTEST heuristics and the Mantel–Haenszel heuristics was similar to that of the prior heuristics; for 3PL data, agreement was higher than with the prior SIBTEST heuristics. The new recommendations yielded higher true-positive rates for 2PL data but lower true-positive rates for 3PL data. Overall, false-positive rates for the new heuristics remained below the nominal significance level. Unequal group sizes produced slightly larger false-positive rates than balanced designs for both the prior and the new SIBTEST heuristics: with equal ability distributions, rates in unbalanced designs stayed below the alpha level, whereas with unequal ability distributions they were slightly above alpha.
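For orientation, the two classification schemes being compared can be summarized in a minimal sketch. The cutoffs below are the commonly cited ETS A/B/C rules based on the Mantel–Haenszel delta statistic (Zieky, 1993) and the prior SIBTEST beta-uni cutoffs of .059 and .088 attributed to Roussos and Stout (1996); the function names and the simple z-test approximation are illustrative assumptions, and the exact heuristics evaluated and recommended in the study are given in the full article.

# Minimal sketch of the two DIF classification schemes compared in the article.
# Cutoffs follow commonly cited values; the study's exact heuristics are in the paper.
import math

def ets_mh_category(alpha_mh, se_delta, z=1.96):
    # Classify an item from the Mantel-Haenszel common odds ratio (alpha_mh).
    # MH D-DIF (delta) = -2.35 * ln(alpha_mh); se_delta is its standard error.
    # A = negligible, B = moderate, C = large DIF.
    delta = -2.35 * math.log(alpha_mh)
    abs_d = abs(delta)
    sig_vs_zero = abs_d > z * se_delta          # delta significantly different from 0
    sig_vs_one = (abs_d - 1.0) > z * se_delta   # |delta| significantly greater than 1
    if abs_d < 1.0 or not sig_vs_zero:
        return "A"
    if abs_d >= 1.5 and sig_vs_one:
        return "C"
    return "B"

def sibtest_category(beta_uni, significant):
    # Classify an item from the SIBTEST beta-uni effect size (prior heuristics).
    b = abs(beta_uni)
    if not significant or b < 0.059:
        return "A"
    if b < 0.088:
        return "B"
    return "C"

# Example: an item with MH odds ratio 0.6 (delta about 1.20) and beta-uni = .07
print(ets_mh_category(alpha_mh=0.6, se_delta=0.25))      # -> "B"
print(sibtest_category(beta_uni=0.07, significant=True)) # -> "B"

The study's question, in these terms, is which beta-uni cutoffs make the SIBTEST categories line up with the Mantel–Haenszel A/B/C categories across 2PL and 3PL conditions.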




Updated: 2021-06-03