Logistic Regression Procedure Using Penalized Maximum Likelihood Estimation for Differential Item Functioning
Journal of Educational Measurement (IF 1.188), Pub Date: 2019-09-08, DOI: 10.1111/jedm.12253
Sunbok Lee

In the logistic regression (LR) procedure for differential item functioning (DIF), the parameters of the LR model have typically been estimated using maximum likelihood (ML) estimation. However, ML estimation suffers from finite-sample bias, and it can be substantially biased in the presence of rare event data. The bias of ML estimation due to small samples and rare event data can degrade the performance of the LR procedure, especially when testing the DIF of difficult items in small samples. Penalized ML (PML) estimation was originally developed to reduce the finite-sample bias of conventional ML estimation and is also known to reduce the bias of LR estimates for rare event data. The goal of this study is to compare the performance of LR procedures based on ML and PML estimation in terms of statistical power and Type I error rate. In a simulation study, Swaminathan and Rogers's Wald test based on PML estimation (PSR) showed the highest statistical power in most of the simulation conditions, and the likelihood ratio test based on PML estimation (PLRT) showed the most robust and stable Type I error rates. The trade-off between bias and variance is discussed in the Discussion section.
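The abstract does not include code, but the two ingredients it combines are standard: Swaminathan and Rogers's LR DIF model (item response regressed on a matching score, group, and their interaction) and Firth-type penalized likelihood, which adds 0.5 log|I(β)| to the log-likelihood. The Python sketch below is an illustrative assumption of how a penalized LRT (PLRT-style) check might look; the function names, the toy data, and the use of a generic latent score as the matching variable are all hypothetical, not the author's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def firth_loglik(beta, X, y):
    """Firth-penalized log-likelihood: l(beta) + 0.5 * log|X'WX|."""
    eta = X @ beta
    # logistic log-likelihood, written with logaddexp for stability
    ll = np.sum(y * eta - np.logaddexp(0.0, eta))
    p = 1.0 / (1.0 + np.exp(-eta))
    w = p * (1.0 - p)
    fisher = X.T @ (X * w[:, None])  # Fisher information X'WX
    _, logdet = np.linalg.slogdet(fisher)
    return ll + 0.5 * logdet

def fit_firth(X, y):
    """Maximize the penalized likelihood; return estimates and maximum."""
    res = minimize(lambda b: -firth_loglik(b, X, y),
                   np.zeros(X.shape[1]), method="BFGS")
    return res.x, -res.fun

def pml_dif_lrt(score, group, item):
    """Penalized LRT for uniform + nonuniform DIF on one item.

    Compares the full model (score + group + score*group) against the
    score-only model; the deviance difference is referred to chi2(2).
    """
    n = len(item)
    base = np.column_stack([np.ones(n), score])                      # reduced
    full = np.column_stack([np.ones(n), score, group, score * group])  # DIF model
    _, ll0 = fit_firth(base, item)
    _, ll1 = fit_firth(full, item)
    lrt = 2.0 * (ll1 - ll0)
    return lrt, chi2.sf(lrt, df=2)

# Toy illustration: a difficult item in a small sample (rare correct responses)
rng = np.random.default_rng(1)
n = 200
group = rng.integers(0, 2, n)                        # 0 = reference, 1 = focal
theta = rng.normal(0, 1, n)
score = theta                                        # stand-in for the matching score
p = 1 / (1 + np.exp(-(theta - 2.5 - 0.6 * group)))   # hard item with uniform DIF
item = rng.binomial(1, p)
stat, pval = pml_dif_lrt(score, group, item)
print(f"PLRT statistic = {stat:.2f}, p = {pval:.4f}")
```

Because the penalty term shrinks estimates away from the boundary, the penalized fit remains finite even when few examinees answer the hard item correctly, which is exactly the rare-event situation where ordinary ML logistic regression can break down.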
