A Deep Learning Algorithm for High-Dimensional Exploratory Item Factor Analysis
Psychometrika (IF 3) Pub Date: 2021-02-02, DOI: 10.1007/s11336-021-09748-3
Christopher J. Urban, Daniel J. Bauer

Marginal maximum likelihood (MML) estimation is the preferred approach to fitting item response theory models in psychometrics due to the MML estimator’s consistency, normality, and efficiency as the sample size tends to infinity. However, state-of-the-art MML estimation procedures such as the Metropolis–Hastings Robbins–Monro (MH-RM) algorithm as well as approximate MML estimation procedures such as variational inference (VI) are computationally time-consuming when the sample size and the number of latent factors are very large. In this work, we investigate a deep learning-based VI algorithm for exploratory item factor analysis (IFA) that is computationally fast even in large data sets with many latent factors. The proposed approach applies a deep artificial neural network model called an importance-weighted autoencoder (IWAE) for exploratory IFA. The IWAE approximates the MML estimator using an importance sampling technique wherein increasing the number of importance-weighted (IW) samples drawn during fitting improves the approximation, typically at the cost of decreased computational efficiency. We provide a real data application that recovers results aligning with psychological theory across random starts. Via simulation studies, we show that the IWAE yields more accurate estimates as either the sample size or the number of IW samples increases (although factor correlation and intercepts estimates exhibit some bias) and obtains similar results to MH-RM in less time. Our simulations also suggest that the proposed approach performs similarly to and is potentially faster than constrained joint maximum likelihood estimation, a fast procedure that is consistent when the sample size and the number of items simultaneously tend to infinity.
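
To make the estimation idea concrete, the objective the IWAE maximizes is the standard importance-weighted bound

\log p_\theta(\mathbf{x}) \;\ge\; \mathbb{E}_{\mathbf{z}_1,\dots,\mathbf{z}_K \sim q_\phi(\mathbf{z}\mid\mathbf{x})}\!\left[ \log \frac{1}{K} \sum_{k=1}^{K} \frac{p_\theta(\mathbf{x}, \mathbf{z}_k)}{q_\phi(\mathbf{z}_k \mid \mathbf{x})} \right],

where K is the number of IW samples and the bound tightens toward the marginal likelihood as K grows. The sketch below is a minimal PyTorch illustration of this bound for a logistic (2PL-type) item factor model with binary responses; it is not the authors' implementation, and the class name IFAIWAE, the encoder architecture, and the hyperparameters are assumptions made purely for illustration.

import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class IFAIWAE(nn.Module):
    """Hypothetical importance-weighted autoencoder for binary item responses."""

    def __init__(self, n_items, n_factors, hidden=128):
        super().__init__()
        # Inference network (encoder): item responses -> Gaussian over latent factors.
        self.encoder = nn.Sequential(nn.Linear(n_items, hidden), nn.ELU())
        self.enc_mu = nn.Linear(hidden, n_factors)
        self.enc_logvar = nn.Linear(hidden, n_factors)
        # Measurement model (decoder): factor loadings and item intercepts.
        self.loadings = nn.Parameter(0.1 * torch.randn(n_items, n_factors))
        self.intercepts = nn.Parameter(torch.zeros(n_items))

    def iw_bound(self, x, n_iw_samples=5):
        """Importance-weighted ELBO; larger n_iw_samples gives a tighter bound."""
        h = self.encoder(x)                        # x: float 0/1 tensor, (batch, items)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        std = torch.exp(0.5 * logvar)
        # K importance-weighted samples per observation: shape (K, batch, factors).
        eps = torch.randn(n_iw_samples, *mu.shape)
        z = mu + std * eps
        log2pi = math.log(2.0 * math.pi)
        # log q(z | x) under the diagonal Gaussian encoder; log p(z) under a N(0, I) prior.
        log_q = (-0.5 * (eps ** 2 + logvar + log2pi)).sum(-1)
        log_prior = (-0.5 * (z ** 2 + log2pi)).sum(-1)
        # Bernoulli log-likelihood of the responses given the factors (2PL-style logits).
        logits = z @ self.loadings.T + self.intercepts            # (K, batch, items)
        log_lik = -F.binary_cross_entropy_with_logits(
            logits, x.expand_as(logits), reduction="none").sum(-1)
        # log-mean-exp over the K importance weights yields the per-person IW bound.
        log_w = log_lik + log_prior - log_q                       # (K, batch)
        return (torch.logsumexp(log_w, dim=0) - math.log(n_iw_samples)).mean()


# Usage sketch: maximize the bound with a stochastic gradient optimizer.
# model = IFAIWAE(n_items=100, n_factors=10)
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = -model.iw_bound(response_batch, n_iw_samples=5)
# loss.backward(); optimizer.step()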




Updated: 2021-02-02