Efficient selection between hierarchical cognitive models: Cross-validation with variational Bayes.
Psychological Methods (IF 7.6). Pub Date: 2022-04-21. DOI: 10.1037/met0000458
Viet Hung Dao, David Gunawan, Minh-Ngoc Tran, Robert Kohn, Guy E. Hawkins, Scott D. Brown

Model comparison is the cornerstone of theoretical progress in psychological research. Common practice overwhelmingly relies on tools that evaluate competing models by balancing in-sample descriptive adequacy against model flexibility, with modern approaches advocating the use of the marginal likelihood for hierarchical cognitive models. Cross-validation is another popular approach, but its implementation remains out of reach for cognitive models evaluated in a Bayesian hierarchical framework, with the major hurdle being its prohibitive computational cost. To address this issue, we develop novel algorithms that make variational Bayes (VB) inference for hierarchical models feasible and computationally efficient for complex cognitive models of substantive theoretical interest. It is well known that VB produces good estimates of the first moments of the parameters, which in turn yield good estimates of predictive densities. We thus develop a novel VB algorithm that uses Bayesian prediction as a tool to perform model comparison by cross-validation, which we refer to as CVVB. In particular, CVVB can be used as a model-screening device that quickly identifies bad models. We demonstrate the utility of CVVB by revisiting a classic question in decision-making research: what latent components of processing drive the ubiquitous speed-accuracy tradeoff? We demonstrate that CVVB agrees strongly with model comparison via marginal likelihood, yet achieves the outcome in much less time. Our approach brings cross-validation within reach of theoretically important psychological models, making it feasible to compare much larger families of hierarchically specified cognitive models than has previously been possible. To enhance the applicability of the algorithm, we provide Matlab code together with a user manual so users can easily implement VB and/or CVVB for the models considered in this article and their variants.
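The abstract's core idea is that a cheap variational approximation can stand in for expensive posterior sampling when scoring held-out data, so that cross-validated predictive density becomes a fast model-screening criterion. The sketch below is not the authors' CVVB algorithm (which targets hierarchical cognitive models); it is a minimal, hypothetical illustration of the CV-as-screening logic, using a plug-in Gaussian predictive density as a stand-in for a variational posterior, on synthetic data:

```python
import numpy as np

# Synthetic data drawn from a normal with nonzero mean; the "zero_mean"
# model below is deliberately misspecified, so CV screening should reject it.
rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=1.0, size=200)

def log_predictive(model, train, test):
    # Stand-in for a VB step: for this toy Gaussian model, we use the
    # training-data first moment as the plug-in posterior mean, mirroring
    # the point that VB recovers first moments well enough for prediction.
    if model == "free_mean":
        mu = train.mean()
    else:  # "zero_mean": mean fixed at 0, a bad model for this data
        mu = 0.0
    sigma = train.std(ddof=1)
    # Held-out log predictive density under N(mu, sigma^2)
    return -0.5 * np.sum(np.log(2 * np.pi * sigma**2)
                         + ((test - mu) / sigma) ** 2)

def cv_score(model, data, k=5):
    # k-fold cross-validation: fit on k-1 folds, score the held-out fold.
    folds = np.array_split(data, k)
    total = 0.0
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        total += log_predictive(model, train, test)
    return total

scores = {m: cv_score(m, data) for m in ("free_mean", "zero_mean")}
best = max(scores, key=scores.get)
print(best, scores)
```

Run on this data, the free-mean model attains the higher cross-validated log predictive density, so a screening step would discard the zero-mean model without any further (expensive) marginal-likelihood computation. The model names and the plug-in predictive are illustrative assumptions, not part of the published method.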

Updated: 2022-04-22