Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices.
Social Cognitive and Affective Neuroscience ( IF 3.9 ) Pub Date : 2020-06-29 , DOI: 10.1093/scan/nsaa089
Lei Zhang 1, 2 , Lukas Lengersdorff 1, 2 , Nace Mikus 1 , Jan Gläscher 3 , Claus Lamm 1, 2, 4
Recent years have witnessed a dramatic increase in the use of reinforcement learning (RL) models in social, cognitive and affective neuroscience. This approach, in combination with neuroimaging techniques such as functional magnetic resonance imaging, enables quantitative investigations into latent mechanistic processes. However, the increased use of relatively complex computational approaches has led to potential misconceptions and imprecise interpretations. Here, we present a comprehensive framework for the examination of (social) decision-making with the simple Rescorla–Wagner RL model. We discuss common pitfalls in its application and provide practical suggestions. First, using simulation, we unpack the functional role of the learning rate and pinpoint what can easily go wrong when interpreting differences in the learning rate. Then, we discuss the inevitable collinearity between outcome and prediction error in RL models and suggest how to justify whether observed neural activation is related to the prediction error rather than to outcome valence. Finally, we argue that posterior predictive checks are a crucial step after model comparison, and we advocate hierarchical modeling for parameter estimation. We aim to provide simple and scalable explanations and practical guidelines for employing RL models, to assist both beginners and advanced users in better implementing and interpreting their model-based analyses.
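To make the abstract's central object concrete: the Rescorla–Wagner model updates an option's value estimate by a fraction (the learning rate) of the prediction error, i.e. the difference between the received outcome and the current expectation. The sketch below simulates such a learner on a two-armed bandit with a softmax choice rule. It is an illustrative minimal implementation, not the authors' code; all parameter values (learning rate, inverse temperature, reward probabilities) are arbitrary assumptions for demonstration.

```python
import numpy as np

def simulate_rw_agent(n_trials=200, alpha=0.3, beta=5.0,
                      reward_probs=(0.8, 0.2), seed=0):
    """Simulate a Rescorla-Wagner learner on a two-armed bandit.

    alpha: learning rate (how strongly each prediction error updates values).
    beta:  softmax inverse temperature (how deterministic choices are).
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                       # value estimates for the two options
    choices, rewards, pes = [], [], []
    for _ in range(n_trials):
        # Softmax choice rule: higher-valued option is chosen more often.
        p = np.exp(beta * q) / np.sum(np.exp(beta * q))
        c = rng.choice(2, p=p)
        r = float(rng.random() < reward_probs[c])
        pe = r - q[c]                     # prediction error: outcome minus expectation
        q[c] += alpha * pe                # Rescorla-Wagner update
        choices.append(c); rewards.append(r); pes.append(pe)
    return np.array(choices), np.array(rewards), np.array(pes), q

choices, rewards, pes, q = simulate_rw_agent()
```

Note how the outcome `r` and the prediction error `pe` share the same term, which is the source of the outcome/prediction-error collinearity the abstract discusses: on any trial, `pe` equals `r` minus the slowly changing expectation `q[c]`, so the two regressors are strongly correlated unless expectations vary substantially across trials.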

Updated: 2020-07-31