Reward prediction errors drive declarative learning irrespective of agency
Psychonomic Bulletin & Review (IF 3.2), Pub Date: 2021-06-15, DOI: 10.3758/s13423-021-01952-7
Kate Ergo, Luna De Vilder, Esther De Loof, Tom Verguts

Recent years have witnessed a steady increase in the number of studies investigating the role of reward prediction errors (RPEs) in declarative learning. Specifically, across several experimental paradigms, RPEs drive declarative learning, with larger and more positive RPEs enhancing it. However, it remains unknown whether the RPE must derive from the participant’s own response, or whether any RPE, regardless of its source, is sufficient to produce the learning effect. To test this, we generated RPEs within a single experimental paradigm that combined an agency and a nonagency condition. We observed no interaction between RPE and agency, suggesting that any RPE, irrespective of its source, can drive declarative learning. This result holds implications for theories of declarative learning.
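For readers less familiar with the reinforcement-learning background, the sketch below shows how an RPE is conventionally computed: the signed difference between the reward received and the reward expected, optionally fed into a delta-rule (Rescorla-Wagner style) update of the expectation. This is generic textbook material, not the paper's actual paradigm; the function and parameter names are illustrative.

```python
# Minimal sketch: signed reward prediction error (RPE) and a delta-rule
# update of the reward expectation. Illustrative names; not the paper's code.

def delta_rule_update(expected: float, reward: float, alpha: float = 0.1):
    """Return (rpe, new_expectation) for one trial."""
    rpe = reward - expected                    # RPE: received minus expected
    new_expectation = expected + alpha * rpe   # learning-rate-weighted update
    return rpe, new_expectation

# Example trial: reward exceeds expectation, so the RPE is positive.
rpe, expectation = delta_rule_update(expected=0.5, reward=1.0)
print(f"RPE = {rpe:+.2f}, updated expectation = {expectation:.2f}")
```

Under this convention, an unexpectedly large reward yields a positive RPE, the kind the abstract links to enhanced declarative learning.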


