Rescorla–Wagner Models with Sparse Dynamic Attention
Bulletin of Mathematical Biology (IF 2.0). Pub Date: 2020-06-01. DOI: 10.1007/s11538-020-00743-w
Joel Nishimura, Amy L. Cochran

The Rescorla–Wagner (R–W) model describes human associative learning by proposing that an agent updates associations between stimuli, such as events in their environment or predictive cues, proportionally to a prediction error. While this model has proven informative in experiments, it has been posited that humans selectively attend to certain cues to overcome a problem with the R–W model scaling to large cue dimensions. We formally characterize this scaling problem and provide a solution that involves limiting attention in an R–W model to a sparse set of cues. Given the universal difficulty in selecting features for prediction, sparse attention faces challenges beyond those faced by the R–W model. We demonstrate several ways in which a naive attention model can fail, explain those failures, and leverage that understanding to produce a Sparse Attention R–W with Inference framework (SAR-WI). The SAR-WI framework not only satisfies a constraint on the number of attended cues, but also performs as well as the R–W model on a number of natural learning tasks, can correctly infer associative strengths, and focuses attention on predictive cues while ignoring uninformative cues. Given the simplicity of the proposed alterations, we hope this work informs future development and empirical validation of associative learning models that seek to incorporate sparse attention.
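The error-driven update the abstract describes can be sketched in a few lines. Below is a minimal illustration of the classic R–W delta rule, plus a naive sparse-attention variant that restricts the update to the k present cues with the largest absolute associative strength. The top-|V| selection rule and all names here are illustrative assumptions for exposition only; they are not the paper's SAR-WI framework, which additionally incorporates inference.

```python
import numpy as np

def rw_update(V, x, r, alpha=0.1):
    """One Rescorla–Wagner step: V <- V + alpha * (r - V.x) * x.

    V: vector of associative strengths, x: binary cue-presence vector,
    r: observed outcome, alpha: learning rate.
    """
    delta = r - V @ x              # prediction error
    return V + alpha * delta * x   # update only the present cues

def sparse_rw_update(V, x, r, k=2, alpha=0.1):
    """Naive sparse-attention R–W step (illustrative, not SAR-WI):
    attend only to the k present cues with the largest |V|."""
    present = np.flatnonzero(x)
    attended = present[np.argsort(np.abs(V[present]))[-k:]]
    mask = np.zeros_like(V)
    mask[attended] = 1.0
    delta = r - V @ (x * mask)     # prediction uses attended cues only
    return V + alpha * delta * x * mask
```

With two present cues and a constant reward of 1, repeated `rw_update` calls drive the summed prediction `V @ x` toward 1 while absent cues stay at zero, which is the behavior the scaling argument above builds on.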
