Reinforcement-learning in fronto-striatal circuits
Neuropsychopharmacology (IF 6.6), Pub Date: 2021-08-05, DOI: 10.1038/s41386-021-01108-0
Bruno Averbeck, John P. O'Doherty

We review the current state of knowledge on the computational and neural mechanisms of reinforcement learning, with a particular focus on fronto-striatal circuits. We divide the literature in this area into five broad research themes: the target of learning, whether it be the value of stimuli or the value of actions; the nature and complexity of the algorithm used to drive the learning and inference process; how learned values get converted into choices and associated actions; and the nature of the state representations and other cognitive machinery that support the implementation of various reinforcement-learning operations. An emerging fifth theme focuses on how the brain allocates, or arbitrates, control over different reinforcement-learning sub-systems or "experts". We outline what is known about the role of the prefrontal cortex and striatum in implementing each of these functions. We conclude by arguing that it will be necessary to build bridges from algorithmic-level descriptions of computational reinforcement learning to implementational-level models to better understand how reinforcement learning emerges from multiple distributed neural networks in the brain.
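
Two of the computational ingredients the abstract names can be illustrated concretely: a delta-rule update that drives value learning, and a softmax rule that converts learned values into choices. The sketch below is a minimal, generic model-free agent on a hypothetical two-armed bandit; it is not the authors' model, and the learning rate, inverse temperature, and payoff probabilities are assumptions made purely for illustration.

```python
# Illustrative sketch only: a textbook model-free reinforcement-learning agent
# (delta-rule value update plus softmax choice). Not the authors' model; the
# learning rate, inverse temperature, and bandit payoffs are assumed values.
import numpy as np

rng = np.random.default_rng(0)

n_actions = 2
Q = np.zeros(n_actions)       # learned action values
alpha = 0.1                   # learning rate (assumed)
beta = 3.0                    # softmax inverse temperature (assumed)
reward_prob = [0.8, 0.2]      # hypothetical bandit payoff probabilities

for trial in range(200):
    # Convert learned values into choice probabilities (softmax).
    p = np.exp(beta * Q) / np.sum(np.exp(beta * Q))
    action = rng.choice(n_actions, p=p)

    # Sample a binary reward from the chosen arm.
    reward = float(rng.random() < reward_prob[action])

    # Delta-rule (Rescorla-Wagner / Q-learning style) update:
    # the prediction error moves the chosen action's value toward the outcome.
    Q[action] += alpha * (reward - Q[action])

print("Learned action values:", Q)
```

In a scheme like this, the inverse temperature controls how deterministically learned values are mapped onto actions, which is one simple instance of the value-to-choice conversion the review discusses.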



