Learning with reinforcement prediction errors in a model of the Drosophila mushroom body
Nature Communications ( IF 16.6 ) Pub Date : 2021-05-07 , DOI: 10.1038/s41467-021-22592-4
James E. M. Bennett , Andrew Philippides , Thomas Nowotny

Effective decision making in a changing environment demands that accurate predictions are learned about decision outcomes. In Drosophila, such learning is orchestrated in part by the mushroom body, where dopamine neurons signal reinforcing stimuli to modulate plasticity presynaptic to mushroom body output neurons. Building on previous mushroom body models, in which dopamine neurons signal absolute reinforcement, we propose instead that dopamine neurons signal reinforcement prediction errors by utilising feedback reinforcement predictions from output neurons. We formulate plasticity rules that minimise prediction errors, verify that output neurons learn accurate reinforcement predictions in simulations, and postulate connectivity that explains more physiological observations than an experimentally constrained model. The constrained and augmented models reproduce a broad range of conditioning and blocking experiments, and we demonstrate that the absence of blocking does not imply the absence of prediction-error-dependent learning. Our results provide five predictions that can be tested using established experimental methods.
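The core idea of prediction-error learning, and why it produces blocking, can be illustrated with a minimal delta-rule sketch. This is an illustration of the general principle only, not the authors' mushroom body circuit model: a prediction v = Σ w_i x_i stands in for the output-neuron reinforcement prediction, and a dopamine-like error δ = r − v drives an error-minimising weight update (the function name `train` and all parameters are hypothetical).

```python
# Minimal delta-rule sketch of reinforcement-prediction-error learning.
# Illustrative only; not the authors' mushroom body model.

def train(trials, w=None, eta=0.1, n_epochs=200):
    """trials: list of (stimulus vector, reinforcement) pairs."""
    w = list(w) if w is not None else [0.0] * len(trials[0][0])
    for _ in range(n_epochs):
        for x, r in trials:
            v = sum(wi * xi for wi, xi in zip(w, x))  # reinforcement prediction
            delta = r - v                             # prediction error (dopamine-like)
            w = [wi + eta * delta * xi for wi, xi in zip(w, x)]
    return w

# Phase 1: stimulus A alone is paired with reward; its weight converges to r.
wA = train([([1.0, 0.0], 1.0)])

# Phase 2: compound AB with the same reward, starting from the trained weights.
# A already predicts r, so delta is near zero and B acquires almost no weight:
# this is the blocking effect. Had dopamine signalled absolute reinforcement
# instead of the error, B would learn the full association in phase 2.
wAB = train([([1.0, 1.0], 1.0)], w=wA)
```

Training on the compound from naive weights instead splits the prediction between the two stimuli (each weight converges to about 0.5), which is the contrast that makes blocking a signature of error-driven, rather than absolute-reinforcement-driven, plasticity.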




Updated: 2021-05-07